Search Results (195)

Search Parameters:
Keywords = sea clutter

26 pages, 12288 KiB  
Article
Bayesian Distributed Target Detectors in Compound-Gaussian Clutter Against Subspace Interference with Limited Training Data
by Kun Xing, Zhiwen Cao, Weijian Liu, Ning Cui, Zhiyu Wang, Zhongjun Yu and Faxin Yu
Remote Sens. 2025, 17(5), 926; https://doi.org/10.3390/rs17050926 - 5 Mar 2025
Abstract
In this article, the problem of Bayesian detection of rank-one distributed targets under subspace interference and compound-Gaussian clutter with inverse Gaussian texture is investigated. Owing to clutter heterogeneity, the training data may be insufficient. To tackle this problem, the clutter speckle covariance matrix (CM) is assumed to obey the complex inverse Wishart distribution, and Bayesian theory is utilized to obtain an effective estimate. Moreover, the target echo is assumed to have a known steering vector and unknown amplitudes across range cells. The interference is modeled by a steering matrix that is linearly independent of the target steering vector. Using the generalized likelihood ratio test (GLRT), a Bayesian interference-canceling detector that can work in the absence of training data is derived. In addition, five interference-canceling detectors based on the maximum a posteriori (MAP) estimate of the speckle CM are proposed with the two-step GLRT and the Rao, Wald, Gradient, and Durbin tests. Experiments with simulated and measured sea clutter data indicate that the Bayesian interference-canceling detectors outperform the competitor in scenarios with limited training data.
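The two key ingredients of this abstract, a MAP speckle-covariance estimate under an inverse-Wishart prior and a GLRT-type test statistic, can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's exact detectors: the "colored loading" weighting and the AMF-style statistic are generic textbook forms, and `nu` (prior degrees of freedom) is an assumed parameter.

```python
import numpy as np

def map_speckle_covariance(training, prior_cov, nu):
    """MAP-style speckle covariance estimate under a complex
    inverse-Wishart prior: a "colored loading" blend of the prior
    scale matrix and the sample covariance of the training data.
    The weighting shown is a generic textbook form; the paper's
    exact estimator may differ in detail."""
    L = training.shape[1]
    scm = training @ training.conj().T if L > 0 else np.zeros_like(prior_cov)
    return (nu * prior_cov + scm) / (nu + L)

def amf_statistic(x, steering, cov):
    """AMF-style GLRT statistic |s^H R^-1 x|^2 / (s^H R^-1 s),
    illustrating where the covariance estimate enters a detector."""
    r_inv = np.linalg.inv(cov)
    num = abs(np.vdot(steering, r_inv @ x)) ** 2   # vdot conjugates arg 1
    den = np.vdot(steering, r_inv @ steering).real
    return num / den
```

With no training data (L = 0) the estimate falls back to the prior matrix, mirroring the abstract's claim that the Bayesian detector can operate without training data.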
Figures:
Figure 1: PFA and threshold curves of the proposed interference-canceling detectors: (a) P_fa versus ρ; (b) detection threshold versus ρ.
Figure 2: PDs of the detectors under different signal energy models with L = 6: (a)–(f) Models 1–6.
Figure 3: Detection performance of the detectors under different numbers of training data with signal energy model 1: (a) L = 0; (b) L = 6; (c) L = 12; (d) L = 24.
Figure 4: PDs of the detectors under different ICRs with SCR = 6 dB and L = 6.
Figure 5: PDs of the detectors in the presence of prior information mismatch with L = 6: (a) different ρ with actual ρ = 0.9; (b) different ν with actual ν = 15.
Figure 6: Range–pulse spectrum of the selected data: (a) File 83; (b) File 84.
Figure 7: Energy probability density functions based on measured data and estimated parameters: (a)–(d) cells 12–15 (File 83); (e)–(h) cells 23–26 (File 84).
Figure 8: PDs of the detectors under different signal energy models with L = 6 based on measured data: (a) Model 1 (File 83); (b) Model 6 (File 83); (c) Model 1 (File 84); (d) Model 6 (File 84).
25 pages, 20488 KiB  
Article
SAR Small Ship Detection Based on Enhanced YOLO Network
by Tianyue Guan, Sheng Chang, Chunle Wang and Xiaoxue Jia
Remote Sens. 2025, 17(5), 839; https://doi.org/10.3390/rs17050839 - 27 Feb 2025
Abstract
Ships are important targets for marine surveillance in both military and civilian domains. Since the rise of deep learning, ship detection in synthetic aperture radar (SAR) images has achieved significant progress. However, the variability in ship size and resolution, and especially the widespread presence of numerous small ships, continues to pose challenges for effective ship detection in SAR images. To address the challenges posed by small ship targets, we propose an enhanced YOLO network to improve the detection accuracy of small targets. Firstly, we propose a Shuffle Re-parameterization (SR) module as a replacement for the C2f module in the original YOLOv8 network; the SR module employs re-parameterized convolution along with channel shuffle operations to improve feature extraction. Secondly, we employ the space-to-depth (SPD) module to perform down-sampling within the backbone network, thereby reducing the information loss associated with pooling operations. Thirdly, we incorporate a Hybrid Attention (HA) module into the neck network to enhance the feature representation of small ship targets while mitigating interference from surrounding sea clutter and speckle noise. Finally, we add the shape-NWD loss to the regression loss, which emphasizes the shape and scale of the bounding box and mitigates the sensitivity of Intersection over Union (IoU) to positional deviations in small ship targets. Extensive experiments on three publicly available datasets, namely LS-SSDD, HRSID, and iVision-MRSSD, demonstrate the effectiveness and reliability of the proposed method. On the small ship dataset LS-SSDD, the proposed method improves average precision at an IoU threshold of 0.5 (AP50) by over 4% relative to the baseline network, reaching an AP50 of 77.2%. On the HRSID and iVision-MRSSD datasets, AP50 reaches 91% and 95%, respectively. Additionally, the average precision for small targets increases by approximately 2% on both datasets. Furthermore, the proposed method outperforms existing state-of-the-art target detection methods in comparison experiments across all three datasets. The experimental results offer compelling evidence of the superior performance and practical applicability of the proposed method for SAR small ship detection.
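The space-to-depth (SPD) down-sampling mentioned above can be illustrated compactly: unlike pooling, it rearranges activations into channels rather than discarding them. A minimal sketch of the SPD idea only (the paper's module additionally applies convolutions afterward):

```python
import numpy as np

def space_to_depth(x, s=2):
    """Lossless down-sampling: move each s x s spatial block into the
    channel dimension. Input (C, H, W) -> output (C*s*s, H//s, W//s);
    height and width are halved (for s=2) while channels increase
    fourfold, with no activation values discarded."""
    c, h, w = x.shape
    x = x.reshape(c, h // s, s, w // s, s)
    return x.transpose(0, 2, 4, 1, 3).reshape(c * s * s, h // s, w // s)
```

For a 1×4×4 input, the output is 4×2×2 and contains exactly the same sixteen values, which is the sense in which SPD avoids the information loss of pooling.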
Figures:
Figure 1: The overall architecture of the proposed method.
Figure 2: The structure of the re-parameterized convolution block (RepConv).
Figure 3: (a) The structure of the Shuffle Re-parameterization block (SRB); (b) the structure of the Shuffle Re-parameterization module (SR).
Figure 4: The structure of the Space-to-Depth module: (a) the SPD layer, after which the feature map's height and width are halved while the number of channels increases fourfold; (b) the SPD module, built from the SPD layer.
Figure 5: The structure of Hybrid Attention (HA): (a) the Multi-axis External Weights module (MEW); (b) the Spatial Attention module (SA); (c) the HA module, which combines MEW and SA.
Figure 6: Precision–recall curves when adding different modules.
Figures 7–8: Visualization of detection results: ground truth, baseline, and proposed method (green: ground truths; yellow: detection results; red: missed detections; blue: false alarms).
Figure 9: Visualization of the detection results of various methods on LS-SSDD: (a) ground truth; (b) Faster R-CNN; (c) CenterNet; (d) FCOS; (e) ATSS; (f) YOLOv5n; (g) YOLOv8n; (h) YOLOv10n; (i) YOLOv11n; (j) SHIP-YOLO; (k) LHSDNet; (l) the proposed method. Green boxes indicate ground truths; red boxes indicate detection results.
27 pages, 36300 KiB  
Article
Maritime Target Radar Detection and Tracking via DTNet Transfer Learning Using Multi-Frame Images
by Xiaoyang He, Xiaolong Chen, Xiaolin Du, Xinghai Wang, Shuwen Xu and Jian Guan
Remote Sens. 2025, 17(5), 836; https://doi.org/10.3390/rs17050836 - 27 Feb 2025
Abstract
Traditional detection and tracking methods struggle with the complex and dynamic maritime environment due to their poor generalization capabilities. To address this, this paper improves the YOLOv5 network by integrating a Transformer and a Convolutional Block Attention Module (CBAM) with the multi-frame image information obtained from radar scans. It proposes a detection and tracking method based on the Detection Tracking Network (DTNet), which leverages transfer learning and the DeepSORT tracking algorithm, enhancing the detection capability of the model across various maritime environments. First, radar echoes are preprocessed to create a dataset of Plan Position Indicator (PPI) images for different marine conditions. An integrated network for detecting and tracking maritime targets is then designed, utilizing the feature differences between moving targets and sea clutter, along with the inter-frame coherence of moving targets, to achieve multi-target detection and tracking. The proposed method was validated on real maritime targets, achieving a precision of 99.06%, a 7.36 percentage point improvement over the original YOLOv5, demonstrating superior detection and tracking performance. The impact of maritime region and weather conditions is also discussed: when transferring from Region I to Regions II and III, precision reached 92.2% and 89%, respectively, and in rainy weather, despite interference from sea clutter and rain clutter, precision still reached 82.4%, indicating strong generalization compared with the original YOLOv5 network.
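The CBAM integrated into YOLOv5 here includes a channel-attention branch. A hedged NumPy sketch of that branch only: the MLP weights `w1` and `w2` are illustrative stand-ins for learned parameters, and the real module also contains a spatial-attention branch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map:
    squeeze with global average- and max-pooling, pass both results
    through a shared two-layer MLP, sum them, and gate the channels
    with a sigmoid in (0, 1)."""
    avg = x.mean(axis=(1, 2))                      # (C,)
    mx = x.max(axis=(1, 2))                        # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared ReLU MLP
    scale = sigmoid(mlp(avg) + mlp(mx))            # per-channel gate
    return x * scale[:, None, None]
```

The gate rescales each channel without changing the map's shape, which is why such modules drop into an existing backbone cheaply.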
Graphical abstract.
Figures:
Figure 1: Image of the JRC radar.
Figure 2: Data from different maritime areas: (a) Maritime Area I; (b) Maritime Area II; (c) Maritime Area III.
Figure 3: Actual sea surface conditions under different weather scenarios: (a) sunny; (b) rainy; (c) foggy; (d) windy.
Figure 4: Actual images of the target vessels: (a) fishing vessels; (b) transport ship.
Figure 5: Different weather data: (a) sunny; (b) rainy; (c) foggy; (d) windy.
Figure 6: The overall flow of the DTNet network.
Figure 7: Improved YOLOv5 network.
Figure 8: Transformer module.
Figure 9: Convolutional block attention module.
Figure 10: Flowchart of the DeepSORT tracking algorithm.
Figure 11: The DTNet transfer learning process.
Figure 12: Relevant parameter curves: (a) loss function; (b) precision–recall.
Figure 13: AIS and detection-tracking results for Maritime Area I: (a) AIS display of the targets; (b) detection performance; (c) tracking performance.
Figure 14: Classic network comparison curve.
Figure 15: Comparison with classical detection algorithms: (a) DTNet; (b) YOLOv5; (c) Faster R-CNN; (d) RetinaNet.
Figure 16: Precision and loss curves for the transfer from Maritime Area I to Maritime Area II.
Figure 17: AIS and detection-tracking results of Maritime Area II (model transferred from Area I): (a) AIS display; (b) detection results; (c) tracking results.
Figure 18: Precision and loss curves for the transfer from Maritime Area I to Maritime Area III.
Figure 19: AIS and detection-tracking results of Maritime Area III (model transferred from Area I): (a) AIS display; (b) detection results; (c) tracking results.
Figure 20: Precision and loss curves for the transfer from Maritime Area I to heavy rain in Maritime Area I.
Figure 21: AIS and detection-tracking results of Maritime Area I during heavy rain: (a) AIS display; (b) detection results; (c) tracking results.
20 pages, 7061 KiB  
Article
Research on High-Resolution Modeling of Satellite-Derived Marine Environmental Parameters Based on Adaptive Global Attention
by Ruochu Cui, Liwen Ma, Yaning Hu, Jiaji Wu and Haiying Li
Remote Sens. 2025, 17(4), 709; https://doi.org/10.3390/rs17040709 - 19 Feb 2025
Abstract
The analysis of marine environmental parameters plays an important role in areas such as sea surface simulation and modeling, analysis of sea clutter characteristics, and environmental monitoring. However, ocean observation remote sensing satellites typically deliver large volumes of data with limited spatial resolution, which often does not meet the precision requirements of practical applications. To overcome the challenges of constructing high-resolution marine environmental parameters, this study conducts a systematic comparison of interpolation techniques and deep learning models, aiming to develop an effective and efficient model for enhancing resolution in marine applications. Specifically, we incorporate an adaptive global attention (AGA) mechanism and a spatial gating unit (SGU) into the model. The AGA mechanism dynamically adjusts the weights of different regions in the feature maps, enabling the model to focus on critical spatial and channel features. The SGU optimizes the utilization of spatial information by controlling the information transmission pathways. The experimental results indicate that, for four types of marine environmental parameters from ERA5, our model achieves an overall PSNR of 44.0705, an SSIM of 0.9947, and an MAE of 0.2606 when the resolution is increased by an upscale factor of 2, and an overall PSNR of 35.5215, an SSIM of 0.9732, and an MAE of 0.8330 at an upscale factor of 4. These experiments demonstrate the model's effectiveness in enhancing the spatial resolution of satellite-derived marine environmental parameters, and the model can be applied to any marine region, providing data support for subsequent oceanic studies.
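To ground the figures quoted above, PSNR relates the reported dB values to mean squared reconstruction error between the high-resolution reference and the reconstructed field; a minimal sketch (the `data_range` of 1.0 assumes normalized fields):

```python
import numpy as np

def psnr(hr, sr, data_range=1.0):
    """Peak signal-to-noise ratio in dB:
    PSNR = 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((hr - sr) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

For example, a uniform reconstruction error of 0.1 on a unit-range field gives an MSE of 0.01 and hence a PSNR of 20 dB, which puts the paper's 35–44 dB results in perspective.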
Figures:
Figure 1: The workflow of the proposed model for marine environmental parameter reconstruction.
Figure 2: The overall architecture of the proposed model.
Figure 3: A detailed schematic of the spatial gating unit.
Figure 4: A detailed schematic of the residual group with the spatial gating unit.
Figure 5: A detailed schematic of the adaptive global attention mechanism.
Figure 6: Reconstruction error of marine environmental parameters at an upscale factor of 4; the greater the deviation of the data points from the line Y = X, the larger the corresponding error.
Figure 7: Reconstructed marine environmental parameters (MWD, MWP, SWH, WS) at 00:00 on 1 December 2021; for each parameter, from left to right: HR data, reconstructed data, LR data (LR obtained via bicubic downsampling at a factor of 4).
Figure 8: Reconstructed WS data using different methods at an upscale factor of 2: (a) HR data; (b) Bicubic; (c) HYN Model; (d) ATD-Light; (e) CAMixerSR; (f) RRDB; (g) ESRGAN; (h) SwinIR-Light; (i) EDSR; (j) RCAN; (k) Ours. LR data obtained from the HR data by bicubic downsampling.
Figure 9: Same comparison as Figure 8 at an upscale factor of 4.
Figure 10: Reconstructed WS during a typhoon event: (a) HR data; (b) LR data; (c) Bicubic; (d) HYN Model; (e) ATD-Light; (f) CAMixerSR; (g) RRDB; (h) ESRGAN; (i) SwinIR-Light; (j) EDSR; (k) RCAN; (l) Ours. LR data obtained via bicubic downsampling at a factor of 4.
Figure 11: Reconstruction error of MWD at 00:00 on 1 December 2021: (a) HR data; (b) MAE of reconstruction. LR obtained via bicubic downsampling at a factor of 4.
28 pages, 11323 KiB  
Article
Polarimetric SAR Ship Detection Using Context Aggregation Network Enhanced by Local and Edge Component Characteristics
by Canbin Hu, Hongyun Chen, Xiaokun Sun and Fei Ma
Remote Sens. 2025, 17(4), 568; https://doi.org/10.3390/rs17040568 - 7 Feb 2025
Abstract
Polarimetric decomposition methods are widely used in polarimetric Synthetic Aperture Radar (SAR) data processing to extract the scattering characteristics of targets. However, polarimetric SAR methods for ship detection still face challenges. Traditional constant false alarm rate (CFAR) detectors suffer from sea clutter modeling and parameter estimation problems in ship detection, making it difficult to adapt to complex backgrounds. In addition, neural-network-based detection methods mostly rely on single-polarimetric-channel scattering information and fail to fully exploit the polarization properties and physical scattering laws of ships. To address these issues, this study constructs two novel characteristics specifically designed to describe ship scattering: a helix-scattering enhanced (HSE) local component and a multi-scattering intensity difference (MSID) edge component. Based on the characteristic differences of the scattering components of ships, this paper designs a context aggregation network enhanced by local and edge component characteristics to fully utilize the scattering information of polarimetric SAR data. With the powerful feature extraction capability of a convolutional neural network, the proposed method significantly enhances the distinction between ships and the sea. Further analysis shows that HSE captures structural information about the target, MSID increases ship–sea separation capability, and the HV channel retains more detailed information. Compared with other decomposition models, the proposed characteristic combination performs well in complex backgrounds and distinguishes ships from the sea more effectively. The experimental results show that the proposed method achieves a detection precision of 93.6% and a recall of 91.5% on a fully polarimetric SAR dataset, outperforming other popular network algorithms and verifying the reasonableness and superiority of the method.
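The precision and recall figures above come from matching detected boxes against ground-truth boxes. A generic IoU-based evaluation sketch (greedy matching at a 0.5 IoU threshold; not necessarily the paper's exact protocol):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections, truths, thr=0.5):
    """Greedily match each detection to the best unmatched ground
    truth at IoU >= thr; unmatched detections are false positives,
    unmatched truths are false negatives."""
    matched, tp = set(), 0
    for d in detections:
        best, best_iou = None, thr
        for i, g in enumerate(truths):
            v = iou(d, g)
            if i not in matched and v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp
    fn = len(truths) - tp
    p = tp / (tp + fp) if detections else 0.0
    r = tp / (tp + fn) if truths else 0.0
    return p, r
```

Precision penalizes false alarms (e.g. sea clutter detected as a ship) while recall penalizes missed ships, which is why the paper reports both.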
Figures:
Figure 1: Ship scattering characteristics in four-component decomposition.
Figure 2: Enhancement comparison before (a) and after (b) the difference.
Figure 3: Structural diagram of the context aggregation network based on local and edge component feature enhancement.
Figure 4: Scattering structure feature extraction network.
Figure 5: Detailed view of the DCNblock module.
Figure 6: Structure of the CAM.
Figure 7: Low-level feature-guided balanced fusion network for PolSAR.
Figures 8–10: Comparison of extracted characteristics from RADARSAT-2, AIRSAR, and UAVSAR data: Pauli pseudocolor maps, HSE, MSID, HH, and HV.
Figure 11: 3D scatter plots of ship and sea characteristics: (a) Pauli pseudocolor map; (b) Pauli decomposition; (c) Freeman–Durden decomposition; (d) proposed characteristics.
Figure 12: Distribution of target pixel sizes.
Figure 13: Ship detection results under different polarimetric characteristic combinations (green rectangles: ground truth; red rectangles: detections; blue circles: false alarms; orange circles: missed detections): (a) ground truth; (b) Pauli components; (c) Freeman–Durden components; (d) proposed method.
Figure 14: Feature maps under different backbone networks: (a) Pauli image; (b) backbone built with traditional convolutional blocks; (c) proposed backbone employing deformable convolutional blocks.
Figure 15: Ship detection results under different network modules (same color conventions as Figure 13): (a) ground truth; (b) CAM only; (c) DCNblock only; (d) both DCNblock and CAM.
Figure 16: Vessel detection results under different networks (same color conventions as Figure 13): (a) ground truth; (b) RetinaNet; (c) CenterNet; (d) Faster-RCNN; (e) YOLOv5; (f) YOLOv8; (g) MobileNet; (h) proposed method.
22 pages, 12425 KiB  
Article
Sea Clutter Suppression Method Based on Ocean Dynamics Using the WRF Model
by Guigeng Li, Zhaoqiang Wei, Yujie Chen, Xiaoxia Meng and Hao Zhang
J. Mar. Sci. Eng. 2025, 13(2), 224; https://doi.org/10.3390/jmse13020224 - 25 Jan 2025
Abstract
Sea clutter introduces a significant amount of non-target reflections into the echo signals received by radar, complicating target detection and identification. To address the challenge that existing filter parameters cannot adapt in real time to the characteristics of sea clutter, this paper integrates ocean numerical models into sea clutter spectrum estimation. By adjusting filter parameters based on the spectral characteristics of sea clutter, accurate suppression of sea clutter is achieved. The Weather Research and Forecasting (WRF) model is employed to simulate the ocean dynamic parameters within the radar detection area, and hydrological data are utilized to calibrate the parameterization scheme of the WRF model. Based on the simulated ocean dynamic parameters, empirical formulas are used to calculate the sea clutter spectrum. The filter coefficients are then updated in real time using the sea clutter spectral parameters, enabling precise suppression of sea clutter. The suppression algorithm is validated on X-band radar-measured sea clutter data, demonstrating an improvement factor of 17.22 after sea clutter suppression.
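The core signal-processing step described above — re-tuning a clutter-rejection filter whenever the spectrum estimate changes — can be sketched as a Doppler notch filter whose centre and width track the estimated sea-clutter spectrum. This is a minimal illustration, not the paper's implementation: the frequency-sampling design, the tap count, and the Hamming taper are all assumptions, and in the paper the notch parameters would come from the WRF-driven spectrum rather than plain function arguments.

```python
import numpy as np

def clutter_notch_fir(num_taps, prf, f_center, bandwidth):
    """Band-stop FIR (frequency-sampling design) whose notch follows the
    estimated sea-clutter Doppler centre and bandwidth. Returns complex
    taps, intended for complex (I/Q) slow-time data."""
    freqs = np.fft.fftfreq(num_taps, d=1.0 / prf)
    desired = np.ones(num_taps)
    # circular Doppler distance, so the notch wraps correctly at +/- PRF/2
    dist = np.abs((freqs - f_center + prf / 2) % prf - prf / 2)
    desired[dist <= bandwidth / 2] = 0.0
    h = np.fft.ifft(desired)                        # prototype impulse response
    h = np.fft.fftshift(h) * np.hamming(num_taps)   # centre + sidelobe taper
    return h

# Re-design h whenever the spectrum estimate changes, then filter each
# range cell along slow time, e.g.:
#   suppressed = np.convolve(slow_time_series, h, mode="same")
```

The complex taps notch only the estimated clutter band (e.g. a Doppler offset from wave motion) while leaving target Doppler bins nearly untouched.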
Figure 1
<p>Block diagram of sea clutter suppression based on ocean dynamics.</p>
Figure 2
<p>Schematic of the X-band radar. (<b>a</b>) The placement of the X-band radar. (<b>b</b>) Actual photograph of the X-band radar.</p>
Figure 3
<p>Range–pulse distribution of the X-band radar data.</p>
Figure 4
<p>Schematic of the IPIX radar. (<b>a</b>) The placement of the IPIX radar. (<b>b</b>) Actual photograph of the IPIX radar.</p>
Figure 5
<p>Range–pulse distribution under different polarization modes. (<b>a</b>) HH polarization. (<b>b</b>) VV polarization.</p>
Figure 6
<p>Doppler spectrum of IPIX measured sea clutter data.</p>
Figure 7
<p>Time–Doppler diagram of sea clutter.</p>
Figure 8
<p>Taylor diagram of the parameterization results for different mp_physics schemes.</p>
Figure 9
<p>Simulation of WRF model grid over the X-band radar detection area. The red upward triangle indicates the radar installation location.</p>
Figure 10
<p>The WRF model grid over the IPIX radar detection area. The red upward triangle indicates the radar installation location.</p>
Figure 11
<p>Comparison of ocean dynamic parameters obtained from WRF simulations with measured data.</p>
Figure 12
<p>Measured power spectrum of pure clutter signal.</p>
Figure 13
<p>Magnitude and phase responses of the FIR filter.</p>
Figure 14
<p>Wavelet and EMD reconstruction suppression algorithm. (<b>a</b>) Wavelet transform-weighted reconstruction. (<b>b</b>) EMD reconstruction.</p>
Figure 15
<p>Power spectrum before and after sea clutter suppression.</p>
Figure 16
<p>Range–Doppler diagrams of radar before and after sea clutter suppression.</p>
Figure 17
<p>Correspondence between improvement factor and target signal frequency.</p>
27 pages, 24936 KiB  
Article
Multipath and Deep Learning-Based Detection of Ultra-Low Moving Targets Above the Sea
by Zhaolong Wang, Xiaokuan Zhang, Weike Feng, Binfeng Zong, Tong Wang, Cheng Qi and Xixi Chen
Remote Sens. 2024, 16(24), 4773; https://doi.org/10.3390/rs16244773 - 21 Dec 2024
Viewed by 526
Abstract
An intelligent approach is proposed and investigated in this paper for the detection of ultra-low-altitude sea-skimming moving targets for airborne pulse Doppler radar. Without suppressing interferences, the proposed method uses both target and multipath information for detection based on their distinguishable image features and deep learning (DL) techniques. First, the image features of the target, multipath, and sea clutter in the real-measured range-Doppler (RD) map are analyzed, based on which the target and multipath are defined together as the generalized target. Then, based on the composite electromagnetic scattering mechanism of the target and the ocean surface, a scattering-based echo generation model is established and validated to generate sufficient data for DL network training. Finally, the RD features of the generalized target are learned by training the DL-based target detector, such as you-only-look-once version 7 (YOLOv7) and Faster R-CNN. The detection results show the high performance of the proposed method on both simulated and real-measured data without suppressing interferences (e.g., clutter, jamming, and noise). In particular, even if the target is submerged in clutter, the target can still be detected by the proposed method based on the multipath feature.
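The range-Doppler (RD) maps that such a detector ingests are produced by a slow-time FFT per range cell. A minimal sketch follows; the array sizes, PRF, and injected scatterer are illustrative values, not the paper's radar parameters.

```python
import numpy as np

def range_doppler_map(echo, prf):
    """echo: complex (n_range, n_pulses) pulse-compressed data.
    Returns the RD magnitude map in dB and the Doppler axis in Hz."""
    rd = np.fft.fftshift(np.fft.fft(echo, axis=1), axes=1)
    doppler = np.fft.fftshift(np.fft.fftfreq(echo.shape[1], 1.0 / prf))
    return 20.0 * np.log10(np.abs(rd) + 1e-12), doppler

# Example: one scatterer at range cell 10 with a 250 Hz Doppler shift
rng = np.random.default_rng(1)
n_range, n_pulses, prf = 64, 128, 1000.0
echo = 0.01 * (rng.standard_normal((n_range, n_pulses))
               + 1j * rng.standard_normal((n_range, n_pulses)))
echo[10] += np.exp(2j * np.pi * 250.0 * np.arange(n_pulses) / prf)
rd_map, doppler = range_doppler_map(echo, prf)
cell, dbin = np.unravel_index(np.argmax(rd_map), rd_map.shape)
```

In the paper's setting, the target and its sea-surface multipath return appear as nearby but distinguishable blobs in this map, which is what the DL detector is trained to recognize jointly.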
(This article belongs to the Special Issue Array and Signal Processing for Radar)
Graphical abstract
Figure 1
<p>The geometry model of the airborne radar detecting sea-skimming ultra-low-altitude targets.</p>
Figure 2
<p>The proposed generalized target detection via the DL-based method.</p>
Figure 3
<p>The real-experiment scenario.</p>
Figure 4
<p>The RD map of the real-measured data.</p>
Figure 5
<p>Existing CFAR-based moving target detection method.</p>
Figure 6
<p>DL-based moving target detection method.</p>
Figure 7
<p>Schematic diagram of the range-Doppler cells.</p>
Figure 8
<p>SBR method based on ray tracing. (<b>a</b>) Multiple scattering between the sea surface and the target. (<b>b</b>) Schematic diagram of a certain ray.</p>
Figure 9
<p>Doppler shift in the multipath scattering.</p>
Figure 10
<p>Comparisons of the mono-static composite scattering of the target above a rough sea surface. (<b>a</b>) Vertical–vertical (VV) polarization. (<b>b</b>) Horizontal–horizontal (HH) polarization.</p>
Figure 11
<p>The RD map of the simulation data.</p>
Figure 12
<p>The RD maps of ultra-low-altitude targets under different parameters.</p>
Figure 13
<p>The maximum multipath power versus target height Tz and sea surface wind speed <math display="inline"><semantics> <msub> <mi>v</mi> <mn>10</mn> </msub> </semantics></math>. (<b>a</b>) The maximum multipath power versus target height with fixed <math display="inline"><semantics> <msub> <mi>v</mi> <mn>10</mn> </msub> </semantics></math> = 5 m/s. (<b>b</b>) The maximum multipath power versus wind speed with fixed <math display="inline"><semantics> <msub> <mi>T</mi> <mi>z</mi> </msub> </semantics></math> = 50 m.</p>
Figure 14
<p>The label diagram of RD maps. (<b>a</b>) The proposed method based on the generalized target. (<b>b</b>) The conventional method based on the target only.</p>
Figure 15
<p>The changes in the value of training loss and validation loss.</p>
Figure 16
<p>Comparison of the <span class="html-italic">mAP@0.5</span> curves.</p>
Figure 17
<p>Detection results of the simulated RD maps with a single target. Simulation parameters: (<b>a</b>) v<sub>10</sub> = 4 m/s, T<sub>y</sub> = 1978 m, T<sub>z</sub> = 13 m, B<sub>W</sub> = 19 MHz, v<sub>ty</sub> = 28 m/s, v<sub>ry</sub> = 395 m/s, SNR = 12 dB, target type: missile; (<b>b</b>) v<sub>10</sub> = 6 m/s, T<sub>y</sub> = 1175 m, T<sub>z</sub> = 44 m, B<sub>W</sub> = 10 MHz, v<sub>ty</sub> = −102 m/s, v<sub>ry</sub> = 400 m/s, SNR = 9 dB, target type: UAV; (<b>c</b>) v<sub>10</sub> = 2 m/s, T<sub>y</sub> = 1642 m, T<sub>z</sub> = 19 m, B<sub>W</sub> = 15 MHz, v<sub>ty</sub> = −543 m/s, v<sub>ry</sub> = 399 m/s, SNR = 5 dB, target type: fighter; (<b>d</b>) v<sub>10</sub> = 3 m/s, T<sub>y</sub> = 1245 m, T<sub>z</sub> = 26 m, B<sub>W</sub> = 10 MHz, v<sub>ty</sub> = −458 m/s, v<sub>ry</sub> = 405 m/s, SNR = 19 dB, target type: UAV.</p>
Figure 18
<p>Detection results of simulated RD maps with multiple targets. Simulation parameters: (<b>a</b>) v<sub>10</sub> = 6 m/s, B<sub>W</sub> = 38 MHz, SNR = 8 dB, v<sub>ry</sub> = 587 m/s, T<sub>y1</sub> = 1512 m, T<sub>z1</sub> = 47 m, v<sub>ty1</sub> = −426 m/s, type of target 1: missile, T<sub>y2</sub> = 923 m, T<sub>z2</sub> = 60 m, v<sub>ty2</sub> = 117 m/s, type of target 2: fighter; (<b>b</b>) v<sub>10</sub> = 4 m/s, B<sub>W</sub> = 36 MHz, SNR = 7 dB, v<sub>ry</sub> = 437 m/s, T<sub>y1</sub> = 823 m, T<sub>z1</sub> = 51 m, v<sub>ty1</sub> = 244 m/s, type of target 1: missile, T<sub>y2</sub> = 1551 m, T<sub>z2</sub> = 67 m, v<sub>ty2</sub> = −42 m/s, type of target 2: missile; (<b>c</b>) v<sub>10</sub> = 5 m/s, B<sub>W</sub> = 40 MHz, SNR = 9 dB, v<sub>ry</sub> = 494 m/s, T<sub>y1</sub> = 1132 m, T<sub>z1</sub> = 80 m, v<sub>ty1</sub> = −398 m/s, type of target 1: fighter, T<sub>y2</sub> = 1878 m, T<sub>z2</sub> = 67 m, v<sub>ty2</sub> = 21 m/s, type of target 2: missile, T<sub>y3</sub> = 754 m, T<sub>z3</sub> = 55 m, v<sub>ty3</sub> = 259 m/s, type of target 3: missile; (<b>d</b>) v<sub>10</sub> = 3 m/s, B<sub>W</sub> = 40 MHz, SNR = 20 dB, v<sub>ry</sub> = 513 m/s, T<sub>y1</sub> = 1103 m, T<sub>z1</sub> = 62 m, v<sub>ty1</sub> = −582 m/s, type of target 1: fighter, T<sub>y2</sub> = 679 m, T<sub>z2</sub> = 73 m, v<sub>ty2</sub> = 360 m/s, type of target 2: UAV, T<sub>y3</sub> = 763 m, T<sub>z3</sub> = 55 m, v<sub>ty3</sub> = 306 m/s, type of target 3: missile.</p>
Figure 19
<p>Detection results of simulated RD maps with low-speed targets. Simulation parameters: (<b>a</b>) v<sub>10</sub> = 1 m/s, B<sub>W</sub> = 16 MHz, SNR = 8 dB, v<sub>ry</sub> = 600 m/s, T<sub>y</sub> = 817 m, T<sub>z</sub> = 44 m, v<sub>ty</sub> = 20 m/s, type of target 1: missile; (<b>b</b>) v<sub>10</sub> = 3 m/s, B<sub>W</sub> = 20 MHz, SNR = 9 dB, v<sub>ry</sub> = 300 m/s, T<sub>y</sub> = 493 m, T<sub>z</sub> = 50 m, v<sub>ty</sub> = −47 m/s, type of target 1: missile; (<b>c</b>) v<sub>10</sub> = 4 m/s, B<sub>W</sub> = 22 MHz, SNR = 8 dB, v<sub>ry</sub> = 590 m/s, T<sub>y1</sub> = 425 m, T<sub>z1</sub> = 56 m, v<sub>ty1</sub> = 314 m/s, type of target 1: missile, T<sub>y2</sub> = 1182 m, T<sub>z2</sub> = 40 m, v<sub>ty2</sub> = −43 m/s, type of target 2: fighter; (<b>d</b>) v<sub>10</sub> = 2 m/s, B<sub>W</sub> = 30 MHz, SNR = 18 dB, v<sub>ry</sub> = 467 m/s, T<sub>y1</sub> = 1205 m, T<sub>z1</sub> = 78 m, v<sub>ty1</sub> = −456 m/s, type of target 1: missile, T<sub>y2</sub> = 519 m, T<sub>z2</sub> = 53 m, v<sub>ty2</sub> = 352 m/s, type of target 2: UAV, T<sub>y3</sub> = 1216 m, T<sub>z3</sub> = 78 m, v<sub>ty3</sub> = −19 m/s, type of target 3: fighter.</p>
Figure 20
<p>(<b>a</b>–<b>d</b>) Detection results of real-measured RD maps.</p>
Figure 21
<p>Detection results of real-measured RD maps.</p>
Figure 22
<p>Schematic of different distance ranges.</p>
Figure 23
<p>Average probability that the target is correctly detected at different locations.</p>
Figure 24
<p>False alarm rate versus different confidence thresholds.</p>
Figure A1
<p>The two-scale model.</p>
Figure A2
<p>The <span class="html-italic">i</span>-th reflection on the triangular facet <math display="inline"><semantics> <msub> <mi>S</mi> <mi>n</mi> </msub> </semantics></math>.</p>
24 pages, 8214 KiB  
Article
Research on Sea Clutter Simulation Method Based on Deep Cognition of Characteristic Parameters
by Peng Zeng, Yushi Zhang, Xiaoyun Xia, Jinpeng Zhang, Pengbo Du, Zhiheng Hua and Shuhan Li
Remote Sens. 2024, 16(24), 4741; https://doi.org/10.3390/rs16244741 - 19 Dec 2024
Viewed by 625
Abstract
The development of radar systems requires extensive testing, but field experiments are costly and time-consuming, so sea clutter simulation is of great significance for evaluating radar detection performance. Traditional clutter simulation methods cannot generate clutter from a description of the environmental parameters, which leaves a gap between simulation and practical applications. Therefore, this paper proposes a sea clutter simulation method based on deep cognition of characteristic parameters. Firstly, the proposed method constructs a shared multi-task neural network, which compensates for the lack of integrated prediction of the multi-dimensional characteristic parameters of sea clutter. Furthermore, based on the predicted clutter characteristic parameters, combined with the spatial–temporal correlated K-distribution clutter simulation method and considering the modulation of the radar antenna pattern, the whole process of end-to-end simulation from measurement condition parameters to clutter data is accomplished for the first time. Finally, four metrics are used for a comprehensive evaluation of the simulated clutter data. In experiments with measured data, the clutter simulated by this method shows over 93% correlation in statistical characteristics with the measured data, demonstrating that the method can accurately simulate sea clutter from measured condition parameters.
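The SIRP-style K-distributed simulation the article builds on can be sketched as a Gamma texture modulating a correlated complex-Gaussian speckle. This toy version uses an AR(1) pulse-to-pulse correlation and one texture draw per range cell; the paper instead imposes measured spatial–temporal correlation and parameters predicted by its network, so treat the shape/correlation choices below as placeholders.

```python
import numpy as np

def k_clutter(n_cells, n_pulses, nu, mean_power, rho, rng):
    """Compound-Gaussian (K-distributed) clutter via the SIRP recipe:
    clutter = sqrt(Gamma texture) x correlated unit-power speckle."""
    # texture: one Gamma draw per range cell, E[tau] = mean_power
    tau = rng.gamma(shape=nu, scale=mean_power / nu, size=(n_cells, 1))
    # speckle: unit-power complex Gaussian with AR(1) pulse correlation rho
    w = (rng.standard_normal((n_cells, n_pulses))
         + 1j * rng.standard_normal((n_cells, n_pulses))) / np.sqrt(2.0)
    g = np.empty_like(w)
    g[:, 0] = w[:, 0]
    for t in range(1, n_pulses):
        g[:, t] = rho * g[:, t - 1] + np.sqrt(1.0 - rho**2) * w[:, t]
    return np.sqrt(tau) * g
```

The shape parameter `nu` controls spikiness (small `nu` gives heavier tails), while `rho` sets the slow-time correlation, which is what the Doppler spectrum of the simulated clutter inherits.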
Figure 1
<p>Data measurement location and collection equipment. (<b>a</b>) Measurement location of sea clutter data; (<b>b</b>) radar; (<b>c</b>) buoy; (<b>d</b>) anemometer.</p>
Figure 2
<p>Box plot of marine environment parameter distribution. (<b>a</b>) Distribution of significant wave height, maximum wave height, and average wave height parameters. (<b>b</b>) Distribution of maximum wave period, average wave period, and flow rate parameters. (<b>c</b>) Distribution of maximum wind speed, average wind speed, and water temperature parameters. (<b>d</b>) Distribution of wave direction, wind direction, and flow direction parameters.</p>
Figure 3
<p>Heatmap of the correlation between the first ten range cells.</p>
Figure 4
<p>A comparison of the spatial correlation of the measured clutter data and the correlation curves calculated using the two methods under different sea state levels: (<b>a</b>) under sea state level 2; (<b>b</b>) under sea state level 5.</p>
Figure 5
<p>Shared multi-task neural network model structure based on the combination of the ResNet18 and DNN5 models (The left side of the model is the input measurement condition parameter; the output of the subtask network on the right side of the model is the sea clutter characteristic parameter).</p>
Figure 6
<p>The changes of the weights <math display="inline"><semantics> <mrow> <mi>log</mi> <msub> <mi>σ</mi> <mi>i</mi> </msub> </mrow> </semantics></math> for each characteristic parameter prediction task during model training.</p>
Figure 7
<p>Scatter density plots for predicting sea clutter characteristic parameters. (<b>a</b>) Shape parameter. (<b>b</b>) Scale parameter. (<b>c</b>) Doppler shift. (<b>d</b>) Spectrum bandwidth. (<b>e</b>) Reflectivity. (<b>f</b>) Attenuation factor.</p>
Figure 8
<p>Schematic diagram of the sea clutter time series simulation method based on the SIRP.</p>
Figure 9
<p>Normalized spatial–temporal correlated K-distribution sea clutter simulation method.</p>
Figure 10
<p>Comparison of measured clutter and simulated clutter data. (<b>a</b>) Measured clutter data under sea state 2; (<b>b</b>) simulated clutter data under sea state 2; (<b>c</b>) measured clutter data under sea state 5; (<b>d</b>) simulated clutter data under sea state 5.</p>
Figure 11
<p>Comparison of time–domain data between measured clutter and simulated clutter. (<b>a</b>) Measured clutter data under sea state 2; (<b>b</b>) simulated clutter data under sea state 2; (<b>c</b>) measured clutter data under sea state 5; (<b>d</b>) simulated clutter data under sea state 5.</p>
Figure 12
<p>Comparison of PDF function curves between measured clutter and simulated clutter in time dimension: (<b>a</b>) sea state 2; (<b>b</b>) sea state 5.</p>
Figure 13
<p>Comparison of 1-CDF function curves between measured clutter and simulated clutter in the time dimension: (<b>a</b>) sea state 2; (<b>b</b>) sea state 5.</p>
Figure 14
<p>Doppler spectrum comparison between measured clutter and simulated clutter: (<b>a</b>) sea state 2; (<b>b</b>) sea state 5.</p>
Figure 15
<p>Comparison of correlation between measured clutter and simulated clutter in the distance dimension: (<b>a</b>) sea state 2; (<b>b</b>) sea state 5.</p>
25 pages, 9994 KiB  
Article
A Triple-Channel Network for Maritime Radar Targets Detection Based on Multi-Modal Features
by Kaiqi Wang and Zeyu Wang
Remote Sens. 2024, 16(24), 4662; https://doi.org/10.3390/rs16244662 - 13 Dec 2024
Viewed by 608
Abstract
Sea surface target detectors often suffer interference from complex sea surface factors such as sea clutter; high-performance detection is especially difficult when the signal-to-clutter ratio (SCR) is low. This paper proposes a triple-channel network model for maritime target detection based on multi-modal data fusion. The method improves on traditional multi-channel inputs by extracting three highly complementary multi-modal features from radar echoes: the time-frequency image, the phase sequence, and the correlation coefficient sequence. A network appropriate to the internal data structure of each feature is selected to build the triple-channel network, with the three features serving as the inputs of the respective channels. To reduce the coupling between multi-channel data, the SE block is introduced to optimize the feature vectors along the channel dimension and improve the data fusion strategy. The detection results are output by a false alarm control unit according to the given probability of false alarm (PFA). Experiments on the IPIX datasets verify that the proposed detector outperforms existing detectors in complex ocean scenes.
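The three input modalities named above can be extracted from a complex slow-time echo roughly as follows. The window length, hop, and maximum lag are illustrative choices, not the paper's settings.

```python
import numpy as np

def tf_image(x, win=64, hop=16):
    """Time-frequency image: magnitude STFT of the slow-time sequence."""
    n_frames = (len(x) - win) // hop + 1
    frames = np.stack([x[i * hop : i * hop + win] * np.hanning(win)
                       for i in range(n_frames)])
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)).T

def phase_sequence(x):
    """Unwrapped instantaneous phase of the echo."""
    return np.unwrap(np.angle(x))

def correlation_sequence(x, max_lag=32):
    """Normalised autocorrelation magnitude versus lag."""
    x0 = x - x.mean()
    p = np.mean(np.abs(x0) ** 2)
    return np.array([np.abs(np.mean(x0[l:] * np.conj(x0[:-l]))) / p
                     for l in range(1, max_lag + 1)])
```

A steady target return produces a near-linear phase ramp and a slowly decaying correlation sequence, whereas sea clutter decorrelates quickly; this complementarity is what the three channels exploit.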
Graphical abstract
Figure 1
<p>Average SCRs of primary cells in ten datasets at the four polarization modes.</p>
Figure 2
<p>Typical TF spectrum of target and clutter samples from IPIX dataset in observation time of 1 s: (<b>a</b>) clutter; (<b>b</b>) target.</p>
Figure 3
<p>Typical phase sequence of target and clutter samples from IPIX dataset in observation time of 1 s: (<b>a</b>) clutter; (<b>b</b>) target.</p>
Figure 4
<p>Typical correlation coefficient sequence of target and clutter samples from the IPIX dataset in observation time of 1 s: (<b>a</b>) clutter; (<b>b</b>) target.</p>
Figure 5
<p>Structure of the proposed detector.</p>
Figure 6
<p>Structure of proposed triple-channel model.</p>
Figure 7
<p>The network structures used in the Triple-channel model: (<b>a</b>) Layer1 to Layer5 in the TF processing unit; (<b>b</b>) LeNet network model in the correlation processing unit.</p>
Figure 8
<p>The LSTM structure used in triple-channel model.</p>
Figure 9
<p>The SE attention structure used in triple-channel model.</p>
Figure 10
<p>ROC curves of different network models for each of the three input features, respectively, using #IPIX_03 with SCR of 0 dB: (<b>a</b>) ResNet18 and VGG16 performance for TF feature; (<b>b</b>) LeNet and LSTM performance for phase feature; (<b>c</b>) LeNet and LSTM performance for correlation feature.</p>
Figure 11
<p>ROC curves of different network models for each of the three input features, respectively, using #IPIX_02 with SCR of 5 dB: (<b>a</b>) ResNet18 and VGG16 performance for TF feature; (<b>b</b>) LeNet and LSTM performance for phase feature; (<b>c</b>) LeNet and LSTM performance for correlation feature.</p>
Figure 12
<p>ROC curves of different network models for each of the three input features, respectively, using #IPIX_10 with SCR of 12 dB: (<b>a</b>) ResNet18 and VGG16 performance for TF feature; (<b>b</b>) LeNet and LSTM performance for phase feature; (<b>c</b>) LeNet and LSTM performance for correlation feature.</p>
Figure 13
<p>ROC curves of datasets in different sea states: (<b>a</b>) #IPIX_01 (HH, the fourth level sea state); (<b>b</b>) #IPIX_07 (HH, the third level sea state); (<b>c</b>) #IPIX_03 (HH, the second level sea state).</p>
Figure 14
<p>False alarm loss curves of datasets in different sea states: (<b>a</b>) #IPIX_01 (HH, the fourth level sea state); (<b>b</b>) #IPIX_07 (HH, the third level sea state); (<b>c</b>) #IPIX_03 (HH, the second level sea state).</p>
Figure 15
<p>Doppler shift of sea clutter of the ten IPIX datasets.</p>
Figure 16
<p>Images of the #IPIX_01 dataset at the HH polarization: (<b>a</b>) the range-time intensity image; (<b>b</b>) the TF spectrum of the primary cell; (<b>c</b>) the TF spectrum of the clutter-only cell.</p>
Figure 17
<p>Images of the #IPIX_03 dataset at the HH polarization: (<b>a</b>) the range-time intensity image; (<b>b</b>) the TF spectrum of the primary cell; (<b>c</b>) the TF spectrum of the clutter-only cell.</p>
Full article ">Figure 18
<p>Images of the #IPIX_10 dataset at the HH polarization: (<b>a</b>) the range-time intensity image; (<b>b</b>) the TF spectrum of the primary cell; (<b>c</b>) the TF spectrum of the clutter-only cell.</p>
Full article ">Figure 19
<p>Detection performance of different detectors in IPIX database in 1.024 s observation time: (<b>a</b>) HH; (<b>b</b>) HV; (<b>c</b>) VH; (<b>d</b>) VV.</p>
Full article ">Figure 20
<p>ROC curves of the proposed detector, the tri-feature detector [<a href="#B11-remotesensing-16-04662" class="html-bibr">11</a>], the TF-tri-feature detector [<a href="#B12-remotesensing-16-04662" class="html-bibr">12</a>], and the phase-feature detector [<a href="#B19-remotesensing-16-04662" class="html-bibr">19</a>] in 1.024 s observation time and different polarization modes: (<b>a</b>) HH; (<b>b</b>) HV; (<b>c</b>) VH; (<b>d</b>) VV.</p>
Full article ">
22 pages, 1347 KiB  
Article
Semi-Empirical Approach to Evaluating Model Fit for Sea Clutter Returns: Focusing on Future Measurements in the Adriatic Sea
by Bojan Vondra
Entropy 2024, 26(12), 1069; https://doi.org/10.3390/e26121069 - 9 Dec 2024
Viewed by 559
Abstract
A method for evaluating Kullback–Leibler (KL) divergence and Squared Hellinger (SH) distance between empirical data and a model distribution is proposed. This method exclusively utilises the empirical Cumulative Distribution Function (CDF) of the data and the CDF of the model, avoiding data processing [...] Read more.
A method for evaluating Kullback–Leibler (KL) divergence and Squared Hellinger (SH) distance between empirical data and a model distribution is proposed. This method exclusively utilises the empirical Cumulative Distribution Function (CDF) of the data and the CDF of the model, avoiding data processing such as histogram binning. The proposed method converges almost surely, with the proof based on the use of exponentially distributed waiting times. An example demonstrates convergence of the KL divergence and SH distance to their true values when utilising the Generalised Pareto (GP) distribution as empirical data and the K distribution as the model. Another example illustrates the goodness of fit of these (GP and K-distribution) models to real sea clutter data from the widely used Intelligent PIxel processing X-band (IPIX) measurements. The proposed method can be applied to assess the goodness of fit of various models (not limited to GP or K distribution) to clutter measurement data such as those from the Adriatic Sea. Distinctive features of this small and immature sea, like the presence of over 1300 islands that affect local wind and wave patterns, are likely to result in an amplitude distribution of sea clutter returns that differs from predictions of models designed for oceans or open seas. However, to the author’s knowledge, no data on this specific topic are currently available in the open literature, and such measurements have yet to be conducted. Full article
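The CDF-only estimator is specific to the article, but the quantity it targets can be illustrated with a plain Monte Carlo baseline: a minimal numpy sketch estimating the forward KL divergence between a Generalised Pareto "data" distribution and a mis-specified GP model. All parameter values are illustrative assumptions, not taken from the article, and this is not the authors' semi-empirical estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generalised Pareto density (shape xi > 0, scale sigma), support x >= 0.
def gp_pdf(x, xi, sigma):
    return (1.0 / sigma) * (1.0 + xi * x / sigma) ** (-1.0 / xi - 1.0)

# Draw GP samples by inverse-transform sampling.
def gp_rvs(n, xi, sigma):
    u = rng.uniform(size=n)
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

# Monte Carlo estimate of forward KL D(p || q) from samples of p.
def kl_mc(samples, logp, logq):
    return np.mean(logp(samples) - logq(samples))

x = gp_rvs(200_000, xi=0.3, sigma=1.0)
kl = kl_mc(x,
           lambda s: np.log(gp_pdf(s, 0.3, 1.0)),
           lambda s: np.log(gp_pdf(s, 0.5, 1.2)))
print(f"estimated D(p||q) = {kl:.4f}")
```

The article's contribution is precisely that such an estimate can be obtained from the empirical and model CDFs alone, without density evaluations or histogram binning; the sketch above only fixes the ground truth the estimator should converge to.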

Figure 1
<p>Comparison of empirical and semi-empirical estimates of KL divergence. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 2
<p>Comparison of MSE of empirical and semi-empirical estimates of KL divergence. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 3
<p>Comparison of empirical and semi-empirical estimates. (<b>a</b>) SH distance estimation. (<b>b</b>) MSE of SH distance estimation.</p>
Full article ">Figure 4
<p>Comparison of empirical and semi-empirical estimates of KL divergence using GP distribution as model and real sea clutter data. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 5
<p>Comparison of empirical and semi-empirical estimates of KL divergence using K distribution as model and real sea clutter data. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 6
<p>Comparison of variances of empirical and semi-empirical estimates of KL divergence using GP and K distribution as models and real sea clutter data. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 7
<p>Comparison of empirical and semi-empirical estimates of SH distance using GP and K distribution as models and real sea clutter data. (<b>a</b>) K distribution. (<b>b</b>) GP distribution.</p>
Full article ">Figure 8
<p>Comparison of variances of empirical and semi-empirical estimates of SH distance using GP and K distributions as models and real sea clutter data.</p>
Full article ">Figure 9
<p>Semi-empirical estimation of KL divergence between an empirical dataset following a unit-mean exponential distribution, <math display="inline"><semantics> <mrow> <mi>Exp</mi> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>, and a model distribution following a normal distribution, <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>4</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) Forward estimation. (<b>b</b>) Reverse estimation.</p>
Full article ">Figure 10
<p>MSE of the KL divergence estimation between an empirical dataset following a unit-mean exponential distribution, <math display="inline"><semantics> <mrow> <mi>Exp</mi> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>, and a model distribution following a normal distribution, <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>4</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 11
<p>Semi-empirical estimation of SH distance between empirical dataset of samples from normal distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>4</mn> <mo>)</mo> </mrow> </semantics></math> and exponential model distribution <math display="inline"><semantics> <mrow> <mi>Exp</mi> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) SH distance estimation. (<b>b</b>) MSE of SH distance estimation.</p>
Full article ">Figure 12
<p>Semi-empirical estimation of the KL divergence between two normal distributions, with the empirical dataset following <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> and the model distribution following <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) Forward estimation. (<b>b</b>) Reverse estimation.</p>
Full article ">Figure 13
<p>MSE of KL divergence estimation between two normal distributions, empirical dataset following <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> and model distribution following <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) Forward. (<b>b</b>) Reverse.</p>
Full article ">Figure 14
<p>Semi-empirical estimation of SH distance between empirical dataset of samples from normal distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> and normal model distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) SH distance estimation. (<b>b</b>) MSE of SH distance estimation.</p>
Full article ">Figure 15
<p>Semi-empirical estimation of SH distance between empirical dataset of samples from normal distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> and normal model distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) SH distance estimation. (<b>b</b>) MSE of SH distance estimation.</p>
Full article ">Figure 16
<p>Semi-empirical estimation of SH distance between empirical dataset of samples from normal distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> and normal model distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>2</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) SH distance estimation. (<b>b</b>) MSE of SH distance estimation.</p>
Full article ">Figure 17
<p>Semi-empirical estimation of SH distance between empirical dataset of samples from normal distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>4</mn> <mo>)</mo> </mrow> </semantics></math> and normal model distribution <math display="inline"><semantics> <mrow> <mi mathvariant="script">N</mi> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>. (<b>a</b>) SH distance estimation. (<b>b</b>) MSE of SH distance estimation.</p>
Full article ">
21 pages, 16950 KiB  
Article
Retrieval of Three-Dimensional Wave Surfaces from X-Band Marine Radar Images Utilizing Enhanced Pix2Pix Model
by Lingyi Hou, Xiao Wang, Bo Yang, Zhiyuan Wei, Yuwen Sun and Yuxiang Ma
J. Mar. Sci. Eng. 2024, 12(12), 2229; https://doi.org/10.3390/jmse12122229 - 5 Dec 2024
Viewed by 570
Abstract
In this study, we propose a novel method for retrieving the three-dimensional (3D) wave surface from sea clutter using both simulated and measured data. First, the linear wave superposition model and modulation principle are employed to generate simulated datasets comprising 3D wave surfaces [...] Read more.
In this study, we propose a novel method for retrieving the three-dimensional (3D) wave surface from sea clutter using both simulated and measured data. First, the linear wave superposition model and modulation principle are employed to generate simulated datasets comprising 3D wave surfaces and corresponding sea clutter. Subsequently, we develop a Pix2Pix model enhanced with a self-attention mechanism and a multiscale discriminator to effectively capture the nonlinear relationship between the simulated 3D wave surfaces and sea clutter. The model’s performance is evaluated through error analysis, comparisons of wave number spectra, and differences in wave surface reconstructions using a dedicated test set. Finally, the trained model is applied to reconstruct wave surfaces from sea clutter data collected aboard a ship, with results benchmarked against those derived from the Schrödinger equation. The findings demonstrate that the proposed model excels in preserving high-frequency image details while ensuring precise alignment between reconstructed images. Furthermore, it achieves superior retrieval accuracy compared to traditional approaches, highlighting its potential for advancing wave surface retrieval techniques. Full article
(This article belongs to the Section Physical Oceanography)
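The figure list mentions shadow and tilt modulation, the two geometric effects that turn a wave surface into radar sea clutter. A minimal 1-D numpy sketch of shadow modulation, assuming an illustrative antenna height and wave profile (these are not the authors' simulation parameters):

```python
import numpy as np

# 1-D wave profile along a radar range line (superposition of sinusoids);
# amplitudes and wavelengths are illustrative only.
r = np.linspace(50.0, 550.0, 1000)                   # range (m)
eta = (0.8 * np.sin(2 * np.pi * r / 60.0)
       + 0.3 * np.sin(2 * np.pi * r / 17.0 + 1.0))   # surface elevation (m)

h_ant = 15.0  # assumed antenna height above mean sea level (m)

# Geometric shadowing: a point is visible only if its elevation angle seen
# from the antenna is at least that of every point closer in range.
tan_angle = (eta - h_ant) / r
visible = tan_angle >= np.maximum.accumulate(tan_angle)

# Shadowed cells return (near) zero power; visible cells are tilt-modulated
# here simply by the local slope facing the radar.
slope = np.gradient(eta, r)
echo = np.where(visible, np.clip(-slope, 0.0, None), 0.0)
frac_shadowed = 1.0 - visible.mean()
print(f"fraction shadowed: {frac_shadowed:.2f}")
```

The article learns the inverse of this forward mapping (clutter back to wave surface) with an attention-augmented Pix2Pix model; the sketch only shows how the training pairs can be simulated.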
Figure 1
<p>Linear superposition model of sea waves.</p>
Full article ">Figure 2
<p>Example of simulated wave surface.</p>
Full article ">Figure 3
<p>Schematic diagram of shadow modulation.</p>
Full article ">Figure 4
<p>Schematic diagram of tilt modulation.</p>
Full article ">Figure 5
<p>Example of sea clutter.</p>
Full article ">Figure 6
<p>Results of sensitivity analysis. (<b>a</b>) Sensitivity analysis of spatial resolution; (<b>b</b>) sensitivity analysis of time step.</p>
Full article ">Figure 7
<p>3D wave surface and sea clutter data pair. (<b>a</b>) 3D wave surface; (<b>b</b>) sea clutter.</p>
Full article ">Figure 8
<p>Overall model structure.</p>
Full article ">Figure 9
<p>Structure of generator.</p>
Full article ">Figure 10
<p>Structure of multiscale discriminator.</p>
Full article ">Figure 11
<p>Comparison of wave number spectrum. Original wave surface (<b>left</b>); retrieved wave surface (<b>right</b>). (<b>a</b>) Sea state level 3, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>0.73</mn> <mo> </mo> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>9.24</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; (<b>b</b>) Sea state level 4, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>1.26</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>,</mo> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>9.45</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; (<b>c</b>) Sea state level 5, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>2.66</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>,</mo> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>17.59</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; (<b>d</b>) Sea state level 6, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>4.14</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>,</mo> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>12.4</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 12
<p>Comparison of wave surface difference. Original wave surface (<b>left</b>); retrieved wave surface (<b>right</b>); wave surface difference (down). (<b>a</b>) Sea state level 3, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>0.73</mn> <mo> </mo> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>9.24</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; (<b>b</b>) Sea state level 4, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>1.26</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>,</mo> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>9.45</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; (<b>c</b>) Sea state level 5, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>2.66</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>,</mo> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>17.59</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>; (<b>d</b>) Sea state level 6, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>H</mi> </mrow> <mrow> <mi>s</mi> </mrow> </msub> <mo>=</mo> <mn>4.14</mn> <mo> </mo> <mi mathvariant="normal">m</mi> <mo>,</mo> <msub> <mrow> <mi>T</mi> </mrow> <mrow> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>12.4</mn> <mo> </mo> <mi mathvariant="normal">s</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 13
<p>Software interface (<b>left</b>), example of measured radar image (<b>right</b>).</p>
Full article ">Figure 14
<p>Image displayed by WinWaMoS (<b>left</b>) and reproduced by MATLAB (<b>right</b>).</p>
Full article ">Figure 15
<p>A sequence of measured radar images.</p>
Full article ">Figure 16
<p>Comparison of wave height and envelopes derived from the NLS equation. (<b>a</b>) Atten-Pix2pix; (<b>b</b>) Pix2pix; (<b>c</b>) CNNSA.</p>
Full article ">
23 pages, 4780 KiB  
Article
Characteristic Description and Statistical Model-Based Method for Sea Clutter Modeling
by Huafeng He, Zhen Li, Xi Zhang, Jianguang Jia, Yaomin He and Yongquan You
Remote Sens. 2024, 16(23), 4429; https://doi.org/10.3390/rs16234429 - 26 Nov 2024
Viewed by 741
Abstract
The modeling and analysis of sea clutter are of great significance in radar target detection studies in marine environments. Sea clutter typically exhibits non-Gaussian characteristics and spatiotemporal correlations, posing challenges for modeling, especially when generating simulation data of continuous correlated non-Gaussian random processes. [...] Read more.
The modeling and analysis of sea clutter are of great significance in radar target detection studies in marine environments. Sea clutter typically exhibits non-Gaussian characteristics and spatiotemporal correlations, posing challenges for modeling, especially when generating simulation data of continuous correlated non-Gaussian random processes. This paper proposes a novel method for sea clutter modeling. First, feature description functions are constructed to individually characterize the amplitude, temporal, and spatial correlations of sea clutter, allowing for an accurate depiction of its characteristics with fewer parameters. Subsequently, simulation data are generated based on these feature description functions, satisfying the amplitude distribution, temporal correlation, and spatial correlation characteristics of sea clutter. Additionally, complex signal forms are introduced in the underlying signal processing to generate texture and speckle components of sea clutter, enhancing the alignment of simulation data with actual data. Through comparison with measured sea clutter data, the proposed method has been shown to accurately simulate complex sea clutter with real-world characteristics. Full article
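The texture/speckle decomposition mentioned at the end of the abstract can be sketched with the standard SIRP-style generator: a gamma-distributed texture modulating a temporally correlated complex Gaussian speckle, which yields a K-distributed amplitude. All parameters below are illustrative assumptions; this is the textbook construction, not the authors' feature-description method.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096          # number of pulses in the burst
nu = 1.5          # K-distribution shape (gamma texture shape), illustrative
rho = 0.95        # lag-one temporal correlation of the speckle, illustrative

# Correlated complex Gaussian speckle via a first-order AR recursion.
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
s = np.empty(n, dtype=complex)
s[0] = w[0]
for k in range(1, n):
    s[k] = rho * s[k - 1] + np.sqrt(1.0 - rho**2) * w[k]

# Gamma-distributed texture (unit mean), held constant over the short burst.
tau = rng.gamma(shape=nu, scale=1.0 / nu)

clutter = np.sqrt(tau) * s        # compound-Gaussian (K-distributed) burst
amp = np.abs(clutter)

# The envelope inherits the speckle's temporal correlation.
r1 = np.corrcoef(amp[:-1], amp[1:])[0, 1]
print(f"lag-1 amplitude correlation = {r1:.2f}")
```

Spatial correlation can be imposed the same way along the range dimension; the article's method goes further by matching measured amplitude, temporal, and spatial feature functions simultaneously.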
Figure 1
<p>Flowchart of the proposed method in this paper.</p>
Full article ">Figure 2
<p>Amplitude feature extraction.</p>
Full article ">Figure 3
<p>Temporal correlation feature extraction.</p>
Full article ">Figure 4
<p>Spatial correlation feature extraction.</p>
Full article ">Figure 5
<p>Characteristic functions of the generated speckle component.</p>
Full article ">Figure 6
<p>Amplitude distribution function of the generated texture component.</p>
Full article ">Figure 7
<p>The feature functions of the mid- and short-range sea clutter data from the file &#8220;20210106155330_01_staring&#8221; generated by the proposed method.</p>
Full article ">Figure 8
<p>The feature functions of the long-range sea clutter data from the file &#8220;20210106155330_01_staring&#8221; generated by the proposed method.</p>
Full article ">Figure 9
<p>Characteristic functions of the sea clutter data from the file “19980204_221104_ANTSTEP” generated by the proposed method.</p>
Full article ">Figure 10
<p>Characteristic functions of the sea clutter data from the file “19980204_220325_ANTSTEP” generated by the proposed method.</p>
Full article ">
18 pages, 7440 KiB  
Article
A Novel Method for the Estimation of Sea Surface Wind Speed from SAR Imagery
by Zahra Jafari, Pradeep Bobby, Ebrahim Karami and Rocky Taylor
J. Mar. Sci. Eng. 2024, 12(10), 1881; https://doi.org/10.3390/jmse12101881 - 20 Oct 2024
Cited by 1 | Viewed by 1184
Abstract
Wind is one of the important environmental factors influencing marine target detection as it is the source of sea clutter and also affects target motion and drift. The accurate estimation of wind speed is crucial for developing an efficient machine learning (ML) model [...] Read more.
Wind is one of the important environmental factors influencing marine target detection as it is the source of sea clutter and also affects target motion and drift. The accurate estimation of wind speed is crucial for developing an efficient machine learning (ML) model for target detection. For example, high wind speeds make it more likely to mistakenly detect clutter as a marine target. This paper presents a novel approach for the estimation of sea surface wind speed (SSWS) and direction utilizing satellite imagery through innovative ML algorithms. Unlike existing methods, our proposed technique does not require wind direction information and normalized radar cross-section (NRCS) values and therefore can be used for a wide range of satellite images when the initial calibrated data are not available. In the proposed method, we extract features from co-polarized (HH) and cross-polarized (HV) satellite images and then fuse advanced regression techniques with SSWS estimation. The comparison between the proposed model and three well-known C-band models (CMODs)—CMOD-IFR2, CMOD5N, and CMOD7—further indicates the superior performance of the proposed model. The proposed model achieved the lowest Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), with values of 0.97 m/s and 0.62 m/s for calibrated images, and 1.37 m/s and 0.97 m/s for uncalibrated images, respectively, on the RCM dataset. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Marine Environmental Monitoring)
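The feature-plus-regression stage can be illustrated with a toy numpy sketch: synthetic patch statistics with an assumed linear link to wind speed, fitted by least squares and scored with the same RMSE/MAE metrics the article reports. The features and their link to wind speed are invented for illustration; the article's actual feature set and regressors are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for patch features from HH/HV intensity images: simulated
# (mean HH, std HH, mean HV) statistics with a known linear link to SSWS.
n = 500
wind = rng.uniform(2.0, 20.0, n)                      # "true" SSWS (m/s)
feats = np.column_stack([
    0.05 * wind + rng.normal(0, 0.05, n),             # mean HH intensity
    0.02 * wind + rng.normal(0, 0.05, n),             # std of HH intensity
    0.01 * wind + rng.normal(0, 0.05, n),             # mean HV intensity
])

# Least-squares linear regression with an intercept term.
X = np.column_stack([feats, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, wind, rcond=None)
pred = X @ coef

rmse = np.sqrt(np.mean((pred - wind) ** 2))
mae = np.mean(np.abs(pred - wind))
print(f"RMSE = {rmse:.2f} m/s, MAE = {mae:.2f} m/s")
```

Because the regression works directly on image-derived features, no NRCS calibration or external wind-direction prior is needed, which is the practical advantage the abstract emphasizes over the CMOD geophysical model functions.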
Figure 1
<p>Distribution of wind direction and wind speed.</p>
Full article ">Figure 2
<p>NRCS vs. incidence angle for different wind speeds and directions using CMOD5N and CMOD7 functions.</p>
Full article ">Figure 3
<p>Scatter plots of real versus calculated wind speed using (<b>a</b>) CMOD5, (<b>b</b>) CMOD-IFR, and (<b>c</b>) CMOD7 models with HH polarization.</p>
Full article ">Figure 4
<p>Scatter plots of real versus calculated wind speed using (<b>a</b>) CMOD5, (<b>b</b>) CMOD-IFR, and (<b>c</b>) CMOD7 models after compensation for polarization.</p>
Full article ">Figure 5
<p>Distribution of intensities for HH and HV polarizations at high and low wind speeds.</p>
Full article ">Figure 6
<p>Block diagram of proposed system.</p>
Full article ">Figure 7
<p>Effect of despeckling filter on RCM image.</p>
Full article ">Figure 8
<p>Histogram of the introduced feature extracted from calibrated data, with orange representing low wind, green representing mid wind, and purple representing high wind.</p>
Full article ">Figure 9
<p>Histogram of the introduced feature extracted from uncalibrated data, with orange representing low wind, green representing mid wind, and purple representing high wind.</p>
Full article ">Figure 10
<p>Comparisons of retrieved SSWS using concatenated models with different features from the calibrated RCM dataset.</p>
Full article ">Figure 11
<p>Comparisons of retrieved SSWS using concatenated models with different features from the uncalibrated RCM dataset.</p>
Full article ">Figure 12
<p>The closest region, where both RCM data and buoy station data are available.</p>
Full article ">Figure 13
<p>ERA5 vs. buoy wind speeds for the south of Greenland across all seasons in 2023.</p>
Full article ">Figure 14
<p>Testing the proposed model in the south of Greenland using buoy wind speed data.</p>
Full article ">
22 pages, 11121 KiB  
Article
Joint Prediction of Sea Clutter Amplitude Distribution Based on a One-Dimensional Convolutional Neural Network with Multi-Task Learning
by Longshuai Wang, Liwen Ma, Tao Wu, Jiaji Wu and Xiang Luo
Remote Sens. 2024, 16(20), 3891; https://doi.org/10.3390/rs16203891 - 19 Oct 2024
Viewed by 1269
Abstract
Accurate modeling of sea clutter amplitude distribution plays a crucial role in enhancing the performance of marine radar. Due to variations in radar system parameters and oceanic environmental factors, sea clutter amplitude distribution exhibits multiple distribution types. Focusing solely on a single type [...] Read more.
Accurate modeling of sea clutter amplitude distribution plays a crucial role in enhancing the performance of marine radar. Due to variations in radar system parameters and oceanic environmental factors, sea clutter amplitude distribution exhibits multiple distribution types. Focusing solely on a single type of amplitude prediction lacks the necessary flexibility in practical applications. Therefore, based on the measured X-band radar sea clutter data from Yantai, China in 2022, this paper proposes a multi-task one-dimensional convolutional neural network (MT1DCNN) and designs a dedicated input feature set for the joint prediction of the type and parameters of sea clutter amplitude distribution. The results indicate that the MT1DCNN model achieves an F1 score of 97.4% for classifying sea clutter amplitude distribution types under HH polarization and a root-mean-square error (RMSE) of 0.746 for amplitude distribution parameter prediction. Under VV polarization, the F1 score is 96.74% and the RMSE is 1.071. By learning the associations between sea clutter amplitude distribution types and parameters, the model’s predictions become more accurate and reliable, providing significant technical support for maritime target detection. Full article
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
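A multi-task network of this kind is typically trained on a weighted sum of a classification loss (distribution type) and a regression loss (shape and scale parameters). A minimal numpy sketch of that combined objective follows; the head shapes, labels, and weight `lam` are illustrative assumptions, not the MT1DCNN's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy joint head: logits over {K, Pareto} distribution types plus two
# regressed parameters (shape, scale) per sample.
n = 8
logits = rng.normal(size=(n, 2))          # classification head output
params = rng.normal(size=(n, 2))          # regression head output
y_type = rng.integers(0, 2, size=n)       # true distribution type
y_par = rng.normal(size=(n, 2))           # true (shape, scale)

# Multi-task objective: cross-entropy + lam * MSE, the usual form for
# joint classification/regression training (lam is a tuning weight).
p = softmax(logits)
ce = -np.mean(np.log(p[np.arange(n), y_type]))
mse = np.mean((params - y_par) ** 2)
lam = 0.5
loss = ce + lam * mse
print(f"CE = {ce:.3f}, MSE = {mse:.3f}, total = {loss:.3f}")
```

Sharing one convolutional backbone across both heads is what lets the model exploit the association between amplitude distribution type and its parameters that the abstract credits for the accuracy gain.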
Figure 1
<p>The architecture of MT1DCNN.</p>
Full article ">Figure 2
<p>The overall temporal characteristics of T1 and T2 pulses: (<b>a</b>) T1 pulse echo (dB). (<b>b</b>) T2 pulse echo (dB).</p>
Full article ">Figure 3
<p>The data distribution of shape parameters for K and Pareto distributions: (<b>a</b>) K distribution of sea clutter in HH polarimetric radar. (<b>b</b>) Pareto distribution of sea clutter in HH polarimetric radar. (<b>c</b>) K distribution of sea clutter in VV polarimetric radar. (<b>d</b>) Pareto distribution of sea clutter in VV polarimetric radar.</p>
Figure 3 Cont.">
Full article ">Figure 4
<p>Sea clutter amplitude distribution joint prediction results of the MT1DCNN model are presented as follows: for the classification task, outcomes are depicted via a confusion matrix heatmap, whereas the regression task results are illustrated using scatter density plots. (<b>a</b>) Amplitude distribution types of HH polarization. (<b>b</b>) Amplitude distribution types of VV polarization. (<b>c</b>) Shape parameter of HH polarization. (<b>d</b>) Shape parameter of VV polarization. (<b>e</b>) Scale parameter of HH polarization. (<b>f</b>) Scale parameter of VV polarization.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>TEIC comparison of two methods: MLE, and MT1DCNN based on MLE. (<b>a</b>) TEIC value comparison under HH polarization. (<b>b</b>) Boxplot of TEIC values under HH polarization. (<b>c</b>) TEIC value comparison under VV polarization. (<b>d</b>) Boxplot of TEIC values under VV polarization.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Comparison of results for predicting the real sea clutter amplitude distribution using different parameter estimation methods: (<b>a</b>) Prediction results under HH polarization. (<b>b</b>) Prediction results under VV polarization.</p>
Full article ">Figure 7
<p>Comparison of training results of different models and epochs on the HH polarization sea clutter validation set: (<b>a</b>) F1 score. (<b>b</b>) Validation loss.</p>
Full article ">Figure 8
<p>Comparison of training results of different models and epochs on the VV polarization sea clutter validation set: (<b>a</b>) F1 score. (<b>b</b>) Validation loss.</p>
Full article ">
25 pages, 10372 KiB  
Article
A Dynamic False Alarm Rate Control Method for Small Target Detection in Non-Stationary Sea Clutter
by Yunlong Dong, Jifeng Wei, Hao Ding, Ningbo Liu, Zheng Cao and Hengli Yu
J. Mar. Sci. Eng. 2024, 12(10), 1770; https://doi.org/10.3390/jmse12101770 - 5 Oct 2024
Viewed by 977
Abstract
Sea surface non-stationarity poses significant challenges to sea-surface small target detection, particularly in maintaining a stable false alarm rate (FAR). In dynamic maritime scenarios with non-stationary characteristics, the non-stationarity of sea clutter can easily cause significant changes in the clutter feature space, leading [...] Read more.
Sea surface non-stationarity poses significant challenges to sea-surface small target detection, particularly in maintaining a stable false alarm rate (FAR). In dynamic maritime scenarios with non-stationary characteristics, the non-stationarity of sea clutter can easily cause significant changes in the clutter feature space, leading to a notable deviation between the preset FAR and the measured FAR. By analyzing the temporal and spatial variations in sea clutter, we model the relationship between the preset FAR and the measured FAR as a two-parameter linear function. To address the impact of sea surface non-stationarity on FAR, the model parameters are estimated in real time within the environment and used to guide the dynamic adjustment of the decision region. We applied the proposed method to both convex hull and support vector machine (SVM) detectors and conducted experiments using measured X-band sea-detecting datasets. Experiments demonstrate that the proposed method effectively reduces the deviation between the measured mean FAR and the preset FAR. When the preset FAR is 10⁻², the proposed method achieves an average FAR of 1.067 × 10⁻² with the convex hull detector and 1.043 × 10⁻² with the SVM detector. Full article
(This article belongs to the Section Ocean Engineering)
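The two-parameter linear relationship between preset and measured FAR lends itself to a simple closed-loop correction: fit measured ≈ a·preset + b, then invert the fit to choose the preset value that yields the desired measured FAR. A numpy sketch with invented calibration pairs (the article's method estimates the two parameters online from the clutter feature space rather than from a fixed table):

```python
import numpy as np

# (preset FAR, measured FAR) pairs from a non-stationary scene;
# the values below are illustrative, not from the paper's datasets.
preset = np.array([0.001, 0.003, 0.01, 0.03, 0.1])
measured = np.array([0.0016, 0.0045, 0.014, 0.041, 0.13])

# Two-parameter linear model: measured ~ a * preset + b.
a, b = np.polyfit(preset, measured, 1)

# Invert the model to pick the preset FAR that hits a desired measured
# FAR; the detector then grows or shrinks its decision region to match
# this adjusted preset value.
target = 1e-2
adjusted_preset = (target - b) / a
print(f"a = {a:.3f}, b = {b:.2e}, adjusted preset = {adjusted_preset:.4f}")
```

Re-estimating (a, b) as new clutter data arrive is what keeps the measured FAR pinned near the target as the sea state drifts.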
Figure 1
<p>Measured radar echoes. (<b>a</b>) Measured radar echoes in #1-HH dataset. (<b>b</b>) Measured radar echoes in #4-HH dataset.</p>
Full article ">Figure 2
<p>Schematic diagram of density estimation.</p>
Full article ">Figure 3
<p>The calculation of LFSD. (<b>a</b>) Convex hull after FAR control. (<b>b</b>) LFSD.</p>
Full article ">Figure 4
<p>The partitioning method for training region and testing region.</p>
Full article ">Figure 5
<p>The relationship between the position of false alarm points and LFSD.</p>
Full article ">Figure 6
<p>The relationship between the LFSD difference and FAR.</p>
Full article ">Figure 7
<p>Clutter feature spaces. The area circled in pink is where the density changes due to the influence of sea spikes, outside of the false alarm regions. (<b>a</b>) Clutter feature spaces with different range cells. (<b>b</b>) Clutter feature spaces with different periods.</p>
Full article ">Figure 8
<p>The LFSD difference of improved initial FAR control method and convex hull detector [<a href="#B13-jmse-12-01770" class="html-bibr">13</a>] on all 10 datasets.</p>
Full article ">Figure 9
<p>The FAR of improved initial FAR control method and convex hull detector [<a href="#B13-jmse-12-01770" class="html-bibr">13</a>] on all 10 datasets.</p>
Full article ">Figure 10
<p>Comparison of time consumption before and after revising the initial FAR control method, where left <span class="html-italic">Y</span>-axis represents the time cost of original method and right <span class="html-italic">Y</span>-axis represents the time cost of proposed method.</p>
Full article ">Figure 11
<p>The relationship between the preset FAR and the measured FAR over different time periods. The relationship between the preset FAR and the measured FAR in the green box area can be expressed using a linear function (<b>a</b>) #1-HH. (<b>b</b>) #4-HH.</p>
Full article ">Figure 12
<p>Fitting results of dataset #1-HH. The linear fit curve and the exponential fit curve almost overlap, so we have included the MSE of the three curves in the figure to highlight the differences.</p>
Full article ">Figure 13
<p>MSE of all 10 datasets.</p>
Full article ">Figure 14
<p>The optimal parameters for the all 10 datasets. (<b>a</b>) Quadratic function. (<b>b</b>) Linear function.</p>
Full article ">Figure 15
<p>Modification of the relationship between the preset FAR and the measured FAR.</p>
Full article ">Figure 16
<p>Feature detection procedure under dynamic FAR control.</p>
Full article ">Figure 17
<p>Visualization of false alarms on dataset #4-HH.</p>
Full article ">Figure 18
<p>Measured FAR results of proposed method based on convex hull and convex hull detector [<a href="#B13-jmse-12-01770" class="html-bibr">13</a>]. (<b>a</b>) #1-HH. (<b>b</b>) #5-HH.</p>
Full article ">Figure 19
<p>Original method [<a href="#B13-jmse-12-01770" class="html-bibr">13</a>] and proposed method performance based on convex hull.</p>
Full article ">Figure 20
<p>Original method [<a href="#B27-jmse-12-01770" class="html-bibr">27</a>] and proposed method performance based on SVM.</p>
Full article ">Figure 21
<p>Decision region and target feature density.</p>
Full article ">Figure 22
<p>The relationship between the number of decision region adjustments and the measured FAR.</p>
Full article ">Figure A1
<p>The relationship between the preset FAR and the measured FAR over different time periods on all 10 datasets. The relationship between the preset FAR and the measured FAR in the green box area can be expressed using a linear function (<b>a</b>) #1-HH. (<b>b</b>) #2-HH. (<b>c</b>) #3-HH. (<b>d</b>) #4-HH. (<b>e</b>) #5-HH. (<b>f</b>) #1-VV. (<b>g</b>) #2-VV. (<b>h</b>) #3-VV. (<b>i</b>) #4-VV. (<b>j</b>) #5-VV.</p>
Full article ">Figure A1 Cont.
<p>The relationship between the preset FAR and the measured FAR over different time periods on all 10 datasets. The relationship between the preset FAR and the measured FAR in the green box area can be expressed using a linear function (<b>a</b>) #1-HH. (<b>b</b>) #2-HH. (<b>c</b>) #3-HH. (<b>d</b>) #4-HH. (<b>e</b>) #5-HH. (<b>f</b>) #1-VV. (<b>g</b>) #2-VV. (<b>h</b>) #3-VV. (<b>i</b>) #4-VV. (<b>j</b>) #5-VV.</p>
Full article ">