BiFA-YOLO: A Novel YOLO-Based Method for Arbitrary-Oriented Ship Detection in High-Resolution SAR Images
"> Figure 1
<p>Deep learning-based SAR ship detection methods.</p> "> Figure 2
<p>The overall scheme of the proposed method.</p> "> Figure 3
<p>Random rotation mosaic (RR-Mosaic) data augmentation.</p> "> Figure 4
<p>Structure of Bi-DFFM.</p> "> Figure 5
<p>The oriented bounding box (OBB) of ships in SAR images, where <math display="inline"><semantics> <mi>θ</mi> </semantics></math> is determined by the long side of the rectangle and x-axis. (<b>a</b>) <math display="inline"><semantics> <mi>θ</mi> </semantics></math> belongs to [−90, 0). (<b>b</b>) <math display="inline"><semantics> <mi>θ</mi> </semantics></math> belongs to [0, 90).</p> "> Figure 6
<p>The distributions of the sizes, aspect ratios, angle and corresponding error of the horizontal bounding boxes and oriented bounding boxes in SSDD. (<b>a</b>) Distributions of the bounding boxes’ length in SSDD. (<b>b</b>) Distributions of the bounding boxes’ width in SSDD. (<b>c</b>) Distributions of the bounding boxes’ sizes in SSDD. (<b>d</b>) Distributions of the bounding boxes’ aspect ratios in SSDD. (<b>e</b>) Distributions of corresponding errors of the HBB and OBB in SSDD. (<b>f</b>) Distributions of the oriented bounding boxes’ angle in SSDD.</p> "> Figure 7
<p>The large-scene images. (<b>a</b>) GF-3 HR SAR image. (<b>b</b>) Corresponding optical image.</p> "> Figure 8
<p>The distributions of the sizes, aspect ratios and angle of the oriented bounding boxes in GF-3 dataset. (<b>a</b>) Distributions of the bounding boxes’ length in GF-3 dataset. (<b>b</b>) Distributions of the bounding boxes’ width in GF-3 dataset. (<b>c</b>) Distributions of the bounding boxes’ aspect ratios in GF-3 dataset. (<b>d</b>) Distributions of the oriented bounding boxes’ angle in GF-3 dataset.</p> "> Figure 9
<p>Distributions of the angle on SSDD with different data augmentation methods. (<b>a</b>) Flip. (<b>b</b>) Rotation. (<b>c</b>) Radom Rotation. (<b>d</b>) Flip Mosaic. (<b>e</b>) Rotation Mosaic. (<b>f</b>) RR-Mosaic.</p> "> Figure 10
<p>Experimental results on SSDD. The blue number represents the number of detected ships.</p> "> Figure 11
<p>Precision-Recall (PR) curves of different models on SSDD. (<b>a</b>) PR curves of YOLOv5s-CSL, YOLOv5m-CSL, YOLOv5l-CSL, YOLOv5x-CSL and BiFA-YOLO. (<b>b</b>) PR curves of YOLOv5s-DCL, YOLOv5m- DCL, YOLOv5l-DCL, YOLOv5x-DCL and BiFA-YOLO.</p> "> Figure 12
<p>Comparison of curves of different methods in inshore and offshore scene on SSDD. (<b>a</b>) PR curves of different methods in inshore scene. (<b>b</b>) PR curves of different methods in offshore scene.</p> "> Figure 13
<p>Detection results of different methods in inshore scene. (<b>a</b>,<b>k</b>) ground-truths. (<b>b</b>,<b>l</b>) results of YOLOv5s-CSL. (<b>c</b>,<b>m</b>) results of YOLOv5m-CSL. (<b>d</b>,<b>n</b>) results of YOLOv5l-CSL. (<b>e</b>,<b>o</b>) results of YOLOv5x-CSL. (<b>f</b>,<b>p</b>) results of YOLOv5s-DCL. (<b>g</b>,<b>q</b>) results of YOLOv5m-DCL. (<b>h</b>,<b>r</b>) results of YOLOv5l-DCL. (<b>i</b>,<b>s</b>) results of YOLOv5x-DCL. (<b>j</b>,<b>t</b>) results of BiFA-YOLO. Note that the red boxes represent true positive targets, the yellow ellipses represent false positive targets and the green ellipses represent missed targets.</p> "> Figure 14
<p>Detection results of different methods in offshore scene. (<b>a</b>,<b>k</b>) ground-truths. (<b>b</b>,<b>l</b>) results of YOLOv5s-CSL. (<b>c</b>,<b>m</b>) results of YOLOv5m-CSL. (<b>d</b>,<b>n</b>) results of YOLOv5l-CSL. (<b>e</b>,<b>o</b>) results of YOLOv5x-CSL. (<b>f</b>,<b>p</b>) results of YOLOv5s-DCL. (<b>g</b>,<b>q</b>) results of YOLOv5m-DCL. (<b>h</b>,<b>r</b>) results of YOLOv5l-DCL. (<b>i</b>,<b>s</b>) results of YOLOv5x-DCL. (<b>j</b>,<b>t</b>) results of BiFA-YOLO. Note that the red boxes represent true positive targets, the yellow ellipses represent false positive targets and the green ellipses represent missed targets.</p> "> Figure 15
<p>Detection results of different CNN-based methods on SSDD. (<b>a</b>,<b>g</b>,<b>m</b>) ground-truths. (<b>b</b>,<b>h</b>,<b>n</b>) results of DRBox-v1. (<b>c</b>,<b>i</b>,<b>o</b>) results of SDOE. (<b>d</b>,<b>j</b>,<b>p</b>) results of DRBox-v2. (<b>e</b>,<b>k</b>,<b>q</b>) results of improved R-RetinaNet. (<b>f</b>,<b>l</b>,<b>r</b>) results of proposed BiFA-YOLO. Note that the red boxes represent true positive targets, the yellow ellipses represent false positive targets and the green ellipses represent missed targets; the blue number represents the number of detected ships.</p> "> Figure 16
<p>Detection results in large-scene SAR image. Note that the red boxes represent true positive targets, the yellow ellipses represent false positive targets and the green ellipses represent missed targets; and the blue number represents the number of detected ships.</p> "> Figure 17
<p>Feature map visualization results of feature pyramid with and without Bi-DFFM. (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>,<b>k</b>) represent results without Bi-DFFM. (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>,<b>l</b>) represent results with Bi-DFFM.</p> "> Figure 18
<p>Feature map visualization results of three-scale prediction layers with and without Bi-DFFM. (<b>a</b>,<b>b</b>,<b>e</b>,<b>f</b>,<b>i</b>,<b>j</b>) represent results without Bi-DFFM. (<b>c</b>,<b>d</b>,<b>g</b>,<b>h</b>,<b>k</b>,<b>l</b>) represent results with Bi-DFFM.</p> ">
Abstract
1. Introduction
2. Related Work
2.1. Deep Learning-Based Horizontal SAR Ship Detection Methods
2.2. Deep Learning-Based Arbitrary-Oriented SAR Ship Detection Methods
2.3. Arbitrary-Oriented Object Detection with Angular Classification
3. Proposed Method
3.1. Overall Scheme of the Proposed Method
3.2. Random Rotation Mosaic Data Augmentation (RR-Mosaic)
3.3. Bi-Directional Feature Fusion Module (Bi-DFFM)
3.4. Direction Prediction Based on Angular Classification
3.5. Multi-Task Loss Function
4. Experiments
4.1. Dataset Introduction
4.2. Implementation Details
4.3. Evaluation Metrics
4.4. Analysis of Results
4.4.1. Effect of RR-Mosaic
4.4.2. Effect of Angular Classification
4.4.3. Effect of Bi-Directional Feature Fusion Module
4.4.4. Comparison of Inshore Scene and Offshore Scene
4.4.5. Comparison with State of the Arts
4.4.6. Validation on Large-Scene HR SAR Image
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Lang, H.; Xi, Y.; Zhang, X. Ship Detection in High-Resolution SAR Images by Clustering Spatially Enhanced Pixel Descriptor. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5407–5423. [Google Scholar] [CrossRef]
- Leng, X.; Ji, K.; Kuang, G. Radio Frequency Interference Detection and Localization in Sentinel-1 Images. IEEE Trans. Geosci. Remote Sens. 2021, 1–12. [Google Scholar] [CrossRef]
- Zhang, P.; Luo, H.; Ju, M.; He, M.; Chang, Z.; Hui, B. Brain-Inspired Fast Saliency-Based Filtering Algorithm for Ship Detection in High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2021, 1–9. [Google Scholar] [CrossRef]
- Zhang, L.; Leng, X.; Feng, S.; Ma, X.; Ji, K.; Kuang, G.; Liu, L. Domain Knowledge Powered Two-Stream Deep Network for Few-Shot SAR Vehicle Recognition. IEEE Trans. Geosci. Remote Sens. 2021, 1–15. [Google Scholar] [CrossRef]
- Wang, X.; Chen, C.; Pan, Z.; Pan, Z. Fast and Automatic Ship Detection for SAR Imagery Based on Multiscale Contrast Measure. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1834–1838. [Google Scholar] [CrossRef]
- Yang, M.; Guo, C. Ship Detection in SAR Images Based on Lognormal ρ-Metric. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1372–1376. [Google Scholar] [CrossRef]
- Ao, W.; Xu, F.; Li, Y.; Wang, H. Detection and Discrimination of Ship Targets in Complex Background from Spaceborne ALOS-2 SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 536–550. [Google Scholar] [CrossRef]
- Guo, H.; Yang, X.; Wang, N.; Gao, X. A CenterNet++ model for ship detection in SAR images. Pattern Recognit. 2021, 112, 107787. [Google Scholar] [CrossRef]
- Liang, Y.; Sun, K.; Zeng, Y.; Li, G.; Xing, M. An Adaptive Hierarchical Detection Method for Ship Targets in High-Resolution SAR Images. Remote Sens. 2020, 12, 303. [Google Scholar] [CrossRef] [Green Version]
- Leng, X.; Ji, K.; Xiong, B.; Kuang, G. Complex Signal Kurtosis—Indicator of Ship Target Signature in SAR Images. IEEE Trans. Geosci. Remote Sens. 2021. [Google Scholar] [CrossRef]
- Liu, T.; Yang, Z.; Yang, J.; Gao, G. CFAR Ship Detection Methods Using Compact Polarimetric SAR in a K-Wishart Distribution. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3737–3745. [Google Scholar] [CrossRef]
- Gao, G.; Shi, G. CFAR Ship Detection in Nonhomogeneous Sea Clutter Using Polarimetric SAR Data Based on the Notch Filter. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4811–4824. [Google Scholar] [CrossRef]
- Dai, H.; Du, L.; Wang, Y.; Wang, Z. A Modified CFAR Algorithm Based on Object Proposals for Ship Target Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1925–1929. [Google Scholar] [CrossRef]
- Leng, X.; Ji, K.; Yang, K.; Zou, H. A Bilateral CFAR Algorithm for Ship Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1536–1540. [Google Scholar] [CrossRef]
- Yang, X.; Yan, J.; He, T. Arbitrary-Oriented Object Detection with Circular Smooth Label. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 677–694. [Google Scholar]
- Yang, X.; Hou, L.; Zhou, Y.; Wang, W.; Yan, J. Dense Label Encoding for Boundary Discontinuity Free Rotation Detection. arXiv 2020, arXiv:2011.09670. [Google Scholar]
- Li, J.; Qu, C.; Shao, J. Ship detection in SAR images based on an improved faster R-CNN. In Proceedings of the BIGSARDATA, Beijing, China, 13–14 November 2017; pp. 1–6. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
- Lin, Z.; Ji, K.; Leng, X.; Kuang, G. Squeeze and Excitation Rank Faster R-CNN for Ship Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2018, 16, 751–755. [Google Scholar] [CrossRef]
- Deng, Z.; Sun, H.; Zhou, S.; Zhao, J. Learning Deep Ship Detector in SAR Images from Scratch. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4021–4039. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. A SAR Dataset of Ship Detection for Deep Learning under Complex Backgrounds. Remote Sens. 2019, 11, 765. [Google Scholar] [CrossRef] [Green Version]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S. SSD: Single shot multibox detector. arXiv 2015, arXiv:1512.02325. [Google Scholar]
- Lin, T.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal loss for dense object detection. arXiv 2017, arXiv:1708.02002. [Google Scholar]
- Ai, J.; Tian, R.; Luo, Q.; Jin, J.; Tang, B. Multi-Scale Rotation-Invariant Haar-Like Feature Integrated CNN-Based Ship Detection Algorithm of Multiple-Target Environment in SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10070–10087. [Google Scholar] [CrossRef]
- Wei, S.; Su, H.; Ming, J.; Wang, C.; Yan, M.; Kumar, D.; Shi, J.; Zhang, X. Precise and Robust Ship Detection for High-Resolution SAR Imagery Based on HR-SDNet. Remote Sens. 2020, 12, 167. [Google Scholar] [CrossRef] [Green Version]
- Cui, Z.; Li, Q.; Cao, Z.; Liu, N. Dense Attention Pyramid Networks for Multi-Scale Ship Detection in SAR Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8983–8997. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. Automatic Ship Detection Based on RetinaNet Using Multi-Resolution Gaofen-3 Imagery. Remote Sens. 2019, 11, 531. [Google Scholar] [CrossRef] [Green Version]
- Fu, J.; Sun, X.; Wang, Z.; Fu, K. An Anchor-Free Method Based on Feature Balancing and Refinement Network for Multiscale Ship Detection in SAR Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1331–1344. [Google Scholar] [CrossRef]
- Gao, F.; He, Y.; Wang, J.; Hussain, A.; Zhou, H. Anchor-free Convolutional Network with Dense Attention Feature Aggregation for Ship Detection in SAR Images. Remote Sens. 2020, 12, 2619. [Google Scholar] [CrossRef]
- Cui, Z.; Wang, X.; Liu, N.; Cao, Z. Ship Detection in Large-Scale SAR Images Via Spatial Shuffle-Group Enhance Attention. IEEE Trans. Geosci. Remote Sens. 2021, 59, 379–391. [Google Scholar] [CrossRef]
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as points. arXiv 2019, arXiv:1904.07850. [Google Scholar]
- Zhao, Y.; Zhao, L.; Xiong, B.; Kuang, G. Attention Receptive Pyramid Network for Ship Detection in SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2738–2756. [Google Scholar] [CrossRef]
- Chen, S.; Zhan, R.; Wang, W.; Zhang, J. Learning Slimming SAR Ship Object Detector Through Network Pruning and Knowledge Distillation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1267–1282. [Google Scholar] [CrossRef]
- Yu, L.; Wu, H.; Zhong, Z.; Zheng, L.; Deng, Q.; Hu, H. TWC-Net: A SAR Ship Detection Using Two-Way Convolution and Multiscale Feature Mapping. Remote Sens. 2021, 13, 2558. [Google Scholar] [CrossRef]
- Zhang, T.; Zhang, X. ShipDeNet-20: An Only 20 Convolution Layers and <1-MB Lightweight SAR Ship Detector. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1234–1238. [Google Scholar] [CrossRef]
- Geng, X.; Shi, L.; Yang, J.; Li, P.; Zhao, L.; Sun, W.; Zhao, J. Ship Detection and Feature Visualization Analysis Based on Lightweight CNN in VH and VV Polarization Images. Remote Sens. 2021, 13, 1184. [Google Scholar] [CrossRef]
- Sun, Z.; Dai, M.; Leng, X.; Lei, Y.; Xiong, B.; Ji, K.; Kuang, G. An Anchor-Free Detection Method for Ship Targets in High-Resolution SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7788–7816. [Google Scholar] [CrossRef]
- Bao, W.; Huang, M.; Zhang, Y.; Xu, Y.; Liu, X.; Xiang, X. Boosting ship detection in SAR images with complementary pretraining techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8941–8954. [Google Scholar] [CrossRef]
- Zhang, T.; Zhang, X.; Ke, X. Quad-FPN: A Novel Quad Feature Pyramid Network for SAR Ship Detection. Remote Sens. 2021, 13, 2771. [Google Scholar] [CrossRef]
- Hong, Z.; Yang, T.; Tong, X.; Zhang, Y.; Jiang, S.; Zhou, R.; Han, Y.; Wang, J.; Yang, S.; Liu, S. Multi-Scale Ship Detection from SAR and Optical Imagery Via a More Accurate YOLOv3. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6083–6101. [Google Scholar] [CrossRef]
- Zhang, X.; Huo, C.; Xu, N.; Jiang, H.; Cao, Y.; Ni, L.; Pan, C. Multitask Learning for Ship Detection from Synthetic Aperture Radar Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8048–8062. [Google Scholar] [CrossRef]
- Li, D.; Liang, Q.; Liu, H.; Liu, Q.; Liu, H.; Liao, G. A Novel Multidimensional Domain Deep Learning Network for SAR Ship Detection. IEEE Trans. Geosci. Remote Sens. 2021. [Google Scholar] [CrossRef]
- Jiang, J.; Fu, X.; Qin, R.; Wang, X.; Ma, Z. High-Speed Lightweight Ship Detection Algorithm Based on YOLO-V4 for Three-Channels RGB SAR Image. Remote Sens. 2021, 13, 1909. [Google Scholar] [CrossRef]
- Tang, G.; Zhuge, Y.; Claramunt, C.; Men, S. N-YOLO: A SAR Ship Detection Using Noise-Classifying and Complete-Target Extraction. Remote Sens. 2021, 13, 871. [Google Scholar] [CrossRef]
- Xu, P.; Li, Q.; Zhang, B.; Wu, F.; Zhao, K.; Du, X.; Yang, C.; Zhong, R. On-Board Real-Time Ship Detection in HISEA-1 SAR Images Based on CFAR and Lightweight Deep Learning. Remote Sens. 2021, 13, 1995. [Google Scholar] [CrossRef]
- Wu, Z.; Hou, B.; Ren, B.; Ren, Z.; Wang, S.; Jiao, L. A Deep Detection Network Based on Interaction of Instance Segmentation and Object Detection for SAR Images. Remote Sens. 2021, 13, 2582. [Google Scholar] [CrossRef]
- Wang, J.; Lu, C.; Jiang, W. Simultaneous Ship Detection and Orientation Estimation in SAR Images Based on Attention Module and Angle Regression. Sensors 2018, 18, 2851. [Google Scholar] [CrossRef] [Green Version]
- Chen, S.; Zhang, J.; Zhan, R. R2FA-Det: Delving into High-Quality Rotatable Boxes for Ship Detection in SAR Images. Remote Sens. 2020, 12, 2031. [Google Scholar] [CrossRef]
- Pan, Z.; Yang, R.; Zhang, Z. MSR2N: Multi-Stage Rotational Region Based Network for Arbitrary-Oriented Ship Detection in SAR Images. Sensors 2020, 20, 2340. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Yang, R.; Pan, Z.; Jia, X.; Zhang, L.; Deng, Y. A Novel CNN-Based Detector for Ship Detection Based on Rotatable Bounding Box in SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1938–1958. [Google Scholar] [CrossRef]
- An, Q.; Pan, Z.; Liu, L.; You, H. DRBox-v2: An improved detector with rotatable boxes for target detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8333–8349. [Google Scholar] [CrossRef]
- Yang, R.; Wang, G.; Pan, Z.; Lu, H.; Zhang, H.; Jia, X. A Novel False Alarm Suppression Method for CNN-Based SAR Ship Detector. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1401–1405. [Google Scholar] [CrossRef]
- Chen, C.; He, C.; Hu, C.; Pei, H.; Jiao, L. MSARN: A Deep Neural Network Based on an Adaptive Recalibration Mechanism for Multiscale and Arbitrary-Oriented SAR Ship Detection. IEEE Access 2019, 7, 159262–159283. [Google Scholar] [CrossRef]
- An, Q.; Pan, Z.; You, H.; Hu, Y. Transitive Transfer Learning-Based Anchor Free Rotatable Detector for SAR Target Detection with Few Samples. IEEE Access 2021, 9, 24011–24025. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.; Liao, H.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Lin, T.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. arXiv 2016, arXiv:1612.03144. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. arXiv 2018, arXiv:1803.01534. [Google Scholar]
- Liu, S.; Huang, D.; Wang, Y. Learning Spatial Fusion for Single-Shot Object Detection. arXiv 2019, arXiv:1911.09516. [Google Scholar]
- Ghiasi, G.; Lin, T.Y.; Pang, R.; Le, Q.V. NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection. arXiv 2019, arXiv:1904.07392. [Google Scholar]
- Qiao, S.; Chen, L.; Yuille, A. DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution. arXiv 2020, arXiv:2006.02334. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. arXiv 2019, arXiv:1911.09070. [Google Scholar]
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ren, D. Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression. arXiv 2019, arXiv:1911.08287. [Google Scholar] [CrossRef]
- Yang, X.; Yang, J.; Yan, J.; Zhang, Y.; Zhang, T.; Guo, Z.; Xian, S.; Fu, K. SCRDet: Towards More Robust Detection for Small, Cluttered and Rotated Objects. arXiv 2018, arXiv:1811.07126. [Google Scholar]
Range | Interval (°) | Categories
---|---|---
0–180° | 1 | 180
0–180° | 2 | 90
0–90° | 1 | 90
0–90° | 2 | 45
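The range/interval combinations above come from the angular-classification scheme (Section 3.4), which builds on the circular smooth label (CSL) idea of Yang et al. cited in the references. As a sketch only, not the authors' implementation, a Gaussian-windowed CSL vector for one angle category can be built as follows; `radius` is an assumed window parameter:

```python
import numpy as np

def csl_label(theta_cat: int, num_cats: int = 180, radius: int = 6) -> np.ndarray:
    """Gaussian-windowed circular smooth label centred on the ground-truth
    angle category. Distances wrap around, so category 0 and category
    num_cats - 1 are treated as neighbours."""
    cats = np.arange(num_cats)
    diff = np.abs(cats - theta_cat)
    circ = np.minimum(diff, num_cats - diff)           # circular distance
    label = np.exp(-circ.astype(float) ** 2 / (2 * radius ** 2))
    label[circ > radius] = 0.0                         # truncate the window
    return label

lab = csl_label(0)                                     # 180-way label vector
```

Because the label wraps circularly, the boundary categories 0° and 179° receive nearly identical soft labels, which is exactly the boundary-discontinuity problem CSL is meant to remove.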
Angle Category | 1 | 45 | 90 | 135 | 179
---|---|---|---|---|---
Binary Coded Label | 00000001 | 01000000 | 10000000 | 11000000 | 11111111
Gray Coded Label | 00000001 | 01100000 | 11000000 | 10100000 | 10000000
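The coded labels in this table (used by the DCL variant) are consistent with scaling the angle category onto 8 bits and, for the Gray row, applying the standard binary-reflected Gray code `v ^ (v >> 1)`. The scale factor 256/180 is our reading of the table values, not something stated in the text; a minimal sketch under that assumption:

```python
def coded_labels(angle_cat: int, num_cats: int = 180, bits: int = 8):
    """Map an angle category onto `bits` bits (scale factor 2**bits / num_cats,
    inferred from the table), then derive the Gray-coded variant."""
    v = round(angle_cat * (1 << bits) / num_cats)
    v = min(v, (1 << bits) - 1)           # clamp: category 179 -> 255
    gray = v ^ (v >> 1)                   # binary-reflected Gray code
    return format(v, f"0{bits}b"), format(gray, f"0{bits}b")
```

Gray coding guarantees that adjacent codes differ in a single bit, so a one-bit classification error corresponds to a small angular error rather than an arbitrary one.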
Dataset | Training | Test | All
---|---|---|---
SSDD | 812 | 348 | 1160
GF-3 Dataset | 10,150 | 4729 | 14,879
Project | Model/Parameter
---|---
CPU | Intel i7-10875H
RAM | 32 GB
GPU | NVIDIA RTX 2070
System | Windows 10
Language | Python 3.8
Framework | CUDA 10.1 / cuDNN 7.6.5 / PyTorch 1.6
Method | Precision (%) | Recall (%) | AP (%) | F1
---|---|---|---|---
Flip | 93.84 | 93.02 | 92.23 | 0.9343
Rotation | 93.65 | 93.23 | 92.25 | 0.9344
Random Rotation | 94.80 | 93.33 | 92.72 | 0.9408
Flip Mosaic | 93.49 | 94.13 | 93.06 | 0.9380
Rotation Mosaic | 93.25 | 93.46 | 92.49 | 0.9335
RR-Mosaic | 94.85 | 93.97 | 93.90 | 0.9441
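The F1 column above is the harmonic mean of precision and recall. As a quick sanity check (inputs taken from the table rows, nothing else assumed):

```python
def f1_score(precision_pct: float, recall_pct: float) -> float:
    """Harmonic mean of precision and recall; inputs are percentages."""
    p, r = precision_pct / 100.0, recall_pct / 100.0
    return 2 * p * r / (p + r)

# RR-Mosaic row: P = 94.85 %, R = 93.97 %  ->  F1 ≈ 0.9441
# Flip row:      P = 93.84 %, R = 93.02 %  ->  F1 ≈ 0.9343
```

Both reproduce the tabulated values to four decimal places.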
Method | Precision (%) | Recall (%) | AP (%) | F1
---|---|---|---|---
YOLOv5 | 90.30 | 92.00 | 90.80 | 0.9114
BiFA-YOLO + CSL | 94.85 | 93.97 | 93.90 | 0.9441
BiFA-YOLO + DCL | 94.25 | 93.66 | 92.59 | 0.9395
Method | Precision (%) | Recall (%) | AP (%) | F1 | Time (ms) | Params (M) | Model Size (MB)
---|---|---|---|---|---|---|---
YOLOv5s-CSL | 91.73 | 88.79 | 86.66 | 0.9024 | 12.1 | 7.38 | 14.9
YOLOv5m-CSL | 91.86 | 93.02 | 90.78 | 0.9244 | 13.2 | 21.19 | 42.6
YOLOv5l-CSL | 93.03 | 93.23 | 92.25 | 0.9313 | 13.8 | 46.13 | 92.6
YOLOv5x-CSL | 94.60 | 93.66 | 92.73 | 0.9422 | 16.2 | 85.50 | 171.0
BiFA-YOLO | 94.85 | 93.97 | 93.90 | 0.9441 | 13.3 | 19.57 | 39.4
Method | Precision (%) | Recall (%) | AP (%) | F1 | Time (ms) | Params (M) | Model Size (MB)
---|---|---|---|---|---|---|---
YOLOv5s-DCL | 91.93 | 88.69 | 85.75 | 0.9028 | 12.0 | 6.94 | 14.0
YOLOv5m-DCL | 92.20 | 92.49 | 89.69 | 0.9235 | 13.6 | 20.53 | 41.3
YOLOv5l-DCL | 92.23 | 92.92 | 90.63 | 0.9257 | 15.4 | 45.24 | 90.8
YOLOv5x-DCL | 94.69 | 92.39 | 91.29 | 0.9353 | 15.7 | 84.39 | 169.0
BiFA-YOLO | 94.85 | 93.97 | 93.90 | 0.9441 | 13.3 | 19.57 | 39.4
Scene | Precision (%) | Recall (%) | AP (%) | F1
---|---|---|---|---
Inshore | 92.81 | 91.60 | 91.15 | 0.9220
Offshore | 96.16 | 95.55 | 94.81 | 0.9585
Method | Bounding Box | Framework | AP (%) | Time (ms)
---|---|---|---|---
R-FPN [49] | Oriented | Two-Stage | 84.38 | -
R-Faster-RCNN [49] | Oriented | Two-Stage | 82.22 | -
RRPN [53] | Oriented | Two-Stage | 74.82 | 316.0
R2CNN [53] | Oriented | Two-Stage | 80.26 | 210.8
R-DFPN [53] | Oriented | Two-Stage | 83.44 | 370.5
MSR2N [49] | Oriented | Two-Stage | 93.93 | 103.3
SCRDet [63] | Oriented | Two-Stage | 92.04 | 120.8
Cascade RCNN [48] | Oriented | Multi-Stage | 88.45 | 357.6
R-YOLOv3 [53] | Oriented | One-Stage | 73.15 | 34.2
R-Attention-ResNet [53] | Oriented | One-Stage | 76.40 | 39.6
R-RetinaNet [50] | Oriented | One-Stage | 92.34 | 46.5
R2FA-Det [48] | Oriented | One-Stage | 94.72 | 63.2
DRBox-v1 [51] | Oriented | One-Stage | 86.41 | -
DRBox-v2 [51] | Oriented | One-Stage | 92.81 | 55.1
MSARN [53] | Oriented | One-Stage | 76.24 | 35.4
CSAP [54] | Oriented | One-Stage | 90.75 | -
SDOE [47] | Oriented | One-Stage | 84.20 | 25.0
BiFA-YOLO | Oriented | One-Stage | 93.90 | 13.3
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, Z.; Leng, X.; Lei, Y.; Xiong, B.; Ji, K.; Kuang, G. BiFA-YOLO: A Novel YOLO-Based Method for Arbitrary-Oriented Ship Detection in High-Resolution SAR Images. Remote Sens. 2021, 13, 4209. https://doi.org/10.3390/rs13214209