Localization and Classification of Venusian Volcanoes Using Image Detection Algorithms
Figure 1. The example of a radar signature [8].
Figure 2. The reflection of the radar beam from the volcano [8].
Figure 3. Defined volcano classes in the dataset: (a) Category 1, (b) Category 2, (c) Category 3, (d) Category 4 [8].
Figure 4. Image coordinate system.
Figure 5. Difference between Magellan (left) and YOLO (right) annotations.
Figure 6. The method of transferring labels from the Magellan format to the YOLO format.
Figure 7. Analysis graph of the Magellan dataset.
Figure 8. Schematic representation of dataset augmentation.
Figure 9. Examples of standard dataset augmentation techniques.
Figure 10. Example of an image with a “black zone” [8].
Figure 11. Example of a cutout augmentation.
Figure 12. Schematic representation of the augmentation pipeline.
Figure 13. Concept of the mosaic augmentation technique.
Figure 14. An illustration of YOLOv5-based object detection.
Figure 15. YOLOv5 structure.
Figure 16. Illustration of the cross-validation procedure.
Figure 17. Input and output images used for testing the model (1): (a) input image with given objects for detection; (b) output image with detected objects.
Figure 18. Input and output images used for testing the model (2): (a) input image with given objects for detection; (b) output image with the detected objects.
Figure 19. Input and output images used for testing the model (3): (a) input image with given objects for detection; (b) output image with the detected object.
Figure A1. Base model P–R curve, Experiment 1.
Figure A2. Base model P–R curve, Experiment 2.
Figure A3. Base model P–R curve, Experiment 3.
Figure A4. Base model P–R curve, Experiment 4.
Figure A5. Base model P–R curve, Experiment 5.
Figure A6. Model trained with classic augmentation techniques, P–R curve, Experiment 1.
Figure A7. Model trained with classic augmentation techniques, P–R curve, Experiment 2.
Figure A8. Model trained with classic augmentation techniques, P–R curve, Experiment 3.
Figure A9. Model trained with classic augmentation techniques, P–R curve, Experiment 4.
Figure A10. Model trained with classic augmentation techniques, P–R curve, Experiment 5.
Figure A11. Model trained with the augmentation pipeline, P–R curve, Experiment 1.
Figure A12. Model trained with the augmentation pipeline, P–R curve, Experiment 2.
Figure A13. Model trained with the augmentation pipeline, P–R curve, Experiment 3.
Figure A14. Model trained with the augmentation pipeline, P–R curve, Experiment 4.
Figure A15. Model trained with the augmentation pipeline, P–R curve, Experiment 5.
Abstract
1. Introduction
- Is it possible to achieve good classification accuracy with the YOLOv5 algorithm on SAR imagery and such a small dataset?
- Can classical training data augmentation techniques improve classification accuracy, or are more advanced methods required?
2. Materials and Methods
- Formatting the dataset from the Magellan format into the YOLO format;
- Splitting the dataset into training and testing sets;
- Performing augmentation on the training set to increase the number of training images.
2.1. Dataset Description
- “Category 1: almost certainly a volcano; the image clearly shows a summit pit, a bright-dark pair, and a circular planimetric outline.
- Category 2: probably a volcano; the image shows only two of the three Category 1 characteristics.
- Category 3: possibly a volcano; the image shows evidence of bright-dark flanks or a circular outline; the summit pit may or may not be visible.
- Category 4: a pit; the image shows a visible pit but does not provide conclusive evidence for flanks or a circular outline.
- Category 5: not a volcano” [8].
2.2. Dataset Preparation
2.3. Dataset Augmentation
- Rotations by 90°, 180°, and 270°;
- Mirroring across the horizontal and vertical axes;
- Changes in contrast and brightness;
- The cutout technique (a code sketch of these transformations follows the list).
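A minimal sketch of these classical augmentations is given below. The use of the Albumentations library, the probabilities, and the cutout hole sizes are illustrative assumptions, not the exact configuration used in the paper:

```python
import albumentations as A
import cv2

# Classical augmentations from the list above; bounding boxes in YOLO
# format are transformed together with the image.
transform = A.Compose(
    [
        A.RandomRotate90(p=1.0),            # random rotation by 90, 180, or 270 degrees
        A.HorizontalFlip(p=0.5),            # mirroring across the vertical axis
        A.VerticalFlip(p=0.5),              # mirroring across the horizontal axis
        A.RandomBrightnessContrast(p=0.5),  # brightness and contrast change
        A.CoarseDropout(max_holes=4, max_height=64, max_width=64, p=0.5),  # cutout
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("magellan_tile.png")  # placeholder file name
out = transform(image=image, bboxes=[(0.286, 0.775, 0.429, 0.429)], class_labels=[0])
augmented_image, augmented_boxes = out["image"], out["bboxes"]
```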
Augmentation Pipeline
2.4. YOLOv5 Algorithm
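As a usage illustration (not the authors' training setup), the Ultralytics YOLOv5 release [17] can be loaded through torch.hub for inference; the weights file and image name below are placeholders:

```python
import torch

# Load a custom-trained YOLOv5 detector through torch.hub;
# "best.pt" stands in for the weights produced by training.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Run detection on one SAR tile; YOLOv5 resizes/letterboxes internally.
results = model("magellan_tile.png")

# One row per detection: x1, y1, x2, y2, confidence, class index.
print(results.xyxy[0])
```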
YOLOv5 Structure
2.5. Evaluation Method
- A represents the predicted bounding box;
- B represents the ground-truth bounding box.
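With A and B as defined above, the intersection over union is the standard overlap ratio:

$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}$$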
- AP@α represents average precision;
- p(r) represents the P–R curve function;
- α represents the IoU threshold value.
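Putting these symbols together, the average precision at IoU threshold α is the area under the P–R curve:

$$AP@\alpha = \int_{0}^{1} p(r)\,dr$$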
- mAP@α represents mean average precision;
- n represents the number of classes;
- APi represents average precision for a given class i.
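In terms of these quantities, the mean average precision is the average of the per-class average precisions:

$$mAP@\alpha = \frac{1}{n} \sum_{i=1}^{n} AP_i$$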
2.6. K-Fold Cross Validation
- P̄ represents the average precision over all experiments;
- N represents the number of experiments;
- Pi represents the precision for a given experiment i.
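These definitions correspond to the arithmetic mean over the K-fold experiments:

$$\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i$$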
- σ(P) represents the standard deviation of precision over all experiments;
- N represents the number of experiments;
- Pi represents the precision for a given experiment i;
- P̄ represents the average precision over all experiments.
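Matching the definitions above, and using the population form (whether the normalization is by N or N − 1 is not stated here):

$$\sigma(P) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(P_i - \bar{P}\right)^2}$$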
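A minimal sketch of the 5-fold split that produces the five experiments follows; the scikit-learn KFold helper, the file layout, and the random seed are assumptions for illustration:

```python
from pathlib import Path
from sklearn.model_selection import KFold

# All annotated Magellan tiles (directory layout is a placeholder).
image_paths = sorted(Path("dataset/images").glob("*.png"))

# Five folds: every image is used for testing in exactly one experiment.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(image_paths), start=1):
    train_set = [image_paths[i] for i in train_idx]
    test_set = [image_paths[i] for i in test_idx]
    # Train one YOLOv5 model per fold, evaluate P, R, and mAP@0.5 on
    # test_set, then average the five results using the formulas above.
```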
3. Results
4. Discussion
Testing the Developed Model
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
AI | Artificial intelligence |
YOLO | You Only Look Once |
CNN | Convolutional neural network |
mAP | Mean average precision |
JARTool | JPL adaptive recognition tool |
PCA | Principal components analysis |
FOA | Focus of attention |
RCNN | Region-based convolutional neural network |
SAR | Synthetic-aperture radar |
TP | True positive |
FP | False positive |
FN | False negative |
P | Precision |
R | Recall |
IoU | Intersection over union |
PR curve | Precision–recall curve |
AP | Average precision |
KCV | K-fold cross-validation |
Appendix A
Appendix B
Appendix C
References
1. Agarwal, N.; Chiang, C.W.; Sharma, A. A Study on Computer Vision Techniques for Self-driving Cars. Front. Comput. 2019, 629–634.
2. Bi, W.L.; Hosny, A.; Schabath, M.B.; Giger, M.L.; Birkbak, N.J.; Mehrtash, A.; Allison, T.; Arnaout, O.; Abbosh, C.; Dunn, I.F.; et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J. Clin. 2019, 69, 127–157.
3. Kent, S.M. Sloan digital sky survey. Astrophys. Space Sci. 1994, 217, 27–30.
4. Kaiser, N. Pan-STARRS: A wide-field optical survey telescope array. Proc. SPIE 2004, 5489, 11–22.
5. Thomas, S.J.; Barr, J.; Callahan, S.; Clements, A.W.; Daruich, F.; Fabrega, J.; Ingraham, P.; Gressler, W.; Munoz, F.; Neill, D.; et al. Rubin Observatory: Telescope and site status. Proc. SPIE 2020, 11445, 68–82.
6. Kremer, J.; Stensbo-Smidt, K.; Gieseke, F.; Pedersen, K.S.; Igel, C. Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy. IEEE Intell. Syst. 2017, 32, 16–22.
7. Saunders, R.S.; Spear, A.J.; Allin, P.C.; Austin, R.S.; Berman, A.L.; Chandlee, R.C.; Clark, J.; Decharon, A.V.; De Jong, E.M.; Griffith, D.G.; et al. Magellan mission summary. J. Geophys. Res. Planets 1992, 97, 13067–13090.
8. Burl, M.C.; Asker, L.; Smyth, P.; Fayyad, U.; Perona, P.; Crumpler, L.; Aubele, J. Learning to Recognize Volcanoes on Venus. Mach. Learn. 1998, 30, 165–194.
9. Venus. Available online: https://volcano.oregonstate.edu/venus (accessed on 23 October 2022).
10. Burl, M.C.; Fayyad, U.M.; Perona, P.; Smyth, P.; Burl, M.P. Automating the hunt for volcanoes on Venus. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 302–309.
11. Kumar, B.; Ranjan, R.K.; Husain, A. A Multi-Objective Enhanced Fruit Fly Optimization (MO-EFOA) Framework for Despeckling SAR Images using DTCWT based Local Adaptive Thresholding. Int. J. Remote Sens. 2021, 42, 5493–5514.
12. Zhu, X.X.; Montazeri, S.; Ali, M.; Hua, Y.; Wang, Y.; Mou, L.; Shi, Y.; Xu, F.; Bamler, R. Deep Learning Meets SAR: Concepts, models, pitfalls, and perspectives. IEEE Geosci. Remote Sens. Mag. 2021, 9, 143–172.
13. Parikh, H.; Patel, S.; Patel, V. Classification of SAR and PolSAR images using deep learning: A review. Int. J. Image Data Fusion 2020, 11, 1–32.
14. Chen, S.; Wang, H. SAR target recognition based on deep learning. In Proceedings of the International Conference on Data Science and Advanced Analytics (DSAA), Shanghai, China, 30 October–1 November 2014; pp. 541–547.
15. Liu, Y.; Wu, L. Geological Disaster Recognition on Optical Remote Sensing Images Using Deep Learning. Procedia Comput. Sci. 2016, 91, 566–575.
16. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015.
17. Jocher, G. Ultralytics/yolov5: V7.0—YOLOv5 SOTA Realtime Instance Segmentation. Available online: https://zenodo.org/record/7347926#.Y66053bMJPZ (accessed on 23 October 2022).
18. Mostafa, T.; Chowdhury, S.J.; Rhaman, M.K.; Alam, M.G.R. Occluded Object Detection for Autonomous Vehicles Employing YOLOv5, YOLOX and Faster R-CNN. In Proceedings of the 2022 IEEE 13th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 12–15 October 2022; pp. 0405–0410.
19. Wang, H.; Nie, D.; Zuo, Y.; Tang, L.; Zhang, M. Nonlinear Ship Wake Detection in SAR Images Based on Electromagnetic Scattering Model and YOLOv5. Remote Sens. 2022, 14, 5788.
20. Xu, X.; Zhang, X.; Zhang, T. Lite-YOLOv5: A Lightweight Deep Learning Detector for On-Board Ship Detection in Large-Scene Sentinel-1 SAR Images. Remote Sens. 2022, 14, 1018.
21. Yoshida, T.; Ouchi, K. Detection of Ships Cruising in the Azimuth Direction Using Spotlight SAR Images with a Deep Learning Method. Remote Sens. 2022, 14, 4691.
22. Adedeji, O.; Owoade, P.; Ajayi, O.; Arowolo, O. Image Augmentation for Satellite Images. arXiv 2022, arXiv:2207.14580.
23. Hu, B.; Lei, C.; Wang, D.; Zhang, S.; Chen, Z. A preliminary study on data augmentation of deep learning for image classification. arXiv 2019, arXiv:1906.11887.
24. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
25. Mishra, M.; Jain, V.; Singh, S.K.; Maity, D. Two-stage method based on the you only look once framework and image segmentation for crack detection in concrete structures. Archit. Struct. Constr. 2022, 1–18.
26. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
27. Wang, K.; Liew, J.H.; Zou, Y.; Zhou, D.; Feng, J. PANet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 9197–9206.
28. Cengil, E.; Çinar, A.; Yildirim, M. A Case Study: Cat-Dog Face Detector Based on YOLOv5. In Proceedings of the 2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Zallaq, Bahrain, 29–30 September 2021; pp. 149–153.
29. Zhang, P.; Su, W. Statistical inference on recall, precision and average precision under random selection. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 1348–1352.
30. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6.
31. Rezatofighi, H.; Tsoi, N.; Gwak, J.Y.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666.
32. Francies, M.L.; Ata, M.M.; Mohamed, M.A. A robust multiclass 3D object recognition based on modern YOLO deep learning algorithms. Concurr. Comput. Pract. Exp. 2022, 34, e6517.
33. Anguita, D.; Ghelardoni, L.; Ghio, A.; Oneto, L.; Ridella, S. The ‘K’ in K-fold cross validation. In Proceedings of the 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 25–27 April 2012; pp. 441–446.
34. Rodriguez, J.D.; Perez, A.; Lozano, J.A. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 569–575.
35. Khasawneh, N.; Fraiwan, M.; Fraiwan, L. Detection of K-complexes in EEG signals using deep transfer learning and YOLOv3. Clust. Comput. 2022, 1–11.
36. Jung, Y. Multiple predicting K-fold cross-validation for model selection. J. Nonparametr. Stat. 2018, 30, 197–215.
37. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 740–755.
38. Yu, Y.; Zhao, J.; Gong, Q.; Huang, C.; Zheng, G.; Ma, J. Real-Time Underwater Maritime Object Detection in Side-Scan Sonar Images Based on Transformer-YOLOv5. Remote Sens. 2021, 13, 3555.
Telescope (year) | Data Rate (bytes/night) |
---|---|
VLT (1998) | 10 GB |
SDSS (2000) | 200 GB |
VISTA (2009) | 315 GB |
LSST (2019) | 30 TB |
TMT (2022) | 90 TB |
Class | X Center | Y Center | Radius |
---|---|---|---|
1 | 273 | 720 | 21.3 |
2 | 130 | 450 | 50.2 |
1 | 423 | 123 | 70.1 |
Class | X Center | Y Center | Width | Height |
---|---|---|---|---|
0 | 0.286 | 0.775 | 0.429 | 0.429 |
1 | 0.127 | 0.439 | 0.098 | 0.098 |
0 | 0.413 | 0.120 | 0.137 | 0.137 |
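A minimal sketch of the conversion between the two label formats tabulated above; the 1024 × 1024 px tile size and the shift to zero-based class indices are assumptions inferred from the example rows:

```python
def magellan_to_yolo(cls, x_center, y_center, radius, img_w=1024, img_h=1024):
    """Convert one Magellan label (pixel center plus radius) to YOLO format:
    zero-based class and center/size normalized to the image dimensions."""
    width = 2 * radius / img_w   # the box spans the full circle diameter
    height = 2 * radius / img_h
    return cls - 1, x_center / img_w, y_center / img_h, width, height

# Example: the Magellan row "1 273 720 21.3" becomes a normalized YOLO row.
print(magellan_to_yolo(1, 273, 720, 21.3))
```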
Hyperparameter | Value |
---|---|
Number of iterations | 100 |
Optimizer | SGD |
Input image resolution | 1536 × 1536 pixels |
Batch size | 8 |
lr0 | 0.01 |
lrf | 0.01 |
momentum | 0.937 |
weight_decay | 0.0005 |
warmup_epochs | 3.0 |
warmup_momentum | 0.8 |
warmup_bias_lr | 0.1 |
box | 0.05 |
cls | 0.5 |
cls_pw | 1.0 |
obj | 1.0 |
obj_pw | 1.0 |
iou_t | 0.2 |
anchor_t | 4.0 |
fl_gamma | 0.0 |
hsv_h | 0.015 |
hsv_s | 0.7 |
hsv_v | 0.4 |
degrees | 0.0 |
translate | 0.1 |
scale | 0.5 |
shear | 0.0 |
perspective | 0.0 |
flipud | 0.0 |
fliplr | 0.5 |
mosaic | 1.0 |
mixup | 0.0 |
copy_paste | 0.0 |
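The rows from lrf downward match the YOLOv5 default hyperparameter file; under that assumption, and reading the 100 iterations as training epochs, a training invocation consistent with this table would look roughly like the following (the dataset YAML and initial weights are placeholders):

```
python train.py --img 1536 --batch 8 --epochs 100 --data venus.yaml --weights yolov5s.pt
```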
Metric | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
---|---|---|---|---|---|
K1 Precision | 0.496 | 0.455 | 0.581 | 0.455 | 0.272 |
K1 Recall | 0.493 | 0.483 | 0.600 | 0.483 | 0.500 |
K1 mAP@0.5 | 0.421 | 0.425 | 0.598 | 0.488 | 0.317 |
K2 Precision | 0.252 | 0.294 | 0.249 | 0.294 | 0.232 |
K2 Recall | 0.265 | 0.421 | 0.500 | 0.421 | 0.241 |
K2 mAP@0.5 | 0.151 | 0.241 | 0.182 | 0.204 | 0.162 |
K3 Precision | 0.19 | 0.289 | 0.230 | 0.289 | 0.309 |
K3 Recall | 0.267 | 0.430 | 0.490 | 0.430 | 0.324 |
K3 mAP@0.5 | 0.107 | 0.241 | 0.281 | 0.194 | 0.304 |
K4 Precision | 0.231 | 0.348 | 0.364 | 0.348 | 0.364 |
K4 Recall | 0.505 | 0.456 | 0.444 | 0.456 | 0.446 |
K4 mAP@0.5 | 0.251 | 0.284 | 0.290 | 0.248 | 0.345 |
Precision (all) | 0.292 | 0.346 | 0.356 | 0.346 | 0.294 |
Recall (all) | 0.382 | 0.448 | 0.509 | 0.448 | 0.378 |
mAP@0.5 (all) | 0.233 | 0.298 | 0.338 | 0.284 | 0.282 |
F1-Score (all) | 0.320 | 0.390 | 0.410 | 0.360 | 0.330 |
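The F1 scores reported in the last row are conventionally the harmonic mean of precision and recall:

$$F_1 = \frac{2 \cdot P \cdot R}{P + R}$$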
Metric | Average Value | Standard Deviation |
---|---|---|
K1 Precision | 0.457 | 0.101 |
K1 Recall | 0.524 | 0.043 |
K1 mAP@0.5 | 0.449 | 0.092 |
K2 Precision | 0.260 | 0.021 |
K2 Recall | 0.356 | 0.09 |
K2 mAP@0.5 | 0.188 | 0.032 |
K3 Precision | 0.256 | 0.042 |
K3 Recall | 0.368 | 0.080 |
K3 mAP@0.5 | 0.225 | 0.070 |
K4 Precision | 0.316 | 0.054 |
K4 Recall | 0.449 | 0.036 |
K4 mAP@0.5 | 0.284 | 0.035 |
Precision (all) | 0.322 | 0.026 |
Recall (all) | 0.424 | 0.042 |
mAP@0.5 (all) | 0.284 | 0.033 |
F1-Score (all) | 0.362 | 0.034 |
Metric | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
---|---|---|---|---|---|
K1 Precision | 0.530 | 0.503 | 0.644 | 0.467 | 0.546 |
K1 Recall | 0.500 | 0.579 | 0.686 | 0.484 | 0.500 |
K1 mAP@0.5 | 0.503 | 0.473 | 0.643 | 0.450 | 0.531 |
K2 Precision | 0.271 | 0.228 | 0.178 | 0.338 | 0.237 |
K2 Recall | 0.309 | 0.382 | 0.250 | 0.446 | 0.276 |
K2 mAP@0.5 | 0.180 | 0.190 | 0.172 | 0.239 | 0.194 |
K3 Precision | 0.228 | 0.412 | 0.305 | 0.257 | 0.352 |
K3 Recall | 0.313 | 0.355 | 0.308 | 0.143 | 0.338 |
K3 mAP@0.5 | 0.144 | 0.328 | 0.246 | 0.113 | 0.256 |
K4 Precision | 0.310 | 0.471 | 0.364 | 0.237 | 0.444 |
K4 Recall | 0.434 | 0.353 | 0.318 | 0.400 | 0.373 |
K4 mAP@0.5 | 0.303 | 0.319 | 0.277 | 0.265 | 0.360 |
Precision (all) | 0.335 | 0.403 | 0.373 | 0.325 | 0.444 |
Recall (all) | 0.389 | 0.417 | 0.405 | 0.368 | 0.373 |
mAP@0.5 (all) | 0.283 | 0.327 | 0.335 | 0.267 | 0.335 |
F1-Score (all) | 0.360 | 0.410 | 0.390 | 0.340 | 0.410 |
Metric | Average Value | Standard Deviation |
---|---|---|
K1 Precision | 0.538 | 0.059 |
K1 Recall | 0.549 | 0.076 |
K1 mAP@0.5 | 0.520 | 0.067 |
K2 Precision | 0.250 | 0.055 |
K2 Recall | 0.333 | 0.072 |
K2 mAP@0.5 | 0.195 | 0.023 |
K3 Precision | 0.311 | 0.065 |
K3 Recall | 0.306 | 0.084 |
K3 mAP@0.5 | 0.217 | 0.078 |
K4 Precision | 0.375 | 0.072 |
K4 Recall | 0.376 | 0.039 |
K4 mAP@0.5 | 0.305 | 0.033 |
Precision (all) | 0.376 | 0.044 |
Recall (all) | 0.390 | 0.018 |
mAP@0.5 (all) | 0.309 | 0.028 |
F1-Score (all) | 0.362 | 0.034 |
Metric | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
---|---|---|---|---|---|
K1 Precision | 0.916 | 0.863 | 0.880 | 0.948 | 0.770 |
K1 Recall | 0.857 | 0.895 | 0.886 | 0.871 | 0.900 |
K1 mAP@0.5 | 0.906 | 0.932 | 0.924 | 0.927 | 0.846 |
K2 Precision | 0.961 | 0.933 | 0.892 | 0.870 | 0.874 |
K2 Recall | 0.857 | 0.750 | 0.731 | 0.823 | 0.717 |
K2 mAP@0.5 | 0.824 | 0.835 | 0.823 | 0.878 | 0.815 |
K3 Precision | 0.872 | 0.935 | 0.931 | 0.914 | 0.895 |
K3 Recall | 0.822 | 0.855 | 0.828 | 0.688 | 0.649 |
K3 mAP@0.5 | 0.873 | 0.900 | 0.886 | 0.792 | 0.762 |
K4 Precision | 0.892 | 0.928 | 0.885 | 0.907 | 0.847 |
K4 Recall | 0.778 | 0.757 | 0.590 | 0.728 | 0.554 |
K4 mAP@0.5 | 0.796 | 0.819 | 0.702 | 0.775 | 0.686 |
Precision (all) | 0.910 | 0.915 | 0.897 | 0.910 | 0.847 |
Recall (all) | 0.793 | 0.814 | 0.759 | 0.777 | 0.705 |
mAP@0.5 (all) | 0.850 | 0.872 | 0.834 | 0.843 | 0.777 |
F1-Score (all) | 0.850 | 0.860 | 0.820 | 0.840 | 0.760 |
Metric | Average Value | Standard Deviation |
---|---|---|
K1 Precision | 0.974 | 0.06 |
K1 Recall | 0.882 | 0.015 |
K1 mAP@0.5 | 0.907 | 0.032 |
K2 Precision | 0.906 | 0.035 |
K2 Recall | 0.747 | 0.039 |
K2 mAP@0.5 | 0.835 | 0.022 |
K3 Precision | 0.909 | 0.023 |
K3 Recall | 0.768 | 0.083 |
K3 mAP@0.5 | 0.842 | 0.055 |
K4 Precision | 0.891 | 0.026 |
K4 Recall | 0.681 | 0.009 |
K4 mAP@0.5 | 0.756 | 0.052 |
Precision (all) | 0.896 | 0.025 |
Recall (all) | 0.769 | 0.037 |
mAP@0.5 (all) | 0.835 | 0.032 |
F1-Score (all) | 0.826 | 0.036 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).