
Saliency-based classification of objects in unconstrained underwater environments

Published in Multimedia Tools and Applications


Abstract

Exploration of the deep-sea environment is a challenging and non-trivial task. Underwater vehicles used to explore such environments capture video continuously, and processing these videos is a major bottleneck for scientific research in this area. This paper presents a methodology for classifying objects in unconstrained underwater environments into two broad classes: man-made and natural. Object contours are extracted using saliency-gradient-based morphological active contour models, and a bag of features computed from these contours is used for classification with various classifiers. Principal Component Analysis is applied to remove redundancy from the feature set. The results show that all the classifiers performed well, with KNN and ensemble subspace KNN performing marginally better.
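The abstract only outlines the pipeline, so the following is a minimal, hedged Python sketch of the general approach it describes: an edge-stopping map drives a morphological geodesic active contour, and simple shape descriptors are computed from the resulting contour. The edge-stopping map, the scikit-image routines, the parameter values, and the particular shape descriptors are stand-ins chosen for illustration, not the authors' saliency model or feature set.

```python
# Hypothetical sketch: contour extraction with a morphological geodesic
# active contour, standing in for the paper's saliency-gradient-based model.
import numpy as np
from skimage import io, color, img_as_float
from skimage.measure import label, regionprops
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def extract_contour_features(frame_rgb):
    """Segment the dominant object in a frame and return simple shape features."""
    gray = color.rgb2gray(img_as_float(frame_rgb))

    # Edge-stopping map; here a plain inverse Gaussian gradient is used as a
    # placeholder for the paper's saliency gradient.
    gimage = inverse_gaussian_gradient(gray, alpha=100.0, sigma=3.0)

    # Morphological geodesic active contour, shrinking from the default
    # initial level set toward strong edges (all parameters are assumptions).
    level_set = morphological_geodesic_active_contour(
        gimage, 200, smoothing=2, balloon=-1)

    # Keep the largest segmented region and compute contour/shape descriptors.
    regions = regionprops(label(level_set.astype(int)))
    if not regions:
        return None
    r = max(regions, key=lambda p: p.area)
    return np.array([
        r.area,
        r.perimeter,
        r.eccentricity,
        r.solidity,
        4.0 * np.pi * r.area / (r.perimeter ** 2 + 1e-9),  # circularity
    ])

if __name__ == "__main__":
    frame = io.imread("frame_0001.png")  # hypothetical video frame
    print(extract_contour_features(frame))
```

Applied frame by frame, such contour descriptors would form the feature vectors that are then reduced with PCA and fed to the classifiers compared in the paper.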




Acknowledgments

Nitin Kumar is thankful to CSIR-CSIO, Chandigarh, for providing the funding and the opportunity to carry out this work under the UnWaR grant. The authors gratefully acknowledge ONC for providing the underwater videos used in this research.

Author information


Corresponding author

Correspondence to H. K. Sardana.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

This appendix shows examples of the man-made objects used to train the classifiers.
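As a rough illustration of how such contour-derived features might be used to train and compare the classifiers mentioned in the abstract, the sketch below applies PCA for redundancy removal followed by a plain KNN classifier and a random-subspace KNN ensemble (built here with scikit-learn's BaggingClassifier over feature subsets). The feature matrix, labels, and all parameter values are placeholders, not the paper's data or settings.

```python
# Hypothetical sketch: PCA for redundancy removal, then KNN and a
# random-subspace KNN ensemble, mirroring the classifier comparison
# described in the abstract. X and y below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))        # placeholder contour-feature vectors
y = rng.integers(0, 2, size=200)      # placeholder labels: 0 natural, 1 man-made

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    # Random-subspace ensemble: each KNN learner sees a random half of the features.
    "Subspace KNN": BaggingClassifier(
        KNeighborsClassifier(n_neighbors=5),
        n_estimators=30, max_features=0.5,
        bootstrap=False, bootstrap_features=True, random_state=0),
}

for name, clf in classifiers.items():
    # Standardise, keep components explaining 95% of variance, then classify.
    model = make_pipeline(StandardScaler(), PCA(n_components=0.95), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```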



About this article


Cite this article

Kumar, N., Sardana, H.K., Shome, S.N. et al. Saliency-based classification of objects in unconstrained underwater environments. Multimed Tools Appl 79, 25835–25851 (2020). https://doi.org/10.1007/s11042-020-09221-w


