research-article

Direction Selective Contour Detection for Salient Objects

Published: 01 February 2019

Abstract

The active contour model is a widely used technique for automatic object contour extraction. Methods based on this model can achieve high accuracy even for complex contours, but challenging issues remain, such as the need for precise contour initialization around high-curvature boundary segments and the handling of cluttered backgrounds. To address these issues, this paper presents a salient object extraction method whose first step is an improved edge map that incorporates edge direction as a feature. Direction information is extracted in small neighborhoods of image feature points, and the image's dominant orientations are identified for direction-selective edge extraction. This improved edge information yields a highly accurate representation of the shape contour, which is then combined with texture features. The central principle of the paper is to interpret an object as the fusion of its components: its extracted contour and its inner texture. This fusion of textural and structural information serves two purposes: it drives automatic contour initialization, and it is used to establish an improved external force field. Together, these produce highly accurate salient object extractions. Extensive evaluations confirm that the presented method outperforms parametric active contour models and achieves higher efficiency than the majority of the evaluated automatic saliency methods.
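The core idea of a direction-selective edge map can be illustrated with a minimal sketch: estimate the image's dominant orientation from a gradient-magnitude-weighted orientation histogram, then keep only the strong gradient responses whose orientation agrees with it. This is an assumption-laden simplification for illustration, not the paper's algorithm (the function name, the cosine-squared alignment weight, and the quantile threshold are all choices made here, not taken from the source):

```python
import numpy as np

def direction_selective_edges(img, n_bins=36, keep_frac=0.25):
    """Illustrative sketch of a direction-selective edge map.

    Weights gradient edges by how well their orientation matches the
    image's dominant orientation, then keeps only the strongest
    direction-consistent responses.
    """
    # Image gradients along rows (gy) and columns (gx)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Edge orientation folded into [0, pi), since direction sign is irrelevant
    theta = np.mod(np.arctan2(gy, gx), np.pi)

    # Magnitude-weighted orientation histogram -> dominant orientation
    hist, bin_edges = np.histogram(theta, bins=n_bins, range=(0.0, np.pi),
                                   weights=mag)
    k = int(hist.argmax())
    dominant = 0.5 * (bin_edges[k] + bin_edges[k + 1])

    # Alignment weight: cos^2 of angular distance to the dominant orientation
    align = np.cos(theta - dominant) ** 2
    resp = mag * align

    # Keep only the top keep_frac of positive responses as the edge map
    positive = resp[resp > 0]
    thresh = np.quantile(positive, 1.0 - keep_frac) if positive.size else 0.0
    return resp * (resp >= thresh)
```

On a synthetic image containing a single vertical step edge, the response concentrates on the step columns, since their orientation coincides with the dominant one; in a real pipeline this orientation-filtered map would replace the plain gradient magnitude when building the external force field.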

Cited By

  • "A low-complexity residual deep neural network for image edge detection," Applied Intelligence, vol. 53, no. 9, pp. 11282–11299, Sep. 2022. doi:10.1007/s10489-022-04062-6
  • "Contour-Aware Loss: Boundary-Aware Learning for Salient Object Segmentation," IEEE Transactions on Image Processing, vol. 30, pp. 431–443, Jan. 2021. doi:10.1109/TIP.2020.3037536



Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 29, Issue 2
February 2019
318 pages

Publisher

IEEE Press


