
CN104504692B - Method for extracting salient objects in an image based on region contrast - Google Patents


Info

Publication number: CN104504692B
Authority: CN (China)
Prior art keywords: map, threshold, saliency, pixel, value
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201410781285.7A
Other languages: Chinese (zh)
Other versions: CN104504692A (en)
Inventors: 刘志, 叶林伟, 李君浩, 李利娜
Current Assignee: SHANGHAI UNIVERSITY (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: SHANGHAI UNIVERSITY

Events: application filed by SHANGHAI UNIVERSITY with priority to CN201410781285.7A; publication of CN104504692A; application granted; publication of CN104504692B; current status Expired - Fee Related; anticipated expiration.

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting salient objects in an image based on region contrast. Its specific steps are as follows: (1) input the original image, denoting its saliency map by S and its object probability map by O; (2) compute the fusion coefficient of the saliency map and the object probability map; (3) from the fusion coefficient, compute the region-contrast fusion map and extract the salient objects in the image. Because the method computes a region-contrast fusion map from both a saliency map and an object probability map, it extracts the salient objects in an image more accurately and completely than methods that use a saliency map or an object probability map alone.

Description

Extraction Method of Salient Objects in Images Based on Region Contrast

Technical Field

The invention relates to the technical fields of computer information and image processing, and in particular to a method for extracting salient objects in an image.

Background Art

According to research in psychology and human vision, when a person observes an image, attention is not distributed evenly over its regions; this gives rise to a saliency map corresponding to the degree of attention. In most cases an observer does not spread attention over the whole image but concentrates it on a particular object; such an object is called a salient object. Extracting salient objects automatically would greatly benefit applications such as image resizing, image recognition, and image retrieval. Salient-object extraction methods arose in this context: they aim to extract the salient object of an image accurately, with the background removed. For example, Rother et al. published "GrabCut: interactive foreground extraction using graph cuts" in ACM Transactions on Graphics in 2004. That method lets a user manually draw a rectangular window to designate a candidate salient-object region and then extracts the salient object with a graph-cut method. Because it requires manual input and can only define candidate regions with rectangular windows, its applicability is limited. Cheng et al. published "Global contrast based salient region detection" at the IEEE Conference on Computer Vision and Pattern Recognition in 2011. That method obtains a saliency map from global color contrast and spatial region contrast and then segments the image with an iterative GrabCut graph-cut method to extract the salient objects. Its specific steps are as follows:

(1) The saliency value of a pixel is defined by the color contrast between that pixel and the other pixels of the image; pixels with the same color are assigned the same saliency value.

(2) Colors that occur infrequently are discarded based on histogram statistics, and the saliency value of each color is replaced by a weighted average of the saliency values of similar colors.

(3) The image is divided into several regions with a graph-based segmentation method, and the spatial region contrast is computed from the Euclidean distance between the centroids of each pair of regions, yielding a saliency map.

(4) The saliency map is binarized with a fixed threshold, and the image is segmented with the GrabCut graph-cut method.

(5) The segmentation result is dilated and eroded to obtain a new map to be segmented, and the image is segmented again with the GrabCut method.

(6) Step (5) is repeated until convergence; the final result map contains the extracted salient objects.

Liu et al. published "Saliency tree: a novel saliency detection framework" in IEEE Transactions on Image Processing in 2014. That method represents small regions of the image as nodes of a tree structure and merges the original small regions by measuring global contrast, spatial sparsity, and object priority to generate a saliency map; finally, the saliency map is binarized with the maximum between-class-variance (Otsu) value to extract the salient objects. The method improves the accuracy of the saliency map, but its Otsu-based binarization still cannot completely extract multiple salient objects from an image. Alexe et al. published "Measuring the objectness of image windows" in IEEE Transactions on Pattern Analysis and Machine Intelligence in 2012. That paper proposed the concept of detecting objects with image windows, i.e. rectangular windows, together with its computation: the probability that each of a large number of rectangular windows contains an object is computed, and a Bayesian formula combines multiple cues to obtain the location probability of the region containing the salient object, yielding an object probability map. Its specific steps are as follows:

(1) Multi-scale saliency cues are obtained with the frequency-domain (spectral) residual method, and a large number of rectangular windows are generated.

(2) A color-contrast cue between rectangular windows is computed from the chi-square distance between color-space histograms.

(3) Boundaries are detected with the Canny operator to obtain an edge-density cue.

(4) The image is divided into several regions with a graph-based segmentation method, and a region-straddling cue is obtained from the minimal difference between the regions inside and outside each rectangular window.

(5) Location and size cues of the rectangular windows are estimated with Gaussian distributions.

(6) The cues obtained in steps (1)-(5) are integrated with a Bayesian formula to compute the location probability of the region containing the salient object, yielding the object probability map.

However, the above method only represents the location probability of the salient object with rectangular windows; it contains no accurate contour information of the salient object and therefore cannot accurately extract the salient object from the image.

In summary, existing methods for extracting salient objects in images cannot extract them accurately and completely, which limits the wide application of salient-object extraction.

Summary of the Invention

The object of the present invention is to remedy the defects of the prior art by proposing a method for extracting salient objects in an image based on region contrast, which can extract the salient objects in an image comparatively accurately and completely.

To achieve the above object, the technical scheme adopted by the present invention is as follows:

A method for extracting salient objects in an image based on region contrast, whose specific steps are as follows:

(1) Input the original image; denote the saliency map of the original image by S and its object probability map by O;

(2) Compute the fusion coefficient of the saliency map and the object probability map;

(3) From the fusion coefficient, compute the region-contrast fusion map and extract the salient objects in the image.

The specific steps of computing the fusion coefficient of the saliency map and the object probability map in step (2) are as follows:

(2-1) Define the first threshold map S_f of the saliency map, its second threshold map S_o, its third threshold map S_a, and the threshold map O_f of the object probability map, as follows:

Normalize the saliency values of the pixels of the saliency map to [0, 1]. Set three different thresholds: the first threshold, denoted T_f, with T_f = 0.75; the second threshold, the maximum between-class-variance (Otsu) value of the saliency map, denoted T_o; and the third threshold, the mean saliency value of the saliency map, denoted T_a. In the saliency map, pixels whose saliency value is greater than or equal to a given threshold are set to 1 and the remaining pixels to 0, yielding the threshold map corresponding to that threshold.

The threshold map corresponding to the first threshold T_f is the first threshold map of the saliency map, denoted S_f; the threshold map corresponding to the second threshold T_o is the second threshold map, denoted S_o; and the threshold map corresponding to the third threshold T_a is the third threshold map, denoted S_a. In the object probability map O, pixels whose object probability value is greater than or equal to the first threshold T_f are set to 1 and the remaining pixels to 0, yielding the threshold map O_f of the object probability map;

(2-2) Add the pixel-overlap ratio between the first and second threshold maps of the saliency map to the pixel-overlap ratio between its first and third threshold maps; the sum, denoted Q(S), is computed as

Q(S) = \frac{\sum_{p \in S_f} S_f(p)}{\sum_{p \in S_o} S_o(p)} + \frac{\sum_{p \in S_f} S_f(p)}{\sum_{p \in S_a} S_a(p)}    (1)

where p denotes a pixel of a threshold map; p ∈ S_f denotes a pixel of the first threshold map of the saliency map and S_f(p) the pixel value of pixel p in that map; p ∈ S_o denotes a pixel of the second threshold map and S_o(p) the pixel value of pixel p in that map; and p ∈ S_a denotes a pixel of the third threshold map and S_a(p) the pixel value of pixel p in that map;

(2-3) Compute the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, denoted r(S_f, O_f):

r(S_f, O_f) = \frac{S_f \cap O_f}{S_f \cup O_f}    (2)

where S_f ∩ O_f denotes the intersection of S_f and O_f, S_f ∪ O_f denotes their union, and r(S_f, O_f) denotes the overlap area of the two threshold maps;

(2-4) Let D be the diagonal length of the original image, and compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map, denoted d(S_f, O_f):

d(S_f, O_f) = \frac{\lVert \mu_S - \mu_O \rVert_2}{D}    (3)

where μ_S denotes the centroid of the first threshold map of the saliency map, μ_O denotes the centroid of the threshold map O_f of the object probability map, and ‖·‖_2 is the Euclidean distance from μ_S to μ_O;

(2-5) From the sum of the pixel-overlap ratios of the threshold maps, the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, and the centroid distance between the centroids of those two maps, compute the fusion coefficient of the saliency map and the object probability map, denoted IS:

IS = \frac{1}{4}\left\{Q(S) + r(S_f, O_f) + \left[1 - d(S_f, O_f)\right]\right\}    (4)

The specific steps of step (3), computing the region-contrast fusion map from the fusion coefficient and extracting the salient objects in the image, are as follows:

(3-1) Using the fusion coefficient of the saliency map and the object probability map, fuse the saliency map and the object probability map to compute the region-contrast fusion map:

(5)

where S is the saliency map, max(·, 1) takes the larger of its argument and 1, and exp denotes the exponential function; the remaining terms of equation (5) treat separately the pixels of the object probability map whose probability values are greater than or equal to, less than or equal to, or between the corresponding thresholds, and the result is the region-contrast fusion map;

(3-2) Set the fourth threshold to 0.2. Pixels of the region-contrast fusion map greater than or equal to the fourth threshold are set to 1 and the remaining pixels to 0, yielding the threshold map corresponding to the region-contrast fusion map; the image is then segmented with the GrabCut graph-cut method to extract the salient objects.

Compared with the prior art, the method of the present invention for extracting salient objects in an image based on region contrast has the following advantage: it computes a region-contrast fusion map from a saliency map and an object probability map and extracts the salient objects from that fusion map, and therefore extracts the salient objects in an image more accurately and completely than methods that use a saliency map or an object probability map alone.

Brief Description of the Drawings

Fig. 1 is a flow chart of the method of the present invention for extracting salient objects in an image based on region contrast;

Fig. 2(a) is the input original image;

Fig. 2(b) is the saliency map of the original image;

Fig. 2(c) is the object probability map of the original image;

Fig. 3(a) is the region-contrast fusion map;

Fig. 3(b) shows the salient objects extracted by cutting the threshold map of the region-contrast fusion map with GrabCut.

Detailed Description of the Embodiments

Embodiments of the present invention are described in further detail below in conjunction with the accompanying drawings.

The simulation experiments of the present invention were programmed on a PC test platform with a 3.5 GHz CPU and 16 GB of memory.

As shown in Fig. 1, the specific steps of the method of the present invention for extracting salient objects in an image based on region contrast are as follows:

(1) Input the original image, its saliency map, and its object probability map, as follows:

Input the original image, shown in Fig. 2(a); its saliency map, denoted S, is shown in Fig. 2(b); its object probability map, denoted O, is shown in Fig. 2(c);

(2) Compute the fusion coefficient of the saliency map and the object probability map, as follows:

(2-1) Define the first threshold map S_f of the saliency map, its second threshold map S_o, its third threshold map S_a, and the threshold map O_f of the object probability map, as follows:

Normalize the saliency values of the pixels of the saliency map to [0, 1]. Set three different thresholds: the first threshold, denoted T_f, with T_f = 0.75; the second threshold, the maximum between-class-variance (Otsu) value of the saliency map, denoted T_o; and the third threshold, the mean saliency value of the saliency map, denoted T_a. In the saliency map, pixels whose saliency value is greater than or equal to a given threshold are set to 1 and the remaining pixels to 0, yielding the threshold map corresponding to that threshold.

The threshold map corresponding to the first threshold T_f is the first threshold map of the saliency map, denoted S_f; the threshold map corresponding to the second threshold T_o is the second threshold map, denoted S_o; and the threshold map corresponding to the third threshold T_a is the third threshold map, denoted S_a.

In the object probability map O, pixels whose object probability value is greater than or equal to the first threshold T_f are set to 1 and the remaining pixels to 0, yielding the threshold map O_f of the object probability map;
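As a concrete illustration of step (2-1), the sketch below builds the four binary threshold maps with NumPy. The function and variable names (`threshold_maps`, `otsu_threshold`) are illustrative, and the Otsu computation is a standard re-implementation, not code from the patent.

```python
# Illustrative sketch of step (2-1): build S_f, S_o, S_a and O_f as
# binary {0,1} maps.  S and O are assumed to be 2-D float arrays.
import numpy as np

def otsu_threshold(s, bins=256):
    """Maximum between-class-variance (Otsu) threshold for values in [0,1]."""
    hist, edges = np.histogram(s.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                  # weight of the background class
    mu = np.cumsum(p * centers)        # cumulative mean
    mu_t = mu[-1]                      # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    var_b = np.zeros_like(w0)
    var_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(var_b)]

def threshold_maps(S, O, t_f=0.75):
    """Return (S_f, S_o, S_a, O_f) of step (2-1)."""
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)  # normalize to [0,1]
    t_o = otsu_threshold(S)   # second threshold: maximum between-class variance
    t_a = S.mean()            # third threshold: mean saliency value
    binarize = lambda img, t: (img >= t).astype(np.uint8)
    return binarize(S, t_f), binarize(S, t_o), binarize(S, t_a), binarize(O, t_f)
```

Note that, following the text, only the saliency map is normalized; the object probability map O is thresholded at T_f directly.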

(2-2) Add the pixel-overlap ratio between the first and second threshold maps of the saliency map to the pixel-overlap ratio between its first and third threshold maps; the sum, denoted Q(S), is computed as

Q(S) = \frac{\sum_{p \in S_f} S_f(p)}{\sum_{p \in S_o} S_o(p)} + \frac{\sum_{p \in S_f} S_f(p)}{\sum_{p \in S_a} S_a(p)}    (1)

where p denotes a pixel of a threshold map; p ∈ S_f denotes a pixel of the first threshold map of the saliency map and S_f(p) the pixel value of pixel p in that map; p ∈ S_o denotes a pixel of the second threshold map and S_o(p) the pixel value of pixel p in that map; and p ∈ S_a denotes a pixel of the third threshold map and S_a(p) the pixel value of pixel p in that map;
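Since the threshold maps are binary {0, 1} arrays, the sums in equation (1) are simply pixel counts, so Q(S) reduces to a sum of two count ratios. A minimal sketch with illustrative names, assuming non-empty S_o and S_a:

```python
# Illustrative sketch of equation (1): for binary maps the sums are pixel
# counts, so Q(S) is a sum of two count ratios.
import numpy as np

def overlap_ratio_sum(S_f, S_o, S_a):
    n_f = float(S_f.sum())     # number of pixels set to 1 in S_f
    return n_f / S_o.sum() + n_f / S_a.sum()
```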

(2-3) Compute the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, denoted r(S_f, O_f):

r(S_f, O_f) = \frac{S_f \cap O_f}{S_f \cup O_f}    (2)

where S_f ∩ O_f denotes the intersection of S_f and O_f, S_f ∪ O_f denotes their union, and r(S_f, O_f) denotes the overlap area of the two threshold maps;
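The overlap area of equation (2) is the familiar intersection-over-union of two binary maps; a minimal NumPy sketch with illustrative names:

```python
# Illustrative sketch of equation (2): intersection over union of the two
# binary threshold maps.
import numpy as np

def overlap_area(S_f, O_f):
    inter = np.logical_and(S_f, O_f).sum()
    union = np.logical_or(S_f, O_f).sum()
    return inter / union
```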

(2-4) Let D be the diagonal length of the original image, and compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map, denoted d(S_f, O_f):

d(S_f, O_f) = \frac{\lVert \mu_S - \mu_O \rVert_2}{D}    (3)

where μ_S denotes the centroid of the first threshold map of the saliency map, μ_O denotes the centroid of the threshold map O_f of the object probability map, and ‖·‖_2 is the Euclidean distance from μ_S to μ_O;
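Equation (3) can be sketched as follows, assuming binary NumPy masks of the same shape and taking each centroid as the mean coordinate of the nonzero pixels; the helper names are illustrative:

```python
# Illustrative sketch of equation (3): distance between the centroids of the
# two binary maps, normalized by the image diagonal D.
import numpy as np

def normalized_centroid_distance(S_f, O_f):
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])
    h, w = S_f.shape
    D = np.hypot(h, w)                 # diagonal length of the image
    return np.linalg.norm(centroid(S_f) - centroid(O_f)) / D
```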

(2-5) From the sum of the pixel-overlap ratios of the threshold maps, the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, and the centroid distance between the centroids of those two maps, compute the fusion coefficient of the saliency map and the object probability map, denoted IS:

IS = \frac{1}{4}\left\{Q(S) + r(S_f, O_f) + \left[1 - d(S_f, O_f)\right]\right\}    (4)
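Equation (4) is then a plain arithmetic combination of the three quantities, with the centroid distance entering as (1 - d); a one-line sketch with illustrative names:

```python
# Illustrative sketch of equation (4): the fusion coefficient IS averages the
# three cues Q(S), r(S_f,O_f) and [1 - d(S_f,O_f)] with a factor of 1/4.
def fusion_coefficient(Q, r, d):
    return (Q + r + (1.0 - d)) / 4.0
```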

(3) From the fusion coefficient, compute the region-contrast fusion map and extract the salient objects in the image, as follows:

(3-1) Using the fusion coefficient of the saliency map and the object probability map, fuse the saliency map and the object probability map to compute the region-contrast fusion map:

(5)

where S is the saliency map, max(·, 1) takes the larger of its argument and 1, and exp denotes the exponential function; the remaining terms of equation (5) treat separately the pixels of the object probability map whose probability values are greater than or equal to, less than or equal to, or between the corresponding thresholds, and the result is the region-contrast fusion map, shown in Fig. 3(a). Compared with the saliency map in Fig. 2(b), the region-contrast fusion map in Fig. 3(a) highlights the torso of the salient object more completely; compared with the object probability map in Fig. 2(c), it delineates the contour of the salient object more accurately;

(3-2) Set the fourth threshold to 0.2. Pixels of the region-contrast fusion map greater than or equal to the fourth threshold are set to 1 and the remaining pixels to 0, yielding the threshold map corresponding to the region-contrast fusion map; the image is then segmented with the GrabCut graph-cut method to extract the salient objects, as shown in Fig. 3(b).
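Step (3-2) can be sketched as below; only the fourth-threshold binarization is shown, and the subsequent GrabCut segmentation (for instance via OpenCV's cv2.grabCut, seeded with this mask) is left as a comment since it requires the original color image:

```python
# Illustrative sketch of step (3-2): binarize the region-contrast fusion map
# at the fourth threshold (0.2).  The resulting {0,1} mask would then seed an
# iterative GrabCut segmentation (e.g. cv2.grabCut on the original color
# image), which is omitted here.
import numpy as np

def fusion_threshold_mask(F, t4=0.2):
    return (F >= t4).astype(np.uint8)
```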

The above simulation results show that the method of the present invention, which obtains a region-contrast fusion map from the saliency map and the object probability map and extracts the salient objects from that fusion map, extracts the salient objects more accurately and completely.

Claims (2)

1. A method for extracting salient objects in an image based on region contrast, characterized in that its specific steps are as follows:

(1) Input the original image; denote the saliency map of the original image by S and its object probability map by O;

(2) Compute the fusion coefficient of the saliency map and the object probability map, as follows:

(2-1) Define the first threshold map S_f of the saliency map, its second threshold map S_o, its third threshold map S_a, and the threshold map O_f of the object probability map, as follows:

normalize the saliency values of the pixels of the saliency map to [0, 1]; set three different thresholds: the first threshold, denoted T_f, with T_f = 0.75; the second threshold, the maximum between-class-variance (Otsu) value of the saliency map, denoted T_o; and the third threshold, the mean saliency value of the saliency map, denoted T_a; in the saliency map, pixels whose saliency value is greater than or equal to a given threshold are set to 1 and the remaining pixels to 0, yielding the threshold map corresponding to that threshold;

the threshold map corresponding to the first threshold T_f is the first threshold map of the saliency map, denoted S_f; the threshold map corresponding to the second threshold T_o is the second threshold map, denoted S_o; the threshold map corresponding to the third threshold T_a is the third threshold map, denoted S_a; in the object probability map O, pixels whose object probability value is greater than or equal to the first threshold T_f are set to 1 and the remaining pixels to 0, yielding the threshold map O_f of the object probability map;

(2-2) add the pixel-overlap ratio between the first and second threshold maps of the saliency map to the pixel-overlap ratio between its first and third threshold maps; the sum, denoted Q(S), is computed as

Q(S) = \frac{\sum_{p \in S_f} S_f(p)}{\sum_{p \in S_o} S_o(p)} + \frac{\sum_{p \in S_f} S_f(p)}{\sum_{p \in S_a} S_a(p)}    (1)

where p denotes a pixel of a threshold map, p ∈ S_f denotes a pixel of the first threshold map of the saliency map and S_f(p) the pixel value of pixel p in that map, p ∈ S_o denotes a pixel of the second threshold map and S_o(p) the pixel value of pixel p in that map, and p ∈ S_a denotes a pixel of the third threshold map and S_a(p) the pixel value of pixel p in that map;

(2-3) compute the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, denoted r(S_f, O_f):

r(S_f, O_f) = \frac{S_f \cap O_f}{S_f \cup O_f}    (2)

where S_f ∩ O_f denotes the intersection of S_f and O_f, S_f ∪ O_f denotes their union, and r(S_f, O_f) denotes the overlap area of the two threshold maps;

(2-4) let D be the diagonal length of the original image, and compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map, denoted d(S_f, O_f):

d(S_f, O_f) = \frac{\lVert \mu_S - \mu_O \rVert_2}{D}    (3)

where μ_S denotes the centroid of the first threshold map of the saliency map, μ_O denotes the centroid of the threshold map O_f of the object probability map, and ‖·‖_2 is the Euclidean distance from μ_S to μ_O;

(2-5) from the sum of the pixel-overlap ratios of the threshold maps, the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, and the centroid distance between the centroids of those two maps, compute the fusion coefficient of the saliency map and the object probability map, denoted IS:

IS = \frac{1}{4}\left\{Q(S) + r(S_f, O_f) + \left[1 - d(S_f, O_f)\right]\right\}    (4)

(3) From the fusion coefficient, compute the region-contrast fusion map and extract the salient objects in the image.

2. The method for extracting salient objects in an image based on region contrast according to claim 1, characterized in that the specific steps of step (3), computing the region-contrast fusion map from the fusion coefficient and extracting the salient objects in the image, are as follows:
the method for extracting salient objects in the image based on regional contrast according to claim 1, is characterized in that, according to fusion coefficient described in above-mentioned step (3), calculates regional contrast fusion map, extracts the salient object in the image, The specific steps are as follows: (3-1),根据显著性图和对象概率图的融合系数,融合显著性图和对象概率图,计算区域对比度融合图,记为Sfusion,其计算式为:(3-1), according to the fusion coefficient of the saliency map and the object probability map, the saliency map and the object probability map are fused to calculate the regional contrast fusion map, which is denoted as S fusion , and its calculation formula is: Sfusion=S+αfO(p|O(p)≥Tf)+αbO(p|O(p)≤Tb)+O(p|Tb<O(p)<Tf) (5)S fusion =S+α f O(p|O(p)≥T f )+α b O(p|O(p)≤T b )+O(p|T b <O(p)<T f ) (5) 其中S是显著性图,Tf是上述步骤(2-1)中的第一阈值,Ta是上述步骤(2-1)中的第三阈值,IS是上述步骤(2-5)中的显著性图和对象概率图融合系数,αf=max(αb+0.7,1)表示αf取αb+0.7与1之间的最大数,αb=exp(-IS),exp表示e指数,O(p|O(p)≥Tf)表示对象概率图中像素概率值大于等于Tf的像素的像素值,O(p|O(p)≤Tb)表示对象概率图中像素概率值小于等于Tb的像素的像素值,O(p|Tb<O(p)<Tf)表示对象概率图中像素概率值介于Tf、Tb之间的像素的像素值,Sfusion表示区域对比度融合图;where S is the saliency map, T f is the first threshold in the above step (2-1), T a is the third threshold in the above step (2-1), IS is the Fusion coefficient of saliency map and object probability map, α f =max(α b +0.7,1) means α f takes the maximum number between α b +0.7 and 1, α b =exp(-IS), exp means e Index, O(p|O(p)≥T f ) represents the pixel value of the pixel whose probability value is greater than or equal to T f in the object probability map, O(p|O(p)≤T b ) represents the pixel in the object probability map The pixel value of the pixel whose probability value is less than or equal to T b , O(p|T b <O(p)<T f ) means the pixel value of the pixel whose probability value of the pixel in the object probability map is between T f and T b , S fusion represents the regional contrast fusion map; (3-2),设第四阈值为0.2,区域对比度融合图Sfusion大于等于第四阈值的像素的像素值记为1,小于第四阈值的像素的像素值记为0,得到区域对比度融合图对应的阈值图,再采用Grabcut图切割方法进行图像分割,提取图像中的显著对象。(3-2), assuming 
that the fourth threshold is 0.2, the pixel value of the pixel in the regional contrast fusion map S fusion greater than or equal to the fourth threshold is recorded as 1, and the pixel value of the pixel smaller than the fourth threshold is recorded as 0, and the regional contrast fusion is obtained The threshold value map corresponding to the graph, and then use the Grabcut graph cutting method to segment the image and extract the salient objects in the image.
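The fusion-coefficient computation of claim 1, steps (2-1) through (2-5), can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the patented implementation: the maps are assumed to be 2-D float arrays already normalized to [0, 1], and the `otsu_threshold` helper is an illustrative stand-in for the "maximum between-class variance" threshold named in step (2-1).

```python
import numpy as np

def threshold_map(m, t):
    """Binarize a map: pixel value 1 where m >= t, else 0 (step 2-1)."""
    return (m >= t).astype(np.uint8)

def otsu_threshold(m, bins=256):
    """Maximum between-class variance (Otsu) threshold for a [0, 1] map."""
    hist, edges = np.histogram(m, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)               # weight of the low class per candidate cut
    mu = np.cumsum(p * centers)     # cumulative mean
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b, nan=0.0, posinf=0.0)
    return centers[np.argmax(sigma_b)]

def fusion_coefficient(S, O, Tf=0.75):
    """Fusion coefficient IS of saliency map S and object probability map O (Eq. 4)."""
    To, Ta = otsu_threshold(S), S.mean()          # second and third thresholds
    Sf, So, Sa = (threshold_map(S, t) for t in (Tf, To, Ta))
    Of = threshold_map(O, Tf)
    # Eq. (1): sum of overlap ratios of Sf against So and against Sa
    Q = Sf.sum() / So.sum() + Sf.sum() / Sa.sum()
    # Eq. (2): intersection-over-union of Sf and Of
    r = np.logical_and(Sf, Of).sum() / np.logical_or(Sf, Of).sum()
    # Eq. (3): centroid distance normalized by the image diagonal D
    D = np.hypot(*S.shape)
    mu_S = np.array(np.nonzero(Sf)).mean(axis=1)
    mu_O = np.array(np.nonzero(Of)).mean(axis=1)
    d = np.linalg.norm(mu_S - mu_O) / D
    # Eq. (4)
    return (Q + r + (1.0 - d)) / 4.0
```

The resulting IS then drives the fusion of Eq. (5) in claim 2, where the object probability map is weighted piecewise with α_b = exp(−IS) and α_f = max(α_b + 0.7, 1) before Grabcut segmentation; that final step is omitted here.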
CN201410781285.7A 2014-12-17 2014-12-17 The extracting method of notable object in image based on region contrast Expired - Fee Related CN104504692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410781285.7A CN104504692B (en) 2014-12-17 2014-12-17 The extracting method of notable object in image based on region contrast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410781285.7A CN104504692B (en) 2014-12-17 2014-12-17 The extracting method of notable object in image based on region contrast

Publications (2)

Publication Number Publication Date
CN104504692A CN104504692A (en) 2015-04-08
CN104504692B true CN104504692B (en) 2017-06-23

Family

ID=52946086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410781285.7A Expired - Fee Related CN104504692B (en) 2014-12-17 2014-12-17 The extracting method of notable object in image based on region contrast

Country Status (1)

Country Link
CN (1) CN104504692B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978B (en) * 2016-09-24 2020-10-30 上海大学 A method for salient object detection in unconstrained video combined with similarity
CN106886995B (en) 2017-01-13 2019-09-20 北京航空航天大学 Image Salient Object Segmentation with Multilinear Example Regressor Aggregation
CN107730564A (en) * 2017-09-26 2018-02-23 上海大学 A kind of image edit method based on conspicuousness
CN108428240B (en) * 2018-03-08 2022-03-25 南京大学 A salient object segmentation method adaptive to input information

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208125A (en) * 2013-03-14 2013-07-17 上海大学 Visual salience algorithm of color and motion overall contrast in video frame image

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208125A (en) * 2013-03-14 2013-07-17 上海大学 Visual salience algorithm of color and motion overall contrast in video frame image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Fusing Generic Objectness and Visual Saliency for Salient Object Detection; Kai-Yueh Chang et al.; 2011 IEEE International Conference on Computer Vision; 2011-11-13; Sections 1, 3, 5, Figures 1, 3, 6 *
Learning to Detect a Salient Object; Tie Liu et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; Feb. 2011; Vol. 33, No. 2, pp. 353-367 *
Measuring the objectness of image windows; Bogdan Alexe et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2012; Vol. 34, No. 11, pp. 1-14 *
Salient Region Detection by UFO: Uniqueness, Focusness and Objectness; Peng Jiang et al.; ICCV 2013; 2013; pp. 1976-1983 *

Also Published As

Publication number Publication date
CN104504692A (en) 2015-04-08

Similar Documents

Publication Publication Date Title
CN105261017B The method that image segmentation based on road surface constraint extracts pedestrian's area-of-interest
CN105894502B (en) RGBD image saliency detection method based on hypergraph model
CN111145209B (en) Medical image segmentation method, device, equipment and storage medium
CN104156693B (en) A kind of action identification method based on the fusion of multi-modal sequence
WO2017190656A1 (en) Pedestrian re-recognition method and device
CN107330897B (en) Image segmentation method and system thereof
CN103577875B (en) A kind of area of computer aided CAD demographic method based on FAST
CN108629286B (en) Remote sensing airport target detection method based on subjective perception significance model
CN107369158B (en) Indoor scene layout estimation and target area extraction method based on RGB-D images
CN102103690A (en) Method for automatically portioning hair area
CN103679154A (en) Three-dimensional gesture action recognition method based on depth images
CN107527054B (en) Foreground automatic extraction method based on multi-view fusion
CN106327507A (en) Color image significance detection method based on background and foreground information
CN104778464A (en) Garment positioning and detecting method based on depth convolution nerve network
CN107103303A (en) A kind of pedestrian detection method based on GMM backgrounds difference and union feature
CN103413303A (en) Infrared target segmentation method based on joint obviousness
CN103093470A (en) Rapid multi-modal image synergy segmentation method with unrelated scale feature
CN103106409A (en) Composite character extraction method aiming at head shoulder detection
CN104504692B (en) The extracting method of notable object in image based on region contrast
CN106446890A (en) Candidate area extraction method based on window scoring and superpixel segmentation
CN108090485A (en) Display foreground extraction method based on various visual angles fusion
CN105761260A (en) Skin image affected part segmentation method
CN105809089A (en) Multi-face detection method and device under complex background
CN114565675A (en) A method for removing dynamic feature points in the front end of visual SLAM
CN108133218A (en) Infrared target detection method, equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170623

Termination date: 20211217
