CN104504692B - The extracting method of notable object in image based on region contrast - Google Patents
- Publication number
- CN104504692B CN104504692B CN201410781285.7A CN201410781285A CN104504692B CN 104504692 B CN104504692 B CN 104504692B CN 201410781285 A CN201410781285 A CN 201410781285A CN 104504692 B CN104504692 B CN 104504692B
- Authority
- CN
- China
- Prior art keywords
- map
- threshold
- saliency
- pixel
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses a method for extracting salient objects from an image based on region contrast, comprising the following specific steps: (1) input the original image and obtain its saliency map and its object probability map; (2) compute the fusion coefficient of the saliency map and the object probability map; (3) compute the region-contrast fusion map from the fusion coefficient and extract the salient objects from the image. Because the method uses both a saliency map and an object probability map to compute the region-contrast fusion map from which the salient objects are extracted, it extracts the salient objects in an image more accurately and completely than methods that use only a saliency map or only an object probability map.
Description
Technical Field
The invention relates to the technical fields of computer information and image processing, and in particular to a method for extracting salient objects from an image.
Background Art
According to research in psychology and human vision, when a person observes an image, attention is not distributed evenly over all regions of the image; the resulting pattern of attention corresponds to a saliency map. In most cases, an observer does not spread attention evenly over the whole image but concentrates it on a particular object; such an object is called a salient object. Extracting salient objects automatically would greatly benefit applications such as image retargeting, image recognition, and image retrieval. Salient-object extraction methods arose in this context: they aim to extract the salient object of an image accurately, with the background removed. For example, Rother et al. published "GrabCut: interactive foreground extraction using iterated graph cuts" in ACM Transactions on Graphics in 2004; that method has a user manually draw a rectangular window to designate a candidate salient-object region and then extracts the salient object with a graph-cut method. Because it requires manual input and can define candidate regions only with rectangular windows, its wide application is limited. Cheng et al. published "Global contrast based salient region detection" at the IEEE Conference on Computer Vision and Pattern Recognition in 2011; that method obtains a saliency map from global color contrast and spatial region contrast and then segments the image with an iterative GrabCut procedure guided by the saliency map to extract the salient objects. Its specific steps are as follows:
(1) The saliency value of a pixel is defined by the color contrast between that pixel and the other pixels of the image; pixels with the same color are assigned the same saliency value.
(2) Colors that occur infrequently, according to histogram statistics, are discarded, and the saliency value of each remaining color is replaced by a weighted average of the saliency values of similar colors.
(3) A graph-based image segmentation method divides the image into regions; the spatial region contrast is computed from the Euclidean distance between the centroids of each pair of regions, yielding the saliency map.
(4) The saliency map is binarized with a fixed threshold, and the image is segmented with the GrabCut method.
(5) Dilation and erosion are applied to the segmentation result to obtain a new map to be segmented, and GrabCut is applied again.
(6) Step (5) is repeated until convergence; the final result is the extracted salient object.
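As an illustration of the histogram-contrast idea in steps (1)-(2) above, the sketch below scores each quantized color by a count-weighted sum of its distances to the other colors; the quantization level and the distance metric here are simplifying assumptions, not the exact choices of the cited method.

```python
def color_contrast_saliency(pixels, levels=4):
    """Histogram-contrast saliency sketch: quantize colors, then score each
    color by its count-weighted distance to every other color (steps 1-2)."""
    # Quantize each (r, g, b) value in 0..255 down to `levels` bins per channel.
    def quantize(p):
        return tuple(c * levels // 256 for c in p)

    counts = {}
    for p in pixels:
        q = quantize(p)
        counts[q] = counts.get(q, 0) + 1

    def dist(a, b):  # Euclidean distance in the quantized color space
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Saliency of a color = sum over the other colors of (count * distance),
    # so pixels of a rare color that differs strongly from the rest score high.
    saliency = {
        c: sum(n * dist(c, o) for o, n in counts.items() if o != c)
        for c in counts
    }
    # Per-pixel saliency, normalized to [0, 1].
    raw = [saliency[quantize(p)] for p in pixels]
    hi = max(raw) or 1.0
    return [v / hi for v in raw]
```

On an image of 90 red and 10 blue pixels, the blue pixels receive the higher saliency, since the contrast of a color is weighted by the counts of the colors it differs from.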
Liu et al. published "Saliency tree: a novel saliency detection framework" in the IEEE Transactions on Image Processing in 2014. That method represents small image regions as nodes of a tree, merges the original small regions by measuring global contrast, spatial sparsity, and object priority to generate a saliency map, and finally binarizes the saliency map with the maximum between-class variance (Otsu) threshold to extract the salient objects. The method improves the accuracy of the saliency map, but its Otsu-based extraction step still cannot completely extract multiple salient objects from an image. Alexe et al. published "Measuring the objectness of image windows" in the IEEE Transactions on Pattern Analysis and Machine Intelligence in 2012. That paper proposed the concept of detecting objects with image windows, i.e., rectangular windows, and a way to compute it: by evaluating the probability that each of a large number of rectangular windows contains an object and combining multiple cues with Bayes' rule, the method estimates the positional probability of the region containing a salient object and obtains an object probability map. Its specific steps are as follows:
(1) Obtain multi-scale saliency cues with the frequency-domain (spectral) residual method and generate a large number of rectangular windows.
(2) Compute a color-contrast cue between rectangular windows using the chi-square distance between color-space histograms.
(3) Detect boundaries with the Canny operator to obtain an edge-density cue.
(4) Segment the image into regions with a graph-based image segmentation method, and obtain a region-straddling cue from the minimal difference between the regions inside and outside each rectangular window.
(5) Estimate location and size cues for the rectangular windows with a Gaussian distribution.
(6) Integrate the cues obtained in steps (1)-(5) with Bayes' rule, compute the positional probability of the region containing a salient object, and obtain the object probability map.
However, this method only indicates the positional probability of a salient object with rectangular windows; it contains no accurate contour information, so it cannot accurately extract the salient objects from an image.
In summary, the existing methods for extracting salient objects from images cannot do so accurately and completely, which limits the wide application of salient-object extraction.
Summary of the Invention
The object of the present invention is to address the defects of the prior art by proposing a method for extracting salient objects from an image based on region contrast, a method that can extract the salient objects in an image relatively accurately and completely.
To achieve the above object, the present invention adopts the following technical scheme:
A method for extracting salient objects from an image based on region contrast, comprising the following specific steps:
(1) Input the original image and obtain its saliency map and its object probability map.
(2) Compute the fusion coefficient of the saliency map and the object probability map.
(3) Compute the region-contrast fusion map from the fusion coefficient and extract the salient objects from the image.
The fusion coefficient of step (2) is computed as follows:
(2-1) Construct the first, second, and third threshold maps of the saliency map and the threshold map of the object probability map:
Normalize the saliency values of the pixels in the saliency map to [0, 1]. Set three different thresholds: the first threshold is fixed at 0.75; the second threshold is the maximum between-class variance (Otsu) threshold of the saliency map; the third threshold is the mean saliency value of the saliency map. For each threshold, set the value of a pixel to 1 if its saliency value is greater than or equal to that threshold and to 0 otherwise, obtaining the threshold map of the saliency map corresponding to that threshold.
The threshold map corresponding to the first threshold is the first threshold map of the saliency map, the one corresponding to the second threshold is the second threshold map, and the one corresponding to the third threshold is the third threshold map. In the object probability map, set the value of a pixel to 1 if its object probability is greater than or equal to the first threshold and to 0 otherwise, obtaining the threshold map of the object probability map.
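The thresholding of step (2-1) (fixed 0.75, Otsu, mean) can be sketched as follows, assuming the maps are flat lists of values normalized to [0, 1]; the 256-bin Otsu computation is an implementation assumption:

```python
def binarize(values, t):
    """Threshold map: 1 where value >= t, else 0."""
    return [1 if v >= t else 0 for v in values]

def otsu_threshold(values, bins=256):
    """Maximum between-class variance (Otsu) threshold for values in [0, 1]."""
    hist = [0] * bins
    for v in values:
        hist[min(int(v * bins), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for i in range(bins):
        w0 += hist[i]                  # pixel count at or below bin i
        if w0 == 0 or w0 == total:
            continue
        sum0 += i * hist[i]
        m0 = sum0 / w0                           # background mean bin
        m1 = (total_sum - sum0) / (total - w0)   # foreground mean bin
        var = w0 * (total - w0) * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, i
    return (best_t + 1) / bins

def threshold_maps(saliency, objectness, t1=0.75):
    """First/second/third threshold maps of the saliency map, plus the
    threshold map of the object probability map (also cut at t1)."""
    t2 = otsu_threshold(saliency)
    t3 = sum(saliency) / len(saliency)  # mean saliency value
    return (binarize(saliency, t1), binarize(saliency, t2),
            binarize(saliency, t3), binarize(objectness, t1))
```

On a bimodal saliency map all three cuts land between the two modes, so the three threshold maps coincide; on less clean maps they differ, which is what the coincidence ratios of step (2-2) measure.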
(2-2) Compute the coincidence ratio of the pixels between the first and second threshold maps of the saliency map, and the coincidence ratio of the pixels between the first and third threshold maps, and take the sum of the two ratios. In the computation, p ranges over the pixels of a threshold map, and the value associated with p in the first, second, or third threshold map of the saliency map is the corresponding binary pixel value (1 for foreground, 0 for background).
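A sketch of step (2-2), under the assumption that the coincidence ratio is the fraction of the first threshold map's foreground pixels that are also foreground in the other map (the patent defines the exact form in its equation (1)):

```python
def coincidence_ratio(a, b):
    """Assumed form: |A ∩ B| / |A|, the fraction of map A's foreground
    pixels (value 1) that are also foreground in map B."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    fg = sum(a)
    return inter / fg if fg else 0.0

def coincidence_sum(t1, t2, t3):
    """Sum of the ratios between the first/second and first/third maps."""
    return coincidence_ratio(t1, t2) + coincidence_ratio(t1, t3)
```

A value near 2 means the three cuts of the saliency map agree on essentially the same foreground, i.e., the saliency map is internally consistent.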
(2-3) Compute the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, namely the ratio of the area of their intersection to the area of their union:

V = |Ts1 ∩ To| / |Ts1 ∪ To| (2)

where Ts1 denotes the first threshold map of the saliency map, To denotes the threshold map of the object probability map, Ts1 ∩ To denotes their intersection, Ts1 ∪ To denotes their union, and V denotes the overlap area.
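The overlap area of step (2-3) is the intersection-over-union of two binary maps, which can be sketched as:

```python
def overlap_area(a, b):
    """Overlap area of two binary maps: |A ∩ B| / |A ∪ B| (Jaccard index)."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    union = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return inter / union if union else 0.0
```

The ratio is 1 when the saliency and objectness foregrounds coincide exactly and falls toward 0 as they diverge.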
(2-4) Let D denote the diagonal length of the original image. Compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map:

C = dist(cs, co) / D (3)

where cs denotes the centroid of the first threshold map of the saliency map, co denotes the centroid of the threshold map of the object probability map, and dist(cs, co) is the Euclidean distance between them.
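Step (2-4) can be sketched as follows; the row-major flat layout of the binary maps is an assumption of this sketch:

```python
def centroid(binary, width):
    """Centroid (x, y) of the foreground pixels of a row-major binary map."""
    xs = ys = n = 0
    for i, v in enumerate(binary):
        if v == 1:
            xs += i % width
            ys += i // width
            n += 1
    return (xs / n, ys / n) if n else (0.0, 0.0)

def normalized_centroid_distance(a, b, width, height):
    """Euclidean distance between the centroids of two binary maps,
    normalized by the image diagonal D = sqrt(width^2 + height^2)."""
    (ax, ay), (bx, by) = centroid(a, width), centroid(b, width)
    d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    diag = (width ** 2 + height ** 2) ** 0.5
    return d / diag
```

Dividing by the diagonal keeps the distance in [0, 1] regardless of image size, so it can be combined with the other two quantities on a common scale.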
(2-5) From the sum of coincidence ratios of step (2-2), the overlap area of step (2-3), and the normalized centroid distance of step (2-4), compute the fusion coefficient of the saliency map and the object probability map.
Step (3), computing the region-contrast fusion map from the fusion coefficient and extracting the salient objects from the image, proceeds as follows:
(3-1) Fuse the saliency map and the object probability map according to their fusion coefficient to compute the region-contrast fusion map. In the fusion formula, a maximum is taken against 1, exp denotes the exponential function, and the pixels of the object probability map are handled in three bands of probability values: those at or above an upper value, those at or below a lower value, and those in between.
(3-2) Set a fourth threshold of 0.2. In the region-contrast fusion map, set the value of a pixel to 1 if it is greater than or equal to the fourth threshold and to 0 otherwise, obtaining the threshold map of the region-contrast fusion map; then segment the image with the GrabCut method and extract the salient objects.
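A sketch of step (3-2)'s binarization at the fourth threshold, converting the fusion map into an initial label mask for a GrabCut-style segmentation; the particular label values mirror OpenCV's GC_BGD/GC_PR_FGD convention and are an assumption, since the patent does not specify the seeding details:

```python
GC_BGD, GC_PR_FGD = 0, 3  # background / probable-foreground labels

def fusion_to_grabcut_seed(fusion_map, t4=0.2):
    """Binarize the region-contrast fusion map at the fourth threshold (0.2)
    and convert it into an initial GrabCut label mask: probable foreground
    where the map is >= t4, definite background elsewhere."""
    return [GC_PR_FGD if v >= t4 else GC_BGD for v in fusion_map]
```

Seeding with probable-foreground rather than definite-foreground labels lets the subsequent graph-cut iterations refine the object contour rather than freeze it.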
Compared with the prior art, the method of the present invention for extracting salient objects from an image based on region contrast has the following advantage: by using both a saliency map and an object probability map to compute the region-contrast fusion map from which the salient objects are extracted, it extracts the salient objects in an image more accurately and completely than methods that use only a saliency map or only an object probability map.
Brief Description of the Drawings
Fig. 1 is a flow chart of the method of the present invention for extracting salient objects from an image based on region contrast;
Fig. 2(a) is the input original image;
Fig. 2(b) is the saliency map of the original image;
Fig. 2(c) is the object probability map of the original image;
Fig. 3(a) is the region-contrast fusion map;
Fig. 3(b) shows the salient object extracted after cutting the threshold map of the region-contrast fusion map with GrabCut.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The simulation experiments of the present invention were programmed on a PC test platform with a 3.5 GHz CPU and 16 GB of memory.
As shown in Fig. 1, the method of the present invention for extracting salient objects from an image based on region contrast comprises the following specific steps:
(1) Input the original image and obtain its saliency map and object probability map:
Input the original image, shown in Fig. 2(a); its saliency map is shown in Fig. 2(b) and its object probability map in Fig. 2(c).
(2) Compute the fusion coefficient of the saliency map and the object probability map:
(2-1) Construct the first, second, and third threshold maps of the saliency map and the threshold map of the object probability map:
Normalize the saliency values of the pixels in the saliency map to [0, 1]. Set three different thresholds: the first threshold is fixed at 0.75; the second threshold is the maximum between-class variance (Otsu) threshold of the saliency map; the third threshold is the mean saliency value of the saliency map. For each threshold, set the value of a pixel to 1 if its saliency value is greater than or equal to that threshold and to 0 otherwise, obtaining the threshold map of the saliency map corresponding to that threshold.
The threshold map corresponding to the first threshold is the first threshold map of the saliency map, the one corresponding to the second threshold is the second threshold map, and the one corresponding to the third threshold is the third threshold map.
In the object probability map, set the value of a pixel to 1 if its object probability is greater than or equal to the first threshold and to 0 otherwise, obtaining the threshold map of the object probability map.
(2-2) Compute the coincidence ratio of the pixels between the first and second threshold maps of the saliency map, and the coincidence ratio of the pixels between the first and third threshold maps, and take the sum of the two ratios. In the computation, p ranges over the pixels of a threshold map, and the value associated with p in the first, second, or third threshold map of the saliency map is the corresponding binary pixel value (1 for foreground, 0 for background).
(2-3) Compute the overlap area of the first threshold map of the saliency map and the threshold map of the object probability map, namely the ratio of the area of their intersection to the area of their union:

V = |Ts1 ∩ To| / |Ts1 ∪ To| (2)

where Ts1 denotes the first threshold map of the saliency map, To denotes the threshold map of the object probability map, Ts1 ∩ To denotes their intersection, Ts1 ∪ To denotes their union, and V denotes the overlap area.
(2-4) Let D denote the diagonal length of the original image. Compute the normalized centroid distance between the centroid of the first threshold map of the saliency map and the centroid of the threshold map of the object probability map:

C = dist(cs, co) / D (3)

where cs denotes the centroid of the first threshold map of the saliency map, co denotes the centroid of the threshold map of the object probability map, and dist(cs, co) is the Euclidean distance between them.
(2-5) From the sum of coincidence ratios of step (2-2), the overlap area of step (2-3), and the normalized centroid distance of step (2-4), compute the fusion coefficient of the saliency map and the object probability map.
(3) Compute the region-contrast fusion map from the fusion coefficient and extract the salient objects from the image:
(3-1) Fuse the saliency map and the object probability map according to their fusion coefficient to compute the region-contrast fusion map. In the fusion formula, a maximum is taken against 1, exp denotes the exponential function, and the pixels of the object probability map are handled in three bands of probability values: those at or above an upper value, those at or below a lower value, and those in between. The resulting region-contrast fusion map is shown in Fig. 3(a); compared with the saliency map in Fig. 2(b), it highlights the torso of the salient object more completely, and compared with the object probability map in Fig. 2(c), it delineates the contour of the salient object more accurately.
(3-2) Set a fourth threshold of 0.2. In the region-contrast fusion map, set the value of a pixel to 1 if it is greater than or equal to the fourth threshold and to 0 otherwise, obtaining the threshold map of the region-contrast fusion map; then segment the image with the GrabCut method and extract the salient objects, as shown in Fig. 3(b).
The above simulation results show that the method of the present invention, which obtains a region-contrast fusion map from the saliency map and the object probability map and extracts the salient objects from that fusion map, does so more accurately and completely.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410781285.7A CN104504692B (en) | 2014-12-17 | 2014-12-17 | The extracting method of notable object in image based on region contrast |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410781285.7A CN104504692B (en) | 2014-12-17 | 2014-12-17 | The extracting method of notable object in image based on region contrast |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104504692A CN104504692A (en) | 2015-04-08 |
CN104504692B true CN104504692B (en) | 2017-06-23 |
Family
ID=52946086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410781285.7A Expired - Fee Related CN104504692B (en) | 2014-12-17 | 2014-12-17 | The extracting method of notable object in image based on region contrast |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104504692B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106407978B (en) * | 2016-09-24 | 2020-10-30 | 上海大学 | A method for salient object detection in unconstrained video combined with similarity |
CN106886995B (en) | 2017-01-13 | 2019-09-20 | 北京航空航天大学 | Image Salient Object Segmentation with Multilinear Example Regressor Aggregation |
CN107730564A (en) * | 2017-09-26 | 2018-02-23 | 上海大学 | A kind of image edit method based on conspicuousness |
CN108428240B (en) * | 2018-03-08 | 2022-03-25 | 南京大学 | A salient object segmentation method adaptive to input information |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103208125A (en) * | 2013-03-14 | 2013-07-17 | 上海大学 | Visual salience algorithm of color and motion overall contrast in video frame image |
- 2014-12-17: application CN201410781285.7A filed; granted as CN104504692B (en); status: not active, Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103208125A (en) * | 2013-03-14 | 2013-07-17 | 上海大学 | Visual salience algorithm of color and motion overall contrast in video frame image |
Non-Patent Citations (4)
Title |
---|
Kai-Yueh Chang et al., "Fusing Generic Objectness and Visual Saliency for Salient Object Detection," 2011 IEEE International Conference on Computer Vision, Nov. 13, 2011; Sections 1, 3, 5; Figs. 1, 3, 6 *
Tie Liu et al., "Learning to Detect a Salient Object," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, Feb. 2011, pp. 353-367 *
Bogdan Alexe et al., "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, 2012, pp. 1-14 *
Peng Jiang et al., "Salient Region Detection by UFO: Uniqueness, Focusness and Objectness," ICCV 2013, 2013, pp. 1976-1983 *
Also Published As
Publication number | Publication date |
---|---|
CN104504692A (en) | 2015-04-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170623; Termination date: 20211217 |