CN102509072A - Method for detecting salient object in image based on inter-area difference - Google Patents
Method for detecting salient object in image based on inter-area difference Download PDFInfo
- Publication number
- CN102509072A CN102509072A CN2011103120919A CN201110312091A CN102509072A CN 102509072 A CN102509072 A CN 102509072A CN 2011103120919 A CN2011103120919 A CN 2011103120919A CN 201110312091 A CN201110312091 A CN 201110312091A CN 102509072 A CN102509072 A CN 102509072A
- Authority
- CN
- China
- Prior art keywords
- representing
- map
- significance
- saliency
- saliency map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 21
- 238000004364 calculation method Methods 0.000 claims description 22
- 230000005484 gravity Effects 0.000 claims description 12
- 238000004422 calculation algorithm Methods 0.000 claims description 10
- 239000003086 colorant Substances 0.000 claims description 7
- 238000010845 search algorithm Methods 0.000 claims description 5
- 206010021703 Indifference Diseases 0.000 claims description 3
- 238000001514 detection method Methods 0.000 description 18
- 230000006870 function Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method for detecting a salient object in an image based on inter-area difference. The method specifically comprises the following steps of: (1) inputting an original image and calculating a saliency map of the original image; (2) calculating and modifying the saliency map; and (3) updating the saliency map through iteration and finding a target rectangle in the greatest difference from the external area, wherein the image content of the internal area of the target rectangle is the detected salient object. The method can accurately detect the salient object in the image without setting any parameter.
Description
Technical Field
The invention relates to the technical field of computer vision and image processing, in particular to a method for detecting a salient object in an image based on region difference.
Background
The research results of psychological and perceptual science have shown that: when a person observes an image, the attention to the regions of the image is not averaged, resulting in a saliency map corresponding to the degree of attention. In most cases, a person looking at an image focuses on a region of the image, which is referred to as a salient object. In other words, the salient object captures a higher degree of attention than other regions of the image. If the salient object can be detected, the salient object detection method can provide great help for a plurality of applications such as salient object identification, image adaptation, image compression, image retrieval and the like. It is against this background that the salient object detection method is applied, and it aims to accurately and quickly detect a salient object in an image by using a salient image corresponding to the attention of the image. The result of the detection appears to mark a rectangular area in the image that contains as many salient objects as possible and as few backgrounds as possible. At present, a salient object detection method has been preliminarily studied, for example, in the article "salient object detection based on learning" published by Liu et al in the conference of computer vision and pattern recognition of the institute of electrical and electronics engineers, us 6.2007, the salient object detection described herein is to search a target rectangle on a salient image by using an exhaustive algorithm, and the target rectangle frames at least 95% of pixels with high significance. This detection method requires setting of a threshold value, and the detection speed of the salient object is slow, and the detection effect depends on the effect of the salient image. The salient object detection in the article "salient model based on concentric curves and colors" published by valenci et al in 2009 society of electrical and electronics engineers computer vision conference in the united states, is to search a target rectangle on a salient image by using an efficient sub-window search algorithm, which accelerates the speed of searching the target rectangle, but cannot accurately detect a salient object, and the efficient sub-window search algorithm has the following specific steps:
(1) setting p as an empty ordered queue, forming a point set by four vertex coordinates of the image, and taking the point set as a point set of a head of the ordered queue p;
(2) dividing the point set at the head of the ordered queue p into two subsets from the edge with the largest interval;
(3) calculating an upper bound for each subset by a boundary quality function;
(4) inserting the two subsets obtained in the second step into the ordered queue p according to the upper bound calculated in the third step;
(5) and (4) repeating the steps (2) to (4) until the subset taken out from the head of the ordered queue p only comprises one rectangle, wherein the rectangle is a global maximum value and is a searched target rectangle.
Luo et al published a "maximum saliency density-based object detection" article in 2010 Asian computer vision conference, and the "object detection" article is that a maximum saliency density algorithm is adopted to search a target rectangle on a saliency image, and the algorithm improves the accuracy of salient object detection, but the method needs to design different parameters for different saliency models, and cannot realize adaptivity. Liu et al published "non-parameter significance detection based on kernel density estimation" in the international conference on image processing of the American institute of Electrical and electronics Engineers, 9.2010, which established a non-parameter significance model using a non-parameter kernel density estimation algorithm for obtaining a significance map of an image, the algorithm comprising the following specific steps:
(1) pre-dividing the image into a plurality of regions by using a mean shift algorithm;
(2) calculating the color similarity of each pixel point in the image and each region by using a nonparametric kernel density estimation algorithm;
(3) calculating the color distance between each region by using the color similarity between each pixel point in the image and each region to form a color saliency map of the image;
(4) calculating the spatial distance between each region by using the color similarity between each pixel point in the image and each region to form a spatial saliency map of the image;
(5) and forming a final saliency map of the image from the color saliency map and the spatial saliency map of the image.
In summary, the existing salient object detection method needs to set corresponding parameters for various salient models to achieve accurate detection of salient objects, which affects the wide application of salient object detection.
Disclosure of Invention
The invention aims to provide a method for detecting a salient object in an image based on region difference aiming at the defects in the prior art, which can accurately detect the salient object without setting corresponding parameters aiming at various salient models.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a method for detecting a salient object in an image based on region difference comprises the following specific steps:
(1) inputting an original image, and calculating a saliency map of the original image;
(2) calculating a modified saliency map;
(3) and iteratively updating the saliency map to find a target rectangle with the largest difference with the outer region, wherein the image content of the inner region of the target rectangle is the detected salient object.
Inputting the original image and calculating the saliency map of the original image in the step (1), wherein the specific steps are as follows:
(1-1) segmenting an original image into a plurality of regions by using a mean shift algorithm;
(1-2) calculating the color similarity of each pixel point in the image and each region, wherein the calculation formula is as follows:
(1)
wherein,is shown asThe number of the regions is one,representing a divided regionThe number of the pixel points in (1),indicating areaThe number of the pixel points in (1),representing pixel pointsThe color characteristics of (a) or (b),representing pixel pointsThe color characteristics of (a) or (b),is shown asThe kernel function of each of the regions is,to representPixel point of (2)And a firstColor similarity of individual regions;
(1-3) calculating a color saliency map of the original image, wherein the calculation formula is as follows:
(2)
wherein,is shown asA number of regions, n representing the total number of regions,to representPixel point of (2)And a firstThe degree of similarity of the colors of the individual regions,is shown asThe color saliency of an individual region is such that,a color saliency map representing an original image;
(1-4) calculating a spatial saliency map of the original image, wherein the calculation formula is as follows: (3)
wherein,is shown asA number of regions, n representing the total number of regions,to representPixel point of (2)And a firstThe degree of similarity of the colors of the individual regions,is shown asThe spatial significance of the individual regions is such that,a spatial saliency map representing an original image;
(1-5) calculating a saliency map of the original image, wherein the calculation formula is as follows:
wherein,a color saliency map representing an original image,a spatial saliency map representing an original image,a saliency map representing an original image.
The value of each pixel point position in the significance map is the significance value of the pixel point, the value range of the significance value is 0-255, the greater the significance value is, the more significant the pixel point is, and the smaller the significance value is, the less significant the pixel point is.
The calculation of the modified saliency map in the step (2) is specifically as follows:
(2-2) calculating each pixel point on the significance mapTo the center of gravityEuropean distance ofThe calculation formula is as follows:
wherein,、representing coordinates of pixel points,、Representing coordinates of center of gravity,Representing pixel pointsEuclidean distance to the center of gravity of the saliency map;
(2-3) calculating the modified significance map, wherein the calculation formula is as follows:
wherein W and H represent the width and height of the image, respectively,representing pixel pointsEuclidean distance to the center of gravity of the saliency map,the original significance map is shown,a modified saliency map is represented.
In the step (3), the saliency map is iteratively updated to find the target rectangle with the largest difference from the outer region of the saliency map, and the image content of the inner region of the target rectangle is the detected salient object, which specifically includes the following steps:
(3-1) setting an initial value of iteration, and specifically comprising the following steps:
(3-1-2) is provided withIs shown inSignificance map updated in sub-iteration, significance map at initial stateWhereinRepresenting the modified saliency map obtained in step (2);
(3-1-3) is provided withIs shown inThe rectangular area obtained in the sub-iteration,a rectangular area in an initial state is represented and is the whole saliency map;
(3-1-4) is provided withIs shown asThe difference value between the rectangular area and the outer area obtained in the secondary iteration is obtained, the outer area is the area of the saliency map except the rectangular area, and the difference value between the rectangular area and the outer area in the initial state;
(3-1-5) is provided withIs shown asSignificance map in sub-iterationThe mean value of the significance values of all the pixel points, and a significance map in an initial stateThe mean value of the significance values of all the pixels is;
(3-2) obtaining a rectangular area by iteratively updating the saliency map, wherein the method comprises the following specific steps:
(3-2-1) in the second placeIn the second iteration, useSignificance map updated in 1 iterationSubtracting the saliency map from the saliency value of each pixel pointMean value of significance values of all the pixelsObtaining an updated saliency map;
(3-2-2) adopting an efficient sub-window search algorithm to update the significance mapTo obtain a rectangular areaThe sum of all pixel points in the rectangular area is larger than the updated significance mapThe sum of the pixel points in any other rectangle;
(3-2-3) calculating the rectangular region obtained in the step (3-2-2)The difference value from the outer region is calculated as:
wherein,is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),is shown asThe difference value between the rectangular region and the outer region in the sub-iteration,represents an intermediate variable;
wherein,is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),representing rectangular areasThe number of the internal pixel points is increased,representing rectangular areasThe mean of the saliency values of all pixels inside,representing rectangular areasThe number of the external pixel points is,representing rectangular areasThe mean value of the significance values of all the external pixel points;
(3-2-4) updating significance mapMean value of significance values of all the pixelsThe calculation formula is as follows:
wherein,representing pre-update saliency mapsThe mean of the saliency values of all the above pixels,representing rectangular areasThe mean of the saliency values of all pixels inside,is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),representing post-update saliency mapsThe mean value of the significance values of all the pixel points;
(3-2-5) in the significance mapTo go toRectangular regions obtained in sub-iterationsAll other pixel points have significance values set to;
(3-3) if inDifference values between rectangular region and outer region obtained in sub-iterationThen, thenTo obtainTarget rectangle of (2)(ii) a Otherwise, continuing to update the saliency map in the step (3-2) through iteration to obtain a target rectangle, wherein the image content of the inner area of the target rectangle is the detected salient object.
Compared with the prior art, the method for detecting the salient objects in the image based on the difference between the areas has the following advantages: the method can accurately realize the detection of the salient object in the image without setting any parameter.
Drawings
FIG. 1 is a flow chart of a method of detecting salient objects in an image based on inter-region differences in accordance with the present invention;
FIG. 2 is an input original image;
FIG. 3 is a saliency image of an original image;
FIG. 4 is a target histogram taken over a modified saliency map;
fig. 5 is a diagram of the acquisition of detected salient objects on an original image.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings attached to the specification.
The simulation experiment carried out by the invention is realized by programming on a PC test platform with a CPU of 2.53GHz and a memory of 1.96 GB.
As shown in fig. 1, the method for detecting a salient object in an image based on a difference between regions according to the present invention is described by the following steps:
(1) inputting an original image, as shown in fig. 2 (a), calculating a saliency map of the original image, which comprises the following specific steps:
(1-1) segmenting an original image into a plurality of regions by using a mean shift algorithm;
(1-2) calculating the color similarity of each pixel point in the image and each region, wherein the calculation formula is as follows:
wherein,is shown asThe number of the regions is one,representing a divided regionThe number of the pixel points in (1),indicating areaThe number of the pixel points in (1),representing pixel pointsThe color characteristics of (a) or (b),representing pixel pointsThe color characteristics of (a) or (b),is shown asThe kernel function of each of the regions is,to representPixel point of (2)And a firstColor similarity of individual regions;
(1-3) calculating a color saliency map of the original image, wherein the calculation formula is as follows:
wherein,is shown asA number of regions, n representing the total number of regions,to representPixel point of (2)And a firstThe degree of similarity of the colors of the individual regions,is shown asThe color saliency of an individual region is such that,a color saliency map representing an original image;
(1-4) calculating a spatial saliency map of the original image, wherein the calculation formula is as follows: (3)
wherein,is shown asA number of regions, n representing the total number of regions,to representPixel point of (2)And a firstThe degree of similarity of the colors of the individual regions,is shown asThe spatial significance of the individual regions is such that,a spatial saliency map representing an original image;
(1-5) calculating a saliency map of the original image, wherein the calculation formula is as follows:
wherein,a color saliency map representing an original image,a spatial saliency map representing an original image,a saliency map of the original image is shown as fig. 2 (b).
The value of each pixel point position in the significance map is the significance value of the pixel point, and the value range of the significance value is 0~255, the larger the significance value is, the more significant the pixel point is, and the smaller the significance value is, the less significant the pixel point is;
(2) calculating and modifying the saliency map, wherein the specific steps are as follows:
(2-1) setting the center of gravity of the saliency map;
(2-2) calculating each pixel point on the significance mapTo the center of gravityEuropean distance ofThe calculation formula is as follows:
wherein,、representing coordinates of pixel points,、Representing coordinates of center of gravity,Representing pixel pointsEuclidean distance to the center of gravity of the saliency map;
(2-3) calculating the modified significance map, wherein the calculation formula is as follows:
wherein W and H represent the width and height of the image, respectively,representing pixel pointsEuclidean distance to the center of gravity of the saliency map,representing the original saliency map, as in figure 2 (b),representing the modified saliency map, as in fig. 3 (a);
(3) and iteratively updating the saliency map to find a target rectangle with the maximum difference with the outer region of the saliency map, wherein the image content of the inner region of the target rectangle is a detected salient object, and the specific steps are as follows:
(3-1) setting an initial value of iteration, and specifically comprising the following steps:
(3-1-2) is provided withIs shown inSignificance map updated in sub-iteration, significance map at initial stateWhereinRepresenting the modified saliency map obtained in step (2);
(3-1-3) is provided withIs shown inThe rectangular area obtained in the sub-iteration,a rectangular area in an initial state is represented and is the whole saliency map; in this experiment, the resolution of the original image was 378 × 400, a rectangular area of the original state378X 400;
(3-1-4) is provided withIs shown asThe difference value between the rectangular area and the outer area obtained in the secondary iteration, wherein the outer area is the area of the saliency map except the rectangular area, and the rectangular area and the outer area are in the initial stateDifference value of partial region;
(3-1-5) is provided withIs shown asSignificance map in sub-iterationThe mean value of the significance values of all the pixel points, and a significance map in an initial stateThe mean value of the significance values of all the pixels isWhereinGraph representing significanceCoordinates of middle pixel pointThe value of the significance of (a) is,graph representing significanceThe number of the pixel points in (1),a saliency map representing an initial state;
(3-2) obtaining a rectangular area by iteratively updating the saliency map, wherein the method comprises the following specific steps:
(3-2-1) taking the 1 st iteration as an example, the significance map of the initial state is usedSubtracting the significance value of each pixel pointMean value of significance values of all the pixelsObtaining an updated saliency map;
(3-2-2) adopting an efficient sub-window search algorithm to update the significance mapTo obtain a rectangular areaThe sum of all pixel points in the rectangular area is larger than the updated significance mapThe sum of the pixel points in any other rectangle;
(3-2-3) calculating the rectangular region obtained in the step (3-2-2)The difference value from the outer region is calculated as:
wherein,is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atA rectangular area of upper, such as the purple rectangle in FIG. 3 (a),is shown asThe difference value between the rectangular region and the outer region in the sub-iteration,represents an intermediate variable;
wherein,is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),representing rectangular areasThe number of the internal pixel points is increased,representing rectangular areasThe mean of the saliency values of all pixels inside,representing rectangular areasThe number of the external pixel points is,representing rectangular areasThe mean value of the significance values of all the external pixel points;
for example, calculate the rectangular region obtained in iteration 1The difference value from the outer region is obtained according to the formula (7)Whereinrepresenting the rectangular area obtained in iteration 1,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),represents the intermediate variables obtained from equation (8);
(3-2-4) updating significance mapMean value of significance values of all the pixelsThe calculation formula is as follows:
wherein,representing pre-update saliency mapsThe mean of the saliency values of all the above pixels,representing rectangular areasThe mean of the saliency values of all pixels inside,is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),representing post-update saliency mapsSignificance value of last all pixel pointsThe mean value of (a);
for example, in iteration 1, the saliency map is updated according to equation (9)The mean value of the significance values of all the pixels is obtained;
(3-2-5) in the significance mapTo go toRectangular regions obtained in sub-iterationsAll other pixel points have significance values set to;
(3-3) if inDifference values between rectangular region and outer region obtained in sub-iterationThen, thenTo obtain a target rectangle(ii) a Otherwise, continuing to the step (3-2) to update the saliency map through iteration to obtain a target rectangle, wherein the image content of the inner region of the target rectangle is the detected salient object, for example, in the 1 st iteration,,,then, the step (3-2) is continued to update the saliency map through iteration to obtain a target rectangle, for example, the yellow rectangle in fig. 3 (b) is an objective correct target rectangle, the purple rectangle is a target rectangle detected on the original image, and the image content in the inner region of the target rectangle is a detected salient object.
As can be seen from the simulation experiment results, the method can accurately detect the significant object without setting any parameter.
Claims (4)
1. A method for detecting a salient object in an image based on region difference comprises the following specific steps:
(1) inputting an original image, and calculating a saliency map of the original image;
(2) calculating a modified saliency map;
(3) and iteratively updating the saliency map to find a target rectangle with the largest difference with the outer region, wherein the image content of the inner region of the target rectangle is the detected salient object.
2. The method for detecting salient objects in images based on inter-region differences as claimed in claim 1, wherein said step (1) of inputting the original image and calculating the saliency map of the original image comprises the following specific steps:
(1-1) segmenting an original image into a plurality of regions by using a mean shift algorithm;
(1-2) calculating the color similarity of each pixel point in the image and each region, wherein the calculation formula is as follows:
wherein,is shown asThe number of the regions is one,representing a divided regionThe number of the pixel points in (1),indicating areaThe number of the pixel points in (1),representing pixel pointsColor of the spotThe step of performing the sign operation,representing pixel pointsThe color characteristics of (a) or (b),is shown asThe kernel function of each of the regions is,to representPixel point of (2)And a firstColor similarity of individual regions;
(1-3) calculating a color saliency map of the original image, wherein the calculation formula is as follows:
wherein,is shown asA number of regions, n representing the total number of regions,to representPixel point of (2)And a firstThe degree of similarity of the colors of the individual regions,is shown asThe color saliency of an individual region is such that,a color saliency map representing an original image;
(1-4) calculating a spatial saliency map of the original image, wherein the calculation formula is as follows:
wherein,is shown asA number of regions, n representing the total number of regions,to representPixel point of (2)And a firstThe degree of similarity of the colors of the individual regions,is shown asThe spatial significance of the individual regions is such that,a spatial saliency map representing an original image;
(1-5) calculating a saliency map of the original image, wherein the calculation formula is as follows:
wherein,a color saliency map representing an original image,a spatial saliency map representing an original image,representing the saliency of an original imageThe value of each pixel point position in the significance map is the significance value of the pixel point, and the value range of the significance value is 0~255, the larger the significance value is, the more significant the pixel point is, and the smaller the significance value is, the less significant the pixel point is.
3. The method according to claim 2, wherein the step (2) of calculating and modifying the saliency map comprises the following steps:
(2-2) calculating each pixel point on the significance mapTo the center of gravityEuropean distance ofThe calculation formula is as follows:
(5)
wherein,、representing coordinates of pixel points,、Representing coordinates of center of gravity,Representing pixel pointsEuclidean distance to the center of gravity of the saliency map;
(2-3) calculating the modified significance map, wherein the calculation formula is as follows:
4. The method according to claim 3, wherein the step (3) of iteratively updating the saliency map to find a target rectangle with the largest difference from the outer region of the saliency map, wherein the image content in the inner region of the target rectangle is the detected salient object, comprises the following steps:
(3-1) setting an initial value of iteration, and specifically comprising the following steps:
(3-1-2) is provided withIs shown inSignificance map updated in sub-iteration, significance map at initial stateWhereinRepresenting the modified saliency map obtained in step (2);
(3-1-3) is provided withIs shown inThe rectangular area obtained in the sub-iteration,a rectangular area in an initial state is represented and is the whole saliency map;
(3-1-4) is provided withIs shown asThe difference value between the rectangular area and the outer area obtained in the secondary iteration is obtained, the outer area is the area of the saliency map except the rectangular area, and the difference value between the rectangular area and the outer area in the initial state;
(3-1-5) is provided withIs shown asSignificance map in sub-iterationThe mean value of the significance values of all the pixel points, and a significance map in an initial stateThe mean value of the significance values of all the pixels is;
(3-2) obtaining a rectangular area by iteratively updating the saliency map, wherein the method comprises the following specific steps:
(3-2-1) in the second placeIn the second iteration, useSignificance map updated in 1 iterationSubtracting the saliency map from the saliency value of each pixel pointMean value of significance values of all the pixelsObtaining an updated saliency map;
(3-2-2) adopting an efficient sub-window search algorithm to update the significance mapTo obtain a rectangular areaThe sum of all pixel points in the rectangular area is larger than the updated significance mapThe sum of the pixel points in any other rectangle;
(3-2-3) calculating the rectangular region obtained in the step (3-2-2)The difference value from the outer region is calculated as:
wherein,is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),is shown asThe difference value between the rectangular region and the outer region in the sub-iteration,represents an intermediate variable;
wherein,is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),representing rectangular areasThe number of the internal pixel points is increased,representing rectangular areasThe mean of the saliency values of all pixels inside,representing rectangular areasThe number of the external pixel points is,representing rectangular areasThe mean value of the significance values of all the external pixel points;
(3-2-4) updating significance mapMean value of significance values of all the pixelsThe calculation formula is as follows:
wherein,representing pre-update saliency mapsThe mean of the saliency values of all the above pixels,representing rectangular areasThe mean of the saliency values of all pixels inside,is prepared by reacting withIs correspondingly atThe rectangular area of (a) above (b),is shown inThe rectangular area obtained in the sub-iteration,representing the modified saliency map obtained in step (2),representing post-update saliency mapsThe mean value of the significance values of all the pixel points;
(3-2-5) in the significance mapTo go toRectangular regions obtained in sub-iterationsAll other pixel points have significance values set to;
(3-3) if inDifference values between rectangular region and outer region obtained in sub-iterationThen, thenTo obtain a target rectangle(ii) a Otherwise, continuing to update the saliency map in the step (3-2) through iteration to obtain a target rectangle, wherein the image content of the inner area of the target rectangle is the detected salient object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110312091 CN102509072B (en) | 2011-10-17 | 2011-10-17 | Method for detecting salient object in image based on inter-area difference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201110312091 CN102509072B (en) | 2011-10-17 | 2011-10-17 | Method for detecting salient object in image based on inter-area difference |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102509072A true CN102509072A (en) | 2012-06-20 |
CN102509072B CN102509072B (en) | 2013-08-28 |
Family
ID=46221153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201110312091 Expired - Fee Related CN102509072B (en) | 2011-10-17 | 2011-10-17 | Method for detecting salient object in image based on inter-area difference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102509072B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102938139A (en) * | 2012-11-09 | 2013-02-20 | 清华大学 | Automatic synthesis method for fault finding game images |
CN103218832A (en) * | 2012-10-15 | 2013-07-24 | 上海大学 | Visual saliency algorithm based on overall color contrast ratio and space distribution in image |
CN106407978A (en) * | 2016-09-24 | 2017-02-15 | 上海大学 | Unconstrained in-video salient object detection method combined with objectness degree |
CN110689007A (en) * | 2019-09-16 | 2020-01-14 | Oppo广东移动通信有限公司 | Subject recognition method and device, electronic equipment and computer-readable storage medium |
CN111461139A (en) * | 2020-03-27 | 2020-07-28 | 武汉工程大学 | Multi-target visual saliency layered detection method in complex scene |
CN113114943A (en) * | 2016-12-22 | 2021-07-13 | 三星电子株式会社 | Apparatus and method for processing image |
US11670068B2 (en) | 2016-12-22 | 2023-06-06 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510299A (en) * | 2009-03-04 | 2009-08-19 | 上海大学 | Image self-adapting method based on vision significance |
-
2011
- 2011-10-17 CN CN 201110312091 patent/CN102509072B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510299A (en) * | 2009-03-04 | 2009-08-19 | 上海大学 | Image self-adapting method based on vision significance |
Non-Patent Citations (1)
Title |
---|
郎丛妍等: "一种基于模糊信息粒化的视频时空显著单元提取方法", 《电子学报》 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218832A (en) * | 2012-10-15 | 2013-07-24 | 上海大学 | Visual saliency algorithm based on overall color contrast ratio and space distribution in image |
CN103218832B (en) * | 2012-10-15 | 2016-01-13 | 上海大学 | Based on the vision significance algorithm of global color contrast and spatial distribution in image |
CN102938139A (en) * | 2012-11-09 | 2013-02-20 | 清华大学 | Automatic synthesis method for fault finding game images |
CN102938139B (en) * | 2012-11-09 | 2015-03-04 | 清华大学 | Automatic synthesis method for fault finding game images |
CN106407978A (en) * | 2016-09-24 | 2017-02-15 | 上海大学 | Unconstrained in-video salient object detection method combined with objectness degree |
CN106407978B (en) * | 2016-09-24 | 2020-10-30 | 上海大学 | Method for detecting salient object in unconstrained video by combining similarity degree |
CN113114943A (en) * | 2016-12-22 | 2021-07-13 | 三星电子株式会社 | Apparatus and method for processing image |
US11670068B2 (en) | 2016-12-22 | 2023-06-06 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image |
CN113114943B (en) * | 2016-12-22 | 2023-08-04 | 三星电子株式会社 | Apparatus and method for processing image |
CN110689007A (en) * | 2019-09-16 | 2020-01-14 | Oppo广东移动通信有限公司 | Subject recognition method and device, electronic equipment and computer-readable storage medium |
CN110689007B (en) * | 2019-09-16 | 2022-04-15 | Oppo广东移动通信有限公司 | Subject recognition method and device, electronic equipment and computer-readable storage medium |
CN111461139A (en) * | 2020-03-27 | 2020-07-28 | 武汉工程大学 | Multi-target visual saliency layered detection method in complex scene |
Also Published As
Publication number | Publication date |
---|---|
CN102509072B (en) | 2013-08-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109903331B (en) | Convolutional neural network target detection method based on RGB-D camera | |
CN105844669B (en) | A kind of video object method for real time tracking based on local Hash feature | |
CN102509072A (en) | Method for detecting salient object in image based on inter-area difference | |
CN107292234B (en) | Indoor scene layout estimation method based on information edge and multi-modal features | |
US20160358035A1 (en) | Saliency information acquisition device and saliency information acquisition method | |
Kuo et al. | 3D object detection and pose estimation from depth image for robotic bin picking | |
CN108647694B (en) | Context-aware and adaptive response-based related filtering target tracking method | |
US20160196467A1 (en) | Three-Dimensional Face Recognition Device Based on Three Dimensional Point Cloud and Three-Dimensional Face Recognition Method Based on Three-Dimensional Point Cloud | |
CN108052624A (en) | Processing Method of Point-clouds, device and computer readable storage medium | |
CN106485651B (en) | The image matching method of fast robust Scale invariant | |
CN108564120B (en) | Feature point extraction method based on deep neural network | |
CN105335725A (en) | Gait identification identity authentication method based on feature fusion | |
CN111860494A (en) | Optimization method and device for image target detection, electronic equipment and storage medium | |
CN105404886A (en) | Feature model generating method and feature model generating device | |
CN105225226A (en) | A kind of cascade deformable part model object detection method based on Iamge Segmentation | |
CN104978582B (en) | Shelter target recognition methods based on profile angle of chord feature | |
CN105957107A (en) | Pedestrian detecting and tracking method and device | |
CN104123554A (en) | SIFT image characteristic extraction method based on MMTD | |
CN107507226A (en) | A kind of method and device of images match | |
KR101182683B1 (en) | A Visual Shape Descriptor Generating Method Using Sectors and Shape Context of Contour Lines and the Recording Medium thereof | |
CN105427333A (en) | Real-time registration method of video sequence image, system and shooting terminal | |
CN108256567B (en) | Target identification method and system based on deep learning | |
CN105374030B (en) | A kind of background model and Mobile object detection method and system | |
CN106407978B (en) | Method for detecting salient object in unconstrained video by combining similarity degree | |
CN105631849B (en) | The change detecting method and device of target polygon |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130828 Termination date: 20201017 |
|
CF01 | Termination of patent right due to non-payment of annual fee |