
CN102509072A - Method for detecting salient object in image based on inter-area difference - Google Patents

Method for detecting salient object in image based on inter-area difference

Info

Publication number
CN102509072A
CN102509072A
Authority
CN
China
Prior art keywords
representing
map
significance
saliency
saliency map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103120919A
Other languages
Chinese (zh)
Other versions
CN102509072B (en)
Inventor
史冉
刘志
杜欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN 201110312091 priority Critical patent/CN102509072B/en
Publication of CN102509072A publication Critical patent/CN102509072A/en
Application granted granted Critical
Publication of CN102509072B publication Critical patent/CN102509072B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting a salient object in an image based on inter-area difference. The method specifically comprises the following steps: (1) inputting an original image and calculating a saliency map of the original image; (2) calculating a modified saliency map; and (3) iteratively updating the saliency map and finding the target rectangle with the greatest difference from its external area, wherein the image content of the internal area of the target rectangle is the detected salient object. The method can accurately detect the salient object in an image without setting any parameter.

Description

Method for detecting a salient object in an image based on inter-area difference
Technical Field
The invention relates to the technical field of computer vision and image processing, and in particular to a method for detecting a salient object in an image based on inter-area difference.
Background
Research results in psychology and perceptual science have shown that when a person observes an image, attention is not distributed evenly across its regions; the degree of attention over the image can be expressed as a saliency map. In most cases, a person looking at an image focuses on one region, which is referred to as the salient object. In other words, the salient object captures a higher degree of attention than the other regions of the image. Detecting the salient object can therefore greatly help applications such as salient object recognition, image adaptation, image compression, and image retrieval. It is against this background that salient object detection methods arose; they aim to accurately and quickly detect the salient object in an image using the saliency map corresponding to the attention over the image. The detection result is a rectangular area marked in the image that contains as much of the salient object and as little of the background as possible. Salient object detection has received preliminary study. For example, in the article "Learning to detect a salient object" published by Liu et al. at the IEEE Conference on Computer Vision and Pattern Recognition in June 2007, a target rectangle is searched on the saliency map with an exhaustive algorithm so that the rectangle encloses at least 95% of the highly salient pixels. This method requires setting a threshold, its detection speed is slow, and its detection effect depends on the quality of the saliency map. In the article "Image saliency by isocentric curvedness and color" published by Valenti et al. at the 2009 IEEE International Conference on Computer Vision, the target rectangle is searched on the saliency map with the efficient sub-window search algorithm, which speeds up the search for the target rectangle but cannot accurately detect the salient object. The efficient sub-window search algorithm comprises the following specific steps:
(1) setting p as an empty ordered queue, forming a point set from the four vertex coordinates of the image, and taking it as the point set at the head of the ordered queue p;
(2) splitting the point set at the head of the ordered queue p into two subsets along the edge with the largest interval;
(3) calculating an upper bound for each subset with a bound quality function;
(4) inserting the two subsets obtained in step (2) into the ordered queue p according to the upper bounds calculated in step (3);
(5) repeating steps (2) to (4) until the set taken from the head of the ordered queue p contains only a single rectangle; this rectangle attains the global maximum and is the target rectangle being searched for. A minimal code sketch of this branch-and-bound search follows.
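For illustration, here is a minimal Python sketch of this branch-and-bound search, assuming the quality function is the sum of (possibly negative) score values inside the rectangle. The function name `ess_max_sum_rectangle`, the interval encoding, and the use of integral images of the positive and negative parts are implementation choices not specified in the text above.

```python
import heapq
import numpy as np

def ess_max_sum_rectangle(score_map):
    """Efficient sub-window search: branch-and-bound for the axis-aligned
    rectangle maximizing the sum of score_map values inside it. A rectangle
    set is encoded as intervals for top/bottom/left/right; the queue is
    ordered by an upper bound on the best rectangle in each set."""
    h, w = score_map.shape
    # Integral images of the positive and negative parts, zero-padded so a
    # rectangle sum is four lookups.
    pos = np.pad(np.cumsum(np.cumsum(np.maximum(score_map, 0.0), 0), 1), ((1, 0), (1, 0)))
    neg = np.pad(np.cumsum(np.cumsum(np.minimum(score_map, 0.0), 0), 1), ((1, 0), (1, 0)))

    def rect_sum(ii, t, b, l, r):  # inclusive rows t..b, cols l..r
        return ii[b + 1, r + 1] - ii[t, r + 1] - ii[b + 1, l] + ii[t, l]

    def bound(t_lo, t_hi, b_lo, b_hi, l_lo, l_hi, r_lo, r_hi):
        # The largest possible rectangle collects all positives; the smallest
        # guaranteed-contained rectangle (if nonempty) keeps its negatives.
        ub = rect_sum(pos, t_lo, b_hi, l_lo, r_hi)
        if t_hi <= b_lo and l_hi <= r_lo:
            ub += rect_sum(neg, t_hi, b_lo, l_hi, r_lo)
        return ub

    box = (0, h - 1, 0, h - 1, 0, w - 1, 0, w - 1)
    heap = [(-bound(*box), box)]
    while heap:
        _, box = heapq.heappop(heap)
        sides = [box[1] - box[0], box[3] - box[2], box[5] - box[4], box[7] - box[6]]
        if max(sides) == 0:  # a single rectangle: the global optimum
            return box[0], box[2], box[4], box[6]  # top, bottom, left, right
        k = sides.index(max(sides))  # split the widest interval in half
        lo, hi = box[2 * k], box[2 * k + 1]
        mid = (lo + hi) // 2
        for half in ((lo, mid), (mid + 1, hi)):
            child = list(box)
            child[2 * k], child[2 * k + 1] = half
            if child[0] <= child[3] and child[4] <= child[7]:  # still feasible
                heapq.heappush(heap, (-bound(*child), tuple(child)))
```

The bound is admissible (never smaller than the best rectangle in the set) and exact on singleton sets, which is what makes the first singleton popped from the queue the global maximum.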
Luo et al published a "maximum saliency density-based object detection" article in 2010 Asian computer vision conference, and the "object detection" article is that a maximum saliency density algorithm is adopted to search a target rectangle on a saliency image, and the algorithm improves the accuracy of salient object detection, but the method needs to design different parameters for different saliency models, and cannot realize adaptivity. Liu et al published "non-parameter significance detection based on kernel density estimation" in the international conference on image processing of the American institute of Electrical and electronics Engineers, 9.2010, which established a non-parameter significance model using a non-parameter kernel density estimation algorithm for obtaining a significance map of an image, the algorithm comprising the following specific steps:
(1) pre-dividing the image into a plurality of regions by using a mean shift algorithm;
(2) calculating the color similarity of each pixel point in the image and each region by using a nonparametric kernel density estimation algorithm;
(3) calculating the color distance between each region by using the color similarity between each pixel point in the image and each region to form a color saliency map of the image;
(4) calculating the spatial distance between each region by using the color similarity between each pixel point in the image and each region to form a spatial saliency map of the image;
(5) forming the final saliency map of the image from the color saliency map and the spatial saliency map. A code sketch of the kernel-density color-similarity step (2) follows.
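As a rough illustration of step (2) of this prior-art model, the sketch below estimates, for every pixel, a kernel-density similarity to each pre-segmented region. The Gaussian kernel, the bandwidth, the per-region subsampling, and the function name are assumptions for the sketch; the segmentation `labels` array is assumed to come from any mean-shift-style segmenter.

```python
import numpy as np

def color_similarity_kde(image, labels, bandwidth=0.1, max_samples=200, seed=0):
    """Nonparametric (kernel density) estimate of how similar each pixel's
    color is to each segmented region. image: (H, W, 3) floats in [0, 1];
    labels: (H, W) integer region ids. Returns (H*W, n_regions)."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(float)
    flat_labels = labels.reshape(-1)
    region_ids = np.unique(flat_labels)
    sim = np.zeros((pixels.shape[0], region_ids.size))
    for k, rid in enumerate(region_ids):
        colors = pixels[flat_labels == rid]
        if colors.shape[0] > max_samples:  # subsample the region for speed
            colors = colors[rng.choice(colors.shape[0], max_samples, replace=False)]
        acc = np.zeros(pixels.shape[0])
        for c in colors:  # mean Gaussian kernel response over region colors
            acc += np.exp(-((pixels - c) ** 2).sum(axis=1) / (2.0 * bandwidth ** 2))
        sim[:, k] = acc / colors.shape[0]
    return sim
```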
In summary, existing salient object detection methods need corresponding parameters to be set for each saliency model in order to detect salient objects accurately, which limits the wide application of salient object detection.
Disclosure of Invention
In view of the above defects in the prior art, the invention aims to provide a method for detecting a salient object in an image based on inter-area difference, which can accurately detect the salient object without setting corresponding parameters for each saliency model.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
A method for detecting a salient object in an image based on inter-area difference comprises the following specific steps:
(1) inputting an original image, and calculating a saliency map of the original image;
(2) calculating a modified saliency map;
(3) iteratively updating the saliency map to find the target rectangle with the largest difference from its outer region, wherein the image content of the inner region of the target rectangle is the detected salient object.
The step (1) of inputting the original image and calculating the saliency map of the original image comprises the following specific steps:
(1-1) segmenting an original image into a plurality of regions by using a mean shift algorithm;
(1-2) calculating the color similarity of each pixel point in the image and each region, wherein the calculation formula is as follows:

$$p(x \mid r_i) = \frac{1}{n_i} \sum_{x_j \in r_i} K_i\!\left(c_x - c_{x_j}\right)$$ (1)

wherein $i$ denotes the $i$-th region, $n_i$ denotes the number of pixel points in the segmented region $r_i$, $x_j$ denotes a pixel point in region $r_i$, $c_{x_j}$ denotes the color feature of pixel point $x_j$, $c_x$ denotes the color feature of pixel point $x$, $K_i$ denotes the kernel function of the $i$-th region, and $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ in the image and the $i$-th region;
(1-3) calculating a color saliency map of the original image according to formula (2), wherein $i$ denotes the $i$-th region, $n$ denotes the total number of regions, $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ in the image and the $i$-th region, $S_C(i)$ denotes the color saliency of the $i$-th region, and $S_C$ denotes the color saliency map of the original image;
(1-4) calculating a spatial saliency map of the original image according to formula (3), wherein $i$ denotes the $i$-th region, $n$ denotes the total number of regions, $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ in the image and the $i$-th region, $S_S(i)$ denotes the spatial saliency of the $i$-th region, and $S_S$ denotes the spatial saliency map of the original image;
(1-5) calculating the saliency map of the original image according to formula (4), wherein $S_C$ denotes the color saliency map of the original image, $S_S$ denotes the spatial saliency map of the original image, and $S$ denotes the saliency map of the original image.
The value at each pixel position in the saliency map is the saliency value of that pixel point, with a value range of 0 to 255: the larger the saliency value, the more salient the pixel point, and the smaller the saliency value, the less salient the pixel point. A small sketch of this fusion and normalization follows.
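A minimal sketch of producing such a 0-255 map from the two component maps; the element-wise product is an assumed fusion rule, since the patent's formula (4) was rendered as an image in the original document.

```python
import numpy as np

def combine_and_normalize(color_sal, spatial_sal):
    """Fuse per-pixel color and spatial saliency maps and rescale to the
    0-255 range described above (product fusion is an assumption)."""
    s = color_sal.astype(float) * spatial_sal.astype(float)
    s -= s.min()
    if s.max() > 0:
        s *= 255.0 / s.max()
    return s
```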
The step (2) of calculating the modified saliency map specifically comprises the following steps:
(2-1) setting the center of gravity $g$ of the saliency map;
(2-2) calculating the Euclidean distance $d_x$ from each pixel point $x$ on the saliency map to the center of gravity $g$, wherein the calculation formula is as follows:

$$d_x = \sqrt{(x_1 - g_1)^2 + (x_2 - g_2)^2}$$ (5)

wherein $(x_1, x_2)$ denotes the coordinates of pixel point $x$, $(g_1, g_2)$ denotes the coordinates of the center of gravity, and $d_x$ denotes the Euclidean distance from pixel point $x$ to the center of gravity of the saliency map;
(2-3) calculating the modified saliency map according to formula (6), wherein $W$ and $H$ denote the width and height of the image respectively, $d_x$ denotes the Euclidean distance from pixel point $x$ to the center of gravity of the saliency map, $S$ denotes the original saliency map, and $S'$ denotes the modified saliency map.
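A minimal sketch of step (2), assuming the center of gravity is the saliency-weighted centroid and assuming a linear distance falloff normalized by the image diagonal; formula (6) in the original only reveals that it uses $d_x$ and the image size $W \times H$, so the exact weighting here is an assumption.

```python
import numpy as np

def modify_saliency(sal):
    """Center-weight a saliency map: compute its center of gravity (2-1),
    the per-pixel Euclidean distance to it (2-2, formula (5)), and an
    assumed distance-based reweighting standing in for formula (6)."""
    sal = sal.astype(float)
    h, w = sal.shape
    total = sal.sum()
    if total == 0:
        return sal
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy = (ys * sal).sum() / total                       # (2-1) center of gravity
    gx = (xs * sal).sum() / total
    d = np.sqrt((xs - gx) ** 2 + (ys - gy) ** 2)        # (2-2) formula (5)
    return sal * (1.0 - d / np.sqrt(w ** 2 + h ** 2))   # (2-3) assumed form of (6)
```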
The step (3) of iteratively updating the saliency map to find the target rectangle with the largest difference from its outer region, the image content of the inner region of the target rectangle being the detected salient object, specifically comprises the following steps:
(3-1) setting the initial values of the iteration, specifically:
(3-1-1) letting $k$ denote the number of iterations, wherein $k = 0, 1, 2, 3, \ldots$;
(3-1-2) letting $S_k$ denote the saliency map updated in the $k$-th iteration, the saliency map in the initial state being $S_0 = S'$, wherein $S'$ denotes the modified saliency map obtained in step (2);
(3-1-3) letting $R_k$ denote the rectangular region obtained in the $k$-th iteration, $R_0$ denoting the rectangular region in the initial state, which is the whole saliency map;
(3-1-4) letting $D_k$ denote the difference value between the rectangular region obtained in the $k$-th iteration and the outer region, the outer region being the part of the saliency map outside the rectangular region, with $D_0$ the difference value in the initial state;
(3-1-5) letting $\lambda_k$ denote the mean of the saliency values of all pixel points on the saliency map $S_k$ in the $k$-th iteration, with $\lambda_0$ the mean of the saliency values of all pixel points on the initial saliency map $S_0$;
(3-2) obtaining a rectangular region by iteratively updating the saliency map, wherein the specific steps are as follows:
(3-2-1) in the $k$-th iteration, subtracting from the saliency value of each pixel point of the saliency map $S_{k-1}$ updated in the $(k-1)$-th iteration the mean $\lambda_{k-1}$ of the saliency values of all its pixel points, obtaining the updated saliency map $S_k$;
(3-2-2) searching the updated saliency map $S_k$ with the efficient sub-window search algorithm to obtain the rectangular region $R_k$ whose sum of pixel values is larger than that of any other rectangle on $S_k$;
(3-2-3) calculating the difference value between the rectangular region obtained in step (3-2-2) and the outer region according to formula (7), wherein $R_k$ denotes the rectangular region obtained in the $k$-th iteration, $S'$ denotes the modified saliency map obtained in step (2), $R'_k$ is the rectangular region on $S'$ corresponding to $R_k$, $D_k$ denotes the difference value between the rectangular region and the outer region in the $k$-th iteration, and $v_k$ denotes an intermediate variable given by formula (8), wherein $m_{R'_k}$ denotes the number of pixel points inside $R'_k$, $\mu_{R'_k}$ denotes the mean of the saliency values of all pixel points inside $R'_k$, $\bar{m}_{R'_k}$ denotes the number of pixel points outside $R'_k$, and $\bar{\mu}_{R'_k}$ denotes the mean of the saliency values of all pixel points outside $R'_k$;
(3-2-4) updating the mean $\lambda_k$ of the saliency values of all pixel points on the saliency map $S_k$ according to formula (9), wherein $\lambda_{k-1}$ denotes the mean of the saliency values of all pixel points on the saliency map before the update, $\mu_{R'_k}$ denotes the mean of the saliency values of all pixel points inside the rectangular region $R'_k$ corresponding to $R_k$ on $S'$, and $\lambda_k$ denotes the mean of the saliency values of all pixel points on the saliency map $S_k$ after the update;
(3-2-5) on the saliency map $S_k$, setting the saliency values of all pixel points outside the rectangular region $R_k$ obtained in the $k$-th iteration to 0;
(3-3) if the difference value between the rectangular region and the outer region obtained in the $k$-th iteration satisfies $D_k \le D_{k-1}$, then the target rectangle is $R^* = R_{k-1}$; otherwise, step (3-2) of updating the saliency map through iteration is continued to obtain the target rectangle, the image content of the inner region of the target rectangle being the detected salient object.
Compared with the prior art, the method for detecting a salient object in an image based on inter-area difference has the following advantage: it can accurately detect the salient object in an image without setting any parameter.
Drawings
FIG. 1 is a flow chart of the method for detecting a salient object in an image based on inter-area difference according to the invention;
FIG. 2 is the input original image;
FIG. 3 is the saliency image of the original image;
FIG. 4 shows the target rectangle obtained on the modified saliency map;
FIG. 5 shows the detected salient object obtained on the original image.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings attached to the specification.
The simulation experiments of the invention were implemented in software on a PC test platform with a 2.53 GHz CPU and 1.96 GB of memory.
As shown in fig. 1, the method for detecting a salient object in an image based on a difference between regions according to the present invention is described by the following steps:
(1) inputting an original image, as shown in fig. 2 (a), calculating a saliency map of the original image, which comprises the following specific steps:
(1-1) segmenting an original image into a plurality of regions by using a mean shift algorithm;
(1-2) calculating the color similarity of each pixel point in the image and each region, wherein the calculation formula is as follows:

$$p(x \mid r_i) = \frac{1}{n_i} \sum_{x_j \in r_i} K_i\!\left(c_x - c_{x_j}\right)$$ (1)

wherein $i$ denotes the $i$-th region, $n_i$ denotes the number of pixel points in the segmented region $r_i$, $x_j$ denotes a pixel point in region $r_i$, $c_{x_j}$ denotes the color feature of pixel point $x_j$, $c_x$ denotes the color feature of pixel point $x$, $K_i$ denotes the kernel function of the $i$-th region, and $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ in the image and the $i$-th region;
(1-3) calculating a color saliency map of the original image according to formula (2), wherein $i$ denotes the $i$-th region, $n$ denotes the total number of regions, $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ and the $i$-th region, $S_C(i)$ denotes the color saliency of the $i$-th region, and $S_C$ denotes the color saliency map of the original image;
(1-4) calculating a spatial saliency map of the original image according to formula (3), wherein $i$ denotes the $i$-th region, $n$ denotes the total number of regions, $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ and the $i$-th region, $S_S(i)$ denotes the spatial saliency of the $i$-th region, and $S_S$ denotes the spatial saliency map of the original image;
(1-5) calculating the saliency map of the original image according to formula (4), wherein $S_C$ denotes the color saliency map of the original image, $S_S$ denotes the spatial saliency map of the original image, and $S$ denotes the saliency map of the original image, shown in fig. 2 (b).
The value at each pixel position in the saliency map is the saliency value of that pixel point, with a value range of 0 to 255: the larger the saliency value, the more salient the pixel point, and the smaller the saliency value, the less salient the pixel point;
(2) calculating the modified saliency map, wherein the specific steps are as follows:
(2-1) setting the center of gravity $g$ of the saliency map;
(2-2) calculating the Euclidean distance $d_x$ from each pixel point $x$ on the saliency map to the center of gravity $g$, wherein the calculation formula is as follows:

$$d_x = \sqrt{(x_1 - g_1)^2 + (x_2 - g_2)^2}$$ (5)

wherein $(x_1, x_2)$ denotes the coordinates of pixel point $x$, $(g_1, g_2)$ denotes the coordinates of the center of gravity, and $d_x$ denotes the Euclidean distance from pixel point $x$ to the center of gravity of the saliency map;
(2-3) calculating the modified saliency map according to formula (6), wherein $W$ and $H$ denote the width and height of the image respectively, $d_x$ denotes the Euclidean distance from pixel point $x$ to the center of gravity of the saliency map, $S$ denotes the original saliency map, as in fig. 2 (b), and $S'$ denotes the modified saliency map, as in fig. 3 (a);
(3) iteratively updating the saliency map to find the target rectangle with the largest difference from its outer region, the image content of the inner region of the target rectangle being the detected salient object, wherein the specific steps are as follows:
(3-1) setting the initial values of the iteration, specifically:
(3-1-1) letting $k$ denote the number of iterations, wherein $k = 0, 1, 2, 3, \ldots$;
(3-1-2) letting $S_k$ denote the saliency map updated in the $k$-th iteration, the saliency map in the initial state being $S_0 = S'$, wherein $S'$ denotes the modified saliency map obtained in step (2);
(3-1-3) letting $R_k$ denote the rectangular region obtained in the $k$-th iteration, $R_0$ denoting the rectangular region in the initial state, which is the whole saliency map; in this experiment, the resolution of the original image is 378 × 400, so the rectangular region $R_0$ in the initial state is 378 × 400;
(3-1-4) letting $D_k$ denote the difference value between the rectangular region obtained in the $k$-th iteration and the outer region, the outer region being the part of the saliency map outside the rectangular region, with $D_0$ the difference value in the initial state;
(3-1-5) letting $\lambda_k$ denote the mean of the saliency values of all pixel points on the saliency map $S_k$ in the $k$-th iteration; the mean of the saliency values of all pixel points on the initial saliency map $S_0$ is

$$\lambda_0 = \frac{1}{N} \sum_{x} S_0(x)$$

wherein $S_0(x)$ denotes the saliency value of the pixel point at coordinates $x$ in the saliency map $S_0$, $N$ denotes the number of pixel points in $S_0$, and $S_0$ denotes the saliency map in the initial state;
(3-2) obtaining a rectangular region by iteratively updating the saliency map, wherein the specific steps are as follows:
(3-2-1) taking the 1st iteration as an example, the mean $\lambda_0$ of the saliency values of all pixel points of the initial saliency map $S_0$ is subtracted from the saliency value of each of its pixel points, obtaining the updated saliency map $S_1$;
(3-2-2) searching the updated saliency map $S_1$ with the efficient sub-window search algorithm to obtain the rectangular region $R_1$ whose sum of pixel values is larger than that of any other rectangle on $S_1$;
(3-2-3) calculating the difference value between the rectangular region obtained in step (3-2-2) and the outer region according to formula (7), wherein $R_k$ denotes the rectangular region obtained in the $k$-th iteration, $S'$ denotes the modified saliency map obtained in step (2), $R'_k$ is the rectangular region on $S'$ corresponding to $R_k$, such as the purple rectangle in fig. 3 (a), $D_k$ denotes the difference value between the rectangular region and the outer region in the $k$-th iteration, and $v_k$ denotes an intermediate variable given by formula (8), wherein $m_{R'_k}$ denotes the number of pixel points inside $R'_k$, $\mu_{R'_k}$ denotes the mean of the saliency values of all pixel points inside $R'_k$, $\bar{m}_{R'_k}$ denotes the number of pixel points outside $R'_k$, and $\bar{\mu}_{R'_k}$ denotes the mean of the saliency values of all pixel points outside $R'_k$.
For example, for the rectangular region $R_1$ obtained in the 1st iteration, the difference value $D_1$ from the outer region is obtained according to formula (7), wherein $R_1$ denotes the rectangular region obtained in the 1st iteration, $S'$ denotes the modified saliency map obtained in step (2), $R'_1$ is the rectangular region on $S'$ corresponding to $R_1$, and $v_1$ denotes the intermediate variable obtained from formula (8);
(3-2-4) updating the mean $\lambda_k$ of the saliency values of all pixel points on the saliency map $S_k$ according to formula (9), wherein $\lambda_{k-1}$ denotes the mean of the saliency values of all pixel points on the saliency map before the update, $\mu_{R'_k}$ denotes the mean of the saliency values of all pixel points inside the rectangular region $R'_k$ corresponding to $R_k$ on $S'$, and $\lambda_k$ denotes the mean of the saliency values of all pixel points on the saliency map $S_k$ after the update.
For example, in the 1st iteration, the saliency map $S_1$ is updated according to formula (9) to obtain the mean $\lambda_1$ of the saliency values of all its pixel points;
(3-2-5) on the saliency map $S_k$, setting the saliency values of all pixel points outside the rectangular region $R_k$ obtained in the $k$-th iteration to 0;
(3-3) if the difference value between the rectangular region and the outer region obtained in the $k$-th iteration satisfies $D_k \le D_{k-1}$, then the target rectangle is $R^* = R_{k-1}$; otherwise, step (3-2) of updating the saliency map through iteration is continued to obtain the target rectangle, the image content of the inner region of the target rectangle being the detected salient object. For example, in the 1st iteration $D_1 > D_0$, so step (3-2) of updating the saliency map through iteration is continued to obtain the target rectangle; the yellow rectangle in fig. 3 (b) is the objectively correct target rectangle, the purple rectangle is the target rectangle detected on the original image, and the image content in the inner region of the target rectangle is the detected salient object.
As can be seen from the simulation results, the method can accurately detect the salient object without setting any parameter.
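For completeness, the sketches above can be chained end to end. The synthetic Gaussian blob below is a stand-in for the saliency map of fig. 2 (b); `modify_saliency` and `detect_salient_rectangle` are the hypothetical helpers defined in the earlier sketches, not functions named by the patent.

```python
import numpy as np

# Synthetic stand-in for a 378 x 400 saliency map: a bright blob on a dark field.
ys, xs = np.mgrid[0:378, 0:400].astype(float)
sal = 255.0 * np.exp(-(((xs - 220) / 60) ** 2 + ((ys - 170) / 50) ** 2))

sal_mod = modify_saliency(sal)             # step (2): center-weighted modification
rect = detect_salient_rectangle(sal_mod)   # step (3): iterative rectangle search
print("target rectangle (top, bottom, left, right):", rect)
```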

Claims (4)

1. A method for detecting a salient object in an image based on inter-area difference, comprising the following specific steps:
(1) inputting an original image, and calculating a saliency map of the original image;
(2) calculating a modified saliency map;
(3) iteratively updating the saliency map to find the target rectangle with the largest difference from its outer region, wherein the image content of the inner region of the target rectangle is the detected salient object.
2. The method for detecting a salient object in an image based on inter-area difference as claimed in claim 1, wherein the step (1) of inputting the original image and calculating the saliency map of the original image comprises the following specific steps:
(1-1) segmenting an original image into a plurality of regions by using a mean shift algorithm;
(1-2) calculating the color similarity of each pixel point in the image and each region, wherein the calculation formula is as follows:

$$p(x \mid r_i) = \frac{1}{n_i} \sum_{x_j \in r_i} K_i\!\left(c_x - c_{x_j}\right)$$ (1)

wherein $i$ denotes the $i$-th region, $n_i$ denotes the number of pixel points in the segmented region $r_i$, $x_j$ denotes a pixel point in region $r_i$, $c_{x_j}$ denotes the color feature of pixel point $x_j$, $c_x$ denotes the color feature of pixel point $x$, $K_i$ denotes the kernel function of the $i$-th region, and $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ in the image and the $i$-th region;
(1-3) calculating a color saliency map of the original image according to formula (2), wherein $i$ denotes the $i$-th region, $n$ denotes the total number of regions, $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ and the $i$-th region, $S_C(i)$ denotes the color saliency of the $i$-th region, and $S_C$ denotes the color saliency map of the original image;
(1-4) calculating a spatial saliency map of the original image according to formula (3), wherein $i$ denotes the $i$-th region, $n$ denotes the total number of regions, $p(x \mid r_i)$ denotes the color similarity between pixel point $x$ and the $i$-th region, $S_S(i)$ denotes the spatial saliency of the $i$-th region, and $S_S$ denotes the spatial saliency map of the original image;
(1-5) calculating the saliency map of the original image according to formula (4), wherein $S_C$ denotes the color saliency map of the original image, $S_S$ denotes the spatial saliency map of the original image, and $S$ denotes the saliency map of the original image; the value at each pixel position in the saliency map is the saliency value of that pixel point, with a value range of 0 to 255: the larger the saliency value, the more salient the pixel point, and the smaller the saliency value, the less salient the pixel point.
3. The method according to claim 2, wherein the step (2) of calculating and modifying the saliency map comprises the following steps:
(2-1) setting the center of gravity $g$ of the saliency map;
(2-2) calculating the Euclidean distance $d_x$ from each pixel point $x$ on the saliency map to the center of gravity $g$, wherein the calculation formula is as follows:

$$d_x = \sqrt{(x_1 - g_1)^2 + (x_2 - g_2)^2}$$ (5)

wherein $(x_1, x_2)$ denotes the coordinates of pixel point $x$, $(g_1, g_2)$ denotes the coordinates of the center of gravity, and $d_x$ denotes the Euclidean distance from pixel point $x$ to the center of gravity of the saliency map;
(2-3) calculating the modified saliency map according to formula (6), wherein $W$ and $H$ denote the width and height of the image respectively, $d_x$ denotes the Euclidean distance from pixel point $x$ to the center of gravity of the saliency map, $S$ denotes the original saliency map, and $S'$ denotes the modified saliency map.
4. The method according to claim 3, wherein the step (3) of iteratively updating the saliency map to find a target rectangle with the largest difference from the outer region of the saliency map, wherein the image content in the inner region of the target rectangle is the detected salient object, comprises the following steps:
(3-1) setting the initial values of the iteration, specifically:
(3-1-1) letting $k$ denote the number of iterations, wherein $k = 0, 1, 2, 3, \ldots$;
(3-1-2) letting $S_k$ denote the saliency map updated in the $k$-th iteration, the saliency map in the initial state being $S_0 = S'$, wherein $S'$ denotes the modified saliency map obtained in step (2);
(3-1-3) letting $R_k$ denote the rectangular region obtained in the $k$-th iteration, $R_0$ denoting the rectangular region in the initial state, which is the whole saliency map;
(3-1-4) letting $D_k$ denote the difference value between the rectangular region obtained in the $k$-th iteration and the outer region, the outer region being the part of the saliency map outside the rectangular region, with $D_0$ the difference value in the initial state;
(3-1-5) letting $\lambda_k$ denote the mean of the saliency values of all pixel points on the saliency map $S_k$ in the $k$-th iteration, with $\lambda_0$ the mean of the saliency values of all pixel points on the initial saliency map $S_0$;
(3-2) obtaining a rectangular region by iteratively updating the saliency map, wherein the specific steps are as follows:
(3-2-1) in the $k$-th iteration, subtracting from the saliency value of each pixel point of the saliency map $S_{k-1}$ updated in the $(k-1)$-th iteration the mean $\lambda_{k-1}$ of the saliency values of all its pixel points, obtaining the updated saliency map $S_k$;
(3-2-2) searching the updated saliency map $S_k$ with the efficient sub-window search algorithm to obtain the rectangular region $R_k$ whose sum of pixel values is larger than that of any other rectangle on $S_k$;
(3-2-3) calculating the difference value between the rectangular region obtained in step (3-2-2) and the outer region according to formula (7), wherein $R_k$ denotes the rectangular region obtained in the $k$-th iteration, $S'$ denotes the modified saliency map obtained in step (2), $R'_k$ is the rectangular region on $S'$ corresponding to $R_k$, $D_k$ denotes the difference value between the rectangular region and the outer region in the $k$-th iteration, and $v_k$ denotes an intermediate variable given by formula (8), wherein $m_{R'_k}$ denotes the number of pixel points inside $R'_k$, $\mu_{R'_k}$ denotes the mean of the saliency values of all pixel points inside $R'_k$, $\bar{m}_{R'_k}$ denotes the number of pixel points outside $R'_k$, and $\bar{\mu}_{R'_k}$ denotes the mean of the saliency values of all pixel points outside $R'_k$;
(3-2-4) updating the mean $\lambda_k$ of the saliency values of all pixel points on the saliency map $S_k$ according to formula (9), wherein $\lambda_{k-1}$ denotes the mean of the saliency values of all pixel points on the saliency map before the update, $\mu_{R'_k}$ denotes the mean of the saliency values of all pixel points inside the rectangular region $R'_k$ corresponding to $R_k$ on $S'$, and $\lambda_k$ denotes the mean of the saliency values of all pixel points on the saliency map $S_k$ after the update;
(3-2-5) on the saliency map $S_k$, setting the saliency values of all pixel points outside the rectangular region $R_k$ obtained in the $k$-th iteration to 0;
(3-3) if the difference value between the rectangular region and the outer region obtained in the $k$-th iteration satisfies $D_k \le D_{k-1}$, then the target rectangle is $R^* = R_{k-1}$; otherwise, step (3-2) of updating the saliency map through iteration is continued to obtain the target rectangle, wherein the image content of the inner region of the target rectangle is the detected salient object.
CN 201110312091 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference Expired - Fee Related CN102509072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110312091 CN102509072B (en) 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110312091 CN102509072B (en) 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference

Publications (2)

Publication Number Publication Date
CN102509072A true CN102509072A (en) 2012-06-20
CN102509072B CN102509072B (en) 2013-08-28

Family

ID=46221153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110312091 Expired - Fee Related CN102509072B (en) 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference

Country Status (1)

Country Link
CN (1) CN102509072B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938139A (en) * 2012-11-09 2013-02-20 清华大学 Automatic synthesis method for fault finding game images
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN110689007A (en) * 2019-09-16 2020-01-14 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
CN111461139A (en) * 2020-03-27 2020-07-28 武汉工程大学 Multi-target visual saliency layered detection method in complex scene
CN113114943A (en) * 2016-12-22 2021-07-13 三星电子株式会社 Apparatus and method for processing image
US11670068B2 (en) 2016-12-22 2023-06-06 Samsung Electronics Co., Ltd. Apparatus and method for processing image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510299A (en) * 2009-03-04 2009-08-19 上海大学 Image self-adapting method based on vision significance

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510299A (en) * 2009-03-04 2009-08-19 上海大学 Image self-adapting method based on vision significance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lang Congyan et al.: "A method for extracting spatiotemporal salient units from video based on fuzzy information granulation", Acta Electronica Sinica *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image
CN103218832B (en) * 2012-10-15 2016-01-13 上海大学 Based on the vision significance algorithm of global color contrast and spatial distribution in image
CN102938139A (en) * 2012-11-09 2013-02-20 清华大学 Automatic synthesis method for fault finding game images
CN102938139B (en) * 2012-11-09 2015-03-04 清华大学 Automatic synthesis method for fault finding game images
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106407978B (en) * 2016-09-24 2020-10-30 上海大学 Method for detecting salient object in unconstrained video by combining similarity degree
CN113114943A (en) * 2016-12-22 2021-07-13 三星电子株式会社 Apparatus and method for processing image
US11670068B2 (en) 2016-12-22 2023-06-06 Samsung Electronics Co., Ltd. Apparatus and method for processing image
CN113114943B (en) * 2016-12-22 2023-08-04 三星电子株式会社 Apparatus and method for processing image
CN110689007A (en) * 2019-09-16 2020-01-14 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
CN110689007B (en) * 2019-09-16 2022-04-15 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
CN111461139A (en) * 2020-03-27 2020-07-28 武汉工程大学 Multi-target visual saliency layered detection method in complex scene

Also Published As

Publication number Publication date
CN102509072B (en) 2013-08-28

Similar Documents

Publication Publication Date Title
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN105844669B (en) A kind of video object method for real time tracking based on local Hash feature
CN102509072A (en) Method for detecting salient object in image based on inter-area difference
CN107292234B (en) Indoor scene layout estimation method based on information edge and multi-modal features
US20160358035A1 (en) Saliency information acquisition device and saliency information acquisition method
Kuo et al. 3D object detection and pose estimation from depth image for robotic bin picking
CN108647694B (en) Context-aware and adaptive response-based related filtering target tracking method
US20160196467A1 (en) Three-Dimensional Face Recognition Device Based on Three Dimensional Point Cloud and Three-Dimensional Face Recognition Method Based on Three-Dimensional Point Cloud
CN108052624A (en) Processing Method of Point-clouds, device and computer readable storage medium
CN106485651B (en) The image matching method of fast robust Scale invariant
CN108564120B (en) Feature point extraction method based on deep neural network
CN105335725A (en) Gait identification identity authentication method based on feature fusion
CN111860494A (en) Optimization method and device for image target detection, electronic equipment and storage medium
CN105404886A (en) Feature model generating method and feature model generating device
CN105225226A (en) A kind of cascade deformable part model object detection method based on Iamge Segmentation
CN104978582B (en) Shelter target recognition methods based on profile angle of chord feature
CN105957107A (en) Pedestrian detecting and tracking method and device
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN107507226A (en) A kind of method and device of images match
KR101182683B1 (en) A Visual Shape Descriptor Generating Method Using Sectors and Shape Context of Contour Lines and the Recording Medium thereof
CN105427333A (en) Real-time registration method of video sequence image, system and shooting terminal
CN108256567B (en) Target identification method and system based on deep learning
CN105374030B (en) A kind of background model and Mobile object detection method and system
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN105631849B (en) The change detecting method and device of target polygon

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130828

Termination date: 20201017

CF01 Termination of patent right due to non-payment of annual fee