WO2011061905A1 - Object region extraction device, object region extraction method, and computer-readable medium - Google Patents
Object region extraction device, object region extraction method, and computer-readable medium
- Publication number
- WO2011061905A1 (PCT/JP2010/006612)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- likelihood
- region
- feature
- background
- color
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/162—Segmentation; Edge detection involving graph-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Definitions
- The present invention relates to an object region extraction device, an object region extraction method, and a program for extracting an object region from an image, and in particular to an object region extraction device, an object region extraction method, and a program for extracting an object region that can accurately extract an object from an image.
- Non-Patent Document 1 discloses a technique for extracting an object region by manually specifying an object region and a background region in an image and separating the object region from the background region.
- This extraction method, the so-called graph cut method, separates the background region from the object region by minimizing an energy function consisting of a data term and a smoothing term.
- the data term is defined based on the probability distribution generated from the luminance histogram of the object region and the background region designated by the user
- the smoothing term is defined based on the difference in luminance between adjacent pixels.
- Non-Patent Document 2 discloses a method of extracting an object region by designating a rectangular region containing the object in an image and separating the object region from the background region.
- the extraction method is an improvement of the graph cut disclosed in Non-Patent Document 1.
- A color distribution model is generated by treating the inside of the designated rectangular area as the object region and the outside of the rectangular area as the background region, and the color distribution corresponding to each region is used as the data term. The user can therefore extract the object region simply by specifying a rectangular region that contains it.
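For reference, the rectangle-based workflow of Non-Patent Document 2 is available in OpenCV as the grabCut function. The following is a minimal sketch of that prior-art workflow rather than of the invention described below; the file name and rectangle coordinates are placeholders.

```python
import cv2
import numpy as np

# Hypothetical input image and user-specified rectangle (x, y, width, height).
img = cv2.imread("input.jpg")
rect = (50, 30, 200, 150)  # placeholder rectangle assumed to contain the object

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # internal color-model state used by grabCut
fgd_model = np.zeros((1, 65), np.float64)

# Pixels inside the rectangle seed the object color model, pixels outside seed the
# background color model; graph-cut iterations then refine the separation.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Mask values GC_FGD and GC_PR_FGD mark definite and probable foreground pixels.
object_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
segmented = img * object_mask[:, :, None]
```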
- Patent Document 1 discloses a method of extracting a region in a medical image by detecting an object having a known shape, designating it as the object region, and designating a region outside a sufficiently large range around the detection point as the background region.
- In this method, the organ to be extracted is detected as one point of the object region in order to extract the organ from the medical image.
- For example, the organ to be extracted is placed at the center of the image at the time of imaging, so that the center of the image can be set as one point of the object region.
- Further, since the shape of the organ is known to some extent, the organ to be extracted can be detected using this shape information.
- Then, a region sufficiently separated from the one point of the object region is defined as the background region, and the object is extracted using a graph cut (see Non-Patent Document 1 and Non-Patent Document 3).
- Patent Document 2 discloses a technique for separating the object region from the background region and extracting the object region by using color information unique to the object to designate positions where the object color exists as the object region.
- In this method, a color unique to the object, such as human skin, is defined in advance, and an energy function whose data term becomes smaller when the probability of containing that color is high is used.
- The object region is then obtained by applying a graph cut to this energy function.
- However, in Non-Patent Documents 1 and 2, it is necessary to manually specify the object region and the background region.
- In Non-Patent Document 2, the object color distribution is estimated from the rectangular area containing the object region, and the background color distribution is estimated from outside the rectangular area. Therefore, if a background whose color is similar to that of the object exists outside the rectangular area, there is a problem that such a background may be extracted as part of the object region.
- an object of the present invention is to provide an object region extraction apparatus, an object region extraction method, and a program for extracting an object region that can extract an object from an image with high accuracy.
- An object region extraction apparatus according to the present invention includes similar region calculation means for calculating a region having a high similarity to a feature extracted from an image, feature region likelihood calculation means for calculating the likelihood of a feature region from the position of the feature and the similar region, and object region extraction means for extracting an object region based on the likelihood of the feature region.
- According to the present invention, it is possible to provide an object region extraction device, an object region extraction method, and a program for extracting an object region that can extract an object from an image with high accuracy.
- FIG. 1 is a block diagram showing an object region extraction device according to a first exemplary embodiment.
- FIG. 2 is a block diagram showing another aspect of the object region extraction device according to the first exemplary embodiment.
- FIG. 3 is a flowchart for explaining a method of extracting an object region using the object region extraction apparatus according to the first embodiment.
- FIG. 4 is a block diagram showing an object region extraction apparatus according to a second embodiment.
- FIG. 5 is a flowchart for explaining a method of extracting an object region using the object region extraction apparatus according to the second embodiment.
- FIG. 6 is a diagram showing the object position likelihood calculated based on a Gaussian distribution centered on the position of a feature point of the object.
- FIG. 7 is a diagram for explaining a method of calculating the object color likelihood based on the object position likelihood.
- FIG. 8 is a diagram showing the background position likelihood calculated based on Gaussian distributions centered near the four sides of the image.
- FIG. 9 is a diagram showing a result of extracting an object region using the object region extraction apparatus according to the second embodiment.
- FIG. 10 is a block diagram showing an object region extraction apparatus according to a third embodiment.
- FIG. 11 is a diagram showing a result of generating the object position likelihood from an object detection result in the object region extraction apparatus according to the third embodiment.
- FIG. 12 is a block diagram showing an object region extraction apparatus according to a fourth embodiment.
- FIG. 13 is a diagram showing a result of generating the object position likelihood from a detection result of a shape unique to the object in the object region extraction device according to the fourth embodiment.
- FIG. 1 is a block diagram showing an object region extracting apparatus according to this embodiment.
- The object region extraction apparatus 100 includes a similar region calculation unit 120 that calculates a region having a high degree of similarity to the feature extracted from the image, a feature region likelihood calculation unit 130 that calculates the likelihood of the feature region based on the extracted feature position and the similar region, and an object region extraction unit 140 that extracts an object region based on the likelihood of the feature region.
- the similar region calculation means 120 calculates a region having a high similarity with the feature extracted from the image input from the image input device 10.
- a feature extraction unit 110 may be provided before the similar region calculation unit 120, and features may be extracted from an image input using the feature extraction unit 110.
- the feature is a feature of the object or a feature of the background.
- A method for extracting object shape features, such as Haar-like features, SIFT features, or HOG features, may be used.
- a method for extracting color characteristics of an object may be used.
- the feature of the object may be extracted from the image by combining the feature of the shape of the object and the feature of the color of the object.
- The desired object features (object shape features and object color features) stored in the object feature storage unit 21 of the data storage unit 20 may be compared with the features extracted from the input image, and the desired features may thereby be extracted from the input image.
- the similar area calculation means 120 calculates, for example, the degree of similarity between the shape or color of the extracted feature and the shape or color of the peripheral area around the feature position.
- the range of the peripheral region can be determined by generating a Gaussian distribution having a variance corresponding to the size of the feature around the position of the extracted feature (feature shape, feature color).
- a plurality of Gaussian distributions can be expressed as a mixed Gaussian distribution, and the range of the peripheral region can be determined by using the mixed Gaussian distribution.
- the method for determining the range of the peripheral region is not limited to this method, and any other method may be used as long as the method can determine the range of the peripheral region.
- the feature region likelihood calculating unit 130 calculates the likelihood of the feature region from the extracted feature position and the region with high similarity (similar region) calculated by the similar region calculating unit 120. For example, the feature region likelihood calculating unit 130 can calculate the feature region likelihood based on the product of the extracted feature position, the distance between the region where the similarity is calculated, and the similarity. The feature region likelihood calculating unit 130 can also calculate the feature region likelihood based on the product of the calculated position likelihood and the similarity of the peripheral region around the feature position.
- the position likelihood can be calculated by generating a Gaussian distribution having a variance according to the size of the feature with the extracted feature position as the center.
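As an illustration of how the likelihood based on the product of the position likelihood and the peripheral similarity could be computed per pixel, the following is a minimal sketch assuming an isotropic Gaussian position term and an exponential color-similarity term; the fall-off scales are assumptions, not values specified by this embodiment.

```python
import numpy as np

def feature_region_likelihood(feature_pos, feature_size, feature_color, image):
    """Feature region likelihood as the product of a position likelihood (Gaussian
    centered on the feature, variance following the feature size) and a color
    similarity between each pixel and the feature color."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Position likelihood: isotropic Gaussian centered on the feature position (x, y).
    sigma2 = float(feature_size) ** 2
    d2 = (xs - feature_pos[0]) ** 2 + (ys - feature_pos[1]) ** 2
    position_likelihood = np.exp(-d2 / (2.0 * sigma2))

    # Similarity: 1 for an identical color, decaying with the color distance.
    color_dist = np.linalg.norm(image.astype(np.float32) - np.float32(feature_color), axis=2)
    similarity = np.exp(-color_dist / 50.0)  # 50.0 is an assumed fall-off scale

    return position_likelihood * similarity
```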
- the object region extracting unit 140 extracts an object region based on the likelihood of the feature region calculated by the feature region likelihood calculating unit 130.
- The object region extraction unit 140 performs minimization, using a graph cut method or the like, on an energy function that includes the likelihood of the feature region calculated by the feature region likelihood calculation unit 130 and a function representing the similarity of luminance between adjacent pixels. Through this minimization, an object region can be extracted from the divided regions. The object region extracted by the object region extraction unit 140 is then sent to the image output device 30.
- the feature extraction unit 110 shown in FIG. 2 may extract the position of the feature representing the object and the background.
- the similar area calculation unit 120 may calculate an area having a high degree of similarity to the extracted object feature and an area having a high degree of similarity to the extracted background feature.
- The feature region likelihood calculation unit 130 may calculate the likelihood of the object region from the position of the object feature and its similar region, and may calculate the likelihood of the background region from the position of the background feature and its similar region.
- the object region extraction unit 140 may extract the object region based on the likelihood of the background region and the likelihood of the object region.
- As described above, since the object region extraction apparatus according to this embodiment includes the similar region calculation unit 120, which calculates a region having high similarity to the extracted feature, and the feature region likelihood calculation unit 130, which calculates the likelihood of the feature region from the extracted feature position and the similar region calculated by the similar region calculation unit 120, the object region can be extracted with high accuracy. In addition, when the feature extraction unit 110 shown in FIG. 2 is provided, a desired object region can be extracted from the image automatically, without burdening the user.
- FIG. 3 is a flowchart for explaining the object region extraction method according to the present embodiment.
- an image to be processed is first input (step S1).
- a feature is obtained from the image, and the position of the feature is extracted (step S2).
- a region having a high similarity to the extracted feature is calculated (step S3).
- the likelihood of the feature region is calculated from the similar region and the feature position (step S4).
- an object region is extracted based on the likelihood of the feature region (step S5).
- When extracting features from the image in step S2, the user may specify them manually, or they may be extracted automatically using, for example, a device such as the feature extraction unit 110 shown in FIG. 2. Since the operation in each step is the same as the operation of the object region extraction apparatus described above, a duplicate description is omitted.
- The program for extracting an object region according to this embodiment causes a computer to execute operations of obtaining a feature from an image and extracting the position of the feature, calculating a region having a high degree of similarity to the extracted feature, calculating the likelihood of a feature region from the similar region and the feature position, and extracting an object region based on the likelihood of the feature region.
- Also in this case, the user may specify the feature manually, or the feature may be extracted automatically using, for example, a program for extracting features.
- As described above, this embodiment can provide an object region extraction device, an object region extraction method, and a program for extracting an object region that can accurately extract an object from an image. Further, by using the feature extraction unit 110 shown in FIG. 2, it is not necessary to extract features manually, and an object can be extracted from an input image automatically.
- FIG. 4 is a block diagram showing the object region extraction apparatus according to the present embodiment.
- The object region extraction apparatus 300 includes a feature extraction unit 210, an object position likelihood calculation unit 220, an object color likelihood calculation unit 230, an object region likelihood calculation unit 240, a background position likelihood calculation unit 250, a background color likelihood calculation unit 260, a background region likelihood calculation unit 270, and an object region extraction unit 280.
- In addition to the means for calculating the likelihood of the object region, the object region extraction apparatus 300 according to this embodiment further includes means for calculating the likelihood of the background region, namely the background position likelihood calculation unit 250, the background color likelihood calculation unit 260, and the background region likelihood calculation unit 270.
- The object region extraction apparatus 300 includes the object position likelihood calculation unit 220, the object color likelihood calculation unit 230, the background position likelihood calculation unit 250, and the background color likelihood calculation unit 260 as the similar region calculation unit 120 described in the first embodiment.
- As the feature region likelihood calculation unit 130 described in the first embodiment, it includes the object region likelihood calculation unit 240 and the background region likelihood calculation unit 270.
- the image input device 10 has a function of acquiring an image acquired from an imaging system such as a still camera, a video camera, or a copy machine or an image posted on the web and passing it to the feature extraction unit 210.
- the feature extraction unit 210 performs feature extraction from the input image.
- A method of extracting object shape features, such as Haar-like features, SIFT features, or HOG features, may be used, or a method of extracting object color features may be used.
- the feature of the object may be extracted from the image by combining the feature of the shape of the object and the feature of the color of the object.
- The desired object features (object shape features and object color features) and background features (background shape features and background color features) may be compared with the features extracted from the input image, and the desired features may thereby be extracted from the input image.
- The feature extraction may also be performed by the user identifying a feature in the image and designating it using an input terminal (not shown), instead of using the feature extraction unit 210. In this case, the feature extraction unit 210 may be omitted.
- The object position likelihood calculation means 220 has a function of calculating, from the features of the object, the likelihood of a position at which the object exists within the region in which the object exists.
- The object position likelihood calculation unit 220 calculates the object position likelihood by generating a Gaussian distribution whose variance corresponds to the feature size, centered on the feature position extracted by the feature extraction unit 210.
- a plurality of Gaussian distributions can be expressed as a mixed Gaussian distribution, and the object position likelihood can be calculated from the mixed Gaussian distribution.
- The object position likelihood calculation means 220 may also perform object matching using a group of features existing within a certain region and calculate the object position likelihood from the matching result. Further, the object position likelihood calculation unit 220 may perform object matching using a group of features existing within a previously segmented region and calculate the object position likelihood from that result.
- the object color likelihood calculating unit 230 has a function of calculating the likelihood of the object color based on the object position likelihood calculated by the object position likelihood calculating unit 220.
- The object color likelihood calculation unit 230 treats the object position likelihood at a given pixel, generated by the object position likelihood calculation unit 220, as a candidate for the object color likelihood, and among the candidates sharing the same pixel color, the candidate with the maximum value is defined as the object color likelihood.
- The object region likelihood calculation unit 240 has a function of calculating the likelihood of the object region from the object position likelihood calculated by the object position likelihood calculation unit 220 and the object color likelihood calculated by the object color likelihood calculation unit 230. Further, the object region likelihood calculation unit 240 may calculate the object region likelihood based on the product of the calculated object position likelihood and the similarity of the peripheral region centered on the feature position.
- The background position likelihood calculation means 250 has a function of calculating, from the background features, the likelihood of a position at which the background exists within the region in which the background exists.
- the background position likelihood calculating unit 250 calculates the background position likelihood by generating a Gaussian distribution having a variance corresponding to the feature size around the position of the background feature extracted by the feature extracting unit 210. Also in this case, when there are a plurality of background features extracted by the feature extraction unit 210, a plurality of Gaussian distributions can be expressed as a mixed Gaussian distribution, and the background position likelihood can be calculated from the mixed Gaussian distribution.
- the background color likelihood calculating means 260 has a function of calculating the likelihood of the background color based on the likelihood of the background position.
- the background color likelihood calculating means 260 uses the background position likelihood of a certain pixel generated by the background position likelihood calculating means 250 as a background color likelihood candidate, and uses the value with the highest likelihood for the same color as the background color likelihood.
- The background region likelihood calculation unit 270 has a function of calculating the likelihood of the background region from the background position likelihood calculated by the background position likelihood calculation unit 250 and the background color likelihood calculated by the background color likelihood calculation unit 260.
- The object region extraction unit 280 has a function of defining the data term of an energy function from the likelihood of the object region calculated by the object region likelihood calculation unit 240 and the likelihood of the background region calculated by the background region likelihood calculation unit 270, dividing the image into the object region and the background region by minimizing the energy function, and extracting the object region. That is, the object region extraction unit 280 performs minimization, using the graph cut method or the like, on an energy function that includes the object region likelihood calculated by the object region likelihood calculation unit 240, the background region likelihood calculated by the background region likelihood calculation unit 270, and a function representing the similarity of luminance between adjacent pixels. An object region can be extracted from the divided regions using this minimization process.
- the object region extracted by the object region extraction means 280 is sent to the image output device 30.
- FIG. 5 is a flowchart for explaining the object region extraction method according to the present embodiment.
- an image to be processed is input (step S11).
- the features of the object and background to be extracted from the image are obtained, and the positions of the features representing the object and the background are extracted (step S12).
- the object position likelihood is calculated from the extracted object features (step S13).
- an object color likelihood is calculated from the calculated object position likelihood (step S14).
- an object region likelihood is calculated from the calculated object position likelihood and object color likelihood (step S15).
- the background position likelihood is calculated from the extracted background feature (step S16).
- a background color likelihood is calculated from the calculated background position likelihood (step S17).
- a background area likelihood is calculated from the calculated background position likelihood and background color likelihood (step S18). Note that the order of the calculation of the object region likelihood (steps S13 to S15) and the calculation of the background region likelihood (steps S16 to S18) can be arbitrarily set.
- an object region is extracted based on the calculated object region likelihood and background region likelihood (step S19). Note that the operation in each step is the same as the operation of the object region extraction apparatus described above, and thus a duplicate description is omitted. Further, when extracting a feature from an image, the user may manually specify the feature, or the feature may be automatically extracted using an apparatus such as the feature extraction unit 210 shown in FIG.
- Next, a specific example in which an object region is extracted using the object region extraction apparatus according to this embodiment will be described.
- feature extraction is performed for each object from an image showing various cars, forests, sky, roads, and the like, and the feature for each object is stored in the feature storage unit 21 in advance.
- For example, SIFT features are extracted. Since the number of features extracted from all the images is on the order of tens of thousands, several hundred representative features are computed using a clustering technique such as k-means.
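This codebook-construction step may be sketched as follows, assuming scikit-learn's KMeans, a placeholder descriptor file, and 300 clusters as one value in the "several hundred" range mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans  # any k-means implementation would do

# Hypothetical array of SIFT descriptors gathered from all car training images,
# shape (n_features, 128) with n_features on the order of tens of thousands.
descriptors = np.load("car_sift_descriptors.npy")  # placeholder file name

# Reduce the raw descriptors to a few hundred representative features (a codebook)
# that can be stored in the feature storage unit 21.
kmeans = KMeans(n_clusters=300, random_state=0).fit(descriptors)
representative_features = kmeans.cluster_centers_  # shape (300, 128)
```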
- typical features that frequently appear in the car image are stored in the feature storage unit 21 as car features.
- Such representative features that frequently appear may be used as the object features, or the object features may be obtained based on the co-occurrence frequency between the features. Further, not only the SIFT feature but also a texture feature may be used.
- Next, the object position likelihood calculation unit 220 calculates the object position likelihood. Since the area around a car feature point (the position of a car feature) determined by the feature extraction unit 210 is also likely to belong to the car region, the object position likelihood calculation unit 220 calculates the object position likelihood, which represents the position of the car region, based on the Gaussian distribution defined by (Equation 1), taking the position of the car feature point as a reference.
- FIG. 6 is a diagram illustrating the object position likelihood calculated based on a Gaussian distribution centered on the position of the feature point of the object.
- ⁇ represents the distribution of features by covariance
- ⁇ represents the position of the feature point
- x represents the position around the feature point as a vector
- T represents transposition. If there are a plurality of feature points, the object position likelihood is calculated from the mixed Gaussian distribution shown in (Expression 2).
- the variance value is not limited to the feature size, and may be set to a constant value.
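Since (Equation 1) and (Equation 2) are not reproduced in this text, the following sketch assumes a standard two-dimensional Gaussian per feature point and an equally weighted mixture for the multi-feature case; the feature positions and covariances in the example are illustrative only.

```python
import numpy as np

def object_position_likelihood(feature_points, covariances, height, width):
    """Position likelihood over the image as a mixture of 2-D Gaussians, one per
    feature point; equal mixture weights and max-normalization are assumptions."""
    ys, xs = np.mgrid[0:height, 0:width]
    positions = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float64)

    likelihood = np.zeros(height * width)
    for mu, sigma in zip(feature_points, covariances):
        diff = positions - np.asarray(mu, dtype=np.float64)
        inv = np.linalg.inv(sigma)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))
        likelihood += norm * np.exp(-0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff))

    likelihood /= len(feature_points)       # equally weighted mixture (assumption)
    likelihood /= likelihood.max() + 1e-12  # scale so the map can be used as a likelihood in [0, 1]
    return likelihood.reshape(height, width)

# Example: two hypothetical car feature points, covariance reflecting the feature size.
pts = [(120.0, 80.0), (180.0, 90.0)]
covs = [np.diag([15.0 ** 2, 15.0 ** 2]), np.diag([20.0 ** 2, 20.0 ** 2])]
pos_lik = object_position_likelihood(pts, covs, 240, 320)
```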
- the object color likelihood is calculated from the object position likelihood obtained by the object position likelihood calculating unit 220.
- the object position likelihood set at a certain pixel position is set as an object color likelihood candidate at that position.
- the object color likelihood candidate that becomes the maximum with the same pixel color is set as the object color likelihood.
- FIG. 7 is a diagram for explaining a method of calculating the object color likelihood based on the object position likelihood.
- In the example of FIG. 7, among the object color likelihood candidates that share the same pixel color, the candidate with a likelihood of 0.7 is the maximum and is therefore taken as the object color likelihood for that color.
- the object color likelihood can be expressed as (Equation 3).
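A minimal sketch of this per-color maximum follows, assuming the colors are first quantized so that "the same pixel color" is well defined; the quantization granularity is an assumption, and a color-clustered image could be used instead.

```python
import numpy as np

def object_color_likelihood(image, position_likelihood, bins=16):
    """Per-color maximum of the position-likelihood candidates: every pixel's position
    likelihood is a candidate for its (quantized) color, and the maximum candidate over
    all pixels sharing that color becomes the object color likelihood for the color."""
    # Quantize the RGB values so that "the same pixel color" is well defined.
    quantized = image.astype(np.int32) // (256 // bins)
    color_ids = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]

    color_lik = np.zeros(bins ** 3)
    np.maximum.at(color_lik, color_ids.ravel(), position_likelihood.ravel())

    # Map the per-color likelihood back onto the image grid.
    return color_lik[color_ids]
```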
- The object region likelihood calculation unit 240 calculates the object region likelihood at a certain pixel I from the object position likelihood and the object color likelihood using (Equation 4). For example, if the background contains colors very similar to the object, the object color likelihood is large for that background as well, so the background could be extracted as part of the object region if only the object color likelihood were used. Adding a positional constraint using the object position likelihood therefore prevents background regions from being extracted as the object region.
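The exact form of (Equation 4) is not reproduced here; assuming it combines the two maps multiplicatively, the combination and the positional constraint it imposes can be sketched as follows.

```python
import numpy as np

def object_region_likelihood(position_likelihood, color_likelihood):
    # A pixel needs both an object-like position and an object-like color to
    # receive a high object region likelihood; the elementwise product realizes
    # the positional constraint described above (assumed form of Equation 4).
    return position_likelihood * color_likelihood
```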
- the background region likelihood can be calculated in the same manner as the object region likelihood described above.
- the background position likelihood calculating means 250 calculates the background position likelihood in the same manner as the method of calculating the position likelihood of the vehicle area. That is, the background position likelihood calculating unit 250 calculates the background position likelihood based on the Gaussian distribution defined by (Equation 5).
- a Gaussian distribution centering around the four sides of the input image may be set using prior knowledge that the background position is likely to be the four sides of the input image.
- FIG. 8 is a diagram showing the background position likelihood calculated based on the Gaussian distribution centered on the position of the feature point of the background, with the positions near the four sides around the image as the center.
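A sketch of this border prior follows, modeled here as a single Gaussian fall-off from the nearest image border rather than as an explicit mixture centered on border feature points; the scale sigma is an assumption.

```python
import numpy as np

def border_background_position_likelihood(height, width, sigma=20.0):
    """Background position likelihood that is high near the four sides of the image
    and decays toward the interior; sigma is an assumed scale."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Distance from each pixel to the nearest image border.
    d_border = np.minimum(np.minimum(xs, width - 1 - xs), np.minimum(ys, height - 1 - ys))
    return np.exp(-(d_border.astype(np.float64) ** 2) / (2.0 * sigma ** 2))
```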
- Next, the background color likelihood is calculated from the background position likelihood obtained by the background position likelihood calculation means 250, using the background color likelihood calculation means 260.
- the background color likelihood can be expressed as (Equation 6).
- When calculating the background color likelihood, the input image may be used, or an image obtained by performing color clustering of the input image may be used.
- the background region likelihood calculating means 270 calculates the background region likelihood in a certain pixel I from the background position likelihood and the background color likelihood using (Equation 7).
- the object region is extracted using the graph cut method.
- the energy function is defined as in (Equation 8).
- ⁇ in (Equation 8) is a parameter of the ratio of R (I) and B (I)
- R (I) is a penalty function for the region
- B (I) is a penalty function representing the intensity between adjacent pixels.
- the energy function E defined by R (I) and B (I) (Equation 8) is minimized.
- R (I) is expressed by (Expression 9) and (Expression 10), and the likelihood of the object and the background is set.
- B (I) is expressed by (Expression 11), and sets the similarity of luminance values between adjacent pixels.
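To make the structure of (Equation 8) concrete, the sketch below builds R(I) from negative log region likelihoods and B(I) from a luminance-similarity weight, and minimizes the energy approximately with iterated conditional modes (ICM) as a stand-in for the exact graph cut or belief propagation minimizers discussed here; λ and the contrast scale are assumptions.

```python
import numpy as np

def segment_by_energy_minimization(obj_region_lik, bgd_region_lik, gray, lam=1.0, iters=5):
    """Builds the two terms of (Equation 8): a data term R(I) from the region
    likelihoods and a smoothing term B(I) from luminance similarity of adjacent
    pixels, then minimizes the energy approximately with ICM."""
    eps = 1e-6
    # R(I): negative log-likelihood penalty for labeling a pixel object (1) or background (0).
    r_obj = -np.log(obj_region_lik + eps)
    r_bgd = -np.log(bgd_region_lik + eps)

    labels = (obj_region_lik > bgd_region_lik).astype(np.int32)  # initial labeling
    h, w = gray.shape

    def pair_weight(a, b):
        # B(I): larger penalty for separating pixels whose luminance values are similar.
        return np.exp(-((a - b) ** 2) / (2.0 * 30.0 ** 2))  # 30.0 is an assumed contrast scale

    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                # cost[0] = label this pixel background, cost[1] = label it object.
                cost = np.array([r_bgd[y, x], r_obj[y, x]])
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = lam * pair_weight(float(gray[y, x]), float(gray[ny, nx]))
                        # Disagreeing with the neighbor's current label costs wgt.
                        cost[1 - labels[ny, nx]] += wgt
                labels[y, x] = int(np.argmin(cost))
    return labels  # 1 = object region, 0 = background region
```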
- FIG. 9 shows the result of extracting the object region using the object region extracting apparatus according to the present embodiment.
- In this embodiment, the graph cut method is used to minimize the energy function, but other optimization algorithms, such as belief propagation, may also be used.
- As described above, according to this embodiment, an object can be extracted from an image with high accuracy.
- In particular, since the object region extraction apparatus calculates the background region likelihood in addition to the object region likelihood, the object can be extracted from the image with even higher accuracy.
- Furthermore, by using the feature extraction unit 210, it is not necessary to extract features manually, and an object can be extracted from an input image automatically.
- FIG. 10 is a block diagram showing an object region extraction apparatus according to the present embodiment.
- the object region extraction apparatus 400 includes a feature extraction unit 210, an object detection unit 310, an object position likelihood calculation unit 220, an object color likelihood calculation unit 230, An object region likelihood calculating unit 240, a background position likelihood calculating unit 250, a background color likelihood calculating unit 260, a background region likelihood calculating unit 270, and an object region extracting unit 280 are included. That is, in the object region extraction apparatus 400 according to the present embodiment, the object detection unit 310 is added to the object region extraction apparatus 300 described in the second embodiment. Since the other parts are the same as those in the second embodiment, a duplicate description is omitted.
- the object detection unit 310 detects an object from features existing in a certain region with respect to the input image. If it is an object-like area, a value based on the object-likeness is voted for the pixels in the area. For example, “1” can be set as a value based on the object likeness if the object likeness is large, and “0.2” if the object likeness is small. As a result, a large value is voted for a region that is likely to be an object in the input image, and a small value is voted for a region that is not likely to be an object. Then, the voting result can be used as the object position likelihood by normalizing the voting value in the object position likelihood calculating means 220.
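A sketch of this voting step follows, assuming the detector output is available as a list of (window, score) pairs; the windows and scores in the example are made up.

```python
import numpy as np

def voting_position_likelihood(window_scores, image_shape):
    """Accumulates object-likeness votes over the pixels covered by each scored window
    and normalizes the result into an object position likelihood map.
    `window_scores` is a hypothetical list of ((x, y, w, h), score) detector outputs."""
    votes = np.zeros(image_shape, dtype=np.float64)
    for (x, y, w, h), score in window_scores:
        votes[y:y + h, x:x + w] += score  # e.g. 1.0 for object-like windows, 0.2 otherwise
    if votes.max() > 0:
        votes /= votes.max()  # normalization of the vote values
    return votes

# Example with two made-up windows on a 240x320 image.
scores = [((40, 30, 60, 40), 1.0), ((150, 90, 50, 50), 0.2)]
pos_lik = voting_position_likelihood(scores, (240, 320))
```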
- FIG. 11 is a diagram showing a result of generating the object position likelihood using such a method. As shown in FIG. 11, the object position likelihood at a position corresponding to the position of the car in the input image is large. The other portions are the same as those described in the second embodiment, and thus the description thereof is omitted.
- the object detection unit 310 is used to vote for pixels in a region likely to be an object from the entire region, and the object position likelihood is determined based on the voting result. For this reason, a likelihood distribution finer than that of the object region extraction apparatus according to the second embodiment can be set for an object having a texture pattern of a certain region. Note that the object position likelihood obtained from the object feature points (described in the second embodiment) and the object position likelihood obtained using the object detection unit 310 may be integrated.
- FIG. 12 is a block diagram showing an object region extraction apparatus according to the present embodiment.
- the object region extracting apparatus 500 includes a feature extracting unit 210, an object shape detecting unit 410, an object position likelihood calculating unit 220, an object color likelihood calculating unit 230, , An object region likelihood calculating unit 240, a background position likelihood calculating unit 250, a background color likelihood calculating unit 260, a background region likelihood calculating unit 270, and an object region extracting unit 280. That is, the object area extraction apparatus 500 according to the present embodiment is obtained by adding an object shape detection unit 410 to the object area extraction apparatus 300 described in the second embodiment.
- an object shape storage unit 22 is provided in the data storage unit 20. Since the other parts are the same as those in the second embodiment, a duplicate description is omitted.
- the object shape detection unit 410 detects a shape unique to the object from the input image by collating with the object shape stored in the object shape storage unit 22. For example, when a car is extracted as the object region, a tire can be used as a shape unique to the object. In this case, the object shape detection means 410 collates with the tire shape stored in the object shape storage unit 22, and detects an ellipse that is the tire shape from the input image. Then, the detected ellipse is processed using a preset threshold value for the tire. Then, a large object likelihood is set for the position of the ellipse after the threshold processing, and is integrated with the object position likelihood calculated by the object position likelihood calculating means 220.
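A sketch of the integration step follows, assuming the shape detector yields a per-pixel response map and that integration takes the elementwise maximum of the thresholded detections and the feature-point-based likelihood; the threshold and boost values are assumptions, and the patent does not fix the integration rule.

```python
import numpy as np

def integrate_shape_detection(position_likelihood, shape_response, threshold=0.5, boost=1.0):
    """Thresholds a shape detector's per-pixel response (e.g. an ellipse/tire match score)
    and assigns a large object position likelihood at the surviving positions by taking
    the elementwise maximum with the feature-point-based likelihood."""
    detected = shape_response >= threshold
    integrated = position_likelihood.copy()
    integrated[detected] = np.maximum(integrated[detected], boost)
    return integrated
```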
- FIG. 13 is a diagram illustrating a result of generating the object position likelihood from the detection result of the object-specific shape (tire).
- The diagram on the right side of FIG. 13 shows the result of integrating the object-specific shape (tire) obtained by the object shape detection unit 410 with the object position likelihood calculated by the object position likelihood calculation unit 220.
- the other portions are the same as those described in the second embodiment, and thus the description thereof is omitted.
- In this embodiment, the object-specific shape is detected using the object shape detection unit 410, and the object position likelihood is set to a large value at the position of the detected object-specific shape. As a result, even an object shape that is difficult to extract as a feature point can be detected as an object-specific shape, so the object position likelihood distribution can be set more finely than with the object region extraction device according to the second embodiment.
- the present invention can also realize arbitrary processing by causing a CPU (Central Processing Unit) to execute a computer program.
- the programs described above can be stored using various types of non-transitory computer readable media and supplied to a computer.
- Non-transitory computer readable media include various types of tangible storage media.
- Examples of non-transitory computer-readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, and hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, and RAM (Random Access Memory)).
- The program may also be supplied to the computer by various types of transitory computer-readable media. Examples of transitory computer-readable media include electrical signals, optical signals, and electromagnetic waves.
- A transitory computer-readable medium can supply the program to the computer via a wired communication path, such as an electric wire or an optical fiber, or via a wireless communication path.
- the present invention can be widely applied in the field of image processing for extracting a desired object from an input image.
Description
Hereinafter, Embodiment 1 of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing an object region extraction apparatus according to this embodiment. The object region extraction apparatus 100 according to this embodiment includes similar region calculation means 120 that calculates a region having a high similarity to a feature extracted from an image, feature region likelihood calculation means 130 that calculates the likelihood of a feature region from the position of the extracted feature and the similar region, and object region extraction means 140 that extracts an object region based on the likelihood of the feature region.
Next, Embodiment 2 of the present invention will be described. FIG. 4 is a block diagram showing an object region extraction apparatus according to this embodiment. As shown in FIG. 4, the object region extraction apparatus 300 according to this embodiment includes feature extraction means 210, object position likelihood calculation means 220, object color likelihood calculation means 230, object region likelihood calculation means 240, background position likelihood calculation means 250, background color likelihood calculation means 260, background region likelihood calculation means 270, and object region extraction means 280. In addition to calculating the likelihood of the object region, the object region extraction apparatus 300 according to this embodiment further includes means for calculating the likelihood of the background region, namely the background position likelihood calculation means 250, the background color likelihood calculation means 260, and the background region likelihood calculation means 270. The object region extraction apparatus 300 according to this embodiment includes the object position likelihood calculation means 220, the object color likelihood calculation means 230, the background position likelihood calculation means 250, and the background color likelihood calculation means 260 as the similar region calculation means 120 described in Embodiment 1, and includes the object region likelihood calculation means 240 and the background region likelihood calculation means 270 as the feature region likelihood calculation means 130 described in Embodiment 1.
Next, the object position likelihood calculation means 220 calculates the object position likelihood. At this time, since the area around a car feature point (the position of a car feature) determined by the feature extraction means 210 is also likely to belong to the car region, the object position likelihood calculation means 220 calculates the object position likelihood representing the position of the car region based on the Gaussian distribution defined by (Equation 1), taking the position of the car feature point as a reference. FIG. 6 is a diagram showing the object position likelihood calculated based on a Gaussian distribution centered on the position of a feature point of the object.
Here, Σ is the covariance representing the spread of the feature, μ is the position of the feature point, x is a vector representing a position around the feature point, and T denotes transposition. If there are multiple feature points, the object position likelihood is calculated from the mixed Gaussian distribution shown in (Equation 2). The variance is not limited to the size of the feature, and may be set to a constant value.
When calculating the object color likelihood, the input image may be used, or an image obtained by color clustering the input image may be used.
For example, if there is a background that closely resembles the object, the object color likelihood becomes large for the background as well, so the background may be extracted as the object region if only the object color likelihood is used. Adding a positional constraint using the object position likelihood therefore prevents the background region from being extracted as the object region.
First, the background position likelihood calculation means 250 calculates the background position likelihood in the same way as the position likelihood of the car region was calculated. That is, the background position likelihood calculation means 250 calculates the background position likelihood based on the Gaussian distribution defined by (Equation 5).
Here, a Gaussian distribution centered on the four sides of the input image may be set, using the prior knowledge that the background is likely to lie along the four sides of the input image. FIG. 8 is a diagram showing the background position likelihood calculated based on Gaussian distributions centered on background feature point positions located near the four sides of the image.
When calculating the background color likelihood, the input image may be used, or an image obtained by color clustering the input image may be used.
Next, Embodiment 3 of the present invention will be described. FIG. 10 is a block diagram showing an object region extraction apparatus according to this embodiment. As shown in FIG. 10, the object region extraction apparatus 400 according to this embodiment includes feature extraction means 210, object detection means 310, object position likelihood calculation means 220, object color likelihood calculation means 230, object region likelihood calculation means 240, background position likelihood calculation means 250, background color likelihood calculation means 260, background region likelihood calculation means 270, and object region extraction means 280. That is, in the object region extraction apparatus 400 according to this embodiment, object detection means 310 is added to the object region extraction apparatus 300 described in Embodiment 2. The other parts are the same as in Embodiment 2, so duplicate description is omitted.
Next, Embodiment 4 of the present invention will be described. FIG. 12 is a block diagram showing an object region extraction apparatus according to this embodiment. As shown in FIG. 12, the object region extraction apparatus 500 according to this embodiment includes feature extraction means 210, object shape detection means 410, object position likelihood calculation means 220, object color likelihood calculation means 230, object region likelihood calculation means 240, background position likelihood calculation means 250, background color likelihood calculation means 260, background region likelihood calculation means 270, and object region extraction means 280. That is, in the object region extraction apparatus 500 according to this embodiment, object shape detection means 410 is added to the object region extraction apparatus 300 described in Embodiment 2. In this embodiment, an object shape storage unit 22 is provided in the data storage unit 20. The other parts are the same as in Embodiment 2, so duplicate description is omitted.
110 Feature extraction means
120 Similar region calculation means
130 Feature region likelihood calculation means
140 Object region extraction means
200, 300, 400, 500 Object region extraction apparatus
210 Feature extraction means
220 Object position likelihood calculation means
230 Object color likelihood calculation means
240 Object region likelihood calculation means
250 Background position likelihood calculation means
260 Background color likelihood calculation means
270 Background region likelihood calculation means
280 Object region extraction means
310 Object detection means
410 Object shape detection means
Claims (19)
1. An object region extraction device comprising: similar region calculation means for calculating a region having a high similarity to a feature extracted from an image; feature region likelihood calculation means for calculating a likelihood of a feature region from the position of the feature and the similar region; and object region extraction means for extracting an object region based on the likelihood of the feature region.
2. The object region extraction device according to claim 1, further comprising feature extraction means for obtaining a feature from the image and extracting the position of the feature.
3. The object region extraction device according to claim 1 or 2, wherein the similar region calculation means calculates the similarity between the shape or color of the extracted feature and the shape or color of a peripheral region centered on the position of the feature.
4. The object region extraction device according to claim 3, wherein the range of the peripheral region is determined by generating a Gaussian distribution centered on the position of the feature and having a variance corresponding to the size of the feature.
5. The object region extraction device according to claim 4, wherein, when there are a plurality of the features, the range of the peripheral region is determined by expressing a plurality of Gaussian distributions as a mixed Gaussian distribution and using the mixed Gaussian distribution.
6. The object region extraction device according to any one of claims 1 to 5, wherein the feature region likelihood calculation means calculates the likelihood of the feature region from the product of the similarity and the distance between the position of the extracted feature and the region for which the similarity was calculated.
7. The object region extraction device according to any one of claims 2 to 6, wherein the feature extraction means extracts positions of features representing an object and a background, the similar region calculation means calculates a region having a high similarity to the extracted feature of the object and a region having a high similarity to the extracted feature of the background, the feature region likelihood calculation means calculates the likelihood of an object region from the position of the feature of the object and the similar region and calculates the likelihood of a background region from the position of the feature of the background and the similar region, and the object region extraction means extracts the object region based on the likelihood of the object region and the likelihood of the background region.
8. The object region extraction device according to claim 1 or 2, wherein the similar region calculation means comprises object position likelihood calculation means for calculating, from features of the object, the likelihood of a position at which the object exists within a region in which the object exists, and object color likelihood calculation means for calculating the likelihood of the object color based on the object position likelihood calculated by the object position likelihood calculation means, and the feature region likelihood calculation means comprises object region likelihood calculation means for calculating an object region likelihood based on the object position likelihood and the object color likelihood.
9. The object region extraction device according to claim 8, wherein the similar region calculation means further comprises background position likelihood calculation means for calculating, from features of the background, the likelihood of a position at which the background exists within a region in which the background exists, and background color likelihood calculation means for calculating the likelihood of the background color based on the background position likelihood calculated by the background position likelihood calculation means, and the feature region likelihood calculation means further comprises background region likelihood calculation means for calculating a background region likelihood based on the background position likelihood and the background color likelihood.
10. The object region extraction device according to claim 9, wherein the object position likelihood calculation means calculates the object position likelihood by generating a Gaussian distribution centered on the position of the feature and having a variance corresponding to the size of the feature, and the background position likelihood calculation means calculates the background position likelihood by generating a Gaussian distribution centered on the position of the feature and having a variance corresponding to the size of the feature.
11. The object region extraction device according to claim 9 or 10, wherein the object color likelihood calculation means treats the object position likelihood at a certain pixel generated by the object position likelihood calculation means as an object color likelihood candidate and takes, as the object color likelihood, the candidate that has the maximum object color likelihood among candidates of the same pixel color, and the background color likelihood calculation means treats the background position likelihood at a certain pixel generated by the background position likelihood calculation means as a background color likelihood candidate and takes, as the background color likelihood, the candidate that has the maximum background color likelihood among candidates of the same pixel color.
12. The object region extraction device according to any one of claims 8 to 11, wherein the object position likelihood calculation means performs object matching using a group of features existing within a certain region and calculates the object position likelihood from the matching result.
13. The object region extraction device according to any one of claims 8 to 11, wherein the object position likelihood calculation means performs object matching using a group of features existing within a previously segmented region and calculates the object position likelihood from the matching result.
14. The object region extraction device according to any one of claims 8 to 11, wherein the object region likelihood calculation means calculates the object region likelihood based on the product of the calculated object position likelihood and the similarity of a peripheral region centered on the feature position.
15. The object region extraction device according to any one of claims 8 to 14, wherein the object region extraction means separates all pixels into object and background regions and extracts the object region so as to minimize a function that calculates the posterior probability of object and background at each pixel from the object region likelihood and the background region likelihood, together with a function whose value becomes higher as the luminance of adjacent pixels becomes more similar.
16. The object region extraction device according to any one of claims 8 to 15, further comprising object detection means for voting a value based on object-likeness to pixels of a region, wherein the object position likelihood calculation means uses, as the object position likelihood, a result of normalizing the vote values of the object detection means.
17. The object region extraction device according to any one of claims 8 to 15, further comprising object shape detection means for detecting a shape unique to the object from an input image by matching against previously set information on the object shape, wherein the object position likelihood calculation means integrates the calculated object position likelihood and the information on the object-specific shape detected by the object shape detection means.
18. An object region extraction method comprising: obtaining a feature from an image and extracting the position of the feature; calculating a region having a high similarity to the extracted feature; calculating the likelihood of a feature region from the similar region and the position of the feature; and extracting an object region based on the likelihood of the feature region.
19. A non-transitory computer-readable medium storing a program for causing a computer to execute operations of: obtaining a feature from an image and extracting the position of the feature; calculating a region having a high similarity to the extracted feature; calculating the likelihood of a feature region from the similar region and the position of the feature; and extracting an object region based on the likelihood of the feature region.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/510,507 US20120230583A1 (en) | 2009-11-20 | 2010-11-10 | Object region extraction device, object region extraction method, and computer-readable medium |
JP2011541801A JPWO2011061905A1 (ja) | 2009-11-20 | 2010-11-10 | Object region extraction device, object region extraction method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-265545 | 2009-11-20 | ||
JP2009265545 | 2009-11-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011061905A1 true WO2011061905A1 (ja) | 2011-05-26 |
Family
ID=44059392
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/006612 WO2011061905A1 (ja) | 2009-11-20 | 2010-11-10 | 物体領域抽出装置、物体領域抽出方法、及びコンピュータ可読媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20120230583A1 (ja) |
JP (1) | JPWO2011061905A1 (ja) |
WO (1) | WO2011061905A1 (ja) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102011005715A1 (de) * | 2011-03-17 | 2012-09-20 | Siemens Aktiengesellschaft | Verfahren zum Gewinnen eines von Spuren eines Metallobjektes befreiten 3D-Bilddatensatzes |
US10026002B2 (en) * | 2013-10-01 | 2018-07-17 | Nec Corporation | Object detection apparatus, method for detecting object, and learning apparatus |
JP6148426B1 (ja) * | 2016-05-27 | 2017-06-14 | 楽天株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
WO2020012530A1 (ja) | 2018-07-09 | 2020-01-16 | 日本電気株式会社 | 施術支援装置、施術支援方法、及びコンピュータ読み取り可能な記録媒体 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5579360A (en) * | 1994-12-30 | 1996-11-26 | Philips Electronics North America Corporation | Mass detection by computer using digital mammograms of the same breast taken from different viewing directions |
JPH09163161A (ja) * | 1995-12-01 | 1997-06-20 | Brother Ind Ltd | 画像処理装置 |
CN1313979C (zh) * | 2002-05-03 | 2007-05-02 | 三星电子株式会社 | 产生三维漫画的装置和方法 |
US20060083428A1 (en) * | 2004-01-22 | 2006-04-20 | Jayati Ghosh | Classification of pixels in a microarray image based on pixel intensities and a preview mode facilitated by pixel-intensity-based pixel classification |
KR20050085638A (ko) * | 2002-12-13 | 2005-08-29 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | 텔레비전 이미지들의 적응형 분할 |
JP2004350130A (ja) * | 2003-05-23 | 2004-12-09 | Fuji Photo Film Co Ltd | デジタルカメラ |
JP2005293367A (ja) * | 2004-04-01 | 2005-10-20 | Seiko Epson Corp | 画像処理方法及び装置 |
KR100647322B1 (ko) * | 2005-03-02 | 2006-11-23 | 삼성전자주식회사 | 객체의 모양모델 생성장치 및 방법과 이를 이용한 객체의특징점 자동탐색장치 및 방법 |
EP1897033A4 (en) * | 2005-06-16 | 2015-06-24 | Strider Labs Inc | SYSTEM AND METHOD FOR RECOGNIZING 2D IMAGES USING 3D CLASS MODELS |
US8102465B2 (en) * | 2006-11-07 | 2012-01-24 | Fujifilm Corporation | Photographing apparatus and photographing method for photographing an image by controlling light irradiation on a subject |
JP2008152555A (ja) * | 2006-12-18 | 2008-07-03 | Olympus Corp | 画像認識方法及び画像認識装置 |
JP4493679B2 (ja) * | 2007-03-29 | 2010-06-30 | 富士フイルム株式会社 | 対象領域抽出方法および装置ならびにプログラム |
EP2264679A4 (en) * | 2008-03-11 | 2013-08-21 | Panasonic Corp | TAG SENSOR SYSTEM AND SENSOR DEVICE, AND OBJECT POSITION ESTIMATING DEVICE, AND OBJECT POSITION ESTIMATING METHOD |
JP5235770B2 (ja) * | 2009-04-27 | 2013-07-10 | 日本電信電話株式会社 | 顕著領域映像生成方法、顕著領域映像生成装置、プログラムおよび記録媒体 |
US20120002855A1 (en) * | 2010-06-30 | 2012-01-05 | Fujifilm Corporation | Stent localization in 3d cardiac images |
- 2010
- 2010-11-10 WO PCT/JP2010/006612 patent/WO2011061905A1/ja active Application Filing
- 2010-11-10 JP JP2011541801A patent/JPWO2011061905A1/ja active Pending
- 2010-11-10 US US13/510,507 patent/US20120230583A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006053919A (ja) * | 2004-08-06 | 2006-02-23 | Microsoft Corp | Image data separation system and method |
JP2007316950A (ja) * | 2006-05-25 | 2007-12-06 | Nippon Telegr & Teleph Corp <Ntt> | Image processing method, apparatus, and program |
JP2008015641A (ja) * | 2006-07-04 | 2008-01-24 | Fujifilm Corp | Human body region extraction method, apparatus, and program |
JP2009169518A (ja) * | 2008-01-11 | 2009-07-30 | Kddi Corp | Region identification device and content identification device |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013097369A (ja) * | 2011-11-03 | 2013-05-20 | Kotatsu Kokusai Denshi Kofun Yugenkoshi | 背景壁紙及び一つ又は複数のユーザーインターフェイスエレメントを同時に電子装置の表示ユニットに表示するための方法、この方法を実行するためのコンピュータプログラム製品、並びにこの方法を実施する電子装置 |
US8943426B2 (en) | 2011-11-03 | 2015-01-27 | Htc Corporation | Method for displaying background wallpaper and one or more user interface elements on display unit of electrical apparatus at the same time, computer program product for the method and electrical apparatus implementing the method |
KR101747216B1 (ko) * | 2012-05-30 | 2017-06-15 | 한화테크윈 주식회사 | 표적 추출 장치와 그 방법 및 상기 방법을 구현하는 프로그램이 기록된 기록 매체 |
WO2014050129A1 (ja) * | 2012-09-28 | 2014-04-03 | 富士フイルム株式会社 | 画像処理装置および方法並びにプログラム |
JP2014068861A (ja) * | 2012-09-28 | 2014-04-21 | Fujifilm Corp | 画像処理装置および方法並びにプログラム |
US9436889B2 (en) | 2012-09-28 | 2016-09-06 | Fujifilm Corporation | Image processing device, method, and program |
JP2017157091A (ja) * | 2016-03-03 | 2017-09-07 | 日本電信電話株式会社 | 物体領域特定方法、装置、及びプログラム |
CN112288003A (zh) * | 2020-10-28 | 2021-01-29 | 北京奇艺世纪科技有限公司 | 一种神经网络训练、及目标检测方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
US20120230583A1 (en) | 2012-09-13 |
JPWO2011061905A1 (ja) | 2013-04-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10831304; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2011541801; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 13510507; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 10831304; Country of ref document: EP; Kind code of ref document: A1 |