
WO2012046426A1 - Object detection device, object detection method, and object detection program - Google Patents

Object detection device, object detection method, and object detection program

Info

Publication number
WO2012046426A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
input image
image
detection
occurrence probability
Prior art date
Application number
PCT/JP2011/005542
Other languages
English (en)
Japanese (ja)
Inventor
哲夫 井下
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 filed Critical 日本電気株式会社
Priority to JP2012537577A priority Critical patent/JPWO2012046426A1/ja
Publication of WO2012046426A1 publication Critical patent/WO2012046426A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Definitions

  • the present invention relates to an object detection device, an object detection method, and an object detection program for detecting an object in an image taken by a user without restriction.
  • a function to detect an object from an image taken with a photographing system such as a still camera or a video camera makes it possible to adjust the shutter speed and focus to suit the object and to apply suitable image processing, so that a beautiful image can be taken easily.
  • since information related to an object can be superimposed and displayed at the object's position, applications in the camera industry and the information display field are expected.
  • as a method for detecting an object in an image, a method has been proposed in which a rectangular area for object detection is set, and the entire image is scanned to determine whether a desired object is present in the rectangular area.
  • Non-Patent Document 1 describes an object detection method that assumes “people” as detection targets.
  • in the object detection method described in Non-Patent Document 1, a rectangular area size for detecting “people” in an image is set from three aspect ratios, and a “person” is detected by scanning the rectangular area over the entire image.
  • Non-Patent Document 2 describes an object detection method that assumes a “cat face” as a detection target.
  • in the object detection method described in Non-Patent Document 2, a rectangular area size for detecting a “cat face” is set for each type of feature, the rectangular area is scanned over the entire image, and the “cat face” is detected by integrating the results based on the two types of features.
  • Patent Document 1 describes an object detection method for detecting objects (white lines, preceding vehicles, road obstacles) on the road ahead of a vehicle.
  • object detection is performed after limiting a region for detecting a feature point of an object from an image.
  • Patent Document 2 describes an object detection method that assumes that a model registered in a database is located in an image. In the object detection method described in Patent Document 2, a range in which the probability that an object exists is estimated to be high is obtained, and object detection is performed on that region.
  • in the techniques of Non-Patent Document 1 and Non-Patent Document 2, the rectangular area and the detection area for detecting an object are determined in a fixed manner. For this reason, the rectangular area and the detection area are not necessarily suitable for the photographed scene. For example, a “person” photographed in a city is usually standing, but a “person” photographed in a park or on a sandy beach may be lying down. That is, the suitable rectangular area may differ depending on the scene even for the same object. In addition, an area of the image where no object can exist may be set as a detection area. In these cases, the object detection rate may decrease and the false detection rate may increase.
  • an object of the present invention is to provide an object detection device, an object detection method, and an object detection program that can accurately detect an object from an image captured by a user without any particular restriction.
  • the object detection device according to the present invention includes: scene membership degree calculating means for calculating a scene membership degree, which is information indicating to which scene an input image belongs, based on information indicating features of images captured in each scene, associated with that scene, and features extracted from the input image; object occurrence probability calculating means for calculating the occurrence probability of an object with respect to the input image, based on object occurrence information indicating the occurrence probability of the object for each scene and the scene membership degree of the input image calculated by the scene membership degree calculating means; and object detecting means for detecting an object from the input image using the occurrence probability of the object with respect to the input image calculated by the object occurrence probability calculating means.
  • the object detection method according to the present invention calculates a scene membership degree, which is information indicating to which scene an input image belongs, based on information indicating features of images captured in each scene, associated with that scene, and features extracted from the input image; calculates the occurrence probability of an object with respect to the input image based on object occurrence information indicating the occurrence probability of the object for each scene and the calculated scene membership degree of the input image; and detects an object from the input image using the calculated occurrence probability of the object with respect to the input image.
  • the object detection program according to the present invention causes a computer to execute: a process of calculating a scene membership degree, which is information indicating to which scene an input image belongs, based on information indicating features of images captured in each scene, associated with that scene, and features extracted from the input image; a process of calculating the occurrence probability of an object with respect to the input image based on object occurrence information indicating the occurrence probability of the object for each scene and the calculated scene membership degree of the input image; and a process of detecting an object from the input image using the calculated occurrence probability of the object with respect to the input image.
  • FIG. 1 is a block diagram illustrating a configuration example of an object detection device according to a first embodiment of the present invention.
  • the object detection device 100 shown in FIG. 1 includes an image input device 110, a data processing unit 120, a data storage unit 130, and an object detection result output device 140.
  • the image input device 110 inputs an image captured by an imaging system such as a still camera or a video camera to the scene attribution degree calculation unit 121.
  • the data processing unit 120 includes a scene attribution degree calculating unit 121, an object occurrence probability calculating unit 122, and an object detecting unit 123.
  • the data processing unit 120 is realized by a CPU that operates according to a program, for example.
  • FIG. 1 shows an example in which the scene membership degree calculating unit 121, the object occurrence probability calculating unit 122, and the object detecting unit 123 are realized by one data processing unit 120, but each unit can also be realized separately.
  • the data storage unit 130 includes a scene feature storage unit 131, an object occurrence information storage unit 132, and an object photographing information storage unit 133.
  • the data storage unit 130 is realized by a storage device such as a memory. FIG. 1 shows an example in which the scene feature storage unit 131, the object occurrence information storage unit 132, and the object shooting information storage unit 133 are realized by one data storage unit 130, but each storage unit can also be realized separately.
  • Scene attribution calculating means 121 extracts features from the input image.
  • the scene attribution level calculation unit 121 compares the extracted features with the features of each scene stored in the scene feature storage unit 131 to determine what kind of scene (setting, stage, etc.) the image captures. That is, the scene attribution level calculation unit 121 calculates an attribution level that indicates to which scene the image belongs.
  • the scene feature storage unit 131 stores a feature vector group describing a scene as information indicating the feature of each scene. These feature vectors are associated with scenes in advance.
  • the scene attribution level calculation unit 121 calculates the scene attribution level by comparing the feature vector extracted from a certain image with the feature vector associated with the scene.
  • the object occurrence probability calculating unit 122 calculates the occurrence probability of an object with respect to the input image based on the scene belonging degree calculated by the scene belonging degree calculating unit 121 and the per-scene object occurrence information stored in the object occurrence information storage unit 132.
  • the object occurrence information storage unit 132 stores information on an object that occurs for each scene, that is, object occurrence information for each scene.
  • the object shooting information storage unit 133 stores, for each scene, object shooting information indicating at what position and size an object tends to be captured at the time of shooting.
  • the object photographing information is information indicating a region where an object is likely to appear in the photographed image, such as the position and size of the object that is likely to appear in the photographed image for each scene.
  • the object photographing information is preferably statistical information. Note that the object photographing information is not limited to information that directly indicates the range of an area where an object is likely to appear, such as the position and size of the appearing object.
  • the object photographing information may be, for example, color information that the object is likely to have. In such a case, the detection area may be determined based on the colors the object is likely to contain.
  • the object detection unit 123 sets a detection area to be applied to the input image based on the object shooting information stored in the object shooting information storage unit 133.
  • the object detection means 123 scans the detection area of the input image using an object detector and calculates the detection result as reliability.
  • the reliability indicates, for each unit area within the detection area of the input image, how likely it is that what appears in that area is the object to be detected. Further, the object detection unit 123 obtains the object position likelihood in the input image based on the occurrence probability of the object with respect to the input image calculated by the object occurrence probability calculation unit 122 and the calculated reliability.
  • the object detection result output device 140 determines the areas for which the object position likelihood obtained by the object detection unit 123 is at or above a certain value as the object detection result. Then, the object detection result output device 140 outputs the object detection result to a display device such as a display.
  • FIG. 2 is a flowchart showing an example of the operation of the present embodiment.
  • the image input device 110 acquires an image captured by a still camera or a video camera, or an image posted on the Web, and inputs the image to the scene attribution degree calculation means 121 (step S1).
  • Scene attribution degree calculation means 121 extracts features from the input image and generates a feature vector for identifying the scene (step S2). Then, the scene attribution level calculation unit 121 compares the generated feature vector with the feature vector of each scene stored in the scene feature storage unit 131, and calculates the attribution degree expressing to which scene the input image belongs (step S3).
  • for example, suppose the distance between the feature vector generated from the input image and the feature vector of scene A is LA, and the distance to the feature vector of scene B is LB. The scene belonging degree calculating unit 121 then calculates the belonging degree of the input image to scene A as LA / (LA + LB) and the belonging degree to scene B as LB / (LA + LB).
  • a feature such as SIFT (Scale-Invariant Feature Transform) or HOG (Histograms of Oriented Gradients) may be used. For example, several hundred representative features may be calculated using a clustering method, and a histogram with the representative features as bins (classes) may be used as the feature vector. Since the feature vector associated with each scene is stored in the scene feature storage unit 131, the degree of belonging Pr(S_j | I) of the input image I to each scene S_j can be calculated by matching against it.
  • the matching method may be histogram matching between feature vectors, or matching by learning using a classifier such as SVM (Support Vector Machine); a small sketch of the membership computation follows.
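  • The following is a minimal sketch of this membership computation, assuming one bag-of-features histogram per scene. The function name, the L2 histogram distance, and the inverse-distance normalization are illustrative assumptions (the patent's own LA / (LA + LB) convention is quoted above); this is not the patent's prescribed implementation.

```python
import numpy as np

def scene_membership(input_hist, scene_hists):
    """Toy sketch of steps S2-S3: turn histogram distances into
    membership degrees Pr(S_j | I) that sum to 1 over the scenes.
    input_hist: bag-of-features histogram of the input image.
    scene_hists: dict mapping a scene name to its dictionary histogram."""
    # L2 distance between the input histogram and each scene histogram.
    dists = {s: np.linalg.norm(input_hist - h) for s, h in scene_hists.items()}
    # Inverse-distance weighting: a smaller distance yields a larger degree.
    sims = {s: 1.0 / (d + 1e-9) for s, d in dists.items()}
    total = sum(sims.values())
    return {s: v / total for s, v in sims.items()}

memberships = scene_membership(
    np.array([0.20, 0.50, 0.30]),
    {"town": np.array([0.25, 0.45, 0.30]), "park": np.array([0.60, 0.10, 0.30])},
)
print(memberships)  # e.g. {'town': ~0.89, 'park': ~0.11}
```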
  • the object occurrence probability calculating unit 122 calculates the occurrence probability Pr(O_i | I) of each object O_i with respect to the input image I based on the following equation (1) (step S4), using the scene belonging degree Pr(S_j | I) calculated by the scene attribution degree calculation means 121 and the occurrence probability Pr(O_i | S_j) of each object in each scene, which is stored as the occurrence information:
  • Pr(O_i | I) = Σ_j Pr(O_i | S_j) · Pr(S_j | I) / Σ_i Σ_j Pr(O_i | S_j) · Pr(S_j | I) … (1)
  • here, the subscript i indexes the object types and the subscript j indexes the scene types; a sketch of this computation follows.
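  • Below is a minimal sketch of equation (1) as reconstructed above; the scene and object names and all probability values are made up for illustration.

```python
import numpy as np

def object_occurrence(membership, occurrence):
    """Sketch of equation (1): Pr(O_i | I) is proportional to
    sum_j Pr(O_i | S_j) * Pr(S_j | I), normalized over the objects i.
    membership: shape (num_scenes,), the degrees Pr(S_j | I).
    occurrence: shape (num_objects, num_scenes), the values Pr(O_i | S_j)."""
    unnormalized = occurrence @ membership     # sum over the scenes j
    return unnormalized / unnormalized.sum()   # normalize over the objects i

# Hypothetical example: two scenes (town, park) and three objects.
pr_scene_given_image = np.array([0.8, 0.2])
pr_object_given_scene = np.array([
    [0.9, 0.1],   # car
    [0.5, 0.4],   # bike
    [0.1, 0.0],   # desk
])
print(object_occurrence(pr_scene_given_image, pr_object_given_scene))
```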
  • next, the object detection unit 123 refers to the object shooting information storage unit 133, which stores object shooting information such as the position and size at which objects appear for each scene, and calculates the existence position probability Pr_area(PosO_i | I), representing how easily the object appears in each region of the image, based on the following equation (2) (step S5):
  • Pr_area(PosO_i | I) = Σ_j Pr(PosO_i | S_j) · Pr(S_j | I) / Σ_i Σ_j Pr(PosO_i | S_j) · Pr(S_j | I) … (2)
  • the detection area of the object detector may be determined based on the calculated existence position probability Pr_area(PosO_i | I).
  • the object detection unit 123 scans the image using the object detector and calculates, for each position in the image, the reliability Pr_detector(PosO_i | I). The object detection means 123 then obtains the object position likelihood Pr(PosO_i | I) based on the following equation (3); a sketch of equations (2) and (3) follows:
  • Pr(PosO_i | I) = Pr_detector(PosO_i | I) · Pr(O_i | I) · Pr_area(PosO_i | I) … (3)
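  • The following sketch shows equations (2) and (3) on a toy 2 × 2 “image plane”; the maps, the membership degrees, and the threshold are all illustrative assumptions.

```python
import numpy as np

def position_probability(membership, position_maps):
    """Sketch of equation (2): mix the per-scene appearance-position maps
    Pr(PosO_i | S_j) with the scene membership degrees Pr(S_j | I).
    position_maps: shape (num_scenes, H, W)."""
    mixed = np.tensordot(membership, position_maps, axes=1)  # weighted sum over scenes
    return mixed / mixed.sum()                               # normalize over the plane

def position_likelihood(detector_map, occurrence_prob, area_map):
    """Sketch of equation (3): Pr(PosO_i | I) =
    Pr_detector(PosO_i | I) * Pr(O_i | I) * Pr_area(PosO_i | I)."""
    return detector_map * occurrence_prob * area_map

# Hypothetical per-scene position maps for the object "car".
maps = np.array([
    [[0.10, 0.40], [0.10, 0.40]],   # scene "town": cars toward the right
    [[0.25, 0.25], [0.25, 0.25]],   # scene "park": no strong tendency
])
area = position_probability(np.array([0.8, 0.2]), maps)
likelihood = position_likelihood(np.array([[0.2, 0.9], [0.1, 0.7]]), 0.6, area)
detections = likelihood >= 0.05   # thresholding as described for FIG. 12
```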
  • the object detection result output device 140 determines the object position by setting a threshold for the calculated object position likelihood. Then, the object detection result output device 140 outputs the determined object position to a display device such as a display.
  • the scene feature storage unit 131 stores a dictionary for identifying scenes, generated in advance from features extracted from images that contain the scenes to be identified.
  • SIFT features are extracted from image groups classified into scenes of “town”, “autumn leaves”, “office”, “park”, “indoor”, “mountain”, and “beach”.
  • representative feature vectors, each taken as a cluster center, are calculated from the features extracted from all images using a clustering technique such as K-means. Then, a histogram with the representative feature vectors as bins is generated for each image. The number of bins in the histogram may be determined experimentally so that the recognition rate becomes high.
  • from these histograms, a dictionary for identifying the scene is generated: the histograms are learned using SVM, and the support vectors of the learning result are stored in the scene feature storage unit 131 as the dictionary.
  • although SVM is used as the classifier here, the classifier is not limited to SVM.
  • a scene may be identified by the distance between histograms. In that case, the histogram is stored in the scene feature storage unit 131 as a dictionary.
  • the object occurrence information storage unit 132 stores occurrence information of objects existing in the scene in advance for each scene. For example, the occurrence probability may be calculated as the occurrence information, and the result may be stored in the object occurrence information storage unit 132.
  • the occurrence probability is expressed by (number of images of the scene that include the object) / (total number of images of the scene).
  • FIG. 3A is an explanatory diagram illustrating an example of objects for which occurrence information is obtained, FIG. 3B is an explanatory diagram illustrating an example of scenes for which occurrence information is obtained, and FIG. 3C is an explanatory diagram illustrating an example of the occurrence probability of each object in the scenes S 1 and S 2 .
  • the object occurrence information storage unit 132 may store an object list including object IDs and object names, a scene list including scene IDs and scene classification names, and the occurrence probability of each object for each scene in the lists.
  • alternatively, the object occurrence information storage unit 132 may store, as the occurrence information, the number of images that include the object. In this way, when images are added, the object occurrence information storage unit 132 only needs to increment the count of images containing the object, which suppresses probability recalculation time when additions occur frequently. In this case, the occurrence probability may be computed by the object occurrence probability calculation means 122 only once at execution time.
  • the occurrence information may also include information that weights the number of object instances, in addition to whether the object is included in the scene. For example, suppose that among 100 images classified in advance as the scene “town”, one image includes “car”, and that this one image contains nine “cars”. On the other hand, suppose that among 100 images classified as “park”, one image includes “car”, and that this image contains a single “car”. Since the occurrence probability of an object is calculated by (number of images including the object) / (total number of images of the scene), the scene “town” and the scene “park” have the same occurrence probability (0.01). Taking the number of cars into account, the occurrence probability for the scene “town” may instead be set to nine times that of the scene “park”; a sketch of both counting variants follows.
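  • A small sketch of both counting variants, reusing the “town”/“park” numbers from the example above; the function and the data layout are assumptions.

```python
from collections import Counter

def occurrence_probability(scene_images, target, weight_by_count=False):
    """Occurrence information as described above:
    (number of scene images containing the object) / (total scene images).
    With weight_by_count, every instance in an image is counted, as in the
    nine-cars-in-one-town-image example.
    scene_images: list of per-image object Counters for one scene."""
    total = len(scene_images)
    if weight_by_count:
        hits = sum(img[target] for img in scene_images)
    else:
        hits = sum(1 for img in scene_images if img[target] > 0)
    return hits / total

town = [Counter({"car": 9})] + [Counter() for _ in range(99)]
park = [Counter({"car": 1})] + [Counter() for _ in range(99)]
print(occurrence_probability(town, "car"))                        # 0.01
print(occurrence_probability(park, "car"))                        # 0.01
print(occurrence_probability(town, "car", weight_by_count=True))  # 0.09, nine times "park"
```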
  • the object shooting information storage unit 133 stores object shooting information such as the position and size at which an object is captured in the image. For example, in the scene “town”, “car” tends to be located around the center of the image, so the object photographing information Pr(PosO 1 | S 1) takes high values around the center, as shown in FIG. 4.
  • FIG. 4 shows the position of the “car” on the image, and shows an area where the “car” tends to exist as the color becomes darker.
  • the probability that “car” exists in the scene “town” is represented by a pattern on the image plane.
  • a probability on the image plane is converted into array data and stored as object photographing information.
  • as shown in FIG. 5, for example, a 100 × 100 array is prepared, and the probability that a “car” exists at each position on the image plane is associated with a component of the array. This array may then be used as the data indicating the probability Pr(PosO 1 | S 1): each component holds the probability that the object exists at the position corresponding to that component.
  • FIG. 6 is an explanatory diagram showing the correspondence between the existence probability on the image plane and the array data. As shown in FIG. 6, since a probability of 0.9 is set near the center of the array data, in the case of the scene “town” a “car” exists near the center of the image with a probability of 0.9 (90%).
  • the object photographing information may include, in addition to such position information, information indicating the size of the rectangular area for detecting the object; a sketch of the array representation follows.
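  • A sketch of the array representation of FIG. 5 and FIG. 6, assuming the 100 × 100 grid and the 0.9 probability near the center quoted above; the coordinate-mapping helper is an illustrative assumption.

```python
import numpy as np

H, W = 100, 100
# Object photographing information for ("town", "car"): each component of the
# 100 x 100 array holds the probability that a car exists at that position.
car_in_town = np.zeros((H, W))
car_in_town[40:60, 40:60] = 0.9   # high probability near the image center

def lookup(prob_array, x, y, img_w, img_h):
    """Map a pixel (x, y) of an img_w x img_h image onto the array grid and
    return the stored existence probability for that position."""
    col = min(int(x / img_w * W), W - 1)
    row = min(int(y / img_h * H), H - 1)
    return prob_array[row, col]

print(lookup(car_in_town, 960, 540, 1920, 1080))  # center of a 1080p frame -> 0.9
```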
  • the scene attribution level calculation unit 121 performs the same process on the input image as when the feature vector stored in the scene feature storage unit 131 is generated. That is, the scene attribution level calculation unit 121 extracts the above-described SIFT feature from the input image, and generates a histogram using the representative feature vector in the bin as a feature vector describing the scene. By inputting the feature vector to the classifier, the degree of belonging to each scene is calculated.
  • FIG. 8 is an explanatory diagram showing an example of the calculation result of the scene attribution degree for the input image in this example. As shown in FIG. 8, an attribution degree is obtained for each scene; in this example, the attribution degree for the scene “town” is 0.8.
  • the object occurrence probability calculation means 122 calculates the occurrence probability of each object included in the image based on the above-described equation (1), from the per-scene object occurrence probabilities weighted by the calculated degree of belonging to each scene. For example, assume that the object occurrence probability for each scene is given as shown in FIG. 3C. In this case, the degree of membership Pr(S 1 | I) for the scene “town” is multiplied by the occurrence probability Pr(O i | S 1) of each object in that scene, and likewise for the other scenes.
  • the object occurrence probability calculation unit 122 calculates the occurrence probability of each object for all scenes, and then performs normalization based on Expression (1) to calculate the occurrence probability of each object with respect to the input image.
  • FIG. 9 is an explanatory diagram illustrating an example of the calculation result of the occurrence probability of each object with respect to the input image; for example, 0.1 is obtained as Pr(O 4 | I).
  • the object detection unit 123 refers to the object shooting information stored in the object shooting information storage unit 133, and determines a detection region to be scanned by the object detector based on the above equation (2). When there is object size information, the object detection unit 123 may also determine the size to which the object detector is applied.
  • FIG. 10 is an explanatory diagram illustrating a calculation example of the existence position probability of “car” with respect to the input image “town”.
  • the object detection unit 123 calculates the existence position probability, that is, the probability representing where a “car” is likely to exist when the input image is the scene “town”.
  • the existence position probability is obtained by multiplying the object photographing information Pr(PosO 1 | S 1) by the degree of membership Pr(S 1 | I) = 0.8.
  • in the same way, the object detection means 123 calculates, for each object, the positions where it is likely to exist for every scene: the probability representing the position of “car” for the scene “autumn leaves”, the probability representing the position of “car” for the scene “office”, and so on, through the probability representing the position of “bike” and the probability representing the position of “desk” for the scene “beach”. After that, the object detection unit 123 performs normalization based on the denominator of equation (2) to obtain the existence position probability Pr area(PosO i | I).
  • FIG. 11 is an explanatory diagram illustrating an example of an object detection method and an example of a detection result by an object detector.
  • a reliability value representing the “car” likelihood is obtained for each scanned rectangular area.
  • depending on the accuracy of the object detector, erroneous detections may occur.
  • therefore, the object detection unit 123 uses equation (3): the object occurrence probability Pr(O i | I) and the existence position probability Pr area(PosO i | I), which represents how easily the object appears in each region, are applied to the detector reliability, and the object position likelihood in the image is calculated for each object. That is, the probability that the target object exists in each region of the image is calculated.
  • FIG. 12 is an explanatory diagram illustrating an example of calculating the object position likelihood for “car” with respect to the input image. FIG. 12 does not show the full calculation over the array data, but the final result is as shown in the upper part of the figure. In this example, a threshold is set for the calculated object position likelihood, and areas with a likelihood at or above the threshold are taken as the object detection result; a small numeric sketch follows.
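  • A numeric sketch of this final step with made-up values, showing how a detector response in a region where the object is unlikely to appear is suppressed by equation (3).

```python
import numpy as np

def detect_regions(likelihood, threshold):
    """Keep the regions whose object position likelihood is at or above
    the threshold (cf. FIG. 12)."""
    return np.argwhere(likelihood >= threshold)

# The detector fires on a "car" in the sky (top-left) and on a real car
# (bottom-right); all numbers are hypothetical.
detector = np.array([[0.8, 0.1],
                     [0.1, 0.9]])
area = np.array([[0.05, 0.15],    # cars rarely appear at the top of the frame
                 [0.30, 0.50]])
occurrence = 0.6                  # Pr(car | I) from equation (1)
likelihood = detector * occurrence * area
print(detect_regions(likelihood, threshold=0.1))  # only the bottom-right cell survives
```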
  • FIG. 13 is an explanatory diagram comparing, for the objects “car”, “person”, and “desk”, the detection results of the object detector alone with the object detection results based on the object position likelihood obtained by the present embodiment.
  • the detection results of each object by the object detector alone are shown on the left side, and the detection results based on the object position likelihood obtained in this embodiment are shown on the right side. For each object, the regions with a likelihood at or above the threshold are illustrated as the regions where the object is detected.
  • according to the present embodiment, the false detections that an object detector alone produces in places where the object is unlikely to appear can be reduced.
  • in addition, since the objects to be detected are not limited in advance, objects can be accurately detected from images captured by a user without any particular restriction.
  • that is, in the present embodiment, the scene attribution degree is calculated from the captured image, and the occurrence information of the objects contained in the scene and statistical information (object shooting information) such as the position and size at which objects appear for each scene are used to set or weight the detection target objects and detection target regions.
  • as a result, object detection can be performed as if a rectangular area and detection area suited to the scene had been set, so object detection accuracy can be improved and false detections reduced.
  • moreover, because the method is based on statistical information, the rectangular area size and the detection area can be calculated automatically for general still images taken by users.
  • FIG. 14 is an explanatory diagram showing another display example of the object detection result.
  • the detection result areas of the respective objects may be integrated and displayed as the detection result of the object in the entire image.
  • in FIG. 14, the object detection results are integrated from the per-object detection results shown in FIG. 13, assuming that only “car” and “person” are detected as the object detection result in the entire image.
  • an embodiment is also possible in which only the object occurrence probability is applied to obtain the object position likelihood.
  • even when only the object occurrence probability with respect to the input image calculated by the object occurrence probability calculating unit 122 is applied, an object erroneously detected in an image of a scene in which that object cannot exist can, for example, be excluded from the detection result.
  • in this embodiment, the object photographing information storage unit 133 may be omitted.
  • FIG. 15 is a block diagram illustrating a configuration example of the object detection device according to the second exemplary embodiment of the present invention.
  • the object detection apparatus of this embodiment is different from the first embodiment shown in FIG. 1 in that the data processing unit 120 further includes a detection priority calculation unit 124.
  • the detection priority calculation means 124 calculates priorities for the objects or areas to be detected by the object detector. In an environment where processing time is limited, objects must be detected from the image efficiently. Therefore, the detection priority calculation means 124 calculates priorities for the detection target objects or detection areas so as to meet the given conditions, and limits the objects or detection areas to be detected as necessary.
  • in general, the detection processing time is proportional to the number of objects to detect and the size of the detection target area. For this reason, when the detection processing time is fixed, the detection priority calculation unit 124 first sets the ratio of detection target objects and then derives the detection target area from the processing time and the number of objects. Now, assume that the ratio of detection target objects is set to 80%. In the example illustrated in FIG. 9, “car”, “bike”, and “building” then become the detection target objects, in descending order of the occurrence probability of the objects included in the image. Further, the detection priority calculation unit 124 selects regions in descending order of the existence position probability Pr area(PosO i | I), so that the object detection result Pr detector(PosO i | I) is calculated only for the limited detection area.
  • the processing after the detection target objects and the detection area are obtained is the same as in the first embodiment: the object detection unit 123 applies the appearance position distribution, similarly limited to the detection area, to the detection results of the limited detection area, thereby obtaining the object position likelihood for that area only.
  • FIG. 16 is an explanatory diagram illustrating a calculation example when the detection area is limited.
  • as described above, the detection priority calculation unit 124 determines, from the statistical information, the number of detection target objects and the detection area that fit within the processing time. Therefore, accurate object detection can be performed even in an environment where the processing time is limited; a small sketch of this priority selection follows.
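  • A sketch of this priority selection; reading the 80% ratio as a cumulative share of the object occurrence probability mass is an assumption, as are the function names and the numbers.

```python
import numpy as np

def select_targets(occurrence_probs, mass=0.8):
    """Keep objects, in descending order of Pr(O_i | I), until their
    cumulative occurrence probability covers `mass` of the total."""
    ranked = sorted(occurrence_probs.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(occurrence_probs.values())
    targets, acc = [], 0.0
    for name, p in ranked:
        targets.append(name)
        acc += p
        if acc >= mass * total:
            break
    return targets

def select_regions(area_map, budget_cells):
    """Scan regions in descending order of the existence position
    probability until the processing budget (a cell count here) is spent."""
    flat = np.argsort(area_map, axis=None)[::-1][:budget_cells]
    return np.column_stack(np.unravel_index(flat, area_map.shape))

probs = {"car": 0.4, "bike": 0.3, "building": 0.2, "desk": 0.05, "boat": 0.05}
print(select_targets(probs))   # ['car', 'bike', 'building'], as in the FIG. 9 example
area = np.array([[0.05, 0.15], [0.30, 0.50]])
print(select_regions(area, budget_cells=2))  # the two most probable cells first
```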
  • FIG. 17 is a block diagram showing an outline of the present invention.
  • the object detection apparatus shown in FIG. 17 includes a scene attribution degree calculation unit 201, an object occurrence probability calculation unit 202, and an object detection unit 203.
  • the scene attribution level calculation unit 201 calculates the scene attribution level, which is information indicating to which scene an input image belongs, based on information indicating features of images captured in each scene, associated with that scene, and features extracted from the input image.
  • the scene attribution level calculation unit 201 is disclosed as, for example, the scene attribution level calculation unit 121.
  • the object occurrence probability calculating means 202 calculates the occurrence probability of an object with respect to the input image based on object occurrence information indicating the occurrence probability of the object for each scene and the scene belonging degree of the input image calculated by the scene belonging degree calculating means 201.
  • the object occurrence probability calculating unit 202 is disclosed as, for example, the object occurrence probability calculating unit 122.
  • the object detection unit 203 detects an object from the input image using the occurrence probability of the object with respect to the input image calculated by the object occurrence probability calculation unit 202.
  • the object detection unit 203 is disclosed as, for example, the object detection unit 123.
  • the object detection unit 203 may detect an object from the input image by reflecting the occurrence probability of the object with respect to the input image calculated by the object occurrence probability calculation unit 202 in the detection result obtained from an object detector (not shown), thereby calculating the object position likelihood that represents the probability that the target object exists in each region of the input image.
  • the occurrence probability of an object for each scene indicated by the object occurrence information may be information calculated based on the number of objects included in a captured image classified in advance by scene.
  • the object occurrence probability calculating means 202 may calculate the object occurrence probability for every scene with respect to the input image, and calculate the occurrence probability of the object with respect to the input image based on the occurrence probabilities calculated for all scenes.
  • FIG. 18 is a block diagram showing another configuration example of the object detection apparatus according to the present invention. As shown in FIG. 18, the object detection apparatus may further include an object appearance position distribution calculation unit 204 and a detection priority calculation unit 205.
  • the object appearance position distribution calculating means 204 calculates the appearance position distribution of objects in the input image based on the scene belonging degree of the input image and object photographing information, which is information indicating, for each scene, the areas where objects are likely to appear in captured images.
  • the object appearance position distribution calculating unit 204 is disclosed as, for example, a function of the object detecting unit 123.
  • the object shooting information may be information indicating a position and a size at which an object is likely to appear in an image shot in the scene for each pre-classified scene.
  • the object detecting unit 203 may further detect an object from the input image using the appearance position distribution of objects in the input image calculated by the object appearance position distribution calculating unit 204.
  • the object detection unit 203 may determine a detection target region based on the appearance position distribution of objects in the input image calculated by the object appearance position distribution calculation unit 204, and detect an object from the input image by reflecting the occurrence probability of the object with respect to the input image in the detection result obtained from the object detector for that detection target region.
  • the object appearance position distribution calculating unit 204 may calculate, as the appearance position distribution of objects in the input image, the object existence position probability for each region of the input image, based on the scene belonging degree of the input image and the object photographing information indicating, for each scene, the areas where objects are likely to appear in captured images.
  • the object detection means 203 may detect an object from the input image by reflecting the occurrence probability of the object with respect to the input image and the object existence position probability for each region of the input image in the detection result for the input image obtained from the object detector, thereby calculating the object position likelihood indicating the probability that the object exists in each region of the input image.
  • the detection priority calculation unit 205 sets the priority of the object to be detected based on the occurrence probability of the object with respect to the input image calculated by the object occurrence probability calculation unit 202.
  • the detection priority calculation unit 205 may set the objects to be detected so that the time required for the object detection process falls within a predetermined time, and may set the detection target area based on the object existence position probability for each region of the input image, calculated from the scene attribution level of the input image and the object photographing information indicating, for each scene, the areas where objects are likely to appear in captured images.
  • the present invention can be applied to uses such as an object detection device that detects a desired object from an image and a program for realizing the object detection device on a computer. Further, the present invention can be applied to a use of changing the focus according to an object in an image or performing image processing for each object using an object detection function.
  • 100 Object detection apparatus; 110 Image input device; 120 Data processing unit; 121 Scene attribution degree calculation means; 122 Object occurrence probability calculation means; 123 Object detection means; 124 Detection priority calculation means; 130 Data storage unit; 131 Scene feature storage unit; 132 Object occurrence information storage unit; 133 Object photographing information storage unit; 140 Object detection result output device; 201 Scene attribution degree calculation means; 202 Object occurrence probability calculation means; 203 Object detection means; 204 Object appearance position distribution calculation means; 205 Detection priority calculation means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The object detection device comprises: scene membership degree calculating means for calculating a scene membership degree, which is information indicating to which scene an input image belongs, based on information indicating a feature of images captured in a scene, associated with that scene, and a feature extracted from the input image; object occurrence probability calculating means for calculating the occurrence probability of an object for the input image, based on object occurrence information indicating the occurrence probability of the object for each scene and the scene membership degree of the input image calculated by the scene membership degree calculating means; and object detecting means for detecting an object in the input image using the occurrence probability of the object for the input image calculated by the object occurrence probability calculating means.
PCT/JP2011/005542 2010-10-06 2011-09-30 Object detection device, object detection method, and object detection program WO2012046426A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012537577A JPWO2012046426A1 (ja) 2010-10-06 2011-09-30 Object detection device, object detection method, and object detection program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010226725 2010-10-06
JP2010-226725 2010-10-06

Publications (1)

Publication Number Publication Date
WO2012046426A1 true WO2012046426A1 (fr) 2012-04-12

Family

ID=45927433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/005542 WO2012046426A1 (fr) 2010-10-06 2011-09-30 Object detection device, object detection method, and object detection program

Country Status (2)

Country Link
JP (1) JPWO2012046426A1 (fr)
WO (1) WO2012046426A1 (fr)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000293694A (ja) * 1999-04-07 2000-10-20 Toyota Motor Corp シーン認識装置
JP2010154187A (ja) * 2008-12-25 2010-07-08 Nikon Corp 撮像装置

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013254262A (ja) * 2012-06-05 2013-12-19 Toshiba Corp Moving object detection device, moving object detection system, and moving object detection method
JP2013257182A (ja) * 2012-06-11 2013-12-26 Canon Inc Image processing apparatus and image processing method
US9621856B2 2012-06-11 2017-04-11 Canon Kabushiki Kaisha Image processing apparatus and image processing method
JP2015082245A (ja) * 2013-10-23 2015-04-27 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2015099571A (ja) * 2013-11-20 2015-05-28 オリンパス株式会社 Object position specifying system and object position specifying method
JP2015158712A (ja) * 2014-02-21 2015-09-03 株式会社東芝 Learning device, density measuring device, learning method, learning program, and density measuring system
JP2016091202A (ja) * 2014-10-31 2016-05-23 株式会社豊田中央研究所 Self-position estimation device and mobile body equipped with a self-position estimation device
JP2017157201A (ja) * 2016-02-29 2017-09-07 トヨタ自動車株式会社 Human-centered place recognition method
US10049267B2 (en) 2016-02-29 2018-08-14 Toyota Jidosha Kabushiki Kaisha Autonomous human-centric place recognition
US11113555B2 (en) 2017-03-23 2021-09-07 Nec Corporation Object detection apparatus, traffic monitoring system, method of controlling an object detection apparatus and program

Also Published As

Publication number Publication date
JPWO2012046426A1 (ja) 2014-02-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11830360

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012537577

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11830360

Country of ref document: EP

Kind code of ref document: A1