WO2024201662A1 - Object detection device - Google Patents
- Publication number
- WO2024201662A1 (PCT application PCT/JP2023/012243)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature extraction
- model
- feature
- unit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- This disclosure relates to an object detection device.
- a robot system is known that detects the position and orientation of an object based on an image captured by a visual sensor and has the robot perform a task on the object.
- in detection, image features that represent specific parts of an object whose position and orientation are known are first extracted from a captured image by feature extraction processing and registered together with that position and orientation.
- for an object whose position and orientation are unknown, image features representing the same specific parts are similarly extracted from the captured image by feature extraction processing, and the position and orientation of the object are identified based on the amount of change in position and orientation in the image obtained by comparing them with the registered image features (model features).
- Patent documents 1 and 2 describe a device that identifies the position and orientation of an object by performing a matching process using edge points.
- the image features used for matching change depending on the feature extraction process applied. For example, even with the same edge feature extraction process, the edge features extracted by a Sobel filter and a Laplacian filter differ. Furthermore, if the imaging conditions change and the appearance of the object changes, the image features output by the feature extraction process will change. There is a demand for an object detection device that can determine a feature extraction process that can stably detect the object even in situations where the imaging conditions change.
- One aspect of the present disclosure is an object detection device that includes: a feature extraction unit that extracts image features from an image; a model image receiving unit that receives, as a model image, a first image capturing an object whose position and orientation are known; a model feature storage unit that stores a first image feature extracted from the model image by the feature extraction unit as a model feature; and a detection unit that identifies the position and orientation of the object by comparing a second image feature, extracted by the feature extraction unit from a second image capturing the object when its position and orientation are unknown, with the model feature. The object detection device further includes: a model transformation unit that applies one or more transformation processes to the model image and the model feature to generate one or more transformed model images and one or more transformed model features, respectively; a first characteristic calculation unit that calculates a first characteristic related to a characteristic of the feature extraction process by the feature extraction unit, based on the transformed model feature and a third image feature extracted from the transformed model image by the feature extraction unit; and an evaluation index calculation unit that calculates an evaluation index of the feature extraction process based on the one or more first characteristics calculated by the first characteristic calculation unit from the one or more third image features and the one or more transformed model features. The feature extraction unit has a plurality of feature extraction processes; the first characteristic calculation unit calculates the one or more first characteristics for each of the plurality of feature extraction processes based on the third image feature and the transformed model feature; and the evaluation index calculation unit calculates, for each of the plurality of feature extraction processes, the evaluation index for determining the feature extraction process to be applied to the detection process of the detection unit.
- FIG. 1 is a diagram illustrating a device configuration of a robot system including an object detection device according to an embodiment.
- FIG. 2 is a functional block diagram of a robot control device.
- FIG. 3A is a diagram showing the result of applying a Sobel filter to a captured image of an object.
- FIG. 3B is a diagram showing the result of applying a Laplacian filter to a captured image of an object.
- FIG. 4 is a data flow diagram showing the flow of data in the feature extraction process determination process.
- FIG. 5 is a diagram illustrating an example of a conversion process.
- FIG. 6A is a flowchart showing the feature extraction process determination process.
- FIG. 6B is a flowchart showing the feature extraction process determination process, together with FIG. 6A.
- FIG. 7 is a diagram illustrating an example of a list of evaluation indexes.
- FIG. 8 is a diagram illustrating an example of processing by a Shi-Tomasi corner detector.
- the robot system 100 includes a robot 10, a robot control device 50 that controls the robot 10, a visual sensor 70, and a teaching device 40 connected to the robot control device 50.
- the visual sensor 70 is mounted on the tip of the arm of the robot 10.
- the visual sensor 70 is connected to the robot control device 50 and operates under the control of the robot control device 50.
- the robot control device 50 is equipped with a function as an object detection device 60 that detects the position and orientation of an object 1 placed on, for example, a workbench 2 based on an image captured by the visual sensor 70.
- the robot control device 50 can cause the robot 10 to perform a predetermined task on the object 1 based on the detected position and orientation of the object 1.
- the robot 10 is a six-axis vertical articulated robot. Note that various types of robots may be used as the robot 10 depending on the work target, such as a horizontal articulated robot, a parallel link type robot, or a dual-arm robot.
- the robot 10 can perform the desired work using an end effector attached to the wrist.
- the end effector is an external device that can be replaced depending on the application, such as a hand, a welding gun, or a tool.
- Figure 1 shows an example in which a hand 33 is used as an end effector.
- the visual sensor 70 may be a camera that captures two-dimensional images such as grayscale images or color images, or a stereo camera or three-dimensional sensor that can obtain distance images or three-dimensional point clouds. In this embodiment, it is assumed that the visual sensor 70 is a camera that captures two-dimensional images.
- the robot control device 50 holds model data of the object and can execute a detection process that identifies the position and orientation of the object by matching the image of the object in the captured image with the model data (pattern matching). In this embodiment, it is assumed that the visual sensor 70 has been calibrated, and that the robot control device 50 holds calibration data that defines the relative positional relationship between the visual sensor 70 and the robot 10. This makes it possible to convert a position on an image captured by the visual sensor 70 into a position on a coordinate system (such as a robot coordinate system) fixed to the working space.
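- as an illustration of this image-to-workspace conversion, the sketch below assumes the simple case in which calibration yields a homography that maps image pixels onto the workbench plane expressed in robot coordinates; the matrix values and function names are hypothetical and not taken from this patent.

```python
import numpy as np
import cv2

# Hypothetical 3x3 homography from pixel coordinates to robot XY on the
# workbench plane (units: mm), obtained from a prior calibration step.
H_pixel_to_robot = np.array([
    [0.25, 0.00, -120.0],
    [0.00, 0.25,  -80.0],
    [0.00, 0.00,    1.0],
])

def pixel_to_robot_xy(u: float, v: float) -> tuple[float, float]:
    """Map a detected image position (u, v) to robot-frame XY on the work plane."""
    pt = np.array([[[u, v]]], dtype=np.float64)          # shape (1, 1, 2)
    xy = cv2.perspectiveTransform(pt, H_pixel_to_robot)  # apply the homography
    return float(xy[0, 0, 0]), float(xy[0, 0, 1])

# Example: the detector found the object at pixel (412.0, 305.5).
print(pixel_to_robot_xy(412.0, 305.5))
```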
- the robot control device 50 controls the operation of the robot 10 according to an operation program or commands from the teaching device 40.
- the robot control device 50 may have a hardware configuration as a general computer having a processor, memory (ROM, RAM, non-volatile memory, etc.), a storage device 152 (see FIG. 2), an operation unit, an input/output interface, a network interface, etc.
- the teaching device 40 is used as an operation terminal for teaching the robot 10, performing various settings, displaying information, etc.
- a teaching operation panel, a tablet terminal, etc. may be used as the teaching device 40.
- the teaching device 40 may have a hardware configuration as a general computer having a processor, memory (ROM, RAM, non-volatile memory, etc.), a storage device, an operation unit, a display unit 41 (see Figure 2), an input/output interface, a network interface, etc.
- the display unit 41 is equipped with, for example, a liquid crystal display as a display device.
- FIG. 2 is a functional block diagram of the robot control device 50. As shown in FIG. 2, the robot control device 50 has a function as an object detection device 60 in addition to a function for controlling the robot 10.
- the robot control device 50 includes an operation control unit 151 and a storage device 152.
- the storage device 152 is, for example, a storage device formed of a non-volatile memory or a hard disk device.
- the storage device 152 stores a robot program for controlling the robot 10, a program (vision program) for performing image processing such as workpiece detection based on images captured by the visual sensor 70, calibration data, various setting information, etc.
- the motion control unit 151 controls the motion of the robot according to the robot program or according to commands from the teaching device 40.
- the robot control device 50 is equipped with a servo control unit (not shown) that performs servo control on the servo motors of each axis according to commands for each axis generated by the motion control unit 151.
- the object detection device 60 has a function of detecting an object from an image captured by the visual sensor 70.
- in the detection process that identifies the position and orientation of an object, image features (edge points, etc.) of the object are generally extracted by applying a feature extraction process such as a filter to the input image, and the position and orientation of the object in the input image are identified by comparing the extracted image features with pre-prepared model data (image features) of the object.
- a filter for extracting edge features of the object may be used as a feature extraction process.
- there are multiple types of filters such as Sobel filters and Laplacian filters.
- the object detection device 60 according to this embodiment provides a function for automatically determining a feature extraction process suitable for stably detecting a certain object.
- the object detection device 60 includes a visual sensor control unit 161, an image acquisition unit 162, a detection unit 163, a feature extraction unit 164, a model image reception unit 165, a matching area reception unit 166, a model feature storage unit 167, a model conversion unit 168, a stability calculation unit (first characteristic calculation unit) 169, a sensitivity calculation unit (second characteristic calculation unit) 170, an evaluation index calculation unit 171, and a display control unit 172.
- These functional blocks may be realized by the processor of the robot control device 50 executing a program.
- the components of the object detection device 60 in FIG. 2 correspond to the processor of the robot control device 50.
- the visual sensor control unit 161 controls the operation of the visual sensor 70.
- the visual sensor control unit 161 can control the visual sensor 70 according to instructions for the visual sensor 70 in an operation program.
- the image acquisition unit 162 has a function of acquiring image information obtained by the visual sensor 70 capturing an image within its field of view. In this embodiment, the image acquisition unit 162 acquires a two-dimensional image from the visual sensor 70.
- the detection unit 163 can perform a detection process that identifies the position and orientation of an object by comparing image features (hereinafter also referred to as second image features) extracted by the feature extraction unit 164 from an image capturing the object whose position and orientation are unknown (hereinafter, this image is also referred to as a second image) with the model features of the object prepared in advance.
- the model features are, for example, image features extracted by the feature extraction unit 164 from an image captured of an object whose position and orientation are known.
- the feature extraction unit 164 has multiple types of feature extraction processes that extract image features from an image.
- the feature extraction processes have multiple filters that extract edge features, for example.
- the multiple types of filters include, for example, Sobel filters and Laplacian filters. Both Sobel filters and Laplacian filters are filters for extracting edge features of an object. However, these filters have different properties.
- the Sobel filter determines whether or not a pixel is an edge point based on the edge strength represented by the first differential value in the x direction (horizontal direction) and the first differential value in the y direction (vertical direction) of the image.
- the Laplacian filter determines whether or not a pixel is an edge point based on the second differential value of the image.
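- to make the difference concrete, the sketch below extracts edge points from the same grayscale image with a Sobel filter and a Laplacian filter using OpenCV; the file name, threshold values, and binarization rule are illustrative assumptions, not the patent's specific criteria.

```python
import cv2
import numpy as np

img = cv2.imread("model_image.png", cv2.IMREAD_GRAYSCALE)

# Sobel: edge strength from the first derivatives in x and y.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_strength = np.hypot(gx, gy)
sobel_edges = sobel_strength > 100.0          # illustrative threshold

# Laplacian: edge decision from the second derivative, approximated here
# by a simple magnitude threshold.
lap = cv2.Laplacian(img, cv2.CV_64F, ksize=3)
laplacian_edges = np.abs(lap) > 20.0          # illustrative threshold

# The two boolean maps generally mark different pixels as edge points,
# which is exactly the difference between the filters discussed above.
print(sobel_edges.sum(), laplacian_edges.sum())
```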
- Fig. 3A (image GI1) and Fig. 3B (image GI2) show examples of the processing results when a Sobel filter and a Laplacian filter, respectively, are applied to a model image of the same object.
- the model image used here is image IG11 shown on the left side of Fig. 5; in image IG11, the details of the cast surface of the object 90 are omitted.
- in Fig. 3A (image GI1) and Fig. 3B (image GI2), the edge points extracted by each filter are represented by black dots.
- as can be seen from Fig. 3A (image GI1) and Fig. 3B (image GI2), the edge features (edge points) extracted by each filter differ.
- in the result of the Laplacian filter in Fig. 3B (image GI2), edge points are output well along the arc-shaped step portion 91 of the object 90.
- in this case, the Laplacian filter is suitable for identifying the position and orientation of the object by matching against the arc-shaped step portion 91 as a characteristic part of the object.
- the object detection device 60 is configured to evaluate the properties of the feature extraction process for a certain object in response to changes in imaging conditions, and to determine the appropriate feature extraction process to apply to that object.
- the model image receiving unit 165 receives an image of an object whose position and orientation are known (hereinafter, this image is also referred to as the "first image") as a model image.
- the matching area receiving unit 166 provides a function for receiving input specifying an area in the model image to be used for matching with the target object.
- the matching area may be a part of the model image, or may be the entirety of the model image.
- the matching area receiving unit 166 receives a user operation for specifying an area in the model image to be used for matching with the target object.
- the matching area receiving unit 166 may provide a graphical user interface that displays a captured image of the target object on the display unit 41 of the teaching device 40, and receives an operation for specifying a matching area on the captured image using a pointing device or the like.
- the model feature storage unit 167 stores, as model features, the image features (hereinafter also referred to as "first image features") that are included in the matching region among the image features extracted from the model image by the feature extraction unit 164.
- the model transformation unit 168 provides a function for applying one or more transformation processes to input image data.
- the transformation processes include one or more of a change in brightness and a projective transformation.
- Projective transformation includes rotation and scale transformation.
- the model transformation unit 168 can output a "transformed model image" obtained by applying a transformation process to a model image.
- the model transformation unit 168 can also output "transformed model features" obtained by applying a transformation process to model features.
- the image features obtained by applying the feature extraction process to the transformed model image are referred to as "third image features."
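- a minimal sketch of such transformation processes, assuming a brightness change plus a projective transformation composed of rotation and scaling, is shown below; the parameter values are illustrative, and the same homography is applied to the model image and to the model edge points.

```python
import cv2
import numpy as np

def transform_model(model_image, model_points, alpha=1.2, beta=10.0,
                    angle_deg=15.0, scale=0.9):
    """Apply a brightness change to the image and a projective (here: rotation
    and scale) transformation to both the image and the model edge points."""
    h, w = model_image.shape[:2]

    # Brightness change: out = alpha * in + beta, clipped to [0, 255].
    bright = cv2.convertScaleAbs(model_image, alpha=alpha, beta=beta)

    # Rotation + scale about the image center, expressed as a 3x3 homography.
    R = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)  # 2x3
    H = np.vstack([R, [0.0, 0.0, 1.0]])                            # 3x3

    transformed_image = cv2.warpPerspective(bright, H, (w, h))

    # Apply the same homography to the model edge points (N x 2 array).
    pts = np.asarray(model_points, dtype=np.float64).reshape(-1, 1, 2)
    transformed_points = cv2.perspectiveTransform(pts, H).reshape(-1, 2)

    return transformed_image, transformed_points
```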
- the stability calculation unit 169 calculates a first characteristic related to the characteristics of the feature extraction process based on the transformed model feature and the third image feature extracted by the feature extraction unit 164 from the transformed model image.
- the third image feature is obtained by applying the feature extraction process to an image (the transformed model image) in which the appearance of the object has changed, that is, to the model image after a transformation (brightness change, projective transformation, etc.) by the model transformation unit has been applied. Therefore, by comparing the transformed model feature with the third image feature, it is possible to derive information that represents the characteristics of the feature extraction process with respect to changes in the image.
- the stability calculation unit 169 calculates, as the first characteristic, an index that represents the stability of the feature extraction process with respect to changes in the image. Hereinafter, this index will also be referred to simply as stability.
- the sensitivity calculation unit 170 calculates a second characteristic related to the characteristics of the feature extraction process based on the transformed model feature and the third image feature extracted by the feature extraction unit 164 from the transformed model image.
- the sensitivity calculation unit 170 calculates, as the second characteristic, an index that indicates the sensitivity of the feature extraction process by the feature extraction unit to changes in the image.
- this index will also be referred to simply as sensitivity.
- the evaluation index calculation unit 171 can calculate an evaluation index for determining a feature extraction process based on the one or more stabilities calculated from the one or more third image features and the one or more transformed model features.
- the evaluation index calculation unit 171 can also calculate the evaluation index for determining a feature extraction process based on the one or more stabilities and the one or more sensitivities.
- (Step 1) A user inputs a model image 201 obtained by capturing an object whose position and orientation are known.
- (Step 2) The user specifies an area on the model image 201 to be used for matching.
- (Step 3) The object detection device 60 performs the following processes (Step 3-1) to (Step 3-4) for each of the feature extraction processes 164A.
- (Step 3-1) The feature extraction process is applied to the model image 201.
- (Step 3-2) The image features within the matching region in the model image 201 are registered as model features 202 for the selected feature extraction process.
- (Step 3-3) The object detection device 60 performs the following processes (a) to (d) for each of the multiple conversion processes 168A.
- (a) A transformation process is applied to the model image 201 to obtain a transformed model image 203.
- (b) The feature extraction process is applied to the transformed model image 203 to obtain a third image feature 204.
- (c) The transformation process is applied to the model features 202 to obtain transformed model features 205.
- (d) Stability and sensitivity are calculated (symbol 169A) from the image features (third image features 204) extracted from the transformed model image 203 and the transformed model features 205, and stored as the stability and sensitivity for the selected transformation process.
- (Step 3-4) An evaluation index for determining a feature extraction process is calculated (reference numeral 171A) from the plurality of sets of stability and sensitivity corresponding to the plurality of conversion processes.
- (Step 4) The object detection device 60 determines the feature extraction process to be applied to the detection process based on the evaluation index calculated for each feature extraction process in Step 3 above. For example, the evaluation index calculation unit 171 may determine the feature extraction process having the highest evaluation index as the feature extraction process to be used for the detection process.
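- the overall procedure can be summarized as the nested loop sketched below; this is only a structural outline, and extract, transform, stability, and sensitivity are placeholders for the processing described in this document rather than an actual implementation of the patent.

```python
def determine_feature_extraction_process(model_image, feature_extractors,
                                         transforms, stability, sensitivity):
    """Return the name of the feature extraction process with the best
    evaluation index.

    feature_extractors: dict name -> extract(image) -> image features
                        (assumed here to be restricted to the matching region)
    transforms:  list of transform(image_or_features) -> transformed data
    stability, sensitivity: score(third_features, transformed_model_features) -> float
    """
    scores = {}
    for name, extract in feature_extractors.items():
        model_features = extract(model_image)              # model features 202
        stabilities, sensitivities = [], []
        for transform in transforms:
            transformed_image = transform(model_image)     # transformed model image 203
            third_features = extract(transformed_image)    # third image features 204
            transformed_model = transform(model_features)  # transformed model features 205
            stabilities.append(stability(third_features, transformed_model))
            sensitivities.append(sensitivity(third_features, transformed_model))
        # Evaluation index: here the plain mean of all scores; other statistics
        # (e.g. a weighted harmonic mean) could be used instead.
        scores[name] = (sum(stabilities) + sum(sensitivities)) / (
            len(stabilities) + len(sensitivities))
    return max(scores, key=scores.get)
```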
- an example of the conversion process by the model conversion unit 168 is shown in FIG. 5.
- a converted image IG12 is generated by applying a projective transformation (including rotation and scale transformation) to a model image IG11 obtained by capturing an image of an object 90.
- the stability calculation unit 169 calculates stability by the following process.
- in the following, the image features are assumed to be edge points.
- (Step k1) Determine the correspondence between the edge points included in the transformed model feature and the edge points included in the third image feature extracted from the transformed model image.
- (Step k2) Calculate the stability of the model feature based on the proportion of edge points included in the transformed model feature whose difference from the corresponding edge points included in the third image feature is smaller than a specific value.
- the correspondence between the edge points included in the transformed model feature and those in the third image feature may be determined according to the following rule.
- (Rule) Define the edge points in the transformed model feature and the third image feature as points consisting of two variables, namely their position and the direction of the brightness gradient. Then, an edge point in the third image feature whose distance from an edge point in the transformed model feature (distance in the space defined by the two variables (position, brightness gradient direction)) is within a predetermined threshold is determined to be the edge point corresponding to that edge point of the transformed model feature.
- stability can thus be said to represent the proportion of the transformed model feature for which similar, corresponding image features exist in the third image feature.
- stability is an index that indicates how many edge points are extracted in the third image feature at the target positions (the positions of the corresponding points in the transformed model feature).
- in this way, the stability calculation unit 169 provides an index indicating how stable the image features extracted by the feature extraction process under evaluation are against changes (transformations) of the image.
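- a minimal sketch of this stability computation is given below; edge points are treated as (x, y, gradient direction) triples as in the rule above, and the combined distance metric, direction weighting, and threshold value are illustrative assumptions.

```python
import numpy as np

def match_fraction(points_a, points_b, threshold=3.0, direction_weight=2.0):
    """Fraction of points in points_a that have a corresponding point in
    points_b within `threshold` in the (x, y, gradient-direction) space.
    Each point is (x, y, theta) with theta in radians."""
    if len(points_a) == 0 or len(points_b) == 0:
        return 0.0
    matched = 0
    b = np.asarray(points_b, dtype=np.float64)
    for x, y, theta in np.asarray(points_a, dtype=np.float64):
        d_pos = np.hypot(b[:, 0] - x, b[:, 1] - y)
        d_dir = np.abs(np.arctan2(np.sin(b[:, 2] - theta), np.cos(b[:, 2] - theta)))
        dist = d_pos + direction_weight * d_dir   # combined distance (assumption)
        if dist.min() <= threshold:
            matched += 1
    return matched / len(points_a)

def stability(third_features, transformed_model_features):
    # Proportion of transformed-model edge points that reappear in the
    # third image features (steps k1 and k2 above).
    return match_fraction(transformed_model_features, third_features)
```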
- the sensitivity calculation unit 170 calculates the sensitivity by the following process.
- in the following, the image features are assumed to be edge points.
- (Step m1) Determine the correspondence between the edge points included in the transformed model feature and the edge points included in the third image feature extracted from the transformed model image.
- (Step m2) Calculate the sensitivity of the model feature based on the proportion, among the edge points included in the third image feature, of edge points whose difference from the corresponding edge points included in the transformed model feature is smaller than a specific value.
- the determination of corresponding points in step m1 above is performed in the same way as in step k1 above.
- the sensitivity calculation unit 170 calculates the sensitivity as follows, for example.
- the total number of edge points appearing in the third image feature is set to Gtotal.
- the total number of edge points whose distance to the corresponding point of the transformed model feature is smaller than a specific value is set to G1.
- the sensitivity is then obtained as the proportion G1/Gtotal.
- sensitivity is an index that indicates how few edge points other than those similar to the transformed model feature appear in the third image feature.
- sensitivity is an index that indicates the degree to which the image features extracted by the feature extraction process being evaluated react only to the model features, even when the image changes (is transformed).
- as the number of edge points (Gtotal) appearing in the third image feature increases, the likelihood that edge points will appear at the same positions as the edge points of the transformed model feature increases, which tends to increase stability but to decrease sensitivity.
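- under the same assumptions as the stability sketch above (and reusing its match_fraction helper), the sensitivity can be written as the ratio G1/Gtotal:

```python
def sensitivity(third_features, transformed_model_features):
    """Proportion of edge points in the third image feature (G_total) that lie
    close to a corresponding transformed-model edge point (G_1), i.e. G_1 / G_total
    (steps m1 and m2 above); reuses match_fraction from the stability sketch."""
    return match_fraction(third_features, transformed_model_features)
```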
- FIG. 6A-FIG. 6B are flowcharts showing the process for determining the feature extraction process (feature extraction process determination process) executed in the object detection device 60. This process is executed under the control of the processor.
- the model image receiving unit 165 receives an input of a model image acquired by the visual sensor 70 capturing an image of an object whose position and orientation are known (step S1).
- the matching area receiving unit 166 receives an operation by the user to specify a matching area on the model image that corresponds to an area from which image features are to be extracted (step S2).
- the series of processes from steps S4 to S16 are repeatedly executed for each feature extraction process (loop 1 of step S3).
- the model feature storage unit 167 applies the feature extraction process to the model image (step S4). Then, the model feature storage unit 167 saves the image features in the matching area in the model image as model features for the feature extraction process selected in loop 1 (step S5).
- the series of processes from steps S7 to S15 are repeated a predetermined number of times while changing the brightness conversion parameters (loop 2 of step S6). Also, the series of processes from steps S8 to S15 are repeated a predetermined number of times while switching the projection transformation parameters (loop 3 of step S7).
- in step S8, the model transformation unit 168 applies a process to change the brightness of the model image. Furthermore, the model transformation unit 168 applies a projective transformation to the model image whose brightness has been changed (step S9).
- the projective transformation includes, for example, rotation and scale transformation of the model image.
- the feature extraction unit 164 applies the selected feature extraction process to the model image (transformed model image) to which the brightness change and projective transformation have been applied (step S10).
- the model transformation unit 168 then applies projective transformation to the model features resulting from the selected feature extraction process (step S11).
- the stability calculation unit 169 calculates the above-mentioned stability from the image features (third image features) extracted from the transformed model image and the model features (transformed model features) to which projective transformation has been applied (step S12).
- the stability calculation unit 169 then stores the calculated stability as one of the stabilities of the selected feature extraction process.
- in step S14, the sensitivity calculation unit 170 calculates the above-mentioned sensitivity from the image feature (third image feature) extracted from the transformed model image and the transformed model feature. Then, the sensitivity calculation unit 170 stores the calculated sensitivity as one of the sensitivities for the selected feature extraction process (step S15).
- loop processing by loop 2 and loop 3 is executed for multiple brightness change parameters and projective transformation parameters, so that stabilities and sensitivities are each generated in a number equal to the number of brightness change parameters multiplied by the number of projective transformation parameters.
- the evaluation index calculation unit 171 calculates an evaluation index for the currently selected feature extraction process from the set of stabilities and the set of sensitivities generated for that feature extraction process (step S16).
- the evaluation index calculation unit 171 may use the average, weighted harmonic mean, or other statistics for the stabilities ST(1) to ST(n) and sensitivities SE(1) to SE(n) as the evaluation index for the selected feature extraction process.
- the evaluation index calculation unit 171 determines, for example, the feature extraction process with the highest evaluation index among the evaluation indexes calculated for each of the multiple feature extraction processes as the feature extraction process to be used for the detection process (step S17).
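- as one possible reading of the averaging and weighted-harmonic-mean options mentioned above, the sketch below combines the mean stability and mean sensitivity of a feature extraction process into a single evaluation index; the weights are illustrative assumptions.

```python
def evaluation_index(stabilities, sensitivities, w_st=1.0, w_se=1.0):
    """Combine the per-transformation stabilities ST(1)..ST(n) and
    sensitivities SE(1)..SE(n) into one score via a weighted harmonic mean
    of their means (one possible statistic; a plain average also works)."""
    st = sum(stabilities) / len(stabilities)
    se = sum(sensitivities) / len(sensitivities)
    if st == 0.0 or se == 0.0:
        return 0.0
    return (w_st + w_se) / (w_st / st + w_se / se)

# The process with the largest index would then be selected (step S17), e.g.:
# best = max(candidates, key=lambda name: evaluation_index(*scores[name]))
```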
- the feature extraction process determination process is configured to calculate an evaluation index for determining the feature extraction process based on the stability of image features against changes in the image.
- the feature extraction process determination process is configured to calculate an evaluation index for determining the feature extraction process based on the sensitivity of image features to changes in the image.
- the feature extraction process determination process is configured to be able to calculate an evaluation index for determining the feature extraction process using a single image (model image). This configuration has the advantage of reducing the user's workload required for the feature extraction process determination process.
- the evaluation index calculation unit 171 is configured to calculate the evaluation index for determining a feature extraction process based on both stability and sensitivity, but the evaluation index calculation unit 171 may calculate the evaluation index based on stability alone. Even in this case, it is possible to determine a feature extraction process that can detect the object stably against changes in the image. Alternatively, the evaluation index calculation unit 171 may calculate the evaluation index based on sensitivity alone. Even in this case, it is possible to determine a feature extraction process that can provide stable detection.
- the evaluation index calculation unit 171 may calculate an evaluation index for each of one or more conversion processes (i.e., for each pair of one or more third image features and one or more conversion model features), and determine the minimum value of the one or more calculated evaluation indexes as the evaluation index for the feature extraction process.
- the display control unit 172 may operate to display the evaluation index for each feature extraction process calculated by the evaluation index calculation unit 171.
- FIG. 7 shows an example in which a list 250 showing the evaluation indexes of each feature extraction process (filter) is displayed on the display unit 41 by the display control unit 172. By displaying the evaluation indexes in this way, the user can recognize which feature extraction process is suitable. By referring to this list 250, the user can also select the feature extraction process to apply to detection.
- the list 250 may also function as a user interface that accepts a user operation to select a feature extraction process to be used for detection from multiple feature extraction processes.
- in the above description, a filter that performs edge detection has been given as an example of feature extraction processing, but the above-described approach for determining a feature extraction process can be applied to various other types of feature extraction processing.
- corner detectors include the Harris corner detector, the Shi-Tomasi corner detector, and the FAST (Features from Accelerated Segment Test) corner detector.
- the Harris corner detector and the Shi-Tomasi corner detector detect, as corners, points for which the amount of change (E in equation (1) below) is large when the image position is shifted.
- here, I(x,y) is the brightness of a pixel in the image, I(x+u,y+v) is the brightness of the corresponding pixel in the shifted image, and w(x,y) is a window function.
- the Harris corner detector and the Shi-Tomasi corner detector both determine whether a corner exists from the eigenvalues of the matrix M in equations (2) and (3) below, which are obtained by simplifying equation (1), but their decision criteria differ.
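- the equation images of equations (1) to (3) are not reproduced in this text; for reference, the standard formulation consistent with the symbols defined above (the patent's exact numbering and notation may differ) is:

```latex
% Change measure for a window shifted by (u, v) -- cf. equation (1)
E(u,v) = \sum_{x,y} w(x,y)\,\bigl[\, I(x+u,\,y+v) - I(x,y) \,\bigr]^{2}

% First-order approximation and structure matrix M -- cf. equations (2) and (3)
E(u,v) \approx \begin{pmatrix} u & v \end{pmatrix} M \begin{pmatrix} u \\ v \end{pmatrix},
\qquad
M = \sum_{x,y} w(x,y) \begin{pmatrix} I_x^{2} & I_x I_y \\ I_x I_y & I_y^{2} \end{pmatrix}

% Decision from the eigenvalues \lambda_1, \lambda_2 of M:
%   Harris:      R = \det M - k\,(\operatorname{tr} M)^{2}
%   Shi-Tomasi:  R = \min(\lambda_1, \lambda_2)
```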
- Figure 8 shows an example of the results of processing an image of an object using the Shi-Tomasi corner detector.
- Image IG20 shown in Figure 8 shows the results of applying the Shi-Tomasi corner detector to an image containing an object 190 (part of the object is marked with the reference symbol 190).
- the detected corners CP are represented by black dots (only a portion is marked with the reference symbol CP).
- the FAST corner detector classifies whether a candidate point is actually a corner from the brightness values on a circle of 16 pixels surrounding it; if a contiguous run of pixels on the circle is brighter or darker than the candidate point by more than a threshold, the point is regarded as a corner.
- the FAST corner detector is said to be able to operate at high speed.
- due to the differences in the detection methods used by each corner detector, it is believed that, as with edge detectors, the extraction of image features by a corner detector will also be affected by changes in how the object appears. Therefore, for corner detectors as well, it is possible to calculate the stability and sensitivity described above, obtain an evaluation index, and determine a corner detector that provides stable detection. The various processes for determining the feature extraction process in the above-described embodiment can therefore also be applied to corner detectors.
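- as an illustration of how such detectors could serve as candidate feature extraction processes, the sketch below runs the Harris, Shi-Tomasi, and FAST detectors from OpenCV on a grayscale image; the file name and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("model_image.png", cv2.IMREAD_GRAYSCALE)

# Harris: response map, thresholded relative to its maximum.
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
harris_corners = np.argwhere(harris > 0.01 * harris.max())

# Shi-Tomasi: strongest corners by the min-eigenvalue criterion.
shi_tomasi = cv2.goodFeaturesToTrack(img, maxCorners=200,
                                     qualityLevel=0.01, minDistance=5)

# FAST: segment test on the 16-pixel circle around each candidate point.
fast = cv2.FastFeatureDetector_create(threshold=20)
fast_keypoints = fast.detect(img, None)

print(len(harris_corners), len(shi_tomasi), len(fast_keypoints))
```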
- the object detection device is realized as a function of the robot control device, but the object detection device can also be realized as an independent device separate from the robot control device.
- the object detection device can be configured as an information processing device such as a personal computer connected to the robot control device.
- the functional blocks of the robot control device shown in Figure 2 may be realized by the processor of the robot control device executing various software stored in a storage device, or may be realized by a hardware-based configuration such as an ASIC (Application Specific Integrated Circuit).
- the programs that execute various processes such as the feature extraction process and determination process in the above-mentioned embodiments can be recorded on various computer-readable recording media (for example, semiconductor memories such as ROM, EEPROM, and flash memory, magnetic recording media, and optical disks such as CD-ROM and DVD-ROM).
- (Appendix 1) An object detection device (60) comprising: a feature extraction unit (164) that extracts image features from an image; a model image receiving unit (165) that receives, as a model image, a first image obtained by capturing an object whose position and orientation are known; a model feature storage unit (167) that stores a first image feature extracted from the model image by the feature extraction unit as a model feature; and a detection unit (163) that identifies the position and orientation of the object by comparing a second image feature, extracted by the feature extraction unit from a second image capturing the object when its position and orientation are unknown, with the model feature,
- wherein the object detection device (60) further comprises: a model transformation unit (168) that applies one or more transformation processes to the model image and the model feature to generate one or more transformed model images and one or more transformed model features, respectively; a first characteristic calculation unit (169) that calculates a first characteristic related to a characteristic of the feature extraction process by the feature extraction unit based on the transformed model feature and a third image feature extracted from the transformed model image by the feature extraction unit; and an evaluation index calculation unit (171) that calculates an evaluation index of the feature extraction process based on the one or more first characteristics calculated by the first characteristic calculation unit from the one or more third image features and the one or more transformed model features, wherein the feature extraction unit has a plurality of feature extraction processes, the first characteristic calculation unit calculates the one or more first characteristics for each of the plurality of feature extraction processes based on the third image feature and the transformed model feature, and the evaluation index calculation unit calculates, for each of the plurality of feature extraction processes, the evaluation index for determining the feature extraction process to be applied to the detection process of the detection unit.
- (Appendix 2) The object detection device (60) described in Appendix 1, further comprising a matching area receiving unit (166) that receives a part or the whole of the object shown in the model image as a matching area, wherein the model feature storage unit (167) stores, as the model feature, the first image feature included in the matching area.
- (Appendix 3) The object detection device (60) described in Appendix 1 or 2, wherein the evaluation index calculation unit (171) determines the feature extraction process to be used by the feature extraction unit in the detection process by the detection unit (163) based on the evaluation index calculated for each of the multiple feature extraction processes.
- (Appendix 4) The object detection device (60) described in Appendix 3, wherein the evaluation index calculation unit (171) determines the feature extraction process having the highest evaluation index among the evaluation indexes calculated for each of the multiple feature extraction processes as the feature extraction process to be used by the feature extraction unit (164) in the detection process by the detection unit (163).
- (Appendix 5) The object detection device (60) according to any one of appendixes 1 to 4, wherein the one or more conversion processes by the model conversion unit (168) include a process of converting brightness of the model image.
- (Appendix 6) The object detection device (60) according to any one of appendices 1 to 5, wherein the one or more transformation processes by the model transformation unit (168) include performing a projective transformation on each of the model image and the model feature.
- a second characteristic calculation unit (170) that calculates a second characteristic representing the sensitivity of the feature extraction process by the feature extraction unit (164) to a change in the image, based on the third image feature and the transformed model feature,
- the second characteristic calculation unit (170) determining a correspondence relationship between the transformed model feature and the image features included in each of the third image features;
- the evaluation index calculation unit (171) calculating, for each of the plurality of feature extraction processes, an index for determining the feature extraction process for each pair of the one or more third image features and the one or more transformed model features;
- 10 Robot 33 Hand 40 Teaching device 41 Display unit 50 Robot control device 60 Object detection device 70 Visual sensor 100 Robot system 151 Operation control unit 161 Visual sensor control unit 162 Image acquisition unit 163 Detection unit 164 Feature extraction unit 165 Model image reception unit 166 Matching area reception unit 167 Model feature storage unit 168 Model conversion unit 169 Stability calculation unit 170 Sensitivity calculation unit 171 Evaluation index calculation unit 172 Display control unit 201 Model image 202 Model feature 203 Transformed model image 204 Third image feature 205 Transformed model feature
Abstract
The present invention provides an object detection device comprising: a model transformation unit that applies one or more transformation processes to a model image and a model feature to generate one or more transformed model images and one or more transformed model features; a first characteristic calculation unit that calculates a first characteristic related to a characteristic of a feature extraction process by a feature extraction unit, on the basis of a transformed model feature and a third image feature extracted from the transformed model image by the feature extraction unit; and an evaluation index calculation unit that calculates an evaluation index for the feature extraction process on the basis of one or more first characteristics calculated by the first characteristic calculation unit on the basis of one or more third image features and one or more transformed model features, wherein the feature extraction unit has a plurality of feature extraction processes, the first characteristic calculation unit calculates the one or more first characteristics on the basis of the third image feature and the transformed model feature for each of the plurality of feature extraction processes, and the evaluation index calculation unit calculates the evaluation index for determining the feature extraction process to apply to the detection process by the detection unit for each of the plurality of feature extraction processes.
Description
This disclosure relates to an object detection device.
A robot system is known that detects the position and orientation of an object based on an image captured by a visual sensor and has the robot perform a task on the object. In detection, image features that represent specific parts of an object whose position and orientation are known are first extracted from a captured image by feature extraction processing and registered together with that position and orientation. For an object whose position and orientation are unknown, image features representing the same specific parts are similarly extracted from the captured image by feature extraction processing, and the position and orientation of the object are identified based on the amount of change in position and orientation in the image obtained by comparing them with the registered image features (model features).
Patent documents 1 and 2 describe a device that identifies the position and orientation of an object by performing a matching process using edge points.
The image features used for matching change depending on the feature extraction process applied. For example, even with the same edge feature extraction process, the edge features extracted by a Sobel filter and a Laplacian filter differ. Furthermore, if the imaging conditions change and the appearance of the object changes, the image features output by the feature extraction process will change. There is a demand for an object detection device that can determine a feature extraction process that can stably detect the object even in situations where the imaging conditions change.
One aspect of the present disclosure is an object detection device that includes: a feature extraction unit that extracts image features from an image; a model image receiving unit that receives, as a model image, a first image capturing an object whose position and orientation are known; a model feature storage unit that stores a first image feature extracted from the model image by the feature extraction unit as a model feature; and a detection unit that identifies the position and orientation of the object by comparing a second image feature, extracted by the feature extraction unit from a second image capturing the object when its position and orientation are unknown, with the model feature. The object detection device further includes: a model transformation unit that applies one or more transformation processes to the model image and the model feature to generate one or more transformed model images and one or more transformed model features, respectively; a first characteristic calculation unit that calculates a first characteristic related to a characteristic of the feature extraction process by the feature extraction unit, based on the transformed model feature and a third image feature extracted from the transformed model image by the feature extraction unit; and an evaluation index calculation unit that calculates an evaluation index of the feature extraction process based on the one or more first characteristics calculated by the first characteristic calculation unit from the one or more third image features and the one or more transformed model features. The feature extraction unit has a plurality of feature extraction processes; the first characteristic calculation unit calculates the one or more first characteristics for each of the plurality of feature extraction processes based on the third image feature and the transformed model feature; and the evaluation index calculation unit calculates, for each of the plurality of feature extraction processes, the evaluation index for determining the feature extraction process to be applied to the detection process of the detection unit.
These and other objects, features and advantages of the present invention will become more apparent from the detailed description of exemplary embodiments of the present invention illustrated in the accompanying drawings.
Next, an embodiment of the present disclosure will be described with reference to the drawings. In the drawings, similar components or functional parts are given similar reference symbols. The scale of these drawings has been appropriately changed to facilitate understanding. Furthermore, the form shown in the drawings is one example for implementing the present invention, and the present invention is not limited to the form shown.
FIG. 1 is a diagram showing the equipment configuration of a robot system 100 including an object detection device 60 according to one embodiment. As shown in FIG. 1, the robot system 100 includes a robot 10, a robot control device 50 that controls the robot 10, a visual sensor 70, and a teaching device 40 connected to the robot control device 50. The visual sensor 70 is mounted on the tip of the arm of the robot 10. The visual sensor 70 is connected to the robot control device 50 and operates under the control of the robot control device 50. The robot control device 50 is equipped with a function as an object detection device 60 that detects the position and orientation of an object 1 placed on, for example, a workbench 2 based on an image captured by the visual sensor 70. The robot control device 50 can cause the robot 10 to perform a predetermined task on the object 1 based on the detected position and orientation of the object 1.
As an example, the robot 10 is a six-axis vertical articulated robot. Note that various types of robots may be used as the robot 10 depending on the work target, such as a horizontal articulated robot, a parallel link type robot, or a dual-arm robot. The robot 10 can perform the desired work using an end effector attached to the wrist. The end effector is an external device that can be replaced depending on the application, such as a hand, a welding gun, or a tool. Figure 1 shows an example in which a hand 33 is used as an end effector.
The visual sensor 70 may be a camera that captures two-dimensional images such as grayscale images or color images, or a stereo camera or three-dimensional sensor that can obtain distance images or three-dimensional point clouds. In this embodiment, it is assumed that the visual sensor 70 is a camera that captures two-dimensional images. The robot control device 50 holds model data of the object and can execute a detection process that identifies the position and orientation of the object by matching the image of the object in the captured image with the model data (pattern matching). In this embodiment, it is assumed that the visual sensor 70 has been calibrated, and that the robot control device 50 holds calibration data that defines the relative positional relationship between the visual sensor 70 and the robot 10. This makes it possible to convert a position on an image captured by the visual sensor 70 into a position on a coordinate system (such as a robot coordinate system) fixed to the working space.
The robot control device 50 controls the operation of the robot 10 according to an operation program or commands from the teaching device 40. The robot control device 50 may have a hardware configuration as a general computer having a processor, memory (ROM, RAM, non-volatile memory, etc.), a storage device 152 (see FIG. 2), an operation unit, an input/output interface, a network interface, etc.
The teaching device 40 is used as an operation terminal for teaching the robot 10, performing various settings, displaying information, etc. A teaching operation panel, a tablet terminal, etc. may be used as the teaching device 40. The teaching device 40 may have a hardware configuration as a general computer having a processor, memory (ROM, RAM, non-volatile memory, etc.), a storage device, an operation unit, a display unit 41 (see Figure 2), an input/output interface, a network interface, etc. The display unit 41 is equipped with, for example, a liquid crystal display as a display device.
FIG. 2 is a functional block diagram of the robot control device 50. As shown in FIG. 2, the robot control device 50 has a function as an object detection device 60 in addition to a function for controlling the robot 10.
The robot control device 50 includes an operation control unit 151 and a storage device 152. The storage device 152 is, for example, a storage device formed of a non-volatile memory or a hard disk device. The storage device 152 stores a robot program for controlling the robot 10, a program (vision program) for performing image processing such as workpiece detection based on images captured by the visual sensor 70, calibration data, various setting information, etc.
The motion control unit 151 controls the motion of the robot according to the robot program or according to commands from the teaching device 40. The robot control device 50 is equipped with a servo control unit (not shown) that performs servo control on the servo motors of each axis according to commands for each axis generated by the motion control unit 151.
The object detection device 60 has a function of detecting an object from an image captured by the visual sensor 70. In the detection process for detecting the position and orientation of an object, generally, the image features (edge points, etc.) of the object in the input image are extracted by applying feature processing such as a filter to the input image, and the position and orientation of the object in the input image are identified by comparing the extracted image features with pre-prepared model data (image features) of the object. In a system for handling objects such as that shown in FIG. 1, a filter for extracting edge features of the object may be used as a feature extraction process. However, even for the same edge feature extraction process, there are multiple types of filters, such as Sobel filters and Laplacian filters. The object detection device 60 according to this embodiment provides a function for automatically determining a feature extraction process suitable for stably detecting a certain object.
As shown in FIG. 2, the object detection device 60 includes a visual sensor control unit 161, an image acquisition unit 162, a detection unit 163, a feature extraction unit 164, a model image reception unit 165, a matching area reception unit 166, a model feature storage unit 167, a model conversion unit 168, a stability calculation unit (first characteristic calculation unit) 169, a sensitivity calculation unit (second characteristic calculation unit) 170, an evaluation index calculation unit 171, and a display control unit 172. These functional blocks may be realized by the processor of the robot control device 50 executing a program. In this case, the components of the object detection device 60 in FIG. 2 correspond to the processor of the robot control device 50.
The visual sensor control unit 161 controls the operation of the visual sensor 70. For example, the visual sensor control unit 161 can control the visual sensor 70 according to instructions for the visual sensor 70 in an operation program. The image acquisition unit 162 has a function of acquiring image information obtained by the visual sensor 70 capturing an image within its field of view. In this embodiment, the image acquisition unit 162 acquires a two-dimensional image from the visual sensor 70.
The detection unit 163 can perform a detection process that identifies the position and orientation of an object by comparing image features (hereinafter also referred to as second image features) extracted by the feature extraction unit 164 from an image of an object whose position and orientation are unknown (hereinafter, this image is also referred to as the second image) with model features of the object prepared in advance. The model features are, for example, image features extracted by the feature extraction unit 164 from an image captured of the object whose position and orientation are known.
The feature extraction unit 164 has multiple types of feature extraction processes that extract image features from an image. The feature extraction processes include, for example, a plurality of filters that extract edge features. The multiple types of filters include, for example, a Sobel filter and a Laplacian filter. Both the Sobel filter and the Laplacian filter are filters for extracting edge features of an object, but they have different properties. The Sobel filter determines whether or not a pixel is an edge point based on the edge strength represented by the first derivative of the image in the x direction (horizontal) and the first derivative in the y direction (vertical). The Laplacian filter, on the other hand, determines whether or not a pixel is an edge point based on the second derivative of the image. These differences in filtering behavior lead to differences in the edge features obtained as the filtering result.
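For illustration only (this is not part of the disclosed device), the difference between the two filters can be reproduced with OpenCV roughly as in the following sketch; the kernel sizes, threshold values, and file name are assumptions.

```python
import cv2
import numpy as np

def sobel_edge_points(gray, thresh=100.0):
    """Edge points from the magnitude of the first derivatives (Sobel)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # first derivative in x
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # first derivative in y
    ys, xs = np.nonzero(np.hypot(gx, gy) > thresh)    # keep pixels with strong gradient
    return np.column_stack([xs, ys])

def laplacian_edge_points(gray, thresh=30.0):
    """Edge points where the second derivative (Laplacian) response is strong."""
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    ys, xs = np.nonzero(np.abs(lap) > thresh)
    return np.column_stack([xs, ys])

if __name__ == "__main__":
    gray = cv2.imread("model_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical model image
    print("Sobel edge points:", len(sobel_edge_points(gray)))
    print("Laplacian edge points:", len(laplacian_edge_points(gray)))
```

Running both on the same model image generally yields different sets of edge points, which is the difference illustrated in FIG. 3A and FIG. 3B.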
Here, examples of the processing results when the Sobel filter and the Laplacian filter are applied to the same model image of an object are shown in FIG. 3A (image GI1) and FIG. 3B (image GI2). The model image used here is image IG11 shown on the left side of FIG. 5; in image IG11, however, the details of the cast surface of the object 90 are omitted. In FIG. 3A (image GI1) and FIG. 3B (image GI2), the edge points extracted by each filter are represented by black dots. As can be seen from FIG. 3A (image GI1) and FIG. 3B (image GI2), the edge features (edge points) extracted by the two filters differ. In the processing result of the Laplacian filter in FIG. 3B (image GI2), edge points are output well along the arc-shaped step portion 91 of the object 90. In this case, when the position and orientation of the object are identified by matching with the arc-shaped step portion 91 treated as a characteristic part of the object, the Laplacian filter can be said to be suitable.
In this way, the image features used in the detection process (matching) change depending on the feature extraction process applied. Furthermore, even when the same object is imaged, the way the object is captured changes depending on the object's properties and the imaging conditions (including lighting, the angle at which the object is captured, etc.). Therefore, it is necessary to consider that the output results of filters such as those shown in Figures 3A and 3B also change depending on the object's properties and imaging conditions. As described below, the object detection device 60 according to this embodiment is configured to evaluate the properties of the feature extraction process for a certain object in response to changes in imaging conditions, and to determine the appropriate feature extraction process to apply to that object.
The model image receiving unit 165 receives an image of an object whose position and orientation are known (hereinafter, this image is also referred to as the "first image") as a model image.
The matching area receiving unit 166 provides a function for receiving input specifying an area in the model image to be used for matching with the target object. The matching area may be a part of the model image, or may be the entirety of the model image. For example, the matching area receiving unit 166 receives a user operation for specifying an area in the model image to be used for matching with the target object. For example, the matching area receiving unit 166 may provide a graphical user interface that displays a captured image of the target object on the display unit 41 of the teaching device 40, and receives an operation for specifying a matching area on the captured image using a pointing device or the like.
The model feature storage unit 167 stores, as model features, the image features (hereinafter also referred to as "first image features") that are included in the matching region among the image features extracted from the model image by the feature extraction unit 164.
The model transformation unit 168 provides a function for applying one or more transformation processes to input image data. The transformation processes include one or more of a change in brightness and a projective transformation. Projective transformation includes rotation and scale transformation. The model transformation unit 168 can output a "transformed model image" obtained by applying a transformation process to a model image. The model transformation unit 168 can also output "transformed model features" obtained by applying a transformation process to model features.
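As a rough sketch of such transformation processes, assuming OpenCV and treating a rotation-plus-scale as the projective transformation case, the model image and point-form model features might be transformed as follows; the gain, bias, angle, and scale values are illustrative assumptions.

```python
import cv2
import numpy as np

def transform_model_image(gray, gain=1.2, bias=10.0, angle_deg=15.0, scale=0.9):
    """Brightness change followed by a rotation/scale about the image center."""
    bright = cv2.convertScaleAbs(gray, alpha=gain, beta=bias)           # brightness change
    h, w = gray.shape
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, scale)   # 2x3 rotation + scale
    return cv2.warpAffine(bright, m, (w, h)), m

def transform_model_points(points_xy, m):
    """Apply the same rotation/scale to model feature points given as an (N, 2) array of (x, y)."""
    pts = points_xy.reshape(-1, 1, 2).astype(np.float32)
    return cv2.transform(pts, m).reshape(-1, 2)
```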
The image features obtained by applying feature extraction processing to the transformed model image are referred to as "third image features."
The stability calculation unit 169 calculates a first characteristic relating to the properties of the feature extraction process based on the third image features, which the feature extraction unit 164 extracts from the transformed model image, and the transformed model features. The third image features are the image features obtained by applying the feature extraction process to the transformed model image, that is, to the model image to which a transformation by the model transformation unit (brightness conversion, projective transformation, etc.) has been applied and in which the appearance of the object has therefore changed. Accordingly, by comparing the transformed model features with the third image features, information representing the characteristics of the feature extraction process with respect to changes in the image can be derived. The stability calculation unit 169 calculates, as the first characteristic, an index representing the stability of the feature extraction process against changes in the image. Hereinafter, this index is also referred to simply as stability.
The sensitivity calculation unit 170 calculates a second characteristic related to the characteristics of the feature extraction process based on the third image feature and the transformation model feature extracted by the feature extraction unit 164 from the transformation model image. The sensitivity calculation unit 170 calculates, as the second characteristic, an index that indicates the sensitivity of the feature extraction process by the feature extraction unit to changes in the image. Hereinafter, this index will also be referred to simply as sensitivity.
The evaluation index calculation unit 171 can calculate an evaluation index for determining a feature extraction process based on one or more stabilities calculated based on the third image feature and the transformation model feature. The evaluation index calculation unit 171 can also calculate an evaluation index for determining a feature extraction process based on one or more stabilities and one or more sensitivities.
The general flow of processing in the object detection device 60 from inputting a model image to calculating an evaluation index for determining the feature extraction processing will be explained with reference to the data flow diagram in Figure 4.
(Step 1) The user inputs a model image 201 obtained by capturing an object whose position and orientation are known.
(Step 2) The user specifies an area on the model image 201 to be used for matching.
(Step 3) The object detection device 60 performs the following steps (Step 3-1) to (Step 3-4) for each of the multiple feature extraction processes 164A.
(Step 3-1) Apply the feature extraction process to the model image 201.
(Step 3-2) Register the image features within the matching area of the model image 201 as the model features 202 of the currently selected feature extraction process.
(Step 3-3) The object detection device 60 performs the following steps (a) to (d) for each of the multiple transformation processes 168A.
(a) Apply the transformation process to the model image 201 to obtain a transformed model image 203.
(b) Apply the feature extraction process to the transformed model image 203 to obtain third image features 204.
(c) Apply the transformation process to the model features 202 to obtain transformed model features 205.
(d) Calculate the stability and the sensitivity (reference numeral 169A) from the image features extracted from the transformed model image 203 (the third image features 204) and the transformed model features 205, and store them as the stability and sensitivity for the currently selected transformation process.
(Step 3-4) Next, calculate an evaluation index for determining the feature extraction process (reference numeral 171A) from the multiple pairs of stability and sensitivity corresponding to the multiple transformation processes.
(Step 4) Based on the evaluation index obtained for each feature extraction process in Step 3, the object detection device 60 determines the feature extraction process to be applied to the detection process. For example, the evaluation index calculation unit 171 may determine the feature extraction process having the highest evaluation index as the feature extraction process to be used for the detection process.
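The loop structure of Steps 1 to 4 can be condensed into a short sketch. The helper callables (extract_features, transform_image, transform_features, stability, sensitivity, evaluate) are assumptions introduced for illustration; the disclosure does not prescribe their concrete form.

```python
def select_feature_extraction(model_image, in_matching_region, filters, transforms,
                              extract_features, transform_image, transform_features,
                              stability, sensitivity, evaluate):
    """Sketch of Steps 1 to 4: pick the feature extraction process with the best evaluation index."""
    scores = {}
    for name, filt in filters.items():                              # Step 3: per feature extraction process
        feats = extract_features(model_image, filt)                  # Step 3-1
        model_feats = [p for p in feats if in_matching_region(p)]    # Step 3-2: keep matching-area features
        stabilities, sensitivities = [], []
        for tr in transforms:                                        # Step 3-3: per transformation process
            t_img = transform_image(model_image, tr)                 # (a) transformed model image
            third = extract_features(t_img, filt)                    # (b) third image features
            t_model = transform_features(model_feats, tr)            # (c) transformed model features
            stabilities.append(stability(t_model, third))            # (d) stability and ...
            sensitivities.append(sensitivity(t_model, third))        #     ... sensitivity for this transform
        scores[name] = evaluate(stabilities, sensitivities)          # Step 3-4: evaluation index
    return max(scores, key=scores.get)                               # Step 4: highest index wins
```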
An example of the conversion process by the model conversion unit 168 is shown in FIG. 5. In FIG. 5, a converted image IG12 is generated by applying a projective transformation (including rotation and scale transformation) to a model image IG11 obtained by capturing an image of an object 90.
A specific example of the stability calculation by the stability calculation unit 169 will now be described. The stability calculation unit 169 calculates the stability by the following procedure; here, the image features are assumed to be edge points.
(Step k1) Determine the correspondence between the edge points included in the transformed model features and those included in the third image features extracted from the transformed model image.
(Step k2) Calculate the stability of the model features based on the proportion of edge points in the transformed model features whose difference from the corresponding edge point in the third image features is smaller than a specific value.
In Step k1 above, the correspondence between the edge points contained in the transformed model features and those in the third image features may be determined, for example, according to the following rule.
(Rule) Each edge point in the transformed model features and in the third image features is defined as a point given by two variables: its position and the direction of its brightness gradient. An edge point in the third image features whose distance from a given edge point in the transformed model features (the distance in the space defined by the two variables (position, brightness gradient)) is within a predetermined threshold is determined to be the edge point corresponding to that edge point of the transformed model features.
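A minimal sketch of this correspondence rule, assuming each edge point is an (x, y, gradient direction) triple and that the combined distance is a weighted sum of positional distance and wrapped angular difference, is shown below; the metric weighting and threshold value are assumptions, since the text only specifies a distance in the (position, brightness gradient) space.

```python
import numpy as np

def match_edge_points(model_pts, image_pts, dist_thresh=3.0, angle_weight=2.0):
    """Return (model_index, image_index) pairs whose combined distance is within the threshold.
    Each point is (x, y, gradient_direction_in_radians)."""
    matches = []
    for i, (mx, my, mdir) in enumerate(model_pts):
        best_j, best_d = -1, float("inf")
        for j, (ix, iy, idir) in enumerate(image_pts):
            d_ang = abs((mdir - idir + np.pi) % (2 * np.pi) - np.pi)   # wrapped angle difference
            d = np.hypot(mx - ix, my - iy) + angle_weight * d_ang      # combined distance
            if d < best_d:
                best_j, best_d = j, d
        if best_d <= dist_thresh:
            matches.append((i, best_j))
    return matches
```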
In Step k2 above, the stability calculation unit 169 calculates the stability, for example, as follows. Let Htotal be the total number of edge points appearing in the transformed model features, and let G1 be the total number of edge points appearing in the third image features whose distance from the corresponding point in the transformed model features is smaller than the specific value. In this case, the stability calculation unit 169 may calculate the stability as follows.
(Stability) = G1 / Htotal
From the above, the stability can be said to be the proportion of the transformed model features for which the corresponding image feature in the third image features is similar. In other words, the stability is an index indicating how many edge points are extracted at the intended positions (the positions of the corresponding points in the transformed model features) in the third image features. In this way, the stability calculation unit 169 provides, for the image features extracted by the feature extraction process under evaluation, an index representing the stability of those image features against changes (transformations) of the image.
A specific example of the sensitivity calculation by the sensitivity calculation unit 170 will now be described. The sensitivity calculation unit 170 calculates the sensitivity by the following procedure; here, the image features are assumed to be edge points.
(Step m1) Determine the correspondence between the edge points included in the transformed model features and those included in the third image features extracted from the transformed model image.
(Step m2) Calculate the sensitivity of the model features based on the proportion of edge points in the third image features whose difference from the corresponding edge point in the transformed model features is smaller than a specific value.
The determination of corresponding points in step m1 above is the same as in step k1 above.
In Step m2, the sensitivity calculation unit 170 calculates the sensitivity, for example, as follows. Let Gtotal be the total number of edge points appearing in the third image features, and let G1 be the total number of those edge points whose distance from the corresponding point in the transformed model features is smaller than the specific value. In this case, the sensitivity calculation unit 170 may calculate the sensitivity as follows.
(Sensitivity) = G1 / Gtotal
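Given a correspondence such as the one sketched earlier, the stability and sensitivity for one transformation follow directly from the counts Htotal, Gtotal, and G1; the sketch below assumes the matches are already available as index pairs.

```python
def stability_and_sensitivity(transformed_model_pts, third_image_pts, matches):
    """matches: (model_index, image_index) pairs whose distance is below the specific value.
    Stability = G1 / Htotal, Sensitivity = G1 / Gtotal."""
    g1 = len(matches)
    h_total = len(transformed_model_pts)   # edge points in the transformed model features
    g_total = len(third_image_pts)         # edge points in the third image features
    stability = g1 / h_total if h_total else 0.0
    sensitivity = g1 / g_total if g_total else 0.0
    return stability, sensitivity
```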
From the above, the sensitivity can be said to be an index of how few features the third image features contain other than those similar to the transformed model features. In other words, the sensitivity is an index indicating the degree to which the feature extraction process under evaluation responds only to the model features even when the image changes (is transformed).
For example, as the number of edge points (G total ) appearing in the third image feature increases, the likelihood that edge points will appear at the same positions of the edge points of the transformation model feature increases, tending to increase stability, but tending to decrease sensitivity.
FIG. 6A-FIG. 6B are flowcharts showing the process for determining the feature extraction process (feature extraction process determination process) executed in the object detection device 60. This process is executed under the control of the processor.
First, the model image receiving unit 165 receives an input of a model image acquired by the visual sensor 70 capturing an image of an object whose position and orientation are known (step S1). Next, the matching area receiving unit 166 receives an operation by the user to specify a matching area on the model image that corresponds to an area from which image features are to be extracted (step S2).
The series of processes from steps S4 to S16 are repeatedly executed for each feature extraction process (loop 1 of step S3). The model feature storage unit 167 applies the feature extraction process to the model image (step S4). Then, the model feature storage unit 167 saves the image features in the matching area in the model image as model features for the feature extraction process selected in loop 1 (step S5).
The series of processes from steps S7 to S15 are repeated a predetermined number of times while changing the brightness conversion parameters (loop 2 of step S6). Also, the series of processes from steps S8 to S15 are repeated a predetermined number of times while switching the projection transformation parameters (loop 3 of step S7).
In step S8, the model transformation unit 168 applies a process to change the brightness of the model image. Furthermore, the model transformation unit 168 applies a projective transformation to the model image whose brightness has been changed (step S9). The projective transformation includes, for example, rotation and scale transformation of the model image.
Next, the feature extraction unit 164 applies the selected feature extraction process to the model image (transformed model image) to which the brightness change and projective transformation have been applied (step S10).
As shown in FIG. 6B, the model transformation unit 168 then applies projective transformation to the model features resulting from the selected feature extraction process (step S11). Next, the stability calculation unit 169 calculates the above-mentioned stability from the image features (third image features) extracted from the transformed model image and the model features (transformed model features) to which projective transformation has been applied (step S12). The stability calculation unit 169 then stores the calculated stability as one of the stabilities of the selected feature extraction process.
Next, in step S14, the sensitivity calculation unit 170 calculates the above-mentioned sensitivity from the image feature (third image feature) extracted from the transformation model image and the transformation model feature. Then, the sensitivity calculation unit 170 stores the calculated sensitivity as one of the sensitivities for the selected feature extraction process (step S15).
The loop processing of loops 2 and 3 is executed for multiple brightness change parameters and projective transformation parameters, so the number of stability values generated, and likewise the number of sensitivity values, equals the number of brightness change parameters multiplied by the number of projective transformation parameters. When the loop processing of loop 2 ends, the process proceeds to step S16. In step S16, the evaluation index calculation unit 171 calculates an evaluation index for the currently selected feature extraction process from the set of stability values and the set of sensitivity values generated for that feature extraction process.
For example, suppose that n stability values ST(i) and n sensitivity values SE(i) (i = 1 to n) have been calculated for the currently selected feature extraction process. In this case, the evaluation index calculation unit 171 may use a statistic of the stability values ST(1) to ST(n) and the sensitivity values SE(1) to SE(n), such as their average or a weighted harmonic mean, as the evaluation index of the currently selected feature extraction process.
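As one possible realization of such a statistic, the sketch below combines the mean stability and mean sensitivity by a weighted harmonic mean; the weights, and the choice of the mean over other statistics, are assumptions.

```python
def evaluation_index(stabilities, sensitivities, w_st=1.0, w_se=1.0):
    """Weighted harmonic mean of the mean stability and the mean sensitivity."""
    st = sum(stabilities) / len(stabilities)
    se = sum(sensitivities) / len(sensitivities)
    if st == 0.0 or se == 0.0:
        return 0.0                      # the harmonic mean is zero if either term is zero
    return (w_st + w_se) / (w_st / st + w_se / se)
```

A function of this shape could serve as the evaluate argument in the loop sketch given earlier.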
When the loop processing by loop 1 is completed, the evaluation index calculation unit 171 determines, for example, the feature extraction process with the highest evaluation index among the evaluation indexes calculated for each of the multiple feature extraction processes as the feature extraction process to be used for the detection process (step S17).
In the above-described embodiment, by applying multiple types of transformations (brightness transformation, projective transformation) to calculate multiple stability and sensitivity values and then calculating the evaluation index based on these, it becomes possible to select an appropriate feature extraction process and apply it to detection even in situations where the appearance and quality of the target object change due to various environmental factors.
The feature extraction process determination process is configured to calculate an evaluation index for determining the feature extraction process based on the stability of image features against changes in the image. By applying the feature extraction process determined in this way to the detection process, stable detection becomes possible. As the detection process becomes more stable, it also becomes easier to adjust the detection parameters (detection score, etc.).
The feature extraction process determination process is configured to calculate an evaluation index for determining the feature extraction process based on the sensitivity of image features to changes in the image. By applying the feature extraction process determined in this way to the detection process, it becomes possible to stabilize detection. In this case, it is also possible to shorten the detection process time.
The feature extraction process determination process is configured to be able to calculate an evaluation index for determining the feature extraction process using a single image (model image). This configuration has the advantage of reducing the user's workload required for the feature extraction process determination process.
In the embodiment described above, the evaluation index calculation unit 171 calculates the evaluation index for determining the feature extraction process based on both the stability and the sensitivity, but the evaluation index calculation unit 171 may calculate the evaluation index from the stability alone. In this case as well, a feature extraction process that performs detection stably against changes in the image can be determined. Alternatively, the evaluation index calculation unit 171 may calculate the evaluation index from the sensitivity alone. In this case, too, it is possible to determine a feature extraction process that can provide stable detection.
In the above embodiment, an example was shown in which, for a given feature extraction process, when one or more pairs of stability and sensitivity have been obtained through one or more transformation processes, a statistic of them such as a weighted harmonic mean is used as the evaluation index of that feature extraction process. As another example of operation, the evaluation index calculation unit 171 may calculate an index for each of the one or more transformation processes (that is, for each pair of the one or more third image features and the one or more transformed model features) and determine the minimum of the calculated indices as the evaluation index of the feature extraction process. By calculating the evaluation index in this way for each feature extraction process and determining the feature extraction process to be used for detection, it is possible to determine a feature extraction process that performs detection stably even in the case where the appearance of the object in the image is at its worst.
The display control unit 172 may operate to display the evaluation index for each feature extraction process calculated by the evaluation index calculation unit 171. FIG. 7 shows an example in which a list 250 showing the evaluation indexes of each feature extraction process (filter) is displayed on the display unit 41 by the display control unit 172. By displaying the evaluation indexes in this way, the user can recognize which feature extraction process is suitable. By referring to this list 250, the user can also select the feature extraction process to apply to detection. The list 250 may also function as a user interface that accepts a user operation to select a feature extraction process to be used for detection from multiple feature extraction processes.
In the above embodiment, a filter that performs edge detection has been given as an example of feature extraction processing, but the above-mentioned content for determining a feature extraction processing can be applied to various types of feature extraction processing. For example, take a corner detector that detects corners within an image. Examples of corner detectors include the Harris corner detector, the Shi-Tomasi corner detector, and the FAST (Features from Accelerated Segment Test) corner detector.
The Harris corner detector and Shi-Tomasi corner detector detect points that have a large amount of change (E in the following equation (1)) when the image position is shifted as corners. In the following equation (1), I(x,y) is the brightness of the pixel in the image, I(x+u,y+v) is the brightness of the pixel in the shifted image, and w(x,y) is the window function.
The Harris corner detector and the Shi-Tomasi corner detector both determine whether a point is a corner from the eigenvalues of the matrix M in the following equations (2) and (3), which are obtained by simplifying the above equation, but their determination methods differ.
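The equation images themselves are not reproduced in this text. The standard Harris/Shi-Tomasi derivation that equations (1) to (3) presumably correspond to is shown below; the exact way the original splits equations (2) and (3) is an assumption.

```latex
% Assumed standard formulation (amsmath); the split between (2) and (3) is illustrative.
\begin{align*}
E(u,v) &= \sum_{x,y} w(x,y)\,\bigl[I(x+u,\,y+v) - I(x,y)\bigr]^{2} \tag{1} \\
E(u,v) &\approx \begin{pmatrix} u & v \end{pmatrix} M \begin{pmatrix} u \\ v \end{pmatrix} \tag{2} \\
M &= \sum_{x,y} w(x,y) \begin{pmatrix} I_x^{2} & I_x I_y \\ I_x I_y & I_y^{2} \end{pmatrix} \tag{3}
\end{align*}
```

In the usual formulations, Harris scores a corner by det(M) − k·(trace M)², whereas Shi-Tomasi uses the smaller eigenvalue of M directly, which is consistent with the statement above that the two detectors differ in their determination method.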
Figure 8 shows an example of the results of processing an image of an object using the Shi-Tomasi corner detector. Image IG20 shown in Figure 8 shows the results of applying the Shi-Tomasi corner detector to an image containing an object 190 (part of the object is marked with the reference symbol 190). In image IG20, the detected corners CP are represented by black dots (only a portion is marked with the reference symbol CP).
The FAST corner detector classifies whether a candidate point is actually a corner based on the differences between its brightness and the brightness values of the pixels on a surrounding circle of 16 pixels; the point is regarded as a corner when the differences are consecutively larger or smaller than a threshold. The FAST corner detector is said to be capable of operating at high speed.
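For reference, all three detectors are available in OpenCV; a minimal sketch (the parameter values are illustrative assumptions) is:

```python
import cv2
import numpy as np

def detect_corners(gray):
    """Run three common corner detectors on a grayscale image and return point counts."""
    # Harris: corner response derived from the eigenvalues of M (via det and trace)
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    harris_pts = np.argwhere(harris > 0.01 * harris.max())

    # Shi-Tomasi: keeps points whose smaller eigenvalue of M is large
    shi_tomasi = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=5)
    shi_tomasi_pts = [] if shi_tomasi is None else shi_tomasi.reshape(-1, 2)

    # FAST: segment test on the brightness of a 16-pixel circle around each candidate
    fast = cv2.FastFeatureDetector_create(threshold=20)
    fast_pts = fast.detect(gray, None)

    return len(harris_pts), len(shi_tomasi_pts), len(fast_pts)
```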
Due to the differences in the detection methods used by each corner detector, it is believed that, as with the edge detector, the extraction of image features by the corner detector will also be affected by changes in how the object appears. Therefore, for corner detectors as well, it is possible to calculate the stability and sensitivity described above, obtain an evaluation index, and determine a corner detector that will provide stable detection. Therefore, the various processes for determining the feature extraction process in the above-mentioned embodiments can also be applied to corner detectors.
As described above, according to this embodiment, it is possible to determine a feature extraction process that can perform stable detection even in situations where the imaging conditions change.
In the above embodiment, a configuration example is shown in which the object detection device is realized as a function of the robot control device, but the object detection device can also be realized as an independent device separate from the robot control device. For example, the object detection device can be configured as an information processing device such as a personal computer connected to the robot control device.
The functional blocks of the robot control device shown in Figure 2 may be realized by the processor of the robot control device executing various software stored in a storage device, or may be realized by a hardware-based configuration such as an ASIC (Application Specific Integrated Circuit).
A program that executes the various processes of the above-described embodiments, such as the feature extraction process determination process, can be recorded on various computer-readable recording media (for example, semiconductor memories such as ROM, EEPROM, and flash memory, magnetic recording media, and optical disks such as CD-ROM and DVD-ROM).
Although the present disclosure has been described in detail, the present disclosure is not limited to the individual embodiments described above. Various additions, substitutions, modifications, partial deletions, etc. are possible to these embodiments without departing from the gist of the present disclosure, or without departing from the spirit of the present disclosure derived from the contents described in the claims and their equivalents. These embodiments can also be implemented in combination. For example, in the above-mentioned embodiments, the order of each operation and the order of each process are shown as examples, and are not limited to these. The same applies when numerical values or formulas are used to explain the above-mentioned embodiments.
The following additional notes are provided regarding the above embodiment and modifications.
(Appendix 1)
A feature extraction unit (164) for extracting image features from an image;
a model image receiving unit (165) that receives, as a model image, a first image obtained by capturing an object whose position and orientation are known;
a model feature storage unit (167) that stores the first image feature extracted from the model image by the feature extraction unit as a model feature;
a detection unit (163) that identifies a position and orientation of the object by comparing a second image feature extracted by the feature extraction unit from a second image capturing the object, the position and orientation of which are unknown, with the model feature,
The object detection device (60) further comprises:
a model transformation unit (168) that applies one or more transformation processes to the model image and the model feature to generate one or more transformed model images and one or more transformed model features, respectively;
a first characteristic calculation unit (169) that calculates a first characteristic related to a characteristic of the feature extraction process by the feature extraction unit based on a third image characteristic extracted from the transformation model image by the feature extraction unit and the transformation model characteristic;
an evaluation index calculation unit (171) that calculates an evaluation index of the feature extraction process based on one or more first characteristics calculated by the first characteristic calculation unit based on the one or more third image features and the one or more transformation model features,
The feature extraction unit (164) has a plurality of feature extraction processes,
the first characteristic calculation unit (169) calculates the one or more first characteristics for each of the plurality of feature extraction processes based on the third image feature and the transformation model feature;
The object detection device (60), wherein the evaluation index calculation unit (171) calculates the evaluation index for determining which feature extraction process to apply to the detection process of the detection unit for each of the multiple feature extraction processes.
(Appendix 2)
A matching area receiving unit (166) that receives a part or the whole of the object shown in the model image as a matching area,
The object detection device (60) described in Appendix 1, wherein the model feature storage unit (167) stores, as the model feature, the image feature included in the matching area among the first image features.
(Appendix 3)
The object detection device (60) described in Appendix 1 or 2, wherein the evaluation index calculation unit (171) determines the feature extraction process to be used by the feature extraction unit in the detection process by the detection unit (163) based on the evaluation index calculated for each of the multiple feature extraction processes.
(Appendix 4)
The object detection device (60) described in Appendix 3, wherein the evaluation index calculation unit (171) determines the feature extraction process having the highest evaluation index among the evaluation indexes calculated for each of the multiple feature extraction processes as the feature extraction process to be used by the feature extraction unit (164) in the detection process by the detection unit (163).
(Appendix 5)
The object detection device (60) according to any one of appendixes 1 to 4, wherein the one or more conversion processes by the model conversion unit (168) include a process of converting brightness of the model image.
(Appendix 6)
The object detection device (60) according to any one of appendices 1 to 5, wherein the one or more transformation processes by the model transformation unit (168) include performing a projective transformation on each of the model image and the model feature.
(Appendix 7)
An object detection device (60) according to any one of appendixes 1 to 6, wherein the first characteristic calculated by the first characteristic calculation unit (169) represents stability of the feature extraction process against changes in an image.
(Appendix 8)
The first characteristic calculation unit (169)
determining a correspondence relationship between the transformation model feature and the image feature included in each of the third image features;
An object detection device (60) as described in Appendix 7, which calculates the first characteristic based on the proportion of image features included in the transformation model features whose difference from corresponding image features included in the third image features is smaller than a specific value.
(Appendix 9)
a second characteristic calculation unit (170) that calculates a second characteristic representing sensitivity of the feature extraction process by the feature extraction unit (164) to a change in an image based on the third image characteristic and the transformation model characteristic,
The object detection device (60) described in any one of Appendices 1 to 8, wherein the evaluation index calculation unit (171) calculates the evaluation index for each of the multiple feature extraction processes based on the one or more first characteristics and one or more second characteristics calculated by the second characteristic calculation unit (170) based on the one or more third image features and the one or more transformation model features.
(Appendix 10)
The second characteristic calculation unit (170)
determining a correspondence relationship between the transformation model feature and the image feature included in each of the third image features;
An object detection device (60) as described in Appendix 9, which calculates the second characteristic based on the proportion of image features among the image features included in the third image features whose difference from the corresponding image features included in the transformation model features is smaller than a specific value.
(Appendix 11)
The object detection device (60) described in Appendix 9 or 10, wherein the evaluation index calculation unit (171) calculates a weighted harmonic mean of the one or more first characteristics and the one or more second characteristics as the evaluation index for each of the multiple feature extraction processes.
(Appendix 12)
The evaluation index calculation unit (171) calculates, for each of the plurality of feature extraction processes,
calculating an index for determining the feature extraction process for each pair of the one or more third image features and the one or more transformation model features;
An object detection device (60) according to any one of appendixes 1 to 11, wherein the minimum value of the indices calculated for each of the sets is used as an evaluation index for the feature extraction process.
(Appendix 13)
The object detection device (60) according to any one of appendix 1 to 12, further comprising a display control unit (172) that displays the evaluation index calculated for each of the multiple feature extraction processes on a display device.
(付記1)
画像から画像特徴を抽出する特徴抽出部(164)と、
位置及び姿勢が既知である対象物を撮像した第一の画像をモデル画像として受け付けるモデル画像受付部(165)と、
前記特徴抽出部が前記モデル画像から抽出した第一の画像特徴をモデル特徴として記憶するモデル特徴記憶部(167)と、
位置及び姿勢が未知である前記対象物を撮像した第二の画像から前記特徴抽出部が抽出した第二の画像特徴と前記モデル特徴とを照合することで前記対象物の位置及び姿勢を特定する検出部(163)と、を備える物体検出装置(60)であって、
前記物体検出装置(60)は、更に、
前記モデル画像と前記モデル特徴に1以上の変換処理を加えて1以上の変換モデル画像と1以上の変換モデル特徴とをそれぞれ生成するモデル変換部(168)と、
前記特徴抽出部が前記変換モデル画像から抽出した第三の画像特徴と前記変換モデル特徴とに基づいて、前記特徴抽出部による特徴抽出処理の特性に関する第一特性を算出する第一特性計算部(169)と、
前記1以上の前記第三の画像特徴と前記1以上の変換モデル特徴と基づいて前記第一特性計算部が算出した1以上の第一特性に基づいて前記特徴抽出処理の評価指標を計算する評価指標計算部(171)と、を備え、
前記特徴抽出部(164)は、複数の特徴抽出処理を有し、
前記第一特性計算部(169)は、前記複数の特徴抽出処理の各々について、前記第三の画像特徴と前記変換モデル特徴とに基づき前記1以上の第一特性を算出し、
前記評価指標計算部(171)は、前記複数の特徴抽出処理の各々について、前記検出部の検出処理に適用する特徴抽出処理を決定するための前記評価指標を算出する、物体検出装置(60)。
(付記2)
前記モデル画像に写った前記対象物の一部又は全部を照合領域として受け付ける照合領域受付部(166)を更に備え、
前記モデル特徴記憶部(167)は、前記第一の画像特徴のうち前記照合領域に含まれる画像特徴を前記モデル特徴として記憶する、付記1に記載の物体検出装置(60)。
(付記3)
前記評価指標計算部(171)は、前記複数の特徴抽出処理の各々について算出された前記評価指標に基づいて、前記検出部(163)による検出処理において前記特徴抽出部が用いる特徴抽出処理を決定する、付記1又は2に記載の物体検出装置(60)。
(付記4)
前記評価指標計算部(171)は、前記複数の特徴抽出処理の各々について算出された前記評価指標のうち最も高い評価指標を有する特徴抽出処理を、前記検出部(163)による検出処理において前記特徴抽出部(164)が用いる特徴抽出処理として決定する、付記3に記載の物体検出装置(60)。
(付記5)
前記モデル変換部(168)による前記1以上の変換処理は、前記モデル画像の明るさを変換する処理を含む、付記1から4のいずれか一項に記載の物体検出装置(60)。
(付記6)
前記モデル変換部(168)による前記1以上の変換処理は、前記モデル画像と前記モデル特徴の各々に対して射影変換を行うことを含む、付記1から5のいずれか一項に記載の物体検出装置(60)。
(付記7)
前記第一特性計算部(169)が算出する前記第一特性は、画像の変化に対する前記特徴抽出処理の安定性を表す、付記1から6のいずれか一項に記載の物体検出装置(60)。
(付記8)
前記第一特性計算部(169)は、
前記変換モデル特徴と前記第三の画像特徴のそれぞれに含まれる画像特徴の対応関係を決定し、
前記変換モデル特徴に含まれる画像特徴のうち前記第三の画像特徴に含まれる対応する画像特徴との差が特定の値よりも小さい画像特徴の割合に基づいて前記第一特性を算出する、付記7に記載の物体検出装置(60)。
(付記9)
前記第三の画像特徴と前記変換モデル特徴とに基づいて、前記特徴抽出部(164)による特徴抽出処理の、画像の変化に対する敏感性を表す第二特性を算出する第二特性計算部(170)を更に備え、
前記評価指標計算部(171)は、前記1以上の第一特性と、前記1以上の前記第三の画像特徴と前記1以上の変換モデル特徴と基づいて前記第二特性計算部(170)が算出した1以上の第二特性とに基づいて、前記複数の特徴抽出処理の各々について、前記評価指標を算出する、付記1から8のいずれか一項に記載の物体検出装置(60)。
(付記10)
前記第二特性計算部(170)は、
前記変換モデル特徴と前記第三の画像特徴のそれぞれに含まれる画像特徴の対応関係を決定し、
前記第三の画像特徴に含まれる画像特徴のうち前記変換モデル特徴に含まれる対応する画像特徴との差が特定の値よりも小さい画像特徴の割合に基づいて前記第二特性を算出する付記9に記載の物体検出装置(60)。
(付記11)
前記評価指標計算部(171)は、前記複数の特徴抽出処理の各々について、前記1以上の第一特性と前記1以上の第二特性の重み付き調和平均を前記評価指標として算出する付記9又は10に記載の物体検出装置(60)。
(付記12)
前記評価指標計算部(171)は、前記複数の特徴抽出処理の各々に関し、
前記1以上の前記第三の画像特徴と前記1以上の変換モデル特徴のそれぞれの組で前記特徴抽出処理を決定するための指標を算出し、
前記それぞれの組で算出された前記指標のうちの最小値を前記特徴抽出処理の評価指標とする、付記1から11のいずれか一項に記載の物体検出装置(60)。
(付記13)
前記複数の特徴抽出処理のそれぞれについて算出された前記評価指標を表示装置に表示する表示制御部(172)を更に備える、付記1から12のいずれか一項に記載の物体検出装置(60)。 The following additional notes are provided regarding the above embodiment and modifications.
(Appendix 1)
A feature extraction unit (164) for extracting image features from an image;
a model image receiving unit (165) that receives, as a model image, a first image obtained by capturing an object whose position and orientation are known;
a model feature storage unit (167) that stores the first image feature extracted from the model image by the feature extraction unit as a model feature;
a detection unit (163) that identifies a position and orientation of the object by comparing a second image feature extracted by the feature extraction unit from a second image capturing the object, the position and orientation of which are unknown, with the model feature,
The object detection device (60) further comprises:
a model transformation unit (168) that applies one or more transformation processes to the model image and the model feature to generate one or more transformed model images and one or more transformed model features, respectively;
a first characteristic calculation unit (169) that calculates a first characteristic related to a characteristic of the feature extraction process by the feature extraction unit based on a third image characteristic extracted from the transformation model image by the feature extraction unit and the transformation model characteristic;
an evaluation index calculation unit (171) that calculates an evaluation index of the feature extraction process based on one or more first characteristics calculated by the first characteristic calculation unit based on the one or more third image features and the one or more transformation model features,
The feature extraction unit (164) has a plurality of feature extraction processes,
the first characteristic calculation unit (169) calculates the one or more first characteristics for each of the plurality of feature extraction processes based on the third image feature and the transformation model feature;
The object detection device (60), wherein the evaluation index calculation unit (171) calculates the evaluation index for determining which feature extraction process to apply to the detection process of the detection unit for each of the multiple feature extraction processes.
(Appendix 2)
A matching area receiving unit (166) that receives a part or the whole of the object shown in the model image as a matching area,
The object detection device (60) described in Appendix 1, wherein the model feature storage unit (167) stores, as the model feature, the image feature included in the matching area among the first image features.
(Appendix 3)
The object detection device (60) described in
(Appendix 4)
The object detection device (60) described in Appendix 3, wherein the evaluation index calculation unit (171) determines the feature extraction process having the highest evaluation index among the evaluation indexes calculated for each of the multiple feature extraction processes as the feature extraction process to be used by the feature extraction unit (164) in the detection process by the detection unit (163).
(Appendix 5)
The object detection device (60) according to any one of appendixes 1 to 4, wherein the one or more conversion processes by the model conversion unit (168) include a process of converting brightness of the model image.
(Appendix 6)
The object detection device (60) according to any one of Appendices 1 to 5, wherein the one or more transformation processes by the model transformation unit (168) include performing a projective transformation on each of the model image and the model feature.
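A sketch of the projective transformation of Appendix 6, applied consistently to the model image and to the model feature points. The use of OpenCV to warp the image and the particular homography are assumptions made for the example:

```python
import numpy as np
import cv2  # assumed available; used only to warp the image

H = np.array([[1.0,  0.05,  3.0],
              [0.02, 0.95, -2.0],
              [1e-4, 0.0,   1.0]])  # example homography

def warp_model(model_image, model_features, H):
    h, w = model_image.shape[:2]
    warped_image = cv2.warpPerspective(model_image, H, (w, h))           # transformed model image
    pts = np.hstack([model_features, np.ones((len(model_features), 1))])  # homogeneous coordinates
    proj = pts @ H.T                                                      # same homography on the points
    warped_features = proj[:, :2] / proj[:, 2:3]                          # transformed model features
    return warped_image, warped_features
```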
(Appendix 7)
The object detection device (60) according to any one of Appendices 1 to 6, wherein the first characteristic calculated by the first characteristic calculation unit (169) represents the stability of the feature extraction process against changes in an image.
(Appendix 8)
The object detection device (60) described in Appendix 7, wherein the first characteristic calculation unit (169)
determines a correspondence relationship between the image features included in the transformed model feature and the image features included in the third image feature, and
calculates the first characteristic based on the proportion of image features, among the image features included in the transformed model feature, whose difference from the corresponding image features included in the third image feature is smaller than a specific value.
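A minimal sketch of the stability calculation of Appendix 8, assuming point features, nearest-neighbour correspondence, and an arbitrary 2-pixel tolerance:

```python
import numpy as np

def first_characteristic(transformed_model_features, third_image_features, tol=2.0):
    """Fraction of transformed model features that reappear, within `tol`, among the
    features extracted from the transformed model image (the third image features)."""
    if len(transformed_model_features) == 0 or len(third_image_features) == 0:
        return 0.0
    d = np.linalg.norm(transformed_model_features[:, None, :]
                       - third_image_features[None, :, :], axis=2)
    nearest = d.min(axis=1)          # distance to the closest extracted feature
    return float(np.mean(nearest < tol))
```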
(Appendix 9)
The object detection device (60) described in any one of Appendices 1 to 8, further comprising a second characteristic calculation unit (170) that calculates, based on the third image feature and the transformed model feature, a second characteristic representing the sensitivity of the feature extraction process by the feature extraction unit (164) to a change in an image,
wherein the evaluation index calculation unit (171) calculates the evaluation index for each of the multiple feature extraction processes based on the one or more first characteristics and one or more second characteristics calculated by the second characteristic calculation unit (170) from the one or more third image features and the one or more transformed model features.
(Appendix 10)
The object detection device (60) described in Appendix 9, wherein the second characteristic calculation unit (170)
determines a correspondence relationship between the image features included in the transformed model feature and the image features included in the third image feature, and
calculates the second characteristic based on the proportion of image features, among the image features included in the third image feature, whose difference from the corresponding image features included in the transformed model feature is smaller than a specific value.
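The mirror-image calculation for Appendix 10: the ratio is taken over the extracted (third) image features, so spurious features with no counterpart in the transformed model lower the value. Again only an illustrative sketch:

```python
import numpy as np

def second_characteristic(transformed_model_features, third_image_features, tol=2.0):
    """Fraction of third image features that lie within `tol` of some transformed model feature."""
    if len(transformed_model_features) == 0 or len(third_image_features) == 0:
        return 0.0
    d = np.linalg.norm(third_image_features[:, None, :]
                       - transformed_model_features[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.mean(nearest < tol))
```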
(Appendix 11)
The object detection device (60) described in Appendix 9 or 10, wherein the evaluation index calculation unit (171) calculates, for each of the multiple feature extraction processes, a weighted harmonic mean of the one or more first characteristics and the one or more second characteristics as the evaluation index.
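The weighted harmonic mean of Appendix 11 generalises the familiar F-measure, i.e. (w1 + w2) / (w1/C1 + w2/C2) for two characteristics C1 and C2; a sketch with the weights left as free parameters:

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean of the first and second characteristics, used as the evaluation index."""
    if any(v == 0 for v in values):
        return 0.0  # a zero characteristic dominates the harmonic mean
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# e.g. evaluation_index = weighted_harmonic_mean([stability, sensitivity], [1.0, 1.0])
```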
(Appendix 12)
The object detection device (60) described in any one of Appendices 1 to 11, wherein the evaluation index calculation unit (171), for each of the multiple feature extraction processes,
calculates an index for determining the feature extraction process for each pair of the one or more third image features and the one or more transformed model features, and
sets the minimum value of the indices calculated for the respective pairs as the evaluation index of the feature extraction process.
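A sketch of the worst-case aggregation in Appendix 12: one index is computed per (third image features, transformed model features) pair, i.e. per transformation applied to the model, and the minimum becomes the evaluation index, so the score reflects the least favourable transformation:

```python
def evaluation_index(per_pair_indices):
    """per_pair_indices: one index per (third image features, transformed model features) pair."""
    return min(per_pair_indices)

# e.g. indices over three transformations of the model image: [0.92, 0.88, 0.95] -> 0.88
```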
(Appendix 13)
The object detection device (60) according to any one of Appendices 1 to 12, further comprising a display control unit (172) that displays the evaluation index calculated for each of the multiple feature extraction processes on a display device.
10 Robot
33 Hand
40 Teaching device
41 Display unit
50 Robot control device
60 Object detection device
70 Visual sensor
100 Robot system
151 Operation control unit
161 Visual sensor control unit
162 Image acquisition unit
163 Detection unit
164 Feature extraction unit
165 Model image receiving unit
166 Matching area receiving unit
167 Model feature storage unit
168 Model transformation unit
169 Stability calculation unit (first characteristic calculation unit)
170 Sensitivity calculation unit (second characteristic calculation unit)
171 Evaluation index calculation unit
172 Display control unit
201 Model image
202 Model feature
203 Transformed model image
204 Third image feature
205 Transformed model feature
Claims (13)
1. An object detection device comprising:
a feature extraction unit that extracts image features from an image;
a model image receiving unit that receives, as a model image, a first image obtained by capturing an object whose position and orientation are known;
a model feature storage unit that stores a first image feature extracted from the model image by the feature extraction unit as a model feature; and
a detection unit that identifies a position and orientation of the object by comparing a second image feature, extracted by the feature extraction unit from a second image capturing the object whose position and orientation are unknown, with the model feature,
wherein the object detection device further comprises:
a model transformation unit that applies one or more transformation processes to the model image and the model feature to generate one or more transformed model images and one or more transformed model features, respectively;
a first characteristic calculation unit that calculates a first characteristic relating to a characteristic of the feature extraction process performed by the feature extraction unit, based on a third image feature extracted from the transformed model image by the feature extraction unit and the transformed model feature; and
an evaluation index calculation unit that calculates an evaluation index of the feature extraction process based on one or more first characteristics calculated by the first characteristic calculation unit from the one or more third image features and the one or more transformed model features,
wherein the feature extraction unit has a plurality of feature extraction processes,
the first characteristic calculation unit calculates the one or more first characteristics based on the third image feature and the transformed model feature for each of the plurality of feature extraction processes, and
the evaluation index calculation unit calculates, for each of the plurality of feature extraction processes, the evaluation index for determining which feature extraction process to apply to the detection process of the detection unit.
2. The object detection device according to claim 1, further comprising a matching area receiving unit that receives a part or the whole of the object shown in the model image as a matching area,
wherein the model feature storage unit stores, as the model feature, an image feature included in the matching area among the first image features.
3. The object detection device according to claim 1 or 2, wherein the evaluation index calculation unit determines the feature extraction process to be used by the feature extraction unit in the detection process by the detection unit, based on the evaluation index calculated for each of the plurality of feature extraction processes.
4. The object detection device according to claim 3, wherein the evaluation index calculation unit determines the feature extraction process having the highest evaluation index among the evaluation indexes calculated for each of the plurality of feature extraction processes as the feature extraction process to be used by the feature extraction unit in the detection process by the detection unit.
5. The object detection device according to any one of claims 1 to 4, wherein the one or more transformation processes performed by the model transformation unit include a process of converting the brightness of the model image.
6. The object detection device according to any one of claims 1 to 5, wherein the one or more transformation processes performed by the model transformation unit include performing a projective transformation on each of the model image and the model feature.
7. The object detection device according to any one of claims 1 to 6, wherein the first characteristic calculated by the first characteristic calculation unit represents stability of the feature extraction process against changes in an image.
8. The object detection device according to claim 7, wherein the first characteristic calculation unit
determines a correspondence relationship between the image features included in the transformed model feature and the image features included in the third image feature, and
calculates the first characteristic based on the proportion of image features, among the image features included in the transformed model feature, whose difference from the corresponding image features included in the third image feature is smaller than a specific value.
9. The object detection device according to any one of claims 1 to 8, further comprising a second characteristic calculation unit that calculates, based on the third image feature and the transformed model feature, a second characteristic representing sensitivity of the feature extraction process by the feature extraction unit to a change in an image,
wherein the evaluation index calculation unit calculates the evaluation index for each of the plurality of feature extraction processes based on the one or more first characteristics and one or more second characteristics calculated by the second characteristic calculation unit from the one or more third image features and the one or more transformed model features.
10. The object detection device according to claim 9, wherein the second characteristic calculation unit
determines a correspondence relationship between the image features included in the transformed model feature and the image features included in the third image feature, and
calculates the second characteristic based on the proportion of image features, among the image features included in the third image feature, whose difference from the corresponding image features included in the transformed model feature is smaller than a specific value.
11. The object detection device according to claim 9 or 10, wherein the evaluation index calculation unit calculates, for each of the plurality of feature extraction processes, a weighted harmonic mean of the one or more first characteristics and the one or more second characteristics as the evaluation index.
12. The object detection device according to any one of claims 1 to 11, wherein the evaluation index calculation unit, for each of the plurality of feature extraction processes,
calculates an index for determining the feature extraction process for each pair of the one or more third image features and the one or more transformed model features, and
sets the minimum value of the indices calculated for the respective pairs as the evaluation index of the feature extraction process.
13. The object detection device according to any one of claims 1 to 12, further comprising a display control unit that displays the evaluation index calculated for each of the plurality of feature extraction processes on a display device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2023/012243 WO2024201662A1 (en) | 2023-03-27 | 2023-03-27 | Object detection device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024201662A1 (en) | 2024-10-03 |
Family
ID=92903543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/012243 WO2024201662A1 (en) | 2023-03-27 | 2023-03-27 | Object detection device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024201662A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014199584A (en) * | 2013-03-29 | 2014-10-23 | Canon Inc. | Image processing apparatus and image processing method |
JP2016103230A (en) * | 2014-11-28 | 2016-06-02 | Canon Inc. | Image processor, image processing method and program |
JP2020082273A (en) * | 2018-11-26 | 2020-06-04 | Canon Inc. | Image processing device, control method thereof, and program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8467596B2 (en) | Method and apparatus for object pose estimation | |
JP5612916B2 (en) | Position / orientation measuring apparatus, processing method thereof, program, robot system | |
JP7094702B2 (en) | Image processing device and its method, program | |
EP2416294B1 (en) | Face feature point detection device and program | |
CN109034017B (en) | Head pose estimation method and machine readable storage medium | |
US20020191818A1 (en) | Face detection device, face pose detection device, partial image extraction device, and methods for said devices | |
US10430650B2 (en) | Image processing system | |
JP6716996B2 (en) | Image processing program, image processing apparatus, and image processing method | |
CN111604909A (en) | Visual system of four-axis industrial stacking robot | |
JP6899189B2 (en) | Systems and methods for efficiently scoring probes in images with a vision system | |
WO2008020068A1 (en) | Method of image processing | |
EP3300025B1 (en) | Image processing device and image processing method | |
JP2002203243A (en) | Method and device for image processing, method and program for detecting image characteristic point, and method and program for supporting position designation | |
CN111476841A (en) | Point cloud and image-based identification and positioning method and system | |
US6718074B1 (en) | Method and apparatus for inspection for under-resolved features in digital images | |
CN116958145B (en) | Image processing method and device, visual detection system and electronic equipment | |
CN114782451B (en) | Workpiece defect detection method and device, electronic equipment and readable storage medium | |
US8594416B2 (en) | Image processing apparatus, image processing method, and computer program | |
CN108447092B (en) | Method and device for visually positioning marker | |
JP5769559B2 (en) | Image processing apparatus, image processing program, robot apparatus, and image processing method | |
US11244159B2 (en) | Article recognition system and article recognition method | |
JP4001162B2 (en) | Image processing method, image processing program and storage medium therefor, and image processing apparatus | |
WO2024201662A1 (en) | Object detection device | |
US20230281947A1 (en) | Image processing device, image processing method, and non-transitory computer readable storage medium | |
CN114170202A (en) | Weld segmentation and milling discrimination method and device based on area array structured light 3D vision |