
CN110969661B - Image processing device and method, and position calibration system and method - Google Patents


Info

Publication number
CN110969661B
CN110969661B (application CN201811163285.5A)
Authority
CN
China
Prior art keywords
image
target image
photographing
feature
positioning mark
Prior art date
Legal status
Active
Application number
CN201811163285.5A
Other languages
Chinese (zh)
Other versions
CN110969661A (en)
Inventor
周帅骏
张玉地
Current Assignee
Shanghai Micro Electronics Equipment Co Ltd
Original Assignee
Shanghai Micro Electronics Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Micro Electronics Equipment Co Ltd filed Critical Shanghai Micro Electronics Equipment Co Ltd
Priority to CN201811163285.5A
Publication of CN110969661A
Application granted
Publication of CN110969661B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image processing device, an image processing method, a position calibration system and a position calibration method. The image processing method comprises: photographing a positioning mark at a first photographing position; separating out an image of a feature pattern, the feature pattern having rotational invariance; scaling the separated image of the feature pattern according to predetermined parameters to obtain feature templates and generating a template library; photographing the positioning mark at a second photographing position to acquire a target image, wherein the vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark; and matching each feature template in the template library with the target image to calculate the position, in the target image, of the rotation center point of the image of the feature pattern. Because the template library contains few feature templates, few templates need to be matched against the target image, which improves calculation efficiency.

Description

Image processing device and method, and position calibration system and method
Technical Field
The invention relates to the technical field of material transportation, in particular to an image processing device, an image processing method, a position calibration system and a position calibration method.
Background
Position calibration systems have been widely used in industry since the beginning of the 21st century, particularly in the field of semiconductor manufacturing. For example, on a wafer production line, wafer cassettes are typically transported between a plurality of material storage devices by a material handling robot. The material handling robot improves transport efficiency, raises the automation level of the wafer production line, saves labor cost and increases product output.
The material handling robot comprises a transporting device and a mechanical arm mounted on the transporting device. During transport, the mechanical arm loads a wafer cassette at a preset position, the transporting device carries the cassette to the next preset position, and the mechanical arm then unloads the cassette onto a material storage device. Before loading or unloading a cassette, the position of the material handling robot relative to the cassette on the material storage device generally has to be calculated, that is, the position of the mechanical arm on the robot relative to the material storage point on the storage device where the cassette is placed, so that the mechanical arm can load and unload the cassette accurately.
Currently, when calculating the position of a material handling robot relative to a material storage point, an image processing device generally first obtains the position of a positioning mark relative to the robot; the position of the robot relative to the material storage point is then calculated from the position of the positioning mark relative to the material storage point on the storage device and the position of the positioning mark relative to the robot. However, acquiring the position of the positioning mark relative to the robot generally takes a long time, so the efficiency is low.
Disclosure of Invention
The invention aims to provide an image processing device, an image processing method, a position calibration system and a position calibration method that address the long computation time and low efficiency of existing image processing and position calibration approaches.
In order to solve the above technical problems, the present invention provides an image processing method, comprising: photographing a positioning mark at a first photographing position to acquire an image; separating an image of a feature pattern from the image acquired at the first photographing position, wherein the positioning mark comprises at least one feature pattern and the feature pattern has rotation invariance; scaling the separated image of the feature pattern according to predetermined parameters to obtain at least one feature template; generating a template library comprising all the feature templates; photographing the positioning mark at a second photographing position to acquire an image, the image acquired at the second photographing position being the target image; and matching each feature template in the template library with the target image to calculate the position, in the target image, of the rotation center point of the image of the feature pattern.
Optionally, the vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark.
Optionally, matching each feature template in the template library with the target image includes: making the rotation center point of each feature template in the template library coincide with a point on the image at the start position of the target image; moving each feature template step by step, at a predetermined pitch, from the start position of the target image to the end position, and calculating the similarity coefficient between the feature template and the image at the corresponding position in the target image; comparing all the similarity coefficients to select the largest of them; and calculating the translation amount of the feature template from the start position of the target image to the position corresponding to the maximum similarity coefficient, this translation amount being the position, in the target image, of the rotation center point of the image of the feature pattern.
Optionally, before calculating the similarity coefficient between the feature template and the image at the corresponding position in the target image, the boundary of the target image is filled so that the increases of the target image in the length direction and in the width direction are equal to the dimensions of the feature template in the length direction and in the width direction, respectively.
The present invention further provides an image processing apparatus, comprising: an image acquisition module for photographing the positioning mark at a first photographing position to acquire an image and for photographing the positioning mark at a second photographing position to acquire an image, wherein the image of the positioning mark acquired at the second photographing position is the target image, and the positioning mark comprises at least one feature pattern having rotation invariance; an image separation module for separating an image of the feature pattern from the image acquired at the first photographing position; a template making module for scaling the separated image of the feature pattern according to predetermined parameters to obtain at least one feature template; a template library generation module for generating a template library comprising all the feature templates; and a matching module for matching each feature template in the template library with the target image to calculate the position, in the target image, of the rotation center point of the image of the feature pattern.
Optionally, the vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark.
Optionally, the matching module includes: a similarity coefficient calculation module for making the rotation center point of each feature template in the template library coincide with a point on the image at the start position of the target image, moving each feature template step by step, at a predetermined pitch, from the start position of the target image to the end position, and calculating the similarity coefficient between the feature template and the image at the corresponding position in the target image; a comparison module for comparing all the similarity coefficients to select the largest of them; and a position calculation module for calculating the translation amount of the feature template from the start position of the target image to the position corresponding to the maximum similarity coefficient, this translation amount being the position, in the target image, of the rotation center point of the image of the feature pattern.
Optionally, the matching module further includes a filling module that fills the boundary of the target image so that the increases of the target image in the length direction and in the width direction are equal to the dimensions of the feature template in the length direction and in the width direction, respectively.
The invention also provides a position calibration system comprising a material storage device, a positioning mark, a material handling robot, a position calculating device, a judging device, a material and the image processing device described above. The positioning mark is arranged on the material storage device, the material is stored at a material storage point of the material storage device, and the image acquisition module of the image processing device is mounted on the material handling robot. The position calculating device calculates the spatial position of the positioning mark relative to the image acquisition module from the positions, in the target image, of the rotation center points of the images of at least two feature patterns obtained by the image processing device, and then calculates the spatial position between the material handling robot and the material storage point from the spatial position of the positioning mark relative to the image acquisition module, the spatial position between the image acquisition module and the material handling robot, and the spatial position between the positioning mark and the material storage point. The material handling robot acquires the material from, or places the material at, the material storage point on the material storage device according to the spatial position between the robot and the material storage point.
Optionally, the material handling robot comprises a controller, a mechanical arm, a mechanical fork and a base. The mechanical arm is mounted on the base, the mechanical fork is provided at the end of the mechanical arm, and the image acquisition module is mounted on the mechanical fork. The controller can control the mechanical arm to drive the mechanical fork to pick up and place the material.
The invention also provides a position calibration method, wherein a positioning mark is arranged on a material storage device, a material is stored at a material storage point of the material storage device, and the image acquisition module of the image processing device is mounted on a material handling robot. The position calibration method comprises the following steps: the material handling robot drives the image acquisition module to a first photographing position to photograph the positioning mark and acquire an image, and a template library comprising a plurality of feature templates is generated; the material handling robot drives the image acquisition module to a second photographing position, the image acquired at the second photographing position being the target image; the positioning mark is photographed to acquire an image; whether photographing succeeded is judged: if so, the positions of the rotation center points of the feature patterns in the target image are calculated; if not, the material handling robot is reset and a photographing failure is reported; the spatial position of the positioning mark relative to the image acquisition module is calculated from the positions of the rotation center points of the images of at least two feature patterns in the target image; the position between the material handling robot and the material storage point is calculated from the spatial position of the positioning mark relative to the image acquisition module, the spatial position between the image acquisition module and the robot, and the spatial position between the positioning mark and the material storage point; and the wafer cassette is acquired from, or placed at, the material storage point on the material storage device according to the position between the robot and the material storage point.
Optionally, the vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark.
Optionally, generating the template library includes: the image processing device separates the image of the feature pattern from the image acquired at the first photographing position; the image processing device scales the separated image of the feature pattern according to predetermined parameters to obtain at least one feature template; and the image processing device generates a template library comprising all the feature templates.
Optionally, judging whether photographing succeeded includes: judging whether the obtained target image includes images of the feature patterns; if so, judging whether it includes images of at least two feature patterns, and if not, scanning the positioning mark. If the target image includes images of at least two feature patterns, calculating whether the distance between the rotation center points of the feature patterns in the target image is within a set second threshold range; if it includes fewer than two, retesting the positioning mark. If the distance between the rotation center points is within the set second threshold range, calculating the positions of the rotation center points of the feature patterns in the target image; if it is not, resetting the material handling robot and reporting a photographing failure.
Optionally, judging whether the obtained target image includes an image of the feature pattern includes: making the rotation center point of each feature template in the template library coincide with a point on the image at the start position of the target image; moving each feature template step by step, at a predetermined pitch, from the start position of the target image to the end position, and calculating the similarity coefficient between the feature template and the image at the corresponding position in the target image; comparing all the similarity coefficients with a set similarity coefficient range and counting the number of similarity coefficients within that range; and judging whether that number is zero: if it is not zero, judging whether the target image includes images of at least two feature patterns; if it is zero, scanning the positioning mark.
Optionally, scanning the positioning mark includes: judging whether the current second photographing position is the last photographing position in the scanning track; if so, resetting the material handling robot and reporting a photographing failure; if not, the material handling robot moves the camera to the next second photographing position in the scanning track and photographs the positioning mark to acquire an image.
Optionally, retesting the positioning mark includes: calculating whether the distance between the position of the rotation center point of the feature pattern in the target image and the edge of the target image is within a set first threshold range; if it is, calculating a retest photographing position of the image acquisition module from the distance between the rotation center points of the feature patterns in the positioning mark, moving the camera to the retest photographing position by means of the material handling robot, and photographing the positioning mark to acquire an image; if it is outside the set first threshold range, resetting the material handling robot and reporting a photographing failure. The overall decision flow is sketched below.
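For concreteness, the branching just described can be outlined as follows. This is an illustrative sketch only: the inputs, the threshold format and the string labels are assumptions, not the patent's implementation.

```python
import math

def check_photo(centers, second_thresh):
    """Outline of the 'did photographing succeed?' decision.

    `centers` holds the rotation center points already located by
    template matching; `second_thresh` is the (low, high) range for the
    distance between two centers.  The string labels are illustrative.
    """
    if len(centers) == 0:
        return "scan"      # no feature pattern found: scan the positioning mark
    if len(centers) < 2:
        return "retest"    # fewer than two patterns: retest the positioning mark
    lo, hi = second_thresh
    if lo <= math.dist(centers[0], centers[1]) <= hi:
        return "compute"   # success: compute the rotation center positions
    return "fail"          # reset the robot and report photographing failure

# e.g. check_photo([(120, 88), (305, 90)], (150, 250)) returns "compute"
```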
The image processing device, the image processing method, the position calibration system and the position calibration method provided by the invention have the following beneficial effects:
Although all feature templates in the template library participate in matching, the feature templates are obtained merely by scaling the rotation-invariant feature patterns in the positioning mark according to predetermined parameters; the feature patterns do not need to be rotated, or rotated and scaled. The number of feature templates in the template library is therefore reduced, fewer templates need to be matched against the target image, and the calculation efficiency of the image processing method is improved.
Because each feature template contains only a rotation-invariant feature pattern rather than the pattern of the whole positioning mark, the template pattern is relatively simple. This saves time when matching each feature template in the template library against the target image and further improves calculation efficiency.
Slight deformation of the target image affects the calculated position of the rotation center point of the feature-pattern image in the target image. In theory, when the template pattern is complex, a separate template library has to be built for each working environment. Since the template pattern here is relatively simple, no separate template library is needed for each working environment: the template library is highly universal, the process of building it is simplified, and it can be built more efficiently.
Drawings
Fig. 1 is a flowchart of an image processing method in a first embodiment of the present invention;
FIG. 2 is a flow chart of matching each feature template in a template library with a target image, respectively, in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a feature template in accordance with a first embodiment of the present invention;
FIG. 4 is a schematic illustration of the coincidence of the center of rotation point of the feature template with a point on the image at the start position of the target image in the first embodiment of the present invention;
FIG. 5 is a schematic illustration of the coincidence of the center of rotation point of the feature template with a point on the image at a position in the target image corresponding to the maximum similarity coefficient in the first embodiment of the present invention;
fig. 6 is a block diagram of an image processing apparatus in the second embodiment of the present invention;
FIG. 7 is a schematic diagram of a position calibration system according to a third embodiment of the present invention;
FIG. 8 is a schematic diagram of the positioning principle of the position calibration system in the third embodiment of the present invention;
FIG. 9 is a schematic diagram of the correspondence between an image coordinate system, a camera coordinate system and a world coordinate system;
FIG. 10 is a schematic view of a positioning marker in a world coordinate system;
FIG. 11 is a schematic illustration of a target image acquired by a camera;
FIG. 12 is a flow chart of a position calibration method of the position calibration system in the third embodiment of the present invention;
fig. 13 is a schematic view showing a projection of a scanning track on a plane z=0 in a camera coordinate system in the fourth embodiment of the present invention.
The reference numerals in fig. 3, 4 and 5 illustrate:
H - positioning mark; G - feature template; Q_1 - start position; Q_2 - end position; F - target image.
The reference numerals in fig. 7 illustrate:
510 - material storage device; 511 - tray; 512 - positioning pin;
520 - positioning mark;
530 - material handling robot; 531 - mechanical arm; 532 - mechanical fork; 533 - base;
540 - image acquisition module;
550 - cassette.
The reference numerals in fig. 9 illustrate:
K - image plane.
Detailed Description
The core idea of the present invention is to provide an image processing method for calculating the position of the rotation center point of the image of a feature pattern within an acquired image. The method builds a number of feature templates from an acquired image containing rotation-invariant feature patterns. By moving each feature template step by step from the start position of a newly acquired image, it finds the image region that matches the template and designates it as the image of the feature pattern. The distance the template has moved when the match is found gives the position of the rotation center point of the feature-pattern image relative to the start position of the acquired image. Because the feature pattern is rotation-invariant, no rotated feature templates are needed, which reduces the number of templates and shortens the computation time of the image processing method.
The invention further provides a position calibration system, a position calibration method and an image processing device that adopt this image processing method: the position calibration system comprises the image processing device, and the position calibration method comprises the image processing method.
The image processing device and method and the position calibration system and method according to the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments. Advantages and features of the invention will become more apparent from the following description and from the claims. It should be noted that the drawings are in a greatly simplified form and not to precise scale, and serve merely for convenience and clarity in describing the embodiments of the invention.
Example 1
The present embodiment provides an image processing method. Referring to fig. 1, fig. 1 is a flowchart of the image processing method in the first embodiment of the present invention. The image processing method includes:
step S010, photographing the positioning mark at the first photographing position to acquire an image.
Step S020, separating the image of the feature pattern from the image acquired at the first photographing position. Wherein the positioning mark comprises at least one feature pattern, the feature pattern having rotational invariance.
Step S030, scaling the image of the separated feature pattern according to a predetermined parameter to obtain at least one feature template.
And S040, generating a template library, wherein the template library comprises all the characteristic templates.
In step S050, the positioning mark is photographed at a second photographing position to acquire an image; the image acquired at the second photographing position is the target image. The vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark.
Step S060, matching each feature template in the template library with the target image to calculate the position of the rotation center point of the image of the feature pattern in the target image.
In this image processing method, although all feature templates in the template library participate in the calculation, the feature templates in this embodiment are obtained merely by scaling the rotation-invariant feature patterns in the positioning mark according to predetermined parameters; the feature patterns do not need to be rotated, or rotated and scaled. The number of feature templates in the template library, and hence the number of templates matched against the target image, is therefore reduced, which improves the calculation efficiency of the image processing method.
Because each feature template contains only a rotation-invariant feature pattern rather than the pattern of the whole positioning mark, the template pattern is relatively simple. This saves the time needed to match each feature template in the template library against the target image, further improves the calculation efficiency, and also improves the applicability of the template library.
Slight deformation of the target image affects the calculated position of the rotation center point of the feature-pattern image in the target image. In theory, when the template pattern is complex, a separate template library has to be built for each working environment. Since the template pattern in this embodiment is relatively simple, no separate template library needs to be built for each working environment: the template library is highly universal, the process of building it is simplified, and it can be built more efficiently.
Referring to fig. 2, fig. 2 is a flowchart of matching each feature template in the template library with a target image in the first embodiment of the present invention, and in step S060, matching each feature template in the template library with the target image specifically includes:
Step S061, the rotation center point of each feature template in the template library is made to coincide with a point on the image at the start position of the target image.
Step S062, each feature template in the template library is moved step by step, at a predetermined pitch, from the start position of the target image to the end position, and the similarity coefficient between the feature template and the image at the corresponding position in the target image is calculated.
Step S063, all the similarity coefficients are compared to select the largest of them.
Step S064, the translation amount of the feature template from the start position of the target image to the position corresponding to the maximum similarity coefficient is calculated; this translation amount is the position, in the target image, of the rotation center point of the image of the feature pattern.
In this image processing method, the rotation center point of each feature template must coincide with a point on the image at the start position of the target image, each feature template must be moved step by step, at the predetermined pitch, from the start position of the target image to the end position, and the similarity coefficient between the feature template and the image at the corresponding position must be calculated; that is, all feature templates in the template library participate in the calculation. In this embodiment, however, the feature templates are obtained merely by scaling the rotation-invariant feature pattern in the positioning mark according to predetermined parameters, without rotating (or rotating and scaling) the feature pattern. This reduces the number of feature templates in the template library, the number of templates that must be made to coincide with the point at the start position, the number of templates that must be moved step by step from the start position toward the lower-right corner, and the number of similarity coefficients that must be compared, so the calculation efficiency of the image processing method is improved. Because each feature template contains only the rotation-invariant feature pattern rather than the whole positioning mark, the template pattern is relatively simple; the time needed to calculate the similarity coefficient between the template and the image at the corresponding position is saved, calculation efficiency is further improved, and the applicability of the template library is improved. Since the translation amount directly gives the position, in the target image, of the rotation center point of the feature-pattern image, no complex computation is needed to locate that rotation center point, so the calculation accuracy of the image processing method is also improved.
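As a concrete illustration of steps S061 to S064, the following is a minimal sketch of the matching loop. It assumes normalized cross-correlation as the similarity coefficient and a pitch of one pixel; the patent does not prescribe a particular coefficient, and the function name and interface are illustrative.

```python
import numpy as np

def match_template(target, template):
    """Slide the template over the target step by step (steps S061-S064).

    Returns the translation (row, col) of the template from the start
    position (upper-left corner) to the position with the largest
    similarity coefficient.  Normalized cross-correlation is used as the
    similarity coefficient here; the patent does not fix a specific one.
    """
    th, tw = template.shape
    H, W = target.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(H - th + 1):              # pitch of one pixel (S062)
        for c in range(W - tw + 1):
            win = target[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            ncc = (w * t).sum() / denom if denom > 0 else 0.0
            if ncc > best:                   # keep the largest coefficient (S063)
                best, best_pos = ncc, (r, c)
    return best_pos                          # translation amount (S064)
```

In practice each feature template in the library would be run through such a loop and the overall maximum coefficient kept; because the rotation center point of the template is made to coincide with the start-position point, the returned translation gives the position of the rotation center point in the target image.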
In the above steps, the positioning mark may contain one or more feature patterns. A feature pattern may consist of concentric circles, with the rings filled in alternating colors. As shown in fig. 3, a schematic diagram of a feature template in the first embodiment of the present invention, the feature template G contains three concentric circles: the first and third are filled black and the second white. In other embodiments the feature pattern may be any other pattern with rotational invariance. The number of feature patterns in the positioning mark may also be one, three, four or more, and equals the number of rotation center points to be calculated in the target image; the feature patterns in a positioning mark may differ in size; and their images may differ, in which case a corresponding feature template is generated from each distinct feature-pattern image.
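For illustration only, a feature pattern of the kind shown in fig. 3 could be rasterized as below; the image size and radii are arbitrary assumptions rather than values from the patent.

```python
import numpy as np

def concentric_template(size=64, radii=(30, 20, 10)):
    """Rasterize a rotation-invariant pattern of concentric circles.

    The rings are filled in alternating colors (outer and inner black,
    middle white, as in fig. 3), so the pattern looks identical under
    any rotation about its center and no rotated copies are needed.
    """
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(xx - c, yy - c)
    img = np.full((size, size), 255, dtype=np.uint8)     # white background
    for i, radius in enumerate(sorted(radii, reverse=True)):
        img[r <= radius] = 0 if i % 2 == 0 else 255      # alternate fills
    return img
```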
In step S030, the predetermined parameters include at least one image scale, where the image scale S_i can be calculated as:

S_i = X_m × Y_n

where X_m is the scaling factor in the X direction and Y_n the scaling factor in the Y direction, calculated respectively as:

X_m = X_0 + A_x × m
Y_n = Y_0 + A_y × n

where X_0 is the lower limit of the X-direction scaling, A_x the X-direction scaling step, Y_0 the lower limit of the Y-direction scaling, A_y the Y-direction scaling step, and m and n natural numbers within certain ranges.
It follows from the above that the number of image scales is determined by the lower limit of the X-direction scaling, the X-direction scaling step, the lower limit of the Y-direction scaling, the Y-direction scaling step, and the ranges of m and n. These parameters can be set according to the calculation precision required in the image processing.
For example, when X_0, A_x, m, Y_0, A_y and n satisfy the relationship described in the following table, with 5 values of m and 7 values of n, the number of image scales is 5 × 7 = 35, and the number of corresponding feature templates is 35.
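The scale enumeration can be sketched as follows; the numeric bounds, steps and ranges are placeholders standing in for the table's values.

```python
def image_scales(x0, ax, m_range, y0, ay, n_range):
    """Enumerate the scale combinations S_i = X_m x Y_n of step S030.

    X_m = x0 + ax*m scales the template in the X direction and
    Y_n = y0 + ay*n in the Y direction; each (X_m, Y_n) pair yields one
    feature template.
    """
    return [(x0 + ax * m, y0 + ay * n) for m in m_range for n in n_range]

# Placeholder bounds and steps: 5 values of m and 7 of n give 35 templates.
scales = image_scales(0.9, 0.05, range(5), 0.9, 0.025, range(7))
assert len(scales) == 5 * 7
```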
In step S050, the second photographing position may be located at different places; the image acquired at each photographing position differs, as do the position of the image of the positioning mark within the acquired image and the position of the image of the feature pattern within the image of the positioning mark. The image processing method of this embodiment can therefore calculate the position of the rotation center point of the feature pattern in images acquired at different second photographing positions, and can in turn be used to determine the spatial position, relative to the positioning mark, of the device that acquired the image at each second photographing position, such as the spatial position of a camera relative to the positioning mark. Depending on the photographing position, the image acquired at the second photographing position may contain all, part, or none of the image of the positioning mark, and an image containing the positioning mark may contain none, some, or all of the feature patterns.
In step S060, as shown in fig. 4, which illustrates the coincidence of the rotation center point of the feature template with a point on the image at the start position of the target image F in the first embodiment of the present invention, the start position Q_1 of the target image F is located at the upper-left corner of the image, at the origin of the pixel coordinate system of the target image, and the feature template G does not coincide with the image of the positioning mark H in the target image F.
In step S062, the predetermined pitch may be set according to the required image processing precision, for example one pixel or two pixels; in this embodiment it is set to one pixel. As shown in fig. 4, the end position Q_2 of the target image F is at the lower-right corner of the image. The image at the position in the target image F corresponding to the feature template G may be the image at the start position Q_1 or the image at the end position Q_2.
As shown in fig. 5, which illustrates the coincidence of the rotation center point of the feature template with a point on the image at the position in the target image corresponding to the maximum similarity coefficient in the first embodiment of the present invention, the rotation center point of the feature template G coincides with a point on the image at that position, and the pattern of the feature template G almost completely overlaps the image there. The coordinates, in the pixel coordinate system, of the rotation center point of the image of the feature pattern of the positioning mark H in the acquired target image F are equal to the translation amount of the feature template G from the upper-left corner (the start position) of the target image to the position corresponding to the maximum similarity coefficient.
The image processing method further includes a step S065 of filling the boundary of the target image so that the increases of the target image in the length direction and in the width direction are equal to the dimensions of the feature template in the length direction and in the width direction, respectively; this allows the similarity coefficient between the feature template and the image at the corresponding position in the target image to be calculated in step S062. Step S065 may be placed between step S050 and step S060, or inside step S060 before step S062. The target image is generally rectangular overall, its length and width directions corresponding to the long and short sides of the rectangle, respectively.
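A minimal sketch of this boundary filling, assuming the added margin is split evenly between the two sides in each direction and filled with zeros (the patent does not specify the fill value):

```python
import numpy as np

def pad_target(target, template):
    """Fill the target image boundary as in step S065.

    The target grows by exactly the template height in the length
    direction and the template width in the width direction, so the
    template can be evaluated at every position up to the borders.
    Zero fill is an assumption; the patent leaves the fill value open.
    """
    th, tw = template.shape
    return np.pad(target,
                  ((th // 2, th - th // 2), (tw // 2, tw - tw // 2)),
                  mode="constant", constant_values=0)
```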
When the image processing method is applied to the position calibration system, steps S010 to S040 form the training stage of the system. The training stage is a preprocessing pass: the positioning mark is photographed at the first photographing position to acquire an image, the image of the feature pattern is separated from that image, the separated image is scaled according to the predetermined parameters to obtain at least one feature template, and a template library containing all the feature templates is generated. The first photographing position is also commonly called the teaching photographing position. Steps S050 to S060 form the matching stage of the system: the positioning mark is photographed at the second photographing position to acquire the target image, and each feature template in the template library is matched against the target image to calculate the position of the rotation center point of the image of the feature pattern in the target image. The second photographing position is commonly called the task photographing position. Through this image processing method, the position in the target image of the rotation center point of the image of the feature pattern, i.e. its coordinates in the pixel coordinate system, is obtained.
Example two
The present embodiment provides an image processing apparatus employing the image processing method in the first embodiment. As shown in fig. 6, fig. 6 is a block diagram showing the configuration of an image processing apparatus in the second embodiment of the present invention, the image processing apparatus including: the system comprises an image acquisition module, an image separation module, a template making module, a template library generating module and a matching module.
The image acquisition module is used for taking a picture of the positioning mark at a first photographing position to acquire an image, and is used for taking a picture of the positioning mark at a second photographing position to acquire an image. The image of the positioning mark acquired at the second photographing position is a target image, the vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark, the positioning mark comprises at least one feature pattern, and the feature pattern has rotation invariance.
The image separation module is used to separate the image of the feature pattern from the image acquired at the first photographing position.
The template making module is used to scale the separated image of the feature pattern according to the predetermined parameters to obtain at least one feature template.
The template library generation module is used to generate the template library, which comprises all the feature templates.
The matching module is used to match each feature template in the template library with the target image to calculate the position of the rotation center point of the image of the feature pattern in the target image.
Specifically, the matching module comprises a similarity coefficient calculation module, a comparison module and a position calculation module. The similarity coefficient calculation module is used for enabling the rotation center point of each characteristic template in the template library to coincide with a point on an image at the initial position of the target image, enabling each characteristic template in the template library to move to the final position from the initial position of the target image step by step according to a preset distance, and calculating the similarity coefficient of the characteristic template and the image at the corresponding position in the target image.
The comparison module is used for comparing all the similarity coefficients to select the largest similarity coefficient in the similarity coefficients.
The position calculation module is used to calculate the translation amount of the feature template from the start position of the target image to the position corresponding to the maximum similarity coefficient. This translation amount is the position, corresponding to the maximum similarity coefficient in the target image, of the rotation center point of the image of the feature pattern.
In this image processing device, the feature templates are obtained merely by scaling, according to the predetermined parameters, the rotation-invariant feature patterns in the positioning mark separated by the image separation module; the feature patterns do not need to be rotated, or rotated and scaled. This reduces the number of feature templates in the template library, and hence the number of templates made to coincide with the point at the start position of the target image, the number of templates moved step by step from the start position toward the lower-right corner, and the number of similarity coefficients to be compared, so the computation time of the image processing method is reduced and its efficiency improved. Because each feature template contains only the rotation-invariant feature pattern rather than the whole positioning mark, the template pattern is relatively simple; the time needed to calculate the similarity coefficient between the template and the image at the corresponding position is saved, calculation efficiency is further improved, and the universality of the template library is also improved. Since the translation amount directly gives the position, corresponding to the maximum similarity coefficient in the target image, of the rotation center point of the image of the feature pattern, no complex computation is needed to locate that rotation center point, so the calculation accuracy of the image processing method is improved.
In this embodiment, the positioning mark may contain one or more feature patterns. A feature pattern may consist of concentric circles, with the rings filled in alternating colors.
The image processing device further comprises a filling module that fills the boundary of the target image so that the increases of the target image in the length direction and in the width direction are equal to the dimensions of the feature template in the length direction and in the width direction, respectively. The target image is generally rectangular overall, its length and width directions corresponding to the long and short sides of the rectangle, respectively.
In this embodiment, the image acquisition module may be a camera.
The above-described image processing device thus yields the position, in the target image, of the rotation center point of the image of the feature pattern, i.e. its coordinates in the pixel coordinate system.
Example III
The embodiment provides a position calibration system. Referring to fig. 7, fig. 7 is a schematic structural diagram of a position calibration system according to a third embodiment of the present invention, where the position calibration system includes a material storage device 510, a positioning mark 520, a material handling robot 530, a position calculating device, a judging device, a cassette 550, and an image processing device according to the second embodiment.
As shown in fig. 7, the positioning mark 520 is provided on the material storage device 510. The material storage device 510 has a cassette 550 stored at a material storage point. The image acquisition module 540 in the image processing apparatus is disposed on the material handling robot 530, and in particular, the image acquisition module 540 in the image processing apparatus may be fixedly disposed on the material handling robot 530. The position calculating device is used for calculating the spatial position information of the positioning mark 520 relative to the image acquiring module 540 of the material transporting robot 530 according to the position of the rotation center point of the image of at least two feature patterns in the target image obtained by the image processing device in the target image, and is used for calculating the spatial position information between the material transporting robot 530 and the material storage point according to the spatial position information of the positioning mark 520 relative to the image acquiring module 540 of the material transporting robot 530, the spatial position information of the image acquiring module 540 and the material transporting robot 530, and the spatial position information of the positioning mark 520 and the material storage point. The material handling robot 530 is configured to acquire the cassette 550 from a material storage point on the material storage device 510 or place the cassette 550 at the material storage point on the material storage device 510 according to spatial position information between the material handling robot 530 and the material storage point.
Since the image acquisition module 540 is mounted on the material handling robot 530, the position of the image acquisition module 540 relative to the robot is known. The position calculating device can compute the spatial position of the positioning mark 520 relative to the image acquisition module 540 from the positions, in the target image, of the rotation center points of the images of at least two feature patterns obtained by the image processing device. Since the positioning mark 520 is disposed on the material storage device 510, the position of the positioning mark 520 relative to the material storage point is known. The position calculating device can therefore calculate the position between the material handling robot 530 and the material storage point from the spatial position of the positioning mark 520 relative to the image acquisition module 540, the spatial position between the image acquisition module 540 and the robot, and the position between the positioning mark 520 and the material storage point. A position calibration system comprising the image processing device of the second embodiment can calculate the position between the material handling robot 530 and the material storage point in a shorter time, and is characterized by high calculation efficiency, high calculation accuracy, fast response and a simple test-and-teach flow.
Specifically, as shown in fig. 7, the material handling robot 530 includes a controller, a robot arm 531, a mechanical fork 532, and a base 533.
The base 533 may be an automated guided vehicle, capable of functions such as automatic route planning and obstacle avoidance, or a fixed base. The mechanical arm 531 is mounted on the base 533, the mechanical fork 532 is provided at the end of the mechanical arm 531, and the camera (image acquisition module 540) is mounted on the mechanical fork 532. The controller can control the mechanical arm 531 to drive the mechanical fork 532 to pick up and place the cassette 550. The mechanical arm 531 may be a six-degree-of-freedom arm.
The material storage device 510 comprises a tray 511 and a positioning pin 512 arranged on the tray 511; the positioning pin 512 locates the cassette 550. The position of the positioning pin 512 corresponds to the position of the material storage point. The material storage device 510 may be a material tray.
The positioning mark 520 is provided on the tray 511 of the material storage device 510.
In general, the spatial distance between the set position of the positioning mark 520 and the positioning pin 512 is a machine constant, and the position of the positioning pin 512 corresponds to the position of the material storage point, so the position of the positioning mark 520 relative to the material storage point is known.
The cassette 550 includes a limiting portion that mates with the positioning pin 512; the limiting portion may be a groove provided on the cassette 550. When the cassette 550 is placed on the material storage device 510, the positioning pin 512 enters the groove under gravity, so that the cassette 550 can only move vertically upward.
In other embodiments, the material may be something other than a cassette 550, such as a wafer.
The spatial position information between the material handling robot 530 and the material storage point may specifically be that between the base 533 and the material storage point, or that between the mechanical fork 532 and the material storage point. The principle by which the position calibration system calculates this information is described below, taking it to be the position information between the base 533 and the material storage point.
Fig. 8 is a schematic diagram of the positioning principle of the position calibration system in the third embodiment of the present invention. As shown in fig. 8, dotted line A denotes the spatial position information between the base 533 (on which the mechanical arm 531 is mounted) and the mechanical fork 532; dotted line B the spatial position information (the hand-eye relationship) between the mechanical fork 532 and the camera (image acquisition module 540); dotted line C the spatial position information between the camera and the positioning mark 520; and dotted line D the spatial position information between the positioning mark 520 and the material storage point. The spatial position information between the base 533 and the fork 532 can generally be read in real time from the encoders of the mechanical arm 531. The spatial position information between the mechanical fork 532 and the camera can be obtained by camera calibration, a process known in the art. The spatial position information between the camera and the positioning mark 520 is obtained by the position calculating device. The spatial position information between the positioning mark 520 and the material storage point can be obtained by measurement and is typically a machine constant. Therefore, once the spatial position information between the base 533 and the mechanical fork 532, between the mechanical fork 532 and the camera, between the camera and the positioning mark 520, and between the positioning mark 520 and the material storage point is known, the spatial position information between the base 533 and the material storage point can be calculated, as sketched below.
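Written as 4×4 homogeneous transforms, the chain A, B, C, D composes by matrix multiplication. The sketch below only illustrates this composition; the matrix names are assumptions, not the patent's notation.

```python
import numpy as np

def compose(*transforms):
    """Chain 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for t in transforms:
        out = out @ t
    return out

# A: base -> fork, read from the arm encoders
# B: fork -> camera, from hand-eye calibration
# C: camera -> positioning mark, from the position calculating device
# D: positioning mark -> material storage point, a machine constant
# T_base_to_storage = compose(A, B, C, D)
```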
In the following, the case where the positioning mark comprises only two feature patterns, whose images have rotation center points P_l and P_r respectively, is taken as an example to describe the principle by which the position calculating means calculates the spatial position information of the positioning mark relative to the image acquisition module 540 of the material transporting robot from the positions, obtained by the image processing means, of the rotation center points (P_l and P_r) of the images of the two feature patterns in the acquired target image.
Referring to fig. 9, fig. 9 shows the correspondence between the image coordinate system, the camera coordinate system and the world coordinate system. The coordinates of point P in the image coordinate system XOY (the coordinate system of the positioning mark in the image plane K) are $[x, y]^T$, and its corresponding coordinates in the camera coordinate system are $[x_c, y_c, z_c]^T$. The two sets of coordinates satisfy the following proportional relationship:

$$x = \frac{f\,x_c}{z_c}, \qquad y = \frac{f\,y_c}{z_c} \tag{1}$$

where f is the focal length. From this, the following matrix relationship can be obtained:

$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} \tag{2}$$

If the origin O of the image coordinate system has coordinates $(u_0, v_0)$ in the pixel coordinate system, and the physical size of each pixel along the x-axis and y-axis of the image coordinate system is $d_x$ and $d_y$, then the coordinates $[x, y]^T$ of point P in the image coordinate system, converted to the pixel coordinate system, give:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{3}$$

where $[u, v]^T$ are the coordinates of point P in the pixel coordinate system, and A is an upper triangular matrix, called the internal reference (intrinsic) matrix of the camera.

Let the coordinates of point P in the world coordinate system be $[x_w, y_w, z_w]^T$; the transformation to $[x_c, y_c, z_c]^T$ is expressed as:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T \tag{4}$$

where R is a 3×3 rotation matrix and T is a 3×1 translation vector.

The orthogonal matrix R is composed of the direction cosines between the camera axes and the world coordinate axes, and actually contains only three independent angle variables, the Euler angles: yaw (rotation angle ψ about the X-axis), pitch (rotation angle θ about the Y-axis) and roll (rotation angle φ about the Z-axis). These three independent angle variables plus the three components of the translation vector T give six variables in total, called the camera external parameters. Let the homogeneous coordinates of point P in the world coordinate system be $[x_w, y_w, z_w, 1]^T$ and in the camera coordinate system $[x_c, y_c, z_c, 1]^T$; the homogeneous form of formula (4) is:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{5}$$

Therefore, the coordinate transformation between the world coordinate system and the camera coordinate system can be represented by the matrix in formula (5); as long as this matrix is known, coordinates can be converted between the two coordinate systems.

Substituting formula (5) into formulas (2) and (3) yields:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = M \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \tag{6}$$

where $M_1$ is the internal parameter matrix, $M_2$ is the external parameter matrix, and M is a 3×4 matrix called the projection matrix. It characterizes the basic relationship between the two-dimensional image coordinate system and the three-dimensional world coordinate system. Knowing the coordinates of point P in the world coordinate system, its coordinates in the image coordinate system can be found using this matrix. Conversely, if the matrix M and the coordinates of point P in the image coordinate system are known, the space ray through the optical center of the camera that corresponds to P can be obtained in the world coordinate system. Usually, after the camera is calibrated, the matrix M is obtained, and the internal and external parameter matrices are known.
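As a numeric illustration of formula (6) only (not part of the patent), the following sketch projects a world point on the mark plane into pixel coordinates; the focal length, pixel size and camera pose used here are assumed values:

```python
import numpy as np

f, dx, dy, u0, v0 = 0.016, 3.45e-6, 3.45e-6, 1224.0, 1024.0  # assumed intrinsics
M1 = np.array([[f/dx,    0, u0, 0],
               [   0, f/dy, v0, 0],
               [   0,    0,  1, 0]])          # internal parameter matrix (3x4)

R, T = np.eye(3), np.array([0.0, 0.0, 0.5])   # assumed external parameters
M2 = np.eye(4); M2[:3, :3] = R; M2[:3, 3] = T # external parameter matrix (4x4)
M = M1 @ M2                                   # 3x4 projection matrix of formula (6)

Pw = np.array([0.01, 0.02, 0.0, 1.0])         # point on the mark plane z_w = 0
uvw = M @ Pw
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]       # divide by z_c to get pixel coordinates
```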
Referring to FIG. 10, FIG. 10 is a schematic diagram of the positioning mark 520 in the world coordinate system. Since the optical axis of the camera is perpendicular to the plane in which the positioning mark 520 lies when the camera takes a picture, and the vertical distance between the camera and the positioning mark 520 is known, the plane $z_w = 0$ of the world coordinate system can be chosen to coincide with the plane in which the positioning mark 520 lies.

From formula (6), the world coordinates of the rotation center points $P_l$ and $P_r$ of the images of the two feature patterns of the positioning mark 520 are obtained as $[x_{wl}, y_{wl}, 0]^T$ and $[x_{wr}, y_{wr}, 0]^T$ respectively. The horizontal position of the positioning mark 520 relative to the origin of the world coordinate system can then be expressed as:

$$x_m = \frac{x_{wl} + x_{wr}}{2}, \qquad y_m = \frac{y_{wl} + y_{wr}}{2} \tag{7}$$

and the rotation of the positioning mark 520 relative to the world coordinate system can be expressed as:

$$\theta = \arctan\frac{y_{wr} - y_{wl}}{x_{wr} - x_{wl}} \tag{8}$$

The positional information of the positioning mark 520 in the world coordinate system is thus obtained.

Accordingly, if the coordinates of the rotation center points $P_l$ and $P_r$ of the images of the two feature patterns of the positioning mark 520 in the pixel coordinate system are known, their coordinates in the image coordinate system can be obtained through the internal parameters; combined with the external parameters of the camera and the vertical distance between the camera and the positioning mark 520, the coordinates of $P_l$ and $P_r$ in the world coordinate system can be determined, so that the spatial position information between the positioning mark 520 and the camera is obtained. The spatial position information between the positioning mark 520 and the camera can therefore be calculated by the position calculating means.
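The reverse computation described in the preceding paragraph can be sketched as follows; the intrinsic matrix, the identity-rotation pose and the pixel coordinates are assumed values for illustration only:

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, R, T):
    """Back-project a pixel onto the mark plane, given the camera-to-plane distance z_c."""
    cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * z_c  # camera coordinates of the point
    return np.linalg.inv(R) @ (cam - T)                   # world coordinates (z_w = 0 on the plane)

K = np.array([[4600.0,    0.0, 1224.0],
              [   0.0, 4600.0, 1024.0],
              [   0.0,    0.0,    1.0]])  # assumed intrinsic matrix (f/d_x, f/d_y, u_0, v_0)
R = np.eye(3)                             # assumed: optical axis perpendicular to the mark plane
T = np.array([0.0, 0.0, 0.5])             # assumed: camera 0.5 m above the mark plane

P_l = pixel_to_world(1100.0, 1000.0, 0.5, K, R, T)
P_r = pixel_to_world(1350.0, 1010.0, 0.5, K, R, T)
```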
FIG. 11 is a schematic view of a target image acquired by the camera. Referring to FIG. 11, the rotation center points $P_l$ and $P_r$ of the images of the two feature patterns of the positioning mark 520 in the target image do not coincide with the origin of the pixel coordinate system, and the line connecting $P_l$ and $P_r$ is rotated by an angle with respect to the coordinate axes of the pixel coordinate system. The coordinates of $P_l$ and $P_r$ in the pixel coordinate system are $[u_l, v_l]^T$ and $[u_r, v_r]^T$ respectively. Since the photographing posture of the camera is fixed and the parking rotation error of the material transporting robot 530 is small when the camera acquires the image, the left-right relationship of the two feature patterns can be determined from the positions of $P_l$ and $P_r$ in the target image. The image processing apparatus can thus obtain the coordinates of $P_l$ and $P_r$ in the pixel coordinate system, i.e., the positions of $P_l$ and $P_r$ in the target image, both by a commonly used image processing method and by the image processing method of the present embodiment.
In a commonly used image processing method, the image of the feature pattern is not separated from the image acquired at the first photographing position before the feature templates are made; each feature template contains the image of the entire positioning mark 520, i.e., the images of both $P_l$ and $P_r$. Moreover, when making the feature templates, a conventional image processing method must not only scale the image of the entire positioning mark 520 but also rotate it, and must record the rotation angle of each feature template. During image matching, the conventional method overlaps the center point of the feature template containing the entire positioning mark 520 with a point on the image at the starting position of the target image, moves each feature template step by step from the starting position of the target image to the end position according to a predetermined distance, and calculates the similarity coefficient between the feature template and the image at the corresponding position in the target image. All similarity coefficients are then compared to select the largest one. The conventional method further calculates the translation amount (Δu, Δv) of the feature template from the starting position of the target image to the position corresponding to the maximum similarity coefficient, and, based on this translation amount, the rotation angle ΔR of the feature template corresponding to the maximum similarity coefficient, and the positional relationship of the rotation center points $P_l$ and $P_r$ of the feature-pattern images with the positioning mark 520, calculates the positions of $P_l$ and $P_r$ in the pixel coordinate system, i.e., their positions in the acquired target image.
In the conventional image processing method, the feature templates in the template library must contain the entire positioning mark 520, and since the positioning mark 520 in the target image can have different rotation angles relative to the pixel coordinate system even at the same position, feature templates containing the entire positioning mark 520 rotated by different angles at the same position are also needed, and the rotation angle of each feature template must be recorded. The number of feature templates in the conventional image processing method is therefore large, the number of feature templates used in the calculation is correspondingly large, and the calculation takes a long time, so the conventional method is slow and inefficient. Furthermore, when calculating the positions of $P_l$ and $P_r$ in the pixel coordinate system, the conventional method must take the rotation angle ΔR into account, so any deviation in the calculated translation amount (Δu, Δv) is further amplified in the computed positions of $P_l$ and $P_r$.
In the image processing method of this embodiment, the feature patterns in the positioning mark 520 have rotation invariance: even if the positioning mark 520 in the target image is rotated by a certain angle, a single feature pattern of the positioning mark 520 remains at the same position in the pixel coordinate system, so the template library does not need to contain feature templates of a single feature pattern rotated by different angles at the same position. The number of feature templates in the template library is therefore small, and the number of feature templates used in the calculation is correspondingly small, which shortens the calculation time and improves efficiency. Since a single feature pattern has rotation invariance, calculating the positions of $P_l$ and $P_r$ in the pixel coordinate system does not require the rotation angle ΔR, and even if the calculated translation amount (Δu, Δv) has a certain deviation, subsequent calculation does not further amplify the deviation of the positions of $P_l$ and $P_r$ in the pixel coordinate system; the calculation accuracy is therefore high.
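As an illustration only (not the patent's exact implementation), the scale-only matching described above might be sketched with OpenCV's normalized cross-correlation; the scale set, score threshold and function names below are assumptions:

```python
import cv2

def build_template_library(feature_img, scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    """Scale the separated feature-pattern image by predetermined parameters (no rotation needed)."""
    return [cv2.resize(feature_img, None, fx=s, fy=s) for s in scales]

def locate_rotation_centers(target, templates, n_patterns=2, min_score=0.8):
    """Slide each feature template over the target image and keep the best matches.

    The translation of a matched template directly gives the rotation-center
    position in the target image; no rotation angle enters the calculation.
    """
    candidates = []
    for tmpl in templates:
        res = cv2.matchTemplate(target, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(res)  # best score and its location
        if score >= min_score:
            h, w = tmpl.shape[:2]
            candidates.append((score, (top_left[0] + w // 2, top_left[1] + h // 2)))
    # A full implementation would suppress overlapping candidates before picking.
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [pos for _, pos in candidates[:n_patterns]]
```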
When the feature templates are made, only a scaling operation needs to be performed on the separated images of the feature patterns, and the number of feature templates is small. Table 1 shows the comparison of the calculation times of the two image processing methods (from the material transporting robot 530 photographing at the second photographing position to the calculation of the position of the rotation center point of the image of the feature pattern in the target image) in an environment with an i7-6700HQ CPU. As the table shows, the image processing method of this embodiment clearly saves calculation time and has high calculation efficiency. Here, "global" means that the feature template contains the image of the entire positioning mark 520, while "local" means that the feature template contains only the image of a single feature pattern.
Calculation method    Image size (pixel)    Number of images    Average time (s)
Global                2448×2048             321                 4.5753
Local                 2448×2048             457                 0.8145
Embodiment IV
This embodiment provides a position calibration method for the position calibration system of the third embodiment.
Referring to fig. 12, fig. 12 is a flowchart of a position calibration method of the position calibration system in the third embodiment of the present invention, the position calibration method specifically includes:
Step S110: the material transporting robot drives the image acquisition module (camera) in the image processing apparatus to the first photographing position to photograph the positioning mark and acquire an image, and a template library comprising a plurality of feature templates is generated.
The generating the template library in step S110 may specifically include the following steps:
In step S111, the image processing apparatus separates an image of the feature pattern from the image acquired at the first photographing position.
In step S112, the image processing apparatus scales the image of the separated feature pattern according to a predetermined parameter to obtain at least one feature template.
In step S113, the image processing apparatus generates a template library including all of the feature templates.
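A minimal sketch of steps S111–S113 is given below; the patent does not specify the separation algorithm, so Otsu thresholding and contour extraction are assumed here purely for illustration, as are the scale values:

```python
import cv2

def generate_template_library(first_shot, scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    """S111: separate the feature pattern; S112: scale it; S113: collect the templates."""
    gray = cv2.cvtColor(first_shot, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    feature = gray[y:y+h, x:x+w]                   # S111: separated feature-pattern image
    return [cv2.resize(feature, None, fx=s, fy=s)  # S112/S113: scaled feature templates
            for s in scales]
```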
Step S120 may be performed after step S110 is performed.
In step S120, the material transporting robot drives the image acquisition module (camera) in the image processing apparatus to the second photographing position. Specifically, the mechanical arm drives the mechanical fork to move, and the mechanical fork in turn drives the image acquisition module (camera).
In step S130, the positioning mark is photographed to acquire an image. Depending on the photographing position, the acquired image may contain no feature pattern, some of the feature patterns, or all of the feature patterns. The image acquired at the second photographing position is the target image, and the vertical distance between the second photographing position and the positioning mark is equal to the vertical distance between the first photographing position and the positioning mark.
And then judging whether photographing is successful or not. Specifically, it is determined whether the photographing is successful or not including step S140 to step S170.
Step S140: determining whether the obtained target image meets the image quality requirements; if so, step S150 is executed, and if not, step S300 is executed. The image quality requirements for the target image include whether the image is overexposed or underexposed, whether the sharpness meets requirements, and the like.
Step S150, determining whether the obtained target image includes an image of the feature pattern, if so, executing step S160, and if not, executing step S155.
Specifically, the step S150 includes:
step S151, the rotation center point of each feature template in the template library is made to coincide with a point on the image at the start position of the target image.
Step S152, each characteristic template in the template library is moved to an end position from a starting position of the target image step by step according to a preset distance, and similarity coefficients of the characteristic templates and the images at the corresponding positions in the target image are calculated.
In step S153, all the similarity coefficients are compared with the set similarity coefficient range, and the number of similarity coefficients in the similarity coefficient range is counted.
Step S154, determining whether the number of similarity coefficients within the similarity coefficient range is zero, if not, executing step S160, and if zero, executing step S155.
Step S155: judging whether the current second photographing position is the last photographing position in the scanning track; if so, step S300 is executed, and if not, step S156 is executed;
in step S156, the material transporting robot moves the camera to the next second photographing position in the scanning track, and then step S130 is executed.
Step S160, determining whether the obtained target image includes at least two images of feature patterns, if so, executing step S170, and if not, executing step S161.
Specifically, step S160 is: judging whether the number of similarity coefficients within the similarity coefficient range is not less than 2; if it is not less than 2, step S170 is executed, and if it is equal to 1, step S161 is executed.
Step S161: calculating whether the distance between the rotation center point of the feature pattern in the target image and the edge of the target image is within a set first threshold range [k_1, k_2]; if the distance is within [k_1, k_2], step S162 is executed, and if it is outside [k_1, k_2], step S300 is executed. Here the position of the rotation center point of the feature pattern in the target image is obtained as the translation amount of the feature template from the starting position of the target image to the position corresponding to that rotation center point, i.e., the position of the rotation center point of the image of the feature pattern in the acquired target image. Step S161 thus reveals whether the positioning mark, or the feature pattern, lies at the edge of the target image.
In step S162, a retest photographing position of the image acquisition module (camera) is calculated according to the distance between the rotation center points of the feature patterns in the positioning mark, and the camera is moved to the retest photographing position by the material transporting robot; then step S130 is executed. Since the distance between the feature patterns in the positioning mark is generally known, and the position of the rotation center point of the visible feature pattern in the target image has been obtained by calculation, the positions of all the feature patterns can be deduced, so that the retest photographing position of the image acquisition module (camera) can be calculated. In step S162, the posture and position of the material transporting robot at the time the target image was acquired are known, i.e., the position of the image acquisition module relative to the base is known (it can be calculated from the data read out by the encoder of the mechanical arm). If the camera is moved to the retest photographing position only by changing the posture of the mechanical arm, the spatial position relationship between the base and the retest photographing position can be calculated from the spatial position relationship between the retest photographing position and the photographing position at which the target image was acquired.
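The retest-position computation of step S162 might look like the following sketch; the geometry, names and the metres-per-pixel conversion are all assumptions for illustration only:

```python
def retest_offset(center_px, image_size, mark_spacing_m, m_per_px, pattern_is_left):
    """Estimate the in-plane camera move that would center the whole positioning mark.

    center_px      : (u, v) of the visible pattern's rotation center, in pixels
    mark_spacing_m : known distance between the two pattern centers (machine constant)
    m_per_px       : metres per pixel at the working distance (assumed known)
    """
    u, v = center_px
    w, h = image_size
    # The mark midpoint lies half the spacing to the right (or left) of the visible pattern.
    half = mark_spacing_m / 2.0
    mark_mid_x = u * m_per_px + (half if pattern_is_left else -half)
    mark_mid_y = v * m_per_px
    # Move so that the mark midpoint maps to the image centre.
    return mark_mid_x - (w / 2) * m_per_px, mark_mid_y - (h / 2) * m_per_px
```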
Step S170: calculating whether the distance between the rotation center points of the feature patterns in the target image is within a set second threshold range [k_3, k_4]; if the distance is within [k_3, k_4], step S180 is executed, and if it is not within [k_3, k_4], step S300 is executed. For example, when the number of feature patterns in the positioning mark is two, the distance between the rotation center points of the two feature patterns in the positioning mark is constant, so the distance between the rotation center points of the two feature patterns in the target image should lie within the set second threshold range [k_3, k_4]; otherwise, the image of the positioning mark in the target image deviates too much.
Step S180, calculating the position of the rotation center point of the feature pattern in the target image.
The step S180 specifically includes:
Step S181: from the similarity coefficients within the similarity coefficient range, the largest ones are selected; the number of selected similarity coefficients is equal to the number of feature patterns in the positioning mark. For example, when the number of feature patterns in the positioning mark is two, two similarity coefficients are selected.
Step S182: the translation amount of the feature template from the starting position of the target image to the position corresponding to the selected similarity coefficient in the target image is calculated; this translation amount is the position, in the target image, of the rotation center point of the image of the feature pattern in the acquired target image.
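Steps S181–S182 amount to keeping the k best matches (k = number of feature patterns) and reading their translation amounts directly as positions; a minimal sketch, assuming matches are stored as (similarity, translation) pairs:

```python
def rotation_center_positions(matches, n_patterns):
    """matches: list of (similarity, (du, dv)) pairs within the accepted coefficient range.

    Returns the (du, dv) translations of the n_patterns best matches; each
    translation is directly the rotation-center position in the target image.
    """
    best = sorted(matches, key=lambda m: m[0], reverse=True)[:n_patterns]
    return [translation for _, translation in best]
```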
Step S190, calculating spatial position information of the positioning mark relative to the image acquisition module of the material transportation robot according to the positions of the rotation center points of the images of at least two feature patterns in the target image.
Step S200: the position information between the material transporting robot and the material storage point is calculated according to the spatial position information of the positioning mark relative to the image acquisition module of the material transporting robot, the spatial position information between the image acquisition module and the material transporting robot, and the spatial position information between the positioning mark and the material storage point. The spatial position information between the image acquisition module and the material transporting robot may specifically comprise the spatial position information between the image acquisition module and the mechanical fork, between the mechanical fork and the mechanical arm, and between the mechanical arm and the base; this information can be updated in real time according to the state of the mechanical arm recorded when the target image is photographed.
Step S210: the cassette is acquired from the material storage point on the material storage device, or placed onto the material storage point on the material storage device, according to the position information between the material transporting robot and the material storage point.
Step S220, resetting the material transporting robot.
Step S300: the material transporting robot is reset and a photographing failure is reported. In steps S155 and S156, the scanning track refers to a route connecting a certain number of photographing positions in a certain sequence; the material transporting robot drives the camera to move along the scanning track in that sequence.
One specific case of a scan trajectory is illustrated below.
Since the optical axis of the camera is perpendicular to the plane in which the positioning mark lies, the projection of the scan track onto the plane z = 0 of the camera coordinate system can be seen in fig. 13; fig. 13 is a schematic view of this projection in the fourth embodiment of the present invention. As shown in fig. 13, six predetermined photographing positions are provided in the scanning track. The first predetermined photographing position coincides with the second predetermined photographing position; in a coordinate system X_dO_dY_d centered on the first predetermined photographing position, the third predetermined photographing position lies in the third quadrant, the fourth in the fourth quadrant, the fifth in the first quadrant, and the sixth in the second quadrant. The third and fourth predetermined photographing positions are each 50 mm from the X_d axis and 100 mm from the Y_d axis; the fifth predetermined photographing position is 20 mm from the X_d axis and 50 mm from the Y_d axis; and the sixth predetermined photographing position is 20 mm from the X_d axis and 50 mm from the Y_d axis. Of course, in other embodiments the scan track may be arranged in other ways, which is not limited here.
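Written out from the distances above, the projection of the scan track can be represented as a simple coordinate list (a sketch only; units in mm in the X_dO_dY_d system):

```python
# Projection of the six predetermined photographing positions of Fig. 13
# onto the z = 0 plane, in mm; the 1st and 2nd positions coincide at the origin.
SCAN_TRACK_MM = [
    (0.0, 0.0),       # 1st predetermined position
    (0.0, 0.0),       # 2nd coincides with the 1st
    (-100.0, -50.0),  # 3rd: third quadrant
    (100.0, -50.0),   # 4th: fourth quadrant
    (50.0, 20.0),     # 5th: first quadrant
    (-50.0, 20.0),    # 6th: second quadrant
]

def next_photo_position(index):
    """Return the next photographing position, or None at the end of the track (step S155)."""
    return SCAN_TRACK_MM[index + 1] if index + 1 < len(SCAN_TRACK_MM) else None
```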
For example, in executing step S150, when no feature pattern is found in the image obtained by the image acquisition module (camera) at the first predetermined photographing position on the scanning track, i.e., the number of similarity coefficients within the similarity coefficient range is zero, steps S155 and S156 are executed, followed by step S130; if no feature pattern is found in the image obtained at the second predetermined photographing position either, the image acquisition module (camera) is moved to the third predetermined photographing position for photographing. If the camera still finds no feature pattern in the images obtained at the third, fourth, fifth and sixth predetermined photographing positions, step S300 is executed after step S155.
Because the actual parking position of the material transporting robot deviates from the theoretical position after it reaches the designated station, in extreme cases the target image acquired at the second photographing position may contain no positioning mark at all. For this case, in this embodiment the positioning mark can be scanned; the method of scanning the positioning mark is described in steps S155 and S156.
Likewise, in extreme cases the target image acquired at the second photographing position may contain only part of the positioning mark, for example only one image of a feature pattern. For this case, in this embodiment the positioning mark can be retested; the method of retesting the positioning mark is described in steps S161 and S162.
In the above embodiment, step S110 may belong to a training phase of the position calibration system, a preprocessing process in which the template library is generated. Steps S120 to S200 may belong to the matching phase of the position calibration system. The template library generated in the training phase can be stored and called for matching whenever the position calibration system performs visual positioning, in particular when judging whether photographing is successful. That is, the template library may be generated only once and called each time a match is made, for example each time it is judged in step S150 whether the obtained target image includes an image of the feature pattern. When the template library is called, the template library in memory is typically output and stored in the cache for matching; the template library in the cache is purged before the next matching, and the template library in memory is output to the cache again. In the above embodiment, although all feature templates in the template library participate in matching, the feature templates are obtained simply by scaling the rotation-invariant feature patterns in the positioning mark according to the predetermined parameters, without rotating them, or rotating and scaling them. This reduces the number of feature templates in the template library and the number of feature templates matched against the target image, thereby improving the calculation efficiency of the image processing method.
In the above embodiment, since the feature templates only include the feature pattern having rotation invariance, but not the pattern including the entire positioning mark, the pattern of the feature templates is relatively simple, so that the time for respectively matching each feature template in the template library with the target image can be saved, and the calculation efficiency can be further improved.
In the above embodiment, since slight deformation of the target image affects the calculated position of the rotation center point of the feature-pattern image in the target image, an independent template library would in theory have to be built for each working environment when the pattern of the feature template is complex. Because the pattern of the feature template here is relatively simple, no independent template library needs to be built for each working environment; the universality of the template library is high, the process of making the template library can be simplified accordingly, and the efficiency of making the template library is improved.
In the above embodiment, since the translation amount is itself the position, in the target image, of the rotation center point of the image of the feature pattern in the acquired target image, the rotation center point does not need to be derived through complicated further calculation, which improves the calculation accuracy of the image processing method.
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (15)

1. An image processing method, comprising:
photographing the positioning mark at a first photographing position to acquire an image;
separating an image of a feature pattern from an image acquired at a first photographing position, wherein the positioning mark comprises at least one feature pattern, and the feature pattern has rotation invariance;
scaling the image of the separated feature pattern according to a predetermined parameter to obtain at least one feature template;
generating a template library, wherein the template library comprises all the characteristic templates;
photographing the positioning mark at a second photographing position to obtain an image, wherein the image obtained by photographing at the second photographing position is a target image;
matching each feature template in the template library with the target image respectively, so as to calculate the position of the rotation center point of the image of the feature pattern in the target image; wherein the matching of each feature template in the template library with the target image comprises: overlapping the rotation center point of each feature template in the template library with a point on the image at the starting position of the target image; moving each feature template in the template library step by step from the starting position of the target image to the end position according to a predetermined distance, and calculating the similarity coefficient between the feature template and the image at the corresponding position in the target image; comparing all the similarity coefficients to select the largest similarity coefficient; and calculating the translation amount of the feature template from the starting position of the target image to the position corresponding to the maximum similarity coefficient in the target image, wherein the translation amount is the position, in the target image, of the rotation center point of the image of the feature pattern in the acquired target image.
2. The image processing method of claim 1, wherein a vertical distance between the second photographing position and the positioning mark is equal to a vertical distance between the first photographing position and the positioning mark.
3. The image processing method according to claim 1, wherein a boundary of the target image is filled such that a distance by which the target image increases in a length direction and a distance by which the target image increases in a width direction are equal to the sizes of the feature template in the length and width directions, respectively, before similarity coefficients of the feature template and the image at the corresponding position in the target image are calculated.
4. An image processing apparatus, comprising:
the image acquisition module is used for photographing the positioning mark at a first photographing position to acquire an image and photographing the positioning mark at a second photographing position to acquire an image, wherein the image of the positioning mark acquired at the second photographing position is a target image, and the positioning mark comprises at least one feature pattern with rotation invariance;
an image separation module for separating an image of the feature pattern from an image acquired at the first photographing position;
the template making module is used for scaling the image of the separated feature pattern according to predetermined parameters so as to obtain at least one feature template;
the template library generation module is used for generating a template library, and the template library comprises all the characteristic templates;
the matching module is used for matching each feature template in the template library with the target image respectively, so as to calculate the position of the rotation center point of the image of the feature pattern in the target image; the matching module comprises a similarity coefficient calculation module, a comparison module and a position calculation module; the similarity coefficient calculation module is used for overlapping the rotation center point of each feature template in the template library with a point on the image at the starting position of the target image, moving each feature template in the template library step by step from the starting position of the target image to the end position according to a predetermined distance, and calculating the similarity coefficient between the feature template and the image at the corresponding position in the target image; the comparison module is used for comparing all the similarity coefficients to select the largest similarity coefficient; and the position calculation module is used for calculating the translation amount of the feature template from the starting position of the target image to the position corresponding to the maximum similarity coefficient in the target image, wherein the translation amount is the position, in the target image, of the rotation center point of the image of the feature pattern in the acquired target image.
5. The image processing apparatus according to claim 4, wherein a vertical distance between the second photographing position and the positioning mark is equal to a vertical distance between the first photographing position and the positioning mark.
6. The image processing apparatus according to claim 4, wherein the matching module further includes a filling module that fills a boundary of the target image such that a distance by which the target image increases in a length direction and a distance by which the target image increases in a width direction are equal to the sizes of the feature templates in the length and width directions, respectively.
7. A position calibration system comprising a material storage device, a positioning mark, a material handling robot, a position calculation device, a judgment device, a material, and an image processing device according to any one of claims 4 to 6,
the positioning mark is arranged on the material storage device, the material is stored on a material storage point of the material storage device, and the image acquisition module in the image processing device is arranged on the material transportation robot; the position calculating device is used for calculating the spatial position information of the positioning mark relative to the image acquisition module of the material transportation robot according to the positions, in the target image, of the rotation center points of the images of at least two feature patterns obtained by the image processing device, and for calculating the spatial position information between the material transportation robot and the material storage point according to the spatial position information of the positioning mark relative to the image acquisition module of the material transportation robot, the spatial position information between the image acquisition module and the material transportation robot, and the spatial position information between the positioning mark and the material storage point; the material transportation robot is used for acquiring the material from the material storage point on the material storage device or placing the material onto the material storage point on the material storage device according to the spatial position information between the material transportation robot and the material storage point.
8. The position calibration system of claim 7, wherein the material handling robot comprises a controller, a mechanical arm, a mechanical fork and a base, wherein the mechanical arm is arranged on the base, the mechanical fork is arranged at the tail end of the mechanical arm, the image acquisition module is arranged on the mechanical fork, and the controller can control the mechanical arm to drive the mechanical fork to acquire and place materials.
9. A position calibration method is characterized in that a positioning mark is arranged on a material storage device, a material is stored on a material storage point of the material storage device, an image acquisition module in the image processing device according to any one of claims 4-6 is arranged on a material transportation robot,
the position calibration method comprises the following steps:
the material transportation robot drives the image acquisition module in the image processing device to a first photographing position to photograph the positioning mark and acquire an image, and a template library is generated, wherein the template library comprises a plurality of feature templates;
the material transportation robot drives an image acquisition module in the image processing device to a second photographing position, and an image acquired at the second photographing position is a target image;
photographing the positioning mark to acquire an image;
Judging whether photographing is successful, if so, calculating the position of the rotation center point of the feature pattern in the target image, if not, resetting the material transportation robot, and simultaneously reporting photographing failure;
calculating the spatial position information of the positioning mark relative to an image acquisition module of the material transportation robot according to the positions of rotation center points of images of at least two feature patterns in the target image;
calculating position information between the material transporting robot and the material storage point according to the spatial position information of the positioning mark relative to the image acquisition module of the material transporting robot, the spatial position information of the image acquisition module and the material transporting robot and the spatial position information of the positioning mark and the material storage point;
and acquiring the wafer box from the material storage point on the material storage device or placing the wafer box to the material storage point on the material storage device according to the position information between the material transportation robot and the material storage point.
10. The position calibration method of claim 9, wherein a vertical distance between the second photographing position and the positioning mark is equal to a vertical distance between the first photographing position and the positioning mark.
11. The method of positional calibration as defined in claim 9, wherein generating a template library comprises,
the image processing device separates an image of the feature pattern from an image acquired at the first photographing position;
the image processing device zooms the image of the separated characteristic graph according to preset parameters to obtain at least one characteristic template;
the image processing device generates a template library, wherein the template library comprises all the characteristic templates.
12. The position calibration method of claim 9, wherein determining whether the photograph was successful comprises:
judging whether the obtained target image comprises images of feature patterns or not, if yes, judging whether the obtained target image comprises at least two images of the feature patterns or not, and if not, scanning the positioning mark;
judging whether the obtained target image comprises at least two images of the feature patterns, if so, calculating whether the distance between the rotation center points of the feature patterns in the target image is within a set second threshold range, and if not, retesting the positioning mark;
Calculating whether the distance between the rotation center points of the feature patterns in the target image is within a set second threshold range, if the distance between the rotation center points of the feature patterns is within the set second threshold range, calculating the position of the rotation center points of the feature patterns in the target image, and if the distance between the rotation center points of the feature patterns is not within the set second threshold range, resetting the material transportation robot, and reporting photographing failure.
13. The position calibration method according to claim 12, wherein determining whether the obtained target image includes an image of a feature pattern includes:
overlapping a rotation center point of each feature template in the template library with a point on the image at the starting position of the target image;
each characteristic template in the template library is moved to an end position from a starting position of a target image step by step according to a preset interval, and a similarity coefficient of the characteristic template and the image at a corresponding position in the target image is calculated;
comparing all the similarity coefficients with a set similarity coefficient range, and counting the number of the similarity coefficients in the similarity coefficient range;
judging whether the number of the similarity coefficients in the similarity coefficient range is zero, if not, judging whether the obtained target image comprises images of at least two feature patterns, and if so, scanning the positioning mark.
14. The position calibration method of claim 12, wherein scanning the positioning mark comprises:
and judging whether the current second photographing position is the last photographing position in the scanning track, if so, resetting the material transportation robot, reporting photographing failure at the same time, if not, moving the camera to the next second photographing position in the scanning track by the material transportation robot, and photographing the positioning mark to acquire an image.
15. The position calibration method of claim 12, wherein retesting the positioning indicia comprises:
calculating whether the distance between the position of the rotation center point of the feature pattern in the target image and the edge of the target image is within a set first threshold range or not, if the distance between the position of the rotation center point of the feature pattern in the target image and the edge of the target image is within the set first threshold range, calculating a retest photographing position of the image acquisition module according to the distance between the rotation center points of the feature pattern in the positioning mark, moving a camera to the retest photographing position through the material transportation robot, and photographing the positioning mark to acquire the image; and if the distance is out of a set first threshold range, resetting the material transportation robot, and reporting photographing failure.
CN201811163285.5A 2018-09-30 2018-09-30 Image processing device and method, and position calibration system and method Active CN110969661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811163285.5A CN110969661B (en) 2018-09-30 2018-09-30 Image processing device and method, and position calibration system and method

Publications (2)

Publication Number Publication Date
CN110969661A CN110969661A (en) 2020-04-07
CN110969661B true CN110969661B (en) 2023-11-17

Family

ID=70029489

Country Status (1)

Country Link
CN (1) CN110969661B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113847868B (en) * 2021-08-05 2024-04-16 乐仓信息科技有限公司 Positioning method and system for material bearing device with rectangular support legs
CN113982276B (en) * 2021-11-03 2023-03-14 广东天凛高新科技有限公司 Method and device for accurately positioning cast-in-place wall robot
CN115026821B (en) * 2022-06-14 2024-08-27 广东天太机器人有限公司 Robot control method and system based on high-efficiency performance servo
CN115366110A (en) * 2022-09-26 2022-11-22 杭州海康机器人股份有限公司 Mechanical arm control method and device, mechanical arm and unstacking system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014017337A1 (en) * 2012-07-27 2014-01-30 株式会社日立ハイテクノロジーズ Matching process device, matching process method, and inspection device employing same
CN107110654A (en) * 2015-01-15 2017-08-29 弗劳恩霍夫应用研究促进协会 Location equipment and the method for positioning
CN107742311A (en) * 2017-09-29 2018-02-27 北京易达图灵科技有限公司 A kind of method and device of vision positioning
CN108122256A (en) * 2017-12-25 2018-06-05 北京航空航天大学 It is a kind of to approach under state the method for rotating object pose measurement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Liangfu; Chen Weidong; Feng Zuren; Zheng Baozhong. Research on visual calibration algorithms in target tracking and positioning. Journal of Applied Optics, 2008, (04), full text. *

Also Published As

Publication number Publication date
CN110969661A (en) 2020-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant