
CN109829953B - Image acquisition device calibration method and device, computer equipment and storage medium - Google Patents

Image acquisition device calibration method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN109829953B
CN109829953B (application CN201910146310.7A)
Authority
CN
China
Prior art keywords
robot
coordinate
coordinates
rotation center
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910146310.7A
Other languages
Chinese (zh)
Other versions
CN109829953A (en)
Inventor
孙高磊
张文刚
梅能华
李相前
罗小军
吴丰礼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Topstar Technology Co Ltd
Original Assignee
Guangdong Topstar Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Topstar Technology Co Ltd
Priority to CN201910146310.7A
Publication of CN109829953A
Priority to PCT/CN2020/072491 (published as WO2020173240A1)
Application granted
Publication of CN109829953B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a method and a device for calibrating an image acquisition device, a computer device, and a storage medium. The method in one embodiment comprises: acquiring the pixel coordinates of preset feature points and the corresponding robot coordinates when a robot translates to different positions of an area to be calibrated; obtaining a translation position relationship between the pixel coordinates of the preset feature points and the robot coordinates from those coordinate pairs; acquiring the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot rotates to different angles; obtaining a rotation center position from those pairs; and obtaining a plane conversion relationship between the robot and the image acquisition device from the translation position relationship and the rotation center position, so as to calibrate the image acquisition device.

Description

Image acquisition device calibration method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a method and an apparatus for calibrating an image capture device, a computer device, and a storage medium.
Background
When a robot vision positioning device grabs a product, hand-eye calibration is usually performed with a camera installed at the end of the robot, i.e. the camera moves with the robot, and a tool point is corrected at the robot end. When the position of the product to be grabbed varies, the robot drives the camera so that the product falls within the camera's field of view. The coordinates of points in the product image are then converted into the robot tool coordinate system, guiding the robot to grab the product at the designated position and ensuring consistent grab positions.
In practice, robot tool points are usually calibrated with 3 or 4 points and checked by eye, so the calibrated tool points carry errors. Their precision is difficult to guarantee, which degrades the camera calibration and leaves the final robot vision positioning device with low precision.
Disclosure of Invention
In view of the above, it is necessary to provide a camera calibration method, apparatus, computer device and storage medium capable of improving calibration accuracy.
An image acquisition device calibration method, wherein the image acquisition device is applied to a robot visual positioning device, and the method comprises the following steps:
acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated;
obtaining a translation position relation between the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the corresponding robot coordinates;
acquiring pixel coordinates of the preset feature points when the robot rotates to different angles and corresponding robot coordinates;
obtaining a rotation center position according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates;
and obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position so as to calibrate the image acquisition device.
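As an illustration of the coordinate relationships above, both the translation position relationship and the plane conversion relationship can be modeled as a planar transform A between homogeneous pixel coordinates [u, v, 1] and robot coordinates [x, y, 1], fitted by least squares. The sketch below is not the patented implementation; the function name fit_affine and the numeric coordinate pairs are assumed example values:

```python
import numpy as np

def fit_affine(pixels, robots):
    """Least-squares fit of a 3x3 transform A with [u, v, 1] @ A = [x, y, 1]."""
    P = np.column_stack([pixels, np.ones(len(pixels))])  # homogeneous pixel coords
    Q = np.column_stack([robots, np.ones(len(robots))])  # homogeneous robot coords
    return np.linalg.lstsq(P, Q, rcond=None)[0]

# Three recorded (pixel, robot) coordinate pairs: synthetic example values.
pixels = np.array([[100.0, 100.0], [400.0, 120.0], [120.0, 380.0]])
robots = np.array([[10.0, 20.0], [40.2, 21.5], [11.8, 48.0]])
A = fit_affine(pixels, robots)

# Map a new pixel coordinate into robot coordinates through the fitted relation.
print(np.array([250.0, 250.0, 1.0]) @ A)  # -> approximately [x, y, 1]
```

With three non-collinear point pairs the fit is exact; with more pairs the least-squares solution averages out measurement noise.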
In an embodiment, the obtaining a translation position relationship between the pixel coordinate of the preset feature point and the robot coordinate according to the pixel coordinate of the preset feature point and the corresponding robot coordinate includes:
obtaining a rough translation position relation according to pixel coordinates of the preset characteristic points and corresponding robot coordinates when the robot translates to different preset positions along the X-axis direction and the Y-axis direction respectively in a region to be calibrated, wherein the X-axis and the Y-axis are mutually perpendicular axes in the same horizontal plane of the three-dimensional space coordinates;
equally dividing the area to be calibrated into a preset number of sub-areas, and respectively extracting the pixel coordinates of the centers of the sub-areas;
obtaining a corresponding rough position according to the rough translation position relation and the pixel coordinates of the centers of the sub-regions;
acquiring the pixel coordinates of the preset feature points when the robot reaches the rough position;
and correcting the rough translation position relation according to the pixel coordinates of the preset feature points and the coordinates of the corresponding rough position when the robot reaches the rough position to obtain an accurate translation position relation.
In an embodiment, the obtaining a rough translation position relationship according to the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot translates to the preset different positions in the to-be-calibrated region along the X-axis direction and the Y-axis direction, respectively, includes:
acquiring a first pixel coordinate of a preset characteristic point and a corresponding first robot coordinate when the robot translates from a current position to a preset first position along an X-axis direction, wherein the preset first position is acquired by a bisection method;
acquiring a second pixel coordinate of the preset characteristic point and a corresponding second robot coordinate when the robot translates from the current position to a preset second position along the Y-axis direction, wherein the preset second position is acquired by a bisection method;
and obtaining a rough translation position relation according to the first pixel coordinate of the preset feature point, the corresponding first robot coordinate, the second pixel coordinate of the preset feature point and the corresponding second robot coordinate.
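As a sketch of how such a preset position could be obtained by bisection: the step is enlarged while the feature point remains findable in the image and shrunk once it leaves the field of view. The predicate feature_visible_after_move stands in for the actual camera search and is an assumption, as are the bounds:

```python
def find_translation_step(feature_visible_after_move, dx_max, tol=0.5, max_iter=30):
    """Bisection on the translation distance: return the largest step (up to dx_max)
    for which the circular feature point is still found in the camera view."""
    lo, hi = 0.0, dx_max           # lo is always a visible step; hi may not be
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        if feature_visible_after_move(mid):
            lo = mid               # feature still in view: try a larger step
        else:
            hi = mid               # feature lost: shrink the step
        if hi - lo < tol:
            break
    return lo

# Simulated camera: the feature stays visible for moves up to 37.2 mm.
print(find_translation_step(lambda dx: dx <= 37.2, dx_max=100.0))  # ~37.2
```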
In one embodiment, the obtaining a rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates includes:
acquiring a rotation center conversion relation and a rough rotation center, wherein the rotation center conversion relation is used for representing the relation between the pixel coordinate of the preset feature point and the robot coordinate when the robot rotates;
obtaining corresponding robot coordinates according to the rotation center conversion relation and the pixel coordinates of the preset feature points when the robot rotates to different angles at the initial position by a preset first step length;
acquiring pixel coordinates of the preset feature points when the robot reaches the position of the corresponding robot coordinate;
obtaining a robot coordinate error according to the pixel coordinates of the preset feature points when the robot reaches the position of the corresponding robot coordinate and the pixel coordinates of the preset feature points when the robot is at the initial position;
and fitting the coordinate error in the full-angle range, and compensating the fitted coordinate error to the coordinate where the rough rotation center is located to obtain the position of the precise rotation center.
In an embodiment, after compensating the fitted coordinate error to the coordinate where the rough rotation center is located to obtain the position of the precise rotation center, the method further includes:
calculating a difference value between the coordinate of the accurate rotation center position and the initial position robot coordinate;
the obtaining of the plane conversion relationship between the robot and the image acquisition device according to the translation position relationship and the rotation center position includes:
and obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation, the rotation center position and the difference value.
In one embodiment, the obtaining the rotation center conversion relationship and the coarse rotation center includes:
fitting the pixel coordinates to obtain a rough rotation center according to the pixel coordinates of the preset feature points when the robot rotates to different angles at the initial position by a preset second step length;
calculating a rotation error between the coordinate of the rough rotation center and the pixel coordinate of the preset feature point at the initial position;
and obtaining a rotation center conversion relation according to the rotation error, the pixel coordinate and the corresponding robot coordinate.
In an embodiment, before fitting the pixel coordinates to obtain the rough rotation center according to the pixel coordinates of the preset feature point when the robot rotates to a different angle at the initial position with the preset second step length, the method further includes:
acquiring the maximum forward angle of the preset characteristic point searched by the image acquisition device when the robot rotates around the Z-axis forward direction in the area to be calibrated, and acquiring the maximum reverse angle of the preset characteristic point searched by the image acquisition device when the robot rotates around the Z-axis in the reverse direction, wherein the Z-axis is an axis perpendicular to a horizontal plane in three-dimensional space coordinate axes;
and obtaining a second step length of the robot according to the maximum forward angle and the maximum reverse angle.
An image acquisition device calibration device, the image acquisition device is applied to a robot vision positioning device, the device comprises:
the first information acquisition module is used for acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated;
the translation relation acquisition module is used for acquiring a translation position relation between the pixel coordinate of the preset feature point and the robot coordinate according to the pixel coordinate of the preset feature point and the corresponding robot coordinate;
the second information acquisition module is used for acquiring pixel coordinates of the preset feature points when the robot rotates to different angles and corresponding robot coordinates;
the rotation center acquisition module is used for acquiring the position of a rotation center according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates;
and the calibration information acquisition module is used for acquiring the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position so as to calibrate the image acquisition device.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated;
obtaining a translation position relation between the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the corresponding robot coordinates;
acquiring pixel coordinates of the preset feature points when the robot rotates to different angles and corresponding robot coordinates;
obtaining a rotation center position according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates;
and obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position so as to calibrate the image acquisition device.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated;
obtaining a translation position relation between the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the corresponding robot coordinates;
acquiring pixel coordinates of the preset feature points when the robot rotates to different angles and corresponding robot coordinates;
obtaining a rotation center position according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates;
and obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position so as to calibrate the image acquisition device.
According to the calibration method, apparatus, computer device, and storage medium above, the pixel coordinates of the preset feature points and the corresponding robot coordinates, recorded while the robot translates to different positions of the area to be calibrated, yield the translation position relationship between the two; the rotation center position is then obtained from the pixel coordinates of the preset feature points and the corresponding robot coordinates recorded while the robot rotates to different angles; and the plane conversion relationship between the robot and the image acquisition device follows from the translation position relationship and the rotation center position, calibrating the image acquisition device. The feature point only needs to be placed within the field of view of the image acquisition device; no manual correction of robot tool points is required. The scheme improves calibration precision, and hence the precision of the robot vision positioning equipment, requires no additional auxiliary hardware, is simple and efficient, and greatly reduces debugging difficulty for operators.
Drawings
FIG. 1 is a diagram illustrating an exemplary embodiment of a calibration method for an image capture device;
FIG. 2 is a schematic flow chart illustrating a method for calibrating an image capture device according to an embodiment;
FIG. 3 is a flowchart illustrating a translational positional relationship obtaining step according to an embodiment;
FIG. 4 is a schematic representation of a translation of a robot position in an XY axis direction in one embodiment;
FIG. 5 is a schematic diagram of equally dividing the area to be calibrated in one embodiment;
FIG. 6 is a flowchart illustrating a rotation center position obtaining step in one embodiment;
FIG. 7 is a flowchart illustrating a rotation center transformation relation obtaining step according to an embodiment;
FIG. 8 is a schematic diagram of circular feature points as the robot rotates to different angles in one embodiment;
FIG. 9 is a schematic illustration of the position of the robot as it rotates in one embodiment;
FIG. 10 is a schematic flow chart illustrating a calibration method for an image capturing device according to another embodiment;
FIG. 11 is a schematic diagram illustrating an exemplary implementation of a calibration method for an image capture device;
FIG. 12 is a block diagram showing the structure of a calibration apparatus for an image capturing apparatus according to an embodiment;
FIG. 13 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image acquisition device calibration method provided by the present application can be applied in the environment shown in fig. 1. In the process of grabbing an object by the robot vision positioning device, the end of the robot 10 comprises a movable tool arm 11; the image acquisition device 20 and the standard jig 30 can be installed at the end of the robot 10 through the tool arm 11, and a fixed feature point 40 is placed below the end of the robot 10. The image acquisition device may be a camera.
In an embodiment, as shown in fig. 2, there is provided an image acquisition device calibration method, described taking as an example its application to the image acquisition device of the robot vision positioning apparatus in fig. 1. The method includes the following steps:
step 202, acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated.
Step 204, obtaining a translation position relation between the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the corresponding robot coordinates.
The robot is moved from its current coordinate position by a relative offset along the X axis, the given circular feature point is searched for in the camera image, and the robot coordinates and the center-image coordinates of the circular feature point at that moment are recorded. In the same way, the robot is moved from the current coordinate position by a relative offset along the Y axis, the given circular feature point is searched for, and the robot coordinates and the center-image coordinates of the feature point are recorded. From these, the rough translation position conversion relationship between the pixel coordinates of the circular feature point and the robot coordinates is obtained.
The area to be calibrated is equally divided into a preset number of sub-areas, the pixel coordinates of each sub-area center are extracted, and the rough positions of the robot movement are calculated from the rough translation position conversion relationship and those pixel coordinates. The robot is moved to each calculated rough position, and the pixel coordinates of the circular feature point at each rough position are recalculated by image recognition, thereby obtaining the precise translation position relationship between the feature-point pixel coordinates and the robot coordinates.
Step 206, acquiring the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot rotates to different angles.
Step 208, obtaining the position of the rotation center according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates.
The robot drives the camera to rotate in preset steps; the arc traced by the center of the circular feature point at the different angles is obtained, and the center coordinates of the arc are fitted to give the robot's rough rotation center. The difference between the circular feature point and the robot's initial rotation position is then calculated, yielding the rotation center conversion relationship between the pixel coordinates of the circular feature point and the robot coordinates during rotation.
The robot then drives the camera to rotate by another fixed step length. At each rotation, the robot coordinates corresponding to the current pixel coordinates of the circular feature point are obtained through the rotation center conversion relationship; the difference between this coordinate and the robot's actual coordinate is calculated and compensated into the robot coordinates, and the robot is moved to the specified position. After the robot moves into place, the robot coordinate error is obtained from the pixel coordinates of the circular feature point at that moment and its pixel coordinates at the robot's initial position, and this error is compensated into the robot coordinates. After the robot completes rotation through the 360-degree range, the rotation center of all error-compensated robot coordinates is fitted, and the difference between the fitted rotation center coordinate and the initial position coordinate is calculated, so that the robot can rotate accurately about the initial position of the circular feature point.
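The loop described in the preceding paragraph can be sketched as follows. The interfaces move_robot, find_feature_pixel, and pixel_error_to_robot_error are hypothetical stand-ins for the robot controller and the image recognition (none of these names come from the patent), and the least-squares center fit mirrors the one used for the rough center:

```python
import numpy as np

def refine_rotation_center(rough_center, start_xy, initial_pixel, step_deg,
                           move_robot, find_feature_pixel, pixel_error_to_robot_error):
    """Rotate through 360 degrees in fixed steps; at each step command the pose
    rotated about the rough center, measure the feature point's image error,
    compensate it into the robot coordinate, then fit the rotation center of
    all compensated coordinates by least squares."""
    compensated = []
    dx, dy = start_xy[0] - rough_center[0], start_xy[1] - rough_center[1]
    for ang in np.deg2rad(np.arange(0.0, 360.0, step_deg)):
        target = np.array([rough_center[0] + np.cos(ang) * dx - np.sin(ang) * dy,
                           rough_center[1] + np.sin(ang) * dx + np.cos(ang) * dy])
        move_robot(target, np.rad2deg(ang))
        # Convert the measured pixel deviation into a robot coordinate error.
        err = pixel_error_to_robot_error(find_feature_pixel() - initial_pixel)
        compensated.append(target + err)
    pts = np.array(compensated)
    # Algebraic least-squares fit of the circle through the compensated points.
    M = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    a, b, _ = np.linalg.lstsq(M, -(pts[:, 0] ** 2 + pts[:, 1] ** 2), rcond=None)[0]
    return np.array([-a / 2.0, -b / 2.0])  # precise rotation center (x_c, y_c)
```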
Step 210, obtaining a plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position, so as to calibrate the image acquisition device.
This calibration method obtains the translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates from the coordinate pairs recorded while the robot translates to different positions of the area to be calibrated, obtains the rotation center position from the pairs recorded while the robot rotates to different angles, and derives the plane conversion relationship between the robot and the image acquisition device from the translation position relationship and the rotation center position to calibrate the device. No manual calibration of a robot tool point is needed; the feature point only has to be placed in the field of view of the image acquisition device. The scheme improves calibration precision and thus the precision of the robot vision positioning equipment, needs no additional auxiliary hardware, is simple and efficient, and greatly reduces the operator's debugging difficulty.
In one embodiment, as shown in fig. 3, obtaining a translation position relationship between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates includes: Step 302, obtaining a rough translation position relationship according to the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot translates to different preset positions in the area to be calibrated along the X-axis direction and the Y-axis direction respectively, where the X axis and the Y axis are mutually perpendicular axes in the same horizontal plane of the three-dimensional space coordinates; Step 304, equally dividing the area to be calibrated into a preset number of sub-areas and extracting the pixel coordinates of the center of each sub-area; Step 306, obtaining a corresponding rough position according to the rough translation position relationship and the pixel coordinates of the sub-area centers; Step 308, acquiring the pixel coordinates of the preset feature points when the robot reaches the rough position; and Step 310, correcting the rough translation position relationship according to the pixel coordinates of the preset feature points when the robot reaches the rough position and the coordinates of the corresponding rough position, to obtain the precise translation position relationship.
In one embodiment, obtaining the rough translation position relationship, according to the pixel coordinates of the preset feature points and the corresponding robot coordinates when the robot translates to the different preset positions in the area to be calibrated along the X-axis direction and the Y-axis direction respectively, includes: acquiring a first pixel coordinate of the preset feature point and the corresponding first robot coordinate when the robot translates from the current position to a preset first position along the X-axis direction, the preset first position being obtained by bisection; acquiring a second pixel coordinate of the preset feature point and the corresponding second robot coordinate when the robot translates from the current position to a preset second position along the Y-axis direction, the preset second position being obtained by bisection; and obtaining the rough translation position relationship from the first pixel coordinate and first robot coordinate together with the second pixel coordinate and second robot coordinate. As shown in fig. 4, from the current robot coordinate Q_0(x_0, y_0) the position is moved by a relative offset dx along the X axis, the value of dx being adjusted by bisection; the given circular feature point is searched for in the camera image, and the robot coordinate Q_0(x_1, y_1) and the feature-point center-image coordinate P_0(u_1, v_1) are recorded. In the same way, from Q_0(x_0, y_0) the position is moved by a relative offset dy along the Y axis, with dy adjusted by bisection; the given circular feature point is searched for, and the robot coordinate Q_0(x_2, y_2) and the center-image coordinate P_0(u_2, v_2) are recorded. The conversion relationship A_0 between P_0 and Q_0 is then calculated such that P_0 · A_0 = Q_0, namely:

$$\begin{bmatrix} u_0 & v_0 & 1 \\ u_1 & v_1 & 1 \\ u_2 & v_2 & 1 \end{bmatrix} A_0 = \begin{bmatrix} x_0 & y_0 & 1 \\ x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \end{bmatrix}$$
As shown in fig. 5, the area to be calibrated is equally divided into 9 regions within the camera's field of view, and the center coordinates of each region, P_1'(u_0, v_0), P_1'(u_1, v_1), …, P_1'(u_i, v_i) with 0 ≤ i ≤ 8, are extracted. Through the conversion matrix A_0, the rough positions of the robot movement, Q_1(x_0, y_0), Q_1(x_1, y_1), …, Q_1(x_i, y_i), are calculated respectively:

$$\begin{bmatrix} u_i & v_i & 1 \end{bmatrix} A_0 = \begin{bmatrix} x_i & y_i & 1 \end{bmatrix}, \qquad 0 \le i \le 8$$
The robot is moved to each coordinate position Q_1(x_0, y_0), Q_1(x_1, y_1), …, Q_1(x_i, y_i), and the corresponding center-image coordinates of the circular feature point, P_1(u'_0, v'_0), P_1(u'_1, v'_1), …, P_1(u'_i, v'_i), are recalculated by image recognition. From P_1(u'_0, v'_0), …, P_1(u'_i, v'_i), the coordinates P_1(u_0, v_0), P_1(u_1, v_1), …, P_1(u_i, v_i) that these points would take with the initial image position P_0(u_0, v_0) unchanged are calculated.
The conversion relationship A_1 between P_1 and Q_1 is calculated such that P_1 · A_1 = Q_1, namely:

$$\begin{bmatrix} u_i & v_i & 1 \end{bmatrix} A_1 = \begin{bmatrix} x_i & y_i & 1 \end{bmatrix}, \qquad 0 \le i \le 8$$
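A sketch of this two-stage refinement under the same homogeneous model: the rough relationship A_0 predicts the robot positions for the nine sub-region centers, the feature point is re-measured at each position (simulated here with small residuals), and the precise relationship A_1 is refitted. All numeric values are assumed:

```python
import numpy as np

def fit_affine(pixels, robots):
    """Least-squares fit of A in [u, v, 1] @ A = [x, y, 1]."""
    P = np.column_stack([pixels, np.ones(len(pixels))])
    Q = np.column_stack([robots, np.ones(len(robots))])
    return np.linalg.lstsq(P, Q, rcond=None)[0]

# Rough relationship A0 from the initial position and the X/Y translation samples.
A0 = fit_affine(np.array([[0.0, 0.0], [300.0, 5.0], [4.0, 300.0]]),
                np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]]))

# Nine sub-region centers in pixels; predict the rough robot positions with A0.
centers = np.array([[u, v] for u in (80.0, 320.0, 560.0) for v in (80.0, 320.0, 560.0)])
rough = (np.column_stack([centers, np.ones(9)]) @ A0)[:, :2]

# After moving to each rough position, re-identify the feature point in the image
# (simulated here with small residuals) and refit the precise relationship A1.
measured = centers + np.random.default_rng(0).normal(0.0, 0.3, centers.shape)
A1 = fit_affine(measured, rough)
```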
In one embodiment, as shown in fig. 6, obtaining the position of the rotation center according to the pixel coordinates of the preset feature point at different angles and the corresponding robot coordinates includes: Step 602, obtaining a rotation center conversion relationship and a rough rotation center, where the rotation center conversion relationship represents the relationship between the pixel coordinates of the preset feature points and the robot coordinates when the robot rotates; Step 604, obtaining the corresponding robot coordinates according to the rotation center conversion relationship and the pixel coordinates of the preset feature points when the robot rotates to different angles from the initial position with a preset first step length; Step 606, acquiring the pixel coordinates of the preset feature points when the robot reaches the position of the corresponding robot coordinate; Step 608, obtaining a robot coordinate error from the pixel coordinates of the preset feature points when the robot reaches the corresponding robot coordinate position and the pixel coordinates of the preset feature points at the initial position; and Step 610, fitting the coordinate errors over the full angle range and compensating the fitted coordinate errors to the coordinates of the rough rotation center to obtain the position of the precise rotation center.
In one embodiment, as shown in fig. 7, obtaining the rotation center conversion relationship and the coarse rotation center includes: step 702, fitting pixel coordinates to obtain a rough rotation center according to the pixel coordinates of preset feature points when the robot rotates to different angles at an initial position by a preset second step length; step 704, calculating a rotation error between the coordinates of the rough rotation center and the pixel coordinates of the preset feature point at the initial position; and step 706, obtaining a rotation center conversion relation according to the rotation error, the pixel coordinate and the corresponding robot coordinate.
In one embodiment, before fitting the pixel coordinates to obtain the rough rotation center according to the pixel coordinates of the preset feature points when the robot rotates to different angles at the initial position with the preset second step length, the method further includes: acquiring the maximum forward angle of a preset characteristic point searched by an image acquisition device when the robot rotates around the Z-axis forward direction in an area to be calibrated, and the maximum reverse angle of the preset characteristic point searched by the image acquisition device when the robot rotates around the Z-axis in the reverse direction, wherein the Z-axis is an axis perpendicular to a horizontal plane in a three-dimensional space coordinate axis; and obtaining a second step length of the robot according to the maximum forward angle and the maximum reverse angle.
At position Q_0(x_0, y_0) the robot rotates about the Z axis in the positive and negative directions over the angle ranges 0 to dr1 and 0 to dr2, finding the maximum angles dr1 and dr2 at which the camera can still find the circular feature. A suitable step length (dr1 + dr2)/m is set, and the image coordinates of all found circular feature points, P_2(u_0, v_0), P_2(u_1, v_1), …, P_2(u_m, v_m) with 0 ≤ m ≤ n and n ≥ 11, are recorded, as shown in fig. 8. All rotated image coordinates of the circular feature point are fitted by least squares to obtain the circle center P_0(u'_c, v'_c), and the coordinate differences D'_x and D'_y between P_0(u_0, v_0) and the rotation center P_0(u'_c, v'_c) are calculated:

$$D'_x = u_0 - u'_c, \qquad D'_y = v_0 - v'_c$$
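The text only states that a least-squares fit is used; one common choice is the algebraic (Kasa) circle fit, sketched below on an assumed synthetic arc:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit: returns (center_u, center_v, radius)."""
    u, v = points[:, 0], points[:, 1]
    # Solve u^2 + v^2 + a*u + b*v + c = 0 for a, b, c in the least-squares sense.
    M = np.column_stack([u, v, np.ones_like(u)])
    a, b, c = np.linalg.lstsq(M, -(u**2 + v**2), rcond=None)[0]
    uc, vc = -a / 2.0, -b / 2.0
    return uc, vc, np.sqrt(uc**2 + vc**2 - c)

# Synthetic arc: feature-point centers recorded while rotating about (320, 240).
angles = np.linspace(-0.4, 0.6, 12)
pts = np.column_stack([320 + 150 * np.cos(angles), 240 + 150 * np.sin(angles)])
uc, vc, r = fit_circle(pts)
print(uc, vc, r)                          # ~ (320, 240, 150)
print(pts[0, 0] - uc, pts[0, 1] - vc)     # coordinate differences D'_x, D'_y
```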
The conversion relationship A_2 between the rotation-center image coordinates P_2 and the robot coordinates Q_1 is calculated such that P_2 · A_2 = Q_1, namely:

$$\begin{bmatrix} u_m & v_m & 1 \end{bmatrix} A_2 = \begin{bmatrix} x_m & y_m & 1 \end{bmatrix}$$
The robot returns to the initial position Q_0(x_0, y_0) and rotates about the Z axis over a relative angle range of 0 to 360 degrees with a fixed step length dθ. Through the rotation-center conversion relationship A_2, the robot coordinates Q_2(x_0, y_0), Q_2(x_1, y_1), …, Q_2(x_m, y_m) (0 ≤ m ≤ n, n ≥ 11) at which the center-image coordinate of the circular feature point, P_0(u_0, v_0), remains unchanged are calculated. The robot coordinate corresponding to P_0(u_0, v_0) is (x_c, y_c):

$$\begin{bmatrix} x_c & y_c & 1 \end{bmatrix} = \begin{bmatrix} u_0 & v_0 & 1 \end{bmatrix} A_2$$

The coordinates (x_m, y_m) of the robot rotating about P_0(u_0, v_0) are:

$$\begin{aligned} x_m &= (x_0 - x_c)\cos(m\,d\theta) - (y_0 - y_c)\sin(m\,d\theta) + x_c \\ y_m &= (x_0 - x_c)\sin(m\,d\theta) + (y_0 - y_c)\cos(m\,d\theta) + y_c \end{aligned}$$
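The rotation-about-a-point relation above transcribes directly into code; the center, start pose, and step below are assumed example values:

```python
import numpy as np

def rotate_about_center(x0, y0, xc, yc, theta_rad):
    """Robot coordinate after rotating the pose (x0, y0) by theta about (xc, yc)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return (xc + c * (x0 - xc) - s * (y0 - yc),
            yc + s * (x0 - xc) + c * (y0 - yc))

# Poses for steps m = 0..11 of d_theta = 30 degrees about the center (50.0, 40.0).
poses = [rotate_about_center(60.0, 40.0, 50.0, 40.0, np.deg2rad(30 * m))
         for m in range(12)]
```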
The robot is moved to each calculated coordinate position Q_2(x_0, y_0), Q_2(x_1, y_1), …, Q_2(x_m, y_m). After the robot moves into place, the robot coordinate error between the actual coordinates of the circular feature point and the theoretical coordinates P_0(u_0, v_0) is calculated and compensated into the robot coordinates. At each position, the corresponding center-image coordinates of the circular feature point, P_3(u_0, v_0), P_3(u_1, v_1), …, P_3(u_m, v_m), are identified, and the robot coordinate error between the actual center-image coordinate values and the theoretical value P_0(u_0, v_0) is calculated. From the rough translation position conversion data, the robot coordinates after rotation are calculated, giving the conversion relationship between the pixel coordinates of the circular feature point and the robot coordinates after rotation. The coordinates of the robot Q_0 rotated about P_0(u_0, v_0) are written Q'_0(x_0, y_0), Q'_0(x_1, y_1), …, Q'_0(x_i, y_i) (i = 1, 2, 3), where x_0, y_0 are the coordinate values of the robot coordinate Q_0; u_0, v_0 are the coordinate values of the corresponding center-image coordinate P_0; x_i, y_i are the coordinate values of Q'_0(x_i, y_i); dθ is the rotation step, with 0 ≤ m ≤ n and n ≥ 11. The conversion relationship A_3 between P_0 and Q'_0 is calculated such that P_0 · A_3 = Q'_0, namely:

$$\begin{bmatrix} u_{p_0} & v_{p_0} & 1 \end{bmatrix} A_3 = \begin{bmatrix} x_i & y_i & 1 \end{bmatrix}$$

where u_{p0}, v_{p0} are the coordinate values of the center-image coordinate P_0, and u^m_{p3}, v^m_{p3}, the coordinate values of P_3, supply the measured values for the error compensation.
After the robot completes rotation through the 360-degree range, the rotation center of all rotation-error-compensated robot coordinates is fitted, and the difference between the fitted rotation center coordinates and the initial position is calculated. The corrected robot coordinate positions are Q_2(x_0 + dx_0, y_0 + dy_0), Q_2(x_1 + dx_1, y_1 + dy_1), …, Q_2(x_m + dx_m, y_m + dy_m), as shown in fig. 9. The rotation center coordinate Q_2(x_c, y_c) is obtained by least-squares fitting, and the coordinate differences D_x and D_y between Q_2(x_0, y_0) and the rotation center Q_2(x_c, y_c) are calculated:

$$D_x = x_0 - x_c, \qquad D_y = y_0 - y_c$$
in one embodiment, after compensating the fitted coordinate error to the coordinate where the rough rotation center is located to obtain the position of the precise rotation center, the method further includes: calculating a difference value between the coordinate of the accurate rotation center position and the initial position robot coordinate; according to the translation position relation and the rotation center position, the plane conversion relation between the robot and the image acquisition device is obtained, and the method comprises the following steps: and obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation, the rotation center position and the difference value.
In one embodiment, an easily identifiable feature point is given in the camera field of view, for example a printed circle in the camera view plane; the current robot coordinate is Q_0(x_0, y_0), and the corresponding center-image coordinate of the circular feature point is P_0(u_0, v_0). As shown in fig. 10: (1) The rough translation position relationship between robot and camera is solved, i.e. the conversion relationship A_0 between the circular-feature-point pixels and the robot coordinates during translation. (2) The precise translation position relationship between robot and camera is solved, i.e. A_0 is applied over the whole camera view to obtain the conversion relationship A_1 between the circular-feature-point pixels and the robot coordinates during translation. (3) The rough rotation center position of the robot is solved: the robot drives the camera to rotate, the arc formed by the centers of the circular feature point is obtained, the center coordinates of the arc are fitted, the difference from the initial position is calculated, and the conversion relationship A_2 between the circular-feature-point pixel coordinates at the robot's rotation center and the robot coordinates from the translation of step (2) is found. (4) The precise rotation center position of the robot is solved: each time the robot rotates, the difference calculated from the rotation center of step (3) and the robot-coordinate conversion relationship A_2 is compensated into the robot coordinates and the robot is moved to the specified position; after the robot moves into place, the robot coordinate error between the actual and theoretical coordinates of the circular feature point is calculated and compensated into the robot coordinates; after the robot completes rotation through 360 degrees, the rotation center of all error-compensated robot coordinates is fitted, and the difference between the fitted rotation center coordinates and the initial position is calculated, so that the robot can rotate accurately about the initial position of the circular feature point. (5) The precise plane conversion relationship between robot and camera is solved: from the precise translation position relationship of step (2) and the precise rotation center position of step (4), the plane conversion relationship between robot and camera is calculated; for the center-image coordinates of the circular feature point P_1(u_0, v_0), P_1(u_1, v_1), …, P_1(u_i, v_i) with 0 ≤ i ≤ 8, the corresponding robot rotation center coordinates are Q'_1(x_0 - D_x, y_0 - D_y), Q'_1(x_1 - D_x, y_1 - D_y), …, Q'_1(x_i - D_x, y_i - D_y), written Q_3(x_0, y_0), Q_3(x_1, y_1), …, Q_3(x_i, y_i).
In one embodiment, as shown in fig. 11, assuming that the coordinates of the camera photographing position of the robot-grabbed product are known to be (Sqx, Sqy, Sqr), the coordinates of the feature point of the template product image in the camera field of view are (Mpx, Mpy, Mpr), and the coordinates of the robot-grabbed template product position are (Tqx, Tqy, Tqr), when the robot photographs at the (Vqx, Vqy, Vqr) position, the coordinates (Rqx, Rqy, Rqr) of any product-grabbed position can be calculated by recognizing the coordinates (Npx, Npy, Npr) of any product image feature point in the camera field of view.
The conversion relationship A_4 between P_1 and Q_3 is calculated under the photographing-position coordinates (Sqx, Sqy, Sqr) and (Vqx, Vqy, Vqr), such that P_1 · A_4 = Q'_3, namely:

$$\begin{bmatrix} u_i & v_i & 1 \end{bmatrix} A_4 = \begin{bmatrix} x_i & y_i & 1 \end{bmatrix}$$
where θ is Vqr-Sqr. Mqx and Mqy represent coordinate values of the robot corresponding to the template image feature points, and Nqx and Nqy represent coordinate values of the robot rotation center corresponding to the arbitrary product image feature points.
These robot coordinate values are obtained through A_4:

$$\begin{bmatrix} M_{qx} & M_{qy} & 1 \end{bmatrix} = \begin{bmatrix} M_{px} & M_{py} & 1 \end{bmatrix} A_4, \qquad \begin{bmatrix} N_{qx} & N_{qy} & 1 \end{bmatrix} = \begin{bmatrix} N_{px} & N_{py} & 1 \end{bmatrix} A_4$$
Then any product grasping position can be obtained as follows:
Figure BDA0001980135730000163
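Under the reconstruction above (the original equations are rendered as images in the source), the grab position combines the template grab pose with the robot-coordinate offset between the template and product feature points, with θ compensating the photographing-pose difference. A hedged sketch with assumed values:

```python
def grab_position(Tq, Mq, Nq, Mpr, Npr, theta):
    """Grab pose (Rqx, Rqy, Rqr) from the template grab pose Tq = (Tqx, Tqy, Tqr),
    the robot coordinates Mq/Nq corresponding to the template/product image
    feature points, the image angles Mpr/Npr, and theta = Vqr - Sqr."""
    Rqx = Tq[0] + (Nq[0] - Mq[0])
    Rqy = Tq[1] + (Nq[1] - Mq[1])
    Rqr = Tq[2] + (Npr - Mpr) + theta
    return Rqx, Rqy, Rqr

# Example with assumed values: template grabbed at (100, 50, 0).
print(grab_position((100.0, 50.0, 0.0), (10.0, 5.0), (12.5, 4.0), 0.0, 15.0, 0.0))
```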
it should be understood that although the various steps in the flowcharts of fig. 2-3, 6-7 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 2-3, 6-7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternating with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, there is provided an image capturing apparatus calibration apparatus, including: a first information obtaining module 1202, a translation relation obtaining module 1204, a second information obtaining module 1206, a rotation center obtaining module 1208, and a calibration information obtaining module 1210. The first information acquisition module is used for acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated. And the translation relation acquisition module is used for acquiring a translation position relation between the pixel coordinates of the preset feature points and the robot coordinates according to the pixel coordinates of the preset feature points and the corresponding robot coordinates. And the second information acquisition module is used for acquiring the pixel coordinates of the preset feature points when the robot rotates to different angles and the corresponding robot coordinates. And the rotation center acquisition module is used for acquiring the position of the rotation center according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates. And the calibration information acquisition module is used for acquiring the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position so as to calibrate the image acquisition device.
In one embodiment, the translation relationship obtaining module is further configured to obtain a rough translation position relationship according to pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different preset positions in the to-be-calibrated region along the X-axis direction and the Y-axis direction, respectively, where the X-axis and the Y-axis are axes in a three-dimensional space coordinate and are perpendicular to each other on the same horizontal plane; dividing the area to be calibrated into a preset number of sub-areas, and respectively extracting the pixel coordinates of the centers of the sub-areas; obtaining a corresponding rough position according to the rough translation position relation and the pixel coordinates of the centers of all the sub-areas; acquiring pixel coordinates of preset feature points when the robot reaches a rough position; and correcting the rough translation position relation according to the pixel coordinates of the preset characteristic points when the robot reaches the rough position and the coordinates of the corresponding rough position to obtain the accurate translation position relation.
In one embodiment, the translation relation obtaining module is further configured to obtain a first pixel coordinate of a preset feature point and a corresponding first robot coordinate when the robot translates from the current position to a preset first position along the X-axis direction, where the preset first position is obtained by a bisection method; acquiring a second pixel coordinate of a preset characteristic point and a corresponding second robot coordinate when the robot translates from the current position to a preset second position along the Y-axis direction, wherein the preset second position is acquired by a bisection method; and obtaining a rough translation position relation according to the first pixel coordinate of the preset feature point, the corresponding first robot coordinate, the second pixel coordinate of the preset feature point and the corresponding second robot coordinate.
In one embodiment, the rotation center obtaining module is further configured to obtain a rotation center conversion relationship and a rough rotation center, where the rotation center conversion relationship is used to represent a relationship between a pixel coordinate of a preset feature point and a robot coordinate when the robot rotates; according to the conversion relation of the rotation centers and pixel coordinates of preset feature points when the robot rotates to different angles at an initial position by a preset first step length, obtaining corresponding robot coordinates; acquiring pixel coordinates of preset feature points when the robot reaches the position of the corresponding robot coordinate; obtaining a robot coordinate error according to the pixel coordinates of the preset feature points when the robot reaches the position of the corresponding robot coordinate and the pixel coordinates of the preset feature points when the robot is at the initial position; and fitting the coordinate error in the full-angle range, and compensating the fitted coordinate error to the coordinate where the rough rotation center is located to obtain the position of the precise rotation center.
In one embodiment, the rotation center obtaining module is further configured to fit pixel coordinates to obtain a rough rotation center according to pixel coordinates of preset feature points when the robot rotates to different angles at an initial position with a preset second step length; calculating a rotation error between the coordinate of the rough rotation center and the pixel coordinate of the preset feature point at the initial position; and obtaining a rotation center conversion relation according to the rotation error, the pixel coordinate and the corresponding robot coordinate.
In one embodiment, the rotation center obtaining module is further configured to obtain a maximum forward angle at which the image acquisition device searches the preset feature point when the robot rotates around a positive direction of a Z axis in the region to be calibrated, and a maximum reverse angle at which the image acquisition device searches the preset feature point when the robot rotates around the negative direction of the Z axis, where the Z axis is an axis perpendicular to a horizontal plane in a coordinate axis of a three-dimensional space; and obtaining a second step length of the robot according to the maximum forward angle and the maximum reverse angle.
In one embodiment, the rotation center obtaining module is further configured to calculate a difference between a coordinate where the accurate rotation center is located and a coordinate of the robot at the initial position, and the calibration information obtaining module is further configured to obtain a plane conversion relationship between the robot and the image acquisition device according to the translation position relationship, the rotation center position, and the difference.
For specific limitations of the image capturing device calibration apparatus, reference may be made to the above limitations of the image capturing device calibration method, which are not described herein again. The modules in the image capturing device calibration apparatus can be realized in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 13. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements an image acquisition device calibration method. The display screen of the computer device can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 13 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the calibration method of the image capturing apparatus in any embodiment when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, performs the steps of the method for calibrating an image capturing apparatus according to any one of the embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction among them, they should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image acquisition device calibration method, wherein the image acquisition device is applied to a robot visual positioning device, and the method comprises the following steps:
acquiring pixel coordinates of preset feature points and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated to obtain a rough translation position conversion relation between the pixel coordinates of the preset feature points and the robot coordinates;
obtaining a translation position relation between the pixel coordinate of the preset feature point and the robot coordinate according to the pixel coordinate of the preset feature point and the corresponding robot coordinate through the rough translation position conversion relation;
acquiring pixel coordinates of the preset feature points when the robot rotates to different angles and corresponding robot coordinates, and finding out a rotation center conversion relation between the pixel coordinates of the preset feature points and the robot coordinates;
obtaining the position of the rotation center according to the pixel coordinates of the preset feature points at different angles and the corresponding robot coordinates through the conversion relation of the rotation center;
and obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position so as to calibrate the image acquisition device.
2. The method according to claim 1, wherein, when the robot is translated to different positions of the area to be calibrated, the pixel coordinates of the preset feature points and the corresponding robot coordinates are acquired, and a rough translation position conversion relationship between the pixel coordinates of the preset feature points and the robot coordinates is obtained; obtaining the translation position relationship between the pixel coordinates of the preset feature points and the robot coordinates, according to the pixel coordinates of the preset feature points and the corresponding robot coordinates through the rough translation position conversion relationship, comprises:
obtaining a rough translation position relation according to pixel coordinates of the preset characteristic points and corresponding robot coordinates when the robot translates to different preset positions along the X-axis direction and the Y-axis direction respectively in a region to be calibrated, wherein the X-axis and the Y-axis are mutually perpendicular axes in the same horizontal plane of the three-dimensional space coordinates;
equally dividing the area to be calibrated into a preset number of sub-areas, and respectively extracting the pixel coordinates of the centers of the sub-areas;
obtaining a corresponding rough position according to the rough translation position relation and the pixel coordinates of the centers of the sub-regions;
acquiring the pixel coordinates of the preset feature points when the robot reaches the rough position;
and correcting the rough translation position relation according to the pixel coordinates of the preset feature points and the coordinates of the corresponding rough position when the robot reaches the rough position to obtain an accurate translation position relation.
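
One plausible realization of this refinement, reusing fit_affine from the sketch after claim 1: predict a rough robot position for each sub-area center, drive there, re-measure the feature, and re-fit on the measured pairs. The callbacks move_to and read_pixel stand for robot-motion and image-measurement interfaces that the patent does not specify.

import numpy as np

def refine_translation_relation(A_rough, img_w, img_h, n, move_to, read_pixel):
    """Refine a rough pixel->robot affine over an n x n grid of sub-areas.

    A_rough     : 2x3 rough affine from fit_affine above
    move_to(xy) : hypothetical callback translating the robot to robot coords xy
    read_pixel(): hypothetical callback returning the feature's pixel coords
    """
    pixels, robots = [], []
    us = (np.arange(n) + 0.5) * img_w / n   # sub-area centers, pixel space
    vs = (np.arange(n) + 0.5) * img_h / n
    for u in us:
        for v in vs:
            xy = A_rough @ np.array([u, v, 1.0])  # predicted rough position
            move_to(xy)
            pixels.append(read_pixel())
            robots.append(xy)
    # Re-fit on the measured pairs to correct the rough relation.
    return fit_affine(np.array(pixels), np.array(robots))
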
3. The method according to claim 2, wherein obtaining the rough translation position relation according to the pixel coordinates of the preset feature point and the corresponding robot coordinates when the robot translates to the different preset positions along the X-axis direction and the Y-axis direction respectively in the area to be calibrated comprises:
acquiring a first pixel coordinate of the preset feature point and a corresponding first robot coordinate when the robot translates from a current position to a preset first position along the X-axis direction, wherein the preset first position is obtained by a bisection method;
acquiring a second pixel coordinate of the preset feature point and a corresponding second robot coordinate when the robot translates from the current position to a preset second position along the Y-axis direction, wherein the preset second position is obtained by a bisection method; and
obtaining the rough translation position relation according to the first pixel coordinate of the preset feature point, the corresponding first robot coordinate, the second pixel coordinate of the preset feature point, and the corresponding second robot coordinate.
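
A sketch of claim 3 under the assumption of locally linear imaging: one bisected move along X and one along Y give two pixel displacement vectors, and inverting the resulting Jacobian maps pixel deltas to robot deltas. move_and_check is a hypothetical callback (move by the trial step, grab an image, report whether the feature is visible, and return to the start position on failure); the patent does not define such an interface.

import numpy as np

def find_step_by_bisection(move_and_check, step, min_step=1e-3):
    """Halve a trial translation until the feature stays in the image."""
    while step > min_step:
        if move_and_check(step):
            return step
        step /= 2.0
    raise RuntimeError("feature lost even at the minimal step")

def rough_translation_relation(p0, p_x, p_y, dx, dy):
    """Rough pixel->robot linear relation from one X move and one Y move.

    p0  : feature pixel coordinate before moving
    p_x : feature pixel coordinate after translating the robot by dx along X
    p_y : feature pixel coordinate after translating the robot by dy along Y
    Returns M (2x2) with d_robot ~= M @ d_pixel.
    """
    p0, p_x, p_y = (np.asarray(p, dtype=float) for p in (p0, p_x, p_y))
    # Columns of J: pixel displacement per unit of robot motion on each axis.
    J = np.column_stack([(p_x - p0) / dx, (p_y - p0) / dy])
    return np.linalg.inv(J)
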
4. The method according to claim 1, wherein obtaining the rotation center position according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates comprises:
acquiring the rotation center conversion relation and a rough rotation center, wherein the rotation center conversion relation represents the relation between the pixel coordinates of the preset feature point and the robot coordinates when the robot rotates;
obtaining corresponding robot coordinates according to the rotation center conversion relation and the pixel coordinates of the preset feature point when the robot rotates to different angles at an initial position by a preset first step length;
acquiring the pixel coordinates of the preset feature point when the robot reaches the position of each corresponding robot coordinate;
obtaining robot coordinate errors according to the pixel coordinates of the preset feature point when the robot reaches the position of each corresponding robot coordinate and the pixel coordinates of the preset feature point at the initial position; and
fitting the coordinate errors over the full angle range, and compensating the fitted coordinate error onto the coordinates of the rough rotation center to obtain a precise rotation center position.
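
The claim does not name the fitting model used over the full angle range. One plausible choice, sketched below under that assumption, is a first-order Fourier fit (offset plus sine and cosine terms) whose constant term is the systematic offset compensated onto the rough center.

import numpy as np

def refine_rotation_center(c_rough, angles, errors):
    """Compensate a rough rotation center with errors fitted over angle.

    c_rough : (2,) coordinates of the rough rotation center
    angles  : (N,) rotation angles in radians, spanning the full range
    errors  : (N, 2) robot-coordinate error measured at each angle
    Fits e(t) = a + b*cos(t) + c*sin(t) per coordinate by least squares
    and adds the constant term a onto the rough center.
    """
    angles = np.asarray(angles, dtype=float)
    errors = np.asarray(errors, dtype=float)
    B = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
    coef, *_ = np.linalg.lstsq(B, errors, rcond=None)  # coef is (3, 2)
    return np.asarray(c_rough, dtype=float) + coef[0]
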
5. The method of claim 4, wherein, after compensating the fitted coordinate error onto the coordinates of the rough rotation center to obtain the precise rotation center position, the method further comprises:
calculating a difference value between the coordinates of the precise rotation center position and the robot coordinates at the initial position;
and wherein obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position comprises:
obtaining the plane conversion relation between the robot and the image acquisition device according to the translation position relation, the rotation center position, and the difference value.
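
The claims do not spell out how the claim-5 difference enters the plane conversion relation. One standard use, sketched here purely as an assumption, is to locate the rotation center at any flange position via the fixed offset and then compute the rigid-body compensation c + R(f0 - c) for a commanded rotation.

import numpy as np

def rotation_center_offset(c_precise, p0_robot):
    """Claim-5 difference: precise rotation center minus the initial
    robot coordinate, i.e. the center's fixed offset from the flange."""
    return np.asarray(c_precise, dtype=float) - np.asarray(p0_robot, dtype=float)

def compensate_rotation(p_flange, offset, f0, f_target, angle):
    """Translation so the feature lands on f_target after rotating by
    `angle` (radians) about the center, all in robot coordinates.

    p_flange : current flange position; the center is assumed to sit at
               p_flange + offset
    f0       : feature position (robot coords) before the rotation
    Rotating about a center c maps a rigid point f0 to c + R(f0 - c);
    the residual to the target is the translation needed.
    """
    c = np.asarray(p_flange, dtype=float) + offset
    cth, sth = np.cos(angle), np.sin(angle)
    R = np.array([[cth, -sth], [sth, cth]])
    return np.asarray(f_target, dtype=float) - (c + R @ (np.asarray(f0, dtype=float) - c))
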
6. The method of claim 4, wherein acquiring the rotation center conversion relation and the rough rotation center comprises:
fitting the pixel coordinates of the preset feature point, acquired when the robot rotates to different angles at the initial position by a preset second step length, to obtain the rough rotation center;
calculating a rotation error between the coordinates of the rough rotation center and the pixel coordinates of the preset feature point at the initial position; and
obtaining the rotation center conversion relation according to the rotation error, the pixel coordinates, and the corresponding robot coordinates.
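
Fitting the rotating feature's pixel positions to a circle is the natural reading of this step; a minimal algebraic (Kasa) fit, again a sketch rather than the patent's disclosed method:

import numpy as np

def fit_rotation_center(pts):
    """Algebraic (Kasa) circle fit to the feature positions traced while
    the robot rotates by the preset second step length.

    pts : (N, 2) pixel coordinates of the preset feature point at the
    different angles. Returns the fitted circle center (the rough
    rotation center in pixel coordinates).
    """
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Fit x^2 + y^2 + D*x + E*y + F = 0 by linear least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([-D / 2.0, -E / 2.0])
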
7. The method of claim 6, wherein, before fitting the pixel coordinates of the preset feature point acquired when the robot rotates to different angles at the initial position by the preset second step length to obtain the rough rotation center, the method further comprises:
acquiring a maximum forward angle at which the image acquisition device can still find the preset feature point when the robot rotates forward around a Z-axis in the area to be calibrated, and acquiring a maximum reverse angle at which the image acquisition device can still find the preset feature point when the robot rotates in reverse around the Z-axis, wherein the Z-axis is the axis perpendicular to the horizontal plane among the three-dimensional space coordinate axes; and
obtaining the second step length of the robot according to the maximum forward angle and the maximum reverse angle.
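
The claim leaves the derivation of the second step length open. One simple possibility, assumed here, is to spread a fixed number of poses evenly over the reachable angle range; the count of 12 below is purely illustrative.

def second_step_length(max_fwd_deg, max_rev_deg, n_poses=12):
    """Spread n_poses rotations evenly over the reachable range
    [-max_rev_deg, +max_fwd_deg] to obtain a step length in degrees."""
    return (max_fwd_deg + max_rev_deg) / n_poses
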
8. An image acquisition device calibration device, wherein the image acquisition device is applied to a robot visual positioning device, the calibration device comprising:
a first information acquisition module, configured to acquire pixel coordinates of a preset feature point and corresponding robot coordinates when the robot translates to different positions of an area to be calibrated, to obtain a rough translation position conversion relation between the pixel coordinates of the preset feature point and the robot coordinates;
a translation relation acquisition module, configured to obtain, through the rough translation position conversion relation, a translation position relation between the pixel coordinates of the preset feature point and the robot coordinates according to the pixel coordinates of the preset feature point and the corresponding robot coordinates;
a second information acquisition module, configured to acquire pixel coordinates of the preset feature point and corresponding robot coordinates when the robot rotates to different angles, and to determine a rotation center conversion relation between the pixel coordinates of the preset feature point and the robot coordinates;
a rotation center acquisition module, configured to obtain a rotation center position, through the rotation center conversion relation, according to the pixel coordinates of the preset feature point at the different angles and the corresponding robot coordinates; and
a calibration information acquisition module, configured to obtain a plane conversion relation between the robot and the image acquisition device according to the translation position relation and the rotation center position, so as to calibrate the image acquisition device.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN201910146310.7A 2019-02-27 2019-02-27 Image acquisition device calibration method and device, computer equipment and storage medium Active CN109829953B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910146310.7A CN109829953B (en) 2019-02-27 2019-02-27 Image acquisition device calibration method and device, computer equipment and storage medium
PCT/CN2020/072491 WO2020173240A1 (en) 2019-02-27 2020-01-16 Image acquisition apparatus calibration method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910146310.7A CN109829953B (en) 2019-02-27 2019-02-27 Image acquisition device calibration method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109829953A CN109829953A (en) 2019-05-31
CN109829953B true CN109829953B (en) 2021-09-03

Family

ID=66864630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910146310.7A Active CN109829953B (en) 2019-02-27 2019-02-27 Image acquisition device calibration method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109829953B (en)
WO (1) WO2020173240A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829953B (en) * 2019-02-27 2021-09-03 广东拓斯达科技股份有限公司 Image acquisition device calibration method and device, computer equipment and storage medium
CN110116411B (en) * 2019-06-06 2020-10-30 浙江汉振智能技术有限公司 Robot 3D vision hand-eye calibration method based on spherical target
CN113795358B (en) * 2019-06-17 2024-07-09 西门子(中国)有限公司 Coordinate system calibration method, device and computer readable medium
CN110465946B (en) * 2019-08-19 2021-04-30 珞石(北京)科技有限公司 Method for calibrating relation between pixel coordinate and robot coordinate
CN113021328A (en) * 2019-12-09 2021-06-25 广东博智林机器人有限公司 Hand-eye calibration method, device, equipment and medium
CN111369625B (en) * 2020-03-02 2021-04-13 广东利元亨智能装备股份有限公司 Positioning method, positioning device and storage medium
CN111524073A (en) * 2020-04-14 2020-08-11 云南电网有限责任公司信息中心 Image geometric transformation method, device, computer equipment and medium
CN111627071B (en) * 2020-04-30 2023-10-17 如你所视(北京)科技有限公司 Method, device and storage medium for measuring motor rotation precision
CN111912337B (en) * 2020-07-24 2021-11-09 上海擎朗智能科技有限公司 Method, device, equipment and medium for determining robot posture information
CN112058679A (en) * 2020-08-11 2020-12-11 武汉万邦德新科技有限公司 Soft agricultural product robot grabbing and sorting method and device based on impedance control
CN112348895B (en) * 2020-11-18 2024-08-23 深圳创维-Rgb电子有限公司 Control method, control equipment and medium for bonding liquid crystal panel
CN114683267B (en) * 2020-12-31 2023-09-19 北京小米移动软件有限公司 Calibration method, calibration device, electronic equipment and storage medium
CN112819884B (en) * 2021-01-08 2024-07-12 苏州华兴源创科技股份有限公司 Coordinate correction method and device, electronic equipment and computer readable medium
CN113510697B (en) * 2021-04-23 2023-02-14 知守科技(杭州)有限公司 Manipulator positioning method, device, system, electronic device and storage medium
CN114505858B (en) * 2022-02-17 2023-08-18 北京极智嘉科技股份有限公司 Cantilever shaft butt joint control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104354167A (en) * 2014-08-29 2015-02-18 广东正业科技股份有限公司 Robot hand-eye calibration method and device
WO2018046617A1 (en) * 2016-09-07 2018-03-15 Starship Technologies Oü Method and system for calibrating multiple cameras
CN107808401A (en) * 2017-10-30 2018-03-16 大族激光科技产业集团股份有限公司 The hand and eye calibrating method of the one camera of mechanical arm tail end

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3511551B2 (en) * 1996-06-27 2004-03-29 株式会社リコー Robot arm state detection method and detection system
CN106934813A (en) * 2015-12-31 2017-07-07 沈阳高精数控智能技术股份有限公司 A kind of industrial robot workpiece grabbing implementation method of view-based access control model positioning
CN109483531B (en) * 2018-10-26 2021-08-03 江苏大学 Machine vision system and method for picking and placing FPC board by manipulator at fixed point
CN109366472B (en) * 2018-12-04 2020-11-27 广东拓斯达科技股份有限公司 Method and device for placing articles by robot, computer equipment and storage medium
CN109829953B (en) * 2019-02-27 2021-09-03 广东拓斯达科技股份有限公司 Image acquisition device calibration method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN109829953A (en) 2019-05-31
WO2020173240A1 (en) 2020-09-03

Similar Documents

Publication Publication Date Title
CN109829953B (en) Image acquisition device calibration method and device, computer equipment and storage medium
CN112964196B (en) Three-dimensional scanning method, system, electronic device and computer equipment
CN109129445B (en) Hand-eye calibration method, calibration plate, device, equipment and storage medium for mechanical arm
CN110517320B (en) High-speed high-precision automatic alignment method and device based on UVW system and computer equipment
JP4191080B2 (en) Measuring device
CN110640747B (en) Hand-eye calibration method and system for robot, electronic equipment and storage medium
CN113442169B (en) Method and device for calibrating hands and eyes of robot, computer equipment and readable storage medium
CN111438688B (en) Robot correction method, robot correction device, computer equipment and storage medium
CN110464463B (en) Surgical instrument tip calibration method and device, computer equipment and storage medium
CN109407613B (en) Adjusting method and device of three-dimensional scanning turntable, computer equipment and storage medium
CN109366472B (en) Method and device for placing articles by robot, computer equipment and storage medium
CN109531604B (en) Robot control device for performing calibration, measurement system, and calibration method
CN114663500A (en) Vision calibration method, computer device and storage medium
CN108627178B (en) Robot eye calibration method and system
JP6410411B2 (en) Pattern matching apparatus and pattern matching method
CN114193444A (en) Robot hand-eye calibration method, system, equipment and medium
CN113910756A (en) Compensation control method, device, equipment and medium based on screen printing alignment
CN116019562A (en) Robot control system and method
CN114677429B (en) Positioning method and device of manipulator, computer equipment and storage medium
CN112847350B (en) Hand-eye calibration method, system, computer equipment and storage medium
CN111383283A (en) Calibration method and system for tool coordinate system of robot
CN116175569A (en) Method for determining relation model of hand-eye matrix, hand-eye calibration method and equipment
CN114310873A (en) Pose conversion model generation method, control method, system, device and medium
CN113781301A (en) 3D visual point cloud data splicing method, system and storage medium
CN113066136B (en) Automatic calibration method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant