CN115713563A - Camera calibration method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN115713563A (application number CN202211456438.1A)
- Authority
- CN
- China
- Prior art keywords
- camera
- end effector
- calibration
- coordinates
- auxiliary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The embodiment of the application provides a camera calibration method and device, an electronic device and a storage medium, relating to the technical field of computer images. The method comprises the following steps: constructing a translation calibration matrix; controlling the end effector to rotate to a first auxiliary position at which the calibration object lies outside the camera field of view; controlling the end effector to translate to a second auxiliary position at which the calibration object lies within the camera field of view, and controlling the camera to take a picture when the end effector reaches the second auxiliary position, so as to obtain an auxiliary image; determining virtual coordinates of the calibration object for the first auxiliary position by using the pixel coordinates of the calibration object in the auxiliary image, the physical coordinates of the first and second auxiliary positions, and the translation calibration matrix; fitting the plurality of virtual coordinates and the physical coordinates of the plurality of first auxiliary positions to obtain a mechanism rotation center; and determining a calibration result by using the mechanism rotation center and the translation calibration matrix. By applying the scheme provided by the embodiment of the application, camera calibration accuracy can be improved in a small-field-of-view environment.
Description
Technical Field
The present application relates to the field of computer image technologies, and in particular, to a method and an apparatus for calibrating a camera, an electronic device, and a storage medium.
Background
To meet production requirements, enterprises often complete production tasks by combining a camera with a motion mechanism. In such a combination, camera calibration technology is generally applied as a preprocessing step: the calibration result obtained during camera calibration can then be used, during production, to position and guide the motion mechanism from the camera's images.
In the related art, during camera calibration the end effector of the motion mechanism must perform rotational movements in addition to multiple specified parallel movements, so that the calibration object makes both parallel and rotational movements within the camera field of view. Based on the pixel coordinates of the calibration object obtained during the rotational movements and the physical coordinates of the end effector in the physical coordinate system, a mechanism rotation center is obtained by fitting, and the calibration result is determined using that rotation center.
However, when the ratio of the calibration object's range of motion within the camera field of view to the range of motion, within that field of view, of the product to be controlled in actual production is small, the camera calibration process can be considered to take place in a small-field-of-view environment; for example, this is the case when the calibration object's range of motion in the camera field of view is only one fifth of that of the product to be controlled. Because the calibration object's range of motion in the camera field of view is small, the object can rotate only through a small angle there. Consequently, when the mechanism rotation center is fitted, the fit may be sensitive to very small errors in the physical or pixel coordinates, making the fitted rotation center unstable and the resulting calibration result inaccurate; that is, the camera calibration accuracy is low.
Therefore, how to improve camera calibration accuracy in a small-field-of-view environment is a problem in urgent need of a solution.
Disclosure of Invention
An embodiment of the present application aims to provide a method and an apparatus for calibrating a camera, an electronic device, and a storage medium, so as to improve the precision of camera calibration in a small field of view environment. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for calibrating a camera, which is applied to a control device, and the method includes:
constructing a translation calibration matrix corresponding to the camera by performing multiple specified parallel movements of the end effector of the motion mechanism; wherein the multiple specified parallel movements cause a calibration object to make multiple parallel movements within the field of view of the camera;
controlling the end effector to make a plurality of rotational movements such that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera;
when the end effector reaches a first auxiliary position each time, controlling the end effector to perform parallel movement so as to move the end effector to a second auxiliary position, and controlling the camera to take a picture when the end effector moves to each second auxiliary position, so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
determining a plurality of virtual coordinates using the pixel coordinates for the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
fitting processing about a rotation center is carried out on the plurality of virtual coordinates and physical coordinates of the end effector at each first auxiliary position, and a mechanism rotation center is obtained;
and determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
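The steps of the first aspect can be illustrated with a small synthetic simulation. Everything below is an illustrative sketch, not the patented implementation: it assumes a stationary camera whose pixel-to-physical map is a pure isotropic scale (M = SCALE·I), a calibration object rigidly attached to the end effector, noise-free measurements, made-up geometry, and a plain circumcenter of three virtual coordinates as the fitting step.

```python
import math

# Illustrative sketch only. Assumptions: stationary camera, pixel->physical
# map M = SCALE * I, object rigidly attached to the end effector, no noise.

SCALE = 0.01                       # mm per pixel (assumed)
O_PHYS = (50.0, 40.0)              # mechanism rotation center, physical frame
V_EFF = (10.0, 0.0)                # end-effector offset from the rotation center
V_OBJ = (30.0, 5.0)                # calibration-object offset from the rotation center

def rot(v, theta):                 # rotate a 2D vector by theta
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def circumcenter(a, b, c):         # center of the circle through three points
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    sa, sb, sc = (p[0] ** 2 + p[1] ** 2 for p in (a, b, c))
    ux = (sa * (b[1] - c[1]) + sb * (c[1] - a[1]) + sc * (a[1] - b[1])) / d
    uy = (sa * (c[0] - b[0]) + sb * (a[0] - c[0]) + sc * (b[0] - a[0])) / d
    return (ux, uy)

virtual = []
for theta in (0.0, 0.4, 0.8):      # three rotational movements
    # first auxiliary position: effector rotated, object outside the FOV
    w1 = tuple(o + r for o, r in zip(O_PHYS, rot(V_EFF, theta)))
    obj = tuple(o + r for o, r in zip(O_PHYS, rot(V_OBJ, theta)))
    # parallel movement to the second auxiliary position brings the object
    # into the FOV, where the camera photographs it
    t = (-20.0, -3.0)
    w2 = (w1[0] + t[0], w1[1] + t[1])
    p = ((obj[0] + t[0]) / SCALE, (obj[1] + t[1]) / SCALE)  # pixel coordinates
    # virtual coordinate from the translation equation: P' = P + M^-1 (W1 - W2)
    virtual.append((p[0] + (w1[0] - w2[0]) / SCALE,
                    p[1] + (w1[1] - w2[1]) / SCALE))

pixel_center = circumcenter(*virtual)   # fitted pixel rotation center
# in this synthetic setup the true pixel rotation center is O_PHYS / SCALE
```

With the assumed geometry, the fitted pixel rotation center recovers O_PHYS / SCALE = (5000, 4000) exactly, because the virtual coordinates lie on a circle even though the corresponding object positions are outside the camera field of view.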
Optionally, the determining a plurality of virtual coordinates by using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first auxiliary position and the second auxiliary position, and the translation calibration matrix, includes:
determining a plurality of virtual coordinates using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each first auxiliary position and each second auxiliary position, and an intermediate matrix; when the camera is stationary, the intermediate matrix is the translation calibration matrix; when the camera moves, the intermediate matrix is a generalized calibration matrix obtained by performing correction processing on the translation calibration matrix, where the correction processing converts the translation calibration matrix into a generalized calibration matrix representing the transformation relation between the rotated camera coordinate system and the physical coordinate system, the rotated camera coordinate system being the camera coordinate system formed after the camera rotates and moves along with the motion mechanism.
Optionally, the mechanism rotation center includes: a pixel rotation center in a camera coordinate system and a physical rotation center in a physical coordinate system;
the determining the calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix includes:
performing normalization processing on the translation calibration matrix by using the pixel rotation center, to obtain the calibration result corresponding to the camera; wherein the normalization processing converts the translation calibration matrix into the matrix obtained when the origin of the physical coordinate system corresponding to the motion mechanism is translated to the physical rotation center.
Optionally, the determining a plurality of virtual coordinates by using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first auxiliary position and the second auxiliary position, and the intermediate matrix, includes:
determining a plurality of virtual coordinates using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix according to a translation transformation equation;
the translation transformation equation is obtained based on a plane Euclidean transformation principle and is used for representing a transformation relation among pixel coordinates of the calibration object in the auxiliary image, physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, the intermediate matrix and the virtual coordinates.
Optionally, the translation transformation equation in the case of a stationary camera is: M*P′ - M*P = W1 - W2; the translation transformation equation in the case of camera motion is: Mt*P′ - Mt*P = W2 - W1;
wherein M is the translation calibration matrix serving as the intermediate matrix, P is the pixel coordinates of the calibration object in the auxiliary image, P′ is the virtual coordinates, W1 is the physical coordinates of the end effector at the first auxiliary position, W2 is the physical coordinates of the end effector at the second auxiliary position, and Mt is the generalized calibration matrix serving as the intermediate matrix.
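Read numerically, the two translation transformation equations give the virtual coordinates directly. A minimal sketch follows, treating M and Mt as invertible 2×2 matrices, which is an assumption (the patent may use homogeneous matrices), and using made-up values:

```python
def solve2x2(m, rhs):
    """Solve m @ x = rhs for a 2x2 matrix m given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d * rhs[0] - b * rhs[1]) / det,
            (a * rhs[1] - c * rhs[0]) / det)

P = (640.0, 480.0)                  # pixel coords of the calibration object
W1, W2 = (10.0, 5.0), (12.0, 4.0)   # effector physical coords in mm (assumed)

# Stationary camera:  M*P' - M*P = W1 - W2  =>  P' = P + M^-1 (W1 - W2)
M = ((0.02, 0.0), (0.0, 0.02))      # assumed translation calibration matrix
dp = solve2x2(M, (W1[0] - W2[0], W1[1] - W2[1]))
P_virtual = (P[0] + dp[0], P[1] + dp[1])           # ~ (540.0, 530.0)

# Camera in motion:  Mt*P' - Mt*P = W2 - W1  =>  P' = P + Mt^-1 (W2 - W1)
Mt = ((0.02, 0.0), (0.0, 0.02))     # assumed generalized calibration matrix
dpt = solve2x2(Mt, (W2[0] - W1[0], W2[1] - W1[1]))
P_virtual_moving = (P[0] + dpt[0], P[1] + dpt[1])  # ~ (740.0, 430.0)
```

Note that the two cases differ only in the sign of the physical displacement and in which matrix serves as the intermediate matrix.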
Optionally, the method for performing correction processing on the translation calibration matrix includes:
correcting the translation calibration matrix by using a rotation angle of the end effector for rotating and moving and a projective transformation equation to obtain a generalized calibration matrix; the projective transformation equation is an equation obtained based on a projective transformation principle and is used for representing a transformation relation among a rotation angle of the end effector for rotating and moving, the translation calibration matrix and the generalized calibration matrix.
Optionally, the projective transformation equation relates the following quantities: θ, the rotation angle of the end effector's rotational movement; Tx and Ty, the components of the end effector's parallel displacement from the first auxiliary position to the second auxiliary position; Mt, the generalized calibration matrix serving as the intermediate matrix; and M, the translation calibration matrix.
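The projective transformation equation itself appears only as an image in the original publication, so the sketch below is merely a guess at its spirit and should not be taken as the patent's formula: for an eye-in-hand camera, one natural correction rotates the linear part of the translation calibration matrix by the mechanism's rotation angle, Mt = R(θ)·M. The form, the 2×2 representation, and the values are all assumptions.

```python
import math

# Assumed form only: the patent's actual projective transformation equation
# is not shown here. This sketch rotates the linear part of the translation
# calibration matrix by the rotation angle theta: Mt = R(theta) @ M.

def rot2(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul2(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

M = ((0.02, 0.0), (0.0, 0.02))      # assumed translation calibration matrix
Mt = matmul2(rot2(math.pi / 2), M)  # generalized matrix for a 90-degree rotation
```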
Optionally, the normalizing, performed by using the pixel rotation center, the translation calibration matrix includes:
characterizing the translation component in the translation calibration matrix by the pixel rotation center and the rotation angle of the end effector's rotational movement, using the equation M × C = 0, where M is the translation calibration matrix and C is the pixel rotation center.
Optionally, the constructing a translation calibration matrix corresponding to the camera based on a manner of performing multiple specified parallel movements on the end effector of the motion mechanism includes:
controlling an end effector of the motion mechanism to perform a plurality of specified parallel movements so that the calibration object performs a plurality of parallel movements within the field of view of the camera;
controlling the camera to take pictures when the end effector moves to each target position to obtain a target image containing the calibration object;
a translation calibration matrix is generated based on the pixel coordinates of the calibration object in each target image and the physical coordinates of the end effector at each target position in the physical coordinate system.
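A minimal numerical sketch of this generation step, under the simplifying assumption (not stated in the patent) that the map is purely linear, M @ (pixel displacement) = (physical displacement), so that two independent displacement pairs taken between three target positions determine a 2×2 M; real implementations typically fit a homogeneous or affine model by least squares over more positions.

```python
# Sketch under an assumed purely linear model: M @ dp_i = dw_i for pixel
# displacements dp_i and physical displacements dw_i between three target
# positions. All numbers are made up.

def matmul2(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def translation_matrix(dp1, dw1, dp2, dw2):
    """Solve M @ dp1 = dw1 and M @ dp2 = dw2 for the 2x2 matrix M."""
    det = dp1[0] * dp2[1] - dp2[0] * dp1[1]        # dp1, dp2 must be independent
    p_inv = ((dp2[1] / det, -dp2[0] / det),
             (-dp1[1] / det, dp1[0] / det))        # inverse of [dp1 dp2] columns
    w_cols = ((dw1[0], dw2[0]), (dw1[1], dw2[1]))  # [dw1 dw2] as columns
    return matmul2(w_cols, p_inv)

# pixel / physical displacements between consecutive target positions:
dp1, dw1 = (100.0, 10.0), (1.0, 0.1)     # consistent with a 0.01 mm/px scale
dp2, dw2 = (-5.0, 200.0), (-0.05, 2.0)
M = translation_matrix(dp1, dw1, dp2, dw2)   # ~ ((0.01, 0.0), (0.0, 0.01))
```

Because both displacement pairs were generated with the same 0.01 mm/px scale, the recovered M is that scale times the identity.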
In a second aspect, an embodiment of the present application provides a camera calibration apparatus, where the apparatus includes:
the construction module is used for constructing a translation calibration matrix corresponding to the camera based on a mode of carrying out multiple specified parallel movements on the end effector of the motion mechanism; wherein the plurality of specified parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera;
a first control module for controlling the end effector to perform a plurality of rotational movements such that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera;
the second control module is used for controlling the end effector to move in parallel when the end effector reaches a first auxiliary position each time so as to move the end effector to a second auxiliary position, and controlling the camera to take a picture when the end effector moves to each second auxiliary position, so that an auxiliary image containing the calibration object is obtained; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
a first determining module for determining a plurality of virtual coordinates using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
the fitting module is used for performing fitting processing on the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position about a rotation center to obtain a mechanism rotation center;
and the second determination module is used for determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
An embodiment of the present application further provides an electronic device, including:
a memory for storing a computer program;
and a processor, configured to implement the above camera calibration method when executing the program stored in the memory.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method for calibrating a camera is implemented.
The embodiment of the application has the following beneficial effects:
according to the camera calibration method provided by the embodiment of the application, a translation calibration matrix corresponding to a camera can be constructed based on a mode of carrying out multiple specified parallel movements on an end effector of a motion mechanism; the multiple specified parallel movements can enable the calibration object to perform multiple parallel movements in the camera field of view; then controlling the end effector to perform a plurality of times of rotary movement so that the end effector reaches a plurality of first auxiliary positions which enable the calibration object to be positioned outside the visual field of the camera; when the first auxiliary position is reached each time, the end effector is controlled to move in parallel so as to move the end effector to a second auxiliary position where the calibration object is located in the visual field of the camera, and the camera is controlled to take a picture when the end effector moves to each second auxiliary position, so that an auxiliary image containing the calibration object is obtained; then, a plurality of virtual coordinates can be determined by utilizing the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each first auxiliary position and each second auxiliary position, and the constructed translation calibration matrix; the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position; then, fitting processing about a rotation center can be carried out on the plurality of virtual coordinates and the physical coordinates of the plurality of first auxiliary positions to obtain a mechanism rotation center; and determining a calibration result corresponding to the camera by using the obtained mechanism rotation center and the constructed translation calibration matrix.
Based on the above scheme, after the rotational movements the end effector reaches a plurality of first auxiliary positions at which the calibration object lies outside the camera field of view, and for each first auxiliary position the pixel coordinates of the calibration object in the camera coordinate system, i.e., the virtual coordinates, are determined. Fitting with respect to the rotation center is then performed on these virtual coordinates together with the physical coordinates of the end effector at each first auxiliary position, yielding the mechanism rotation center. In this way, the pixel coordinates used in the fit may include pixel coordinates outside the camera field of view, and the physical coordinates may include those of the first auxiliary positions, which is equivalent to enlarging the camera field of view. Fitting over these pixel and physical coordinates therefore reduces the sensitivity to errors in either, improves the stability of the obtained mechanism rotation center, reduces the error of the calibration result, and thus improves camera calibration accuracy.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other embodiments from these drawings.
FIG. 1 is a schematic diagram illustrating the principle of error amplification of a rotation center of a fitting mechanism in a conventional calibration method under the condition of disturbance in a small field of view environment;
fig. 2 is a schematic structural diagram of a calibration control system in a stationary condition of a camera according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a camera calibration method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a camera calibration method according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a method for calibrating a camera according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a method for calibrating a camera according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a camera calibration apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the description herein are intended to be within the scope of the present disclosure.
For convenience of understanding, terms used in the technical fields related to the embodiments of the present application will be described first.
Euclidean transformation: a compound transformation of translation and rotation under which an object's shape and size remain unchanged; an object whose shape and size remain unchanged under such a compound transformation is called a rigid body.
Similarity transformation: a compound transformation of a Euclidean transformation and a uniform scaling, under which an object's shape remains unchanged but its size changes.
Affine transformation: a compound transformation of a non-singular linear transformation and a translation; the object's shape is not preserved, but parallel lines remain parallel.
Projective transformation: a projection between any two planes, which preserves only the cross-ratio of collinear points; a projective transformation between two planes is also called a homography (homographic transformation).
Homogeneous linear transformation: a linear transformation between homogeneous vectors; two homogeneous vectors that differ only by a non-zero global scale factor are equivalent.
Calibration: establishing a geometric model of camera imaging to determine the correlation between the geometric position of a point on the surface of a spatial object and its corresponding point in the image; the parameters of this geometric model are the camera parameters. In most cases these parameters must be obtained through experiment and computation, and the process of solving for them is called camera calibration. The aim of camera calibration here is to solve for a projective transformation matrix characterizing the transformation relation between the camera coordinate system and the physical coordinate system of the motion mechanism.
Calibration matrix: the result of the calibration is the calibration matrix.
Calibration object: an intermediary used to perform the calibration, e.g., a calibration plate, a workpiece, etc.
Field Of View (FOV), also called the angle of view: the actual range of the scene that can be accommodated within the field seen by the camera.
Disturbance: in the calibration process, the deviations introduced by systematic errors or noise in the positioning accuracy of the motion mechanism, the imaging quality of the images, and the accuracy of feature-point extraction; these are called disturbance deviations.
eye-in-hand: the eye is on the hand, meaning the camera is mounted on the motion mechanism and moves with it; eye-in-hand is therefore also referred to as the camera-in-motion case.
eye-to-hand: the eye is away from the hand, meaning the camera is mounted off the motion mechanism and remains stationary; eye-to-hand is therefore also referred to as the camera-stationary case.
Rotation center: the center of the rotation axis when the motion mechanism rotates.
For a better understanding of the embodiments of the present application, the prior art will be described below.
In the related art, during movement of the motion mechanism, the calibration object performs multiple parallel and rotational movements within the camera field of view while the camera acquires multiple images containing the calibration object, yielding multiple matched physical point sets and image point sets; a physical point set contains physical coordinates, in the physical coordinate system, of the end effector belonging to the motion mechanism, and an image point set contains pixel coordinates of the calibration object in the acquired images. Based on these sets of physical and pixel coordinates, a correspondence between the camera coordinate system and the physical coordinate system that minimizes the reprojection error is found by a nonlinear optimization method, giving an initial homography matrix. The physical and pixel coordinates are then fitted to obtain the mechanism rotation center, and the initial homography matrix is normalized based on this rotation center to obtain the calibration matrix, i.e., the calibration result.
However, in a small-field-of-view environment the calibration object's range of motion within the camera field of view is small, so it can rotate only through a small angle there. When the mechanism rotation center is fitted, the fit can therefore be sensitive to very small errors in the physical or pixel coordinates, making the fitted rotation center unstable and the resulting calibration result inaccurate; that is, the camera calibration accuracy is low.
By way of example, fig. 1 illustrates the error amplification of fitting the mechanism rotation center in a conventional calibration method under disturbance in a small-field-of-view environment. As shown in fig. 1, A = (1420.09, 988.88), B = (1115.86, 973.64), C = (803.56, 985.45), D = (1420.29, 988.67), E = (1115.66, 973.14), F = (803.76, 985.35); circle c: (x − 1092.35)² + (y − 4487.12)² = 12345126.89; circle d: (x − 1093.64)² + (y − 4400.52)² = 11747436.54. A, B and C are the pixel coordinates of three sampled calibration-object positions, and circle c is obtained by fitting A, B and C. A disturbance error is then applied artificially: 0.2 pixels of noise is added to the pixel coordinates of A, B and C, yielding the perturbed coordinates D, E and F, and circle d is obtained by fitting D, E and F. Because A, B and C differ only slightly from D, E and F, the two point sets essentially coincide in the figure, with A, B and C hidden behind D, E and F. As the equations of circles c and d show, when the fitted points' pixel coordinates change by 0.2 pixels due to noise interference, the fitted center X coordinates differ by 1.29 pixels and the center Y coordinates by 86.6 pixels, an error amplification of about 86.6 ÷ 0.2 = 433 times.
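The amplification in the fig. 1 example can be reproduced directly by fitting the circle through each triple of points; only the point coordinates come from the example, the circumcenter routine itself is illustrative:

```python
# Reproducing the error amplification of fig. 1: fit the circle through the
# sampled points A, B, C and through the 0.2-pixel-perturbed points D, E, F,
# then compare the fitted centers.

def circumcenter(a, b, c):
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    sa, sb, sc = (p[0] ** 2 + p[1] ** 2 for p in (a, b, c))
    ux = (sa * (b[1] - c[1]) + sb * (c[1] - a[1]) + sc * (a[1] - b[1])) / d
    uy = (sa * (c[0] - b[0]) + sb * (a[0] - c[0]) + sc * (b[0] - a[0])) / d
    return ux, uy

A, B, C = (1420.09, 988.88), (1115.86, 973.64), (803.56, 985.45)
D, E, F = (1420.29, 988.67), (1115.66, 973.14), (803.76, 985.35)

cx1, cy1 = circumcenter(A, B, C)       # ~ (1092.35, 4487.12), circle c
cx2, cy2 = circumcenter(D, E, F)       # ~ (1093.64, 4400.52), circle d
amplification = abs(cy1 - cy2) / 0.2   # ~ 433 for the 0.2-pixel perturbation
```

Because the three sampled points span only a short, nearly flat arc, a sub-pixel perturbation shifts the fitted center by tens of pixels, which is exactly the instability the claimed method is designed to avoid.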
Therefore, how to improve the camera calibration precision in a small view field environment is a problem to be solved urgently.
In order to improve the camera calibration precision in a small view field environment, the embodiment of the application provides a camera calibration method, a camera calibration device, an electronic device and a storage medium.
First, a method for calibrating a camera provided in an embodiment of the present application is described below.
The camera calibration method provided by the embodiments of the present application can be applied to a control device. In a specific application, the control device, the camera and the motion mechanism may form a calibration control system. The camera can photograph the calibration object and send an image containing the calibration object to the control device; the motion mechanism can perform parallel movement and rotational movement so that the end effector belonging to the motion mechanism reaches a specified position; and the control device can control the motion of the motion mechanism, control the camera to take pictures, receive the images sent by the camera, and obtain the position of the end effector, so that the calibration result is determined based on the obtained content.
For a better understanding of the camera calibration system, an exemplary description is given in conjunction with fig. 2. For the case where the camera is stationary, as shown in fig. 2, the camera 1 is fixed at a position outside the motion mechanism; the suction nozzle 4 of the motion mechanism can be connected with the calibration object 2, that is, the suction nozzle sucks the calibration object 2, and the suction nozzle 4 is connected with the flange 3. In a specific application, the physical coordinates of the flange 3 are generally used as the physical coordinates of the end effector in the default physical coordinate system of the motion mechanism; alternatively, the motion mechanism may set a tool center point (TCP) coordinate system and take the physical coordinates of the suction nozzle 4 as the physical coordinates of the end effector. In both cases, the physical coordinates of the end effector can be obtained. Specifically, the camera 1 may be configured to photograph the calibration object 2 and send an image containing the calibration object 2 to the control device; the control device can drive the calibration object 2 to move by controlling the flange 3 and the suction nozzle 4 to move; the control device can also control the camera 1 to take pictures, receive the images sent by the camera 1, acquire the physical coordinates of the end effector in the physical coordinate system, and determine the calibration result based on the acquired content.
The camera calibration method provided by the embodiment of the application can comprise the following steps:
constructing a translation calibration matrix corresponding to the camera based on a mode of carrying out multiple specified parallel movements on an end effector of the motion mechanism; wherein the plurality of specified parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera;
controlling the end effector to make a plurality of rotational movements such that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera;
when the end effector reaches a first auxiliary position each time, controlling the end effector to perform parallel movement so as to move the end effector to a second auxiliary position, and controlling the camera to take a picture when the end effector moves to each second auxiliary position, so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
determining a plurality of virtual coordinates using pixel coordinates for the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and the translation calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
fitting the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position about a rotation center to obtain a mechanism rotation center;
and determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
Based on the above scheme, after the rotational movements, the end effector reaches a plurality of first auxiliary positions at which the calibration object is located outside the field of view of the camera. For each first auxiliary position, the pixel coordinates of the calibration object in the camera coordinate system, that is, the virtual coordinates, are determined; then fitting processing about a rotation center is performed on the plurality of virtual coordinates and on the physical coordinates of the end effector at each first auxiliary position to obtain the mechanism rotation center. In this way, during fitting of the mechanism rotation center, the pixel coordinates of the calibration object may include pixel coordinates outside the field of view of the camera, and the physical coordinates of the end effector may include the physical coordinates of the first auxiliary positions, which is equivalent to enlarging the field of view of the camera. Performing the fitting processing over these enlarged ranges of pixel coordinates and physical coordinates reduces the sensitivity to errors in the physical or pixel coordinates, improves the stability of the obtained mechanism rotation center, reduces the error of the calibration result, and thereby improves the calibration precision of the camera.
The following describes a method for calibrating a camera according to an embodiment of the present application with reference to the accompanying drawings.
Fig. 3 is a schematic flowchart of a method for calibrating a camera according to an embodiment of the present disclosure, as shown in fig. 3, the method may include steps S301 to S306:
s301, constructing a translation calibration matrix corresponding to the camera based on a mode of carrying out multiple specified parallel movements on an end effector of the motion mechanism; wherein the plurality of prescribed parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera.
It can be understood that the control device can control the end effector of the movement mechanism to perform multiple parallel movements under the condition that the calibration object is ensured to be in the camera field of view, and at the moment, the calibration object can also perform multiple parallel movements in the camera field of view; after each time of parallel movement, the control device can acquire a plurality of physical coordinates of the end effector and a plurality of pixel coordinates of the calibration object, and a matrix corresponding to the camera and used for representing the transformation relation between the camera coordinate system and the physical coordinate system, namely a translation calibration matrix, can be constructed according to the corresponding relation between the physical coordinates and the pixel coordinates.
The calibration object may be an intermediary for determining a conversion relationship between pixel coordinates and physical coordinates, for example, the calibration object may be a checkerboard calibration board, a two-dimensional code calibration board, a material, or the like.
In one implementation, the building a translation calibration matrix corresponding to the camera based on the multiple specified parallel movements of the end effector of the motion mechanism may include steps A1 to A3:
a1, controlling an end effector of a motion mechanism to perform a plurality of specified parallel movements so that a calibration object performs a plurality of parallel movements within a field of view of a camera.
It will be appreciated that the control device may send control signals to the motion mechanism to control the end effector of the motion mechanism to make a plurality of specified parallel movements. The specified parallel movements may be at least four parallel movements of the end effector such that no three of the resulting positions are collinear, and after each translational movement the calibration object remains within the camera field of view.
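As a minimal sketch of the sanity check implied above (illustrative only; the function name and tolerance are assumptions, not from the embodiment), the set of target positions can be validated as follows:

```python
from itertools import combinations

# Illustrative check: at least four target positions, no three collinear.
def positions_valid(points, eps=1e-6):
    """points: list of (x, y) physical coordinates of the end effector."""
    if len(points) < 4:
        return False
    for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3):
        # Twice the signed triangle area; ~0 means the three points are collinear.
        area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
        if abs(area2) < eps:
            return False
    return True
```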
It should be noted that, under the control of the control device, the calibration object may perform multiple parallel movements in the camera field of view, and each parallel movement may be a displacement performed according to a fixed distance, and of course, the distance of each parallel movement may also be different, which is also reasonable. The embodiment of the present application is not limited to a specific parallel moving form of the calibration object.
And A2, controlling the camera to take pictures when the end effector moves to each target position to obtain a target image containing the calibration object.
It can be understood that, each time the end effector moves in parallel to a target position, the control device can control the camera to photograph the calibration object and obtain the image taken by the camera.
Each target position may be a position of the end effector in the three-dimensional world, specifically represented as a three-dimensional coordinate. However, during calibration the end effector only performs parallel movement, that is, one dimension of the end effector in the three-dimensional world can be regarded as fixed; thus, the physical coordinates of the end effector may be two-dimensional coordinates.
It should be noted that, each time the end effector moves to the target position, the calibration object may be in the camera field of view, and move a fixed distance to reach the position corresponding to the target position; the fixed distance may be displacement in the front, rear, left, and right directions in the same plane, for example, a certain calibration object is moved by one unit distance in the positive direction of the x-axis on the plane of the (x, y) two-dimensional coordinate system, and the one unit distance may be referred to as a fixed distance. Therefore, the obtained target image containing the calibration object can comprise pixel coordinates representing the calibration object in the camera coordinate system; in addition, the pixel coordinates from the target image may be coordinates of a point of one two-dimensional image, and thus, the pixel coordinates of the calibration object may be two-dimensional coordinates.
For the determination of the pixel coordinates of the calibration object, feature points that exist in physical space can be selected on the calibration object as anchor points to locate the calibration object in the image. After each target image is obtained, the feature points in each target image containing them can be determined as image points, and the coordinates of the image points taken as the pixel coordinates of the calibration object. Meanwhile, in order to obtain complete and accurate pixel coordinates, the selected image points should be points in the image with clear pixels and well-determined coordinate values.
And A3, generating a translation calibration matrix based on the pixel coordinates of the calibration object in each target image and the physical coordinates of the end effector at each target position in the physical coordinate system.
It will be appreciated that the control device may control the movement of the end effector by a known displacement, and thus, in the physical coordinate system, the control device may acquire the physical coordinates of the end effector; through the corresponding relation between the physical coordinates of the end effector and the pixel coordinates in the target image, a matrix corresponding to the camera and used for representing the transformation relation between the camera coordinate system and the physical coordinate system, namely a translation calibration matrix, can be constructed.
Specifically, the translation calibration matrix may be generated by finding, based on the principle of affine transformation, the transformation relation between the pixel coordinates of the calibration object and the physical coordinates of the end effector, so as to determine the transformation relation between the pixel coordinate system in which the pixel coordinates are located and the physical coordinate system in which the physical coordinates are located; this transformation relation may take the form of a matrix, namely the translation calibration matrix. The generation manner of the translation calibration matrix is not limited, and any generation manner of the translation calibration matrix can be applied to the present application.
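By way of an illustrative sketch (assuming an affine model solved by least squares; the name `fit_affine` and all numeric values are hypothetical, not prescribed by the embodiment), such a matrix can be estimated from pixel-to-physical point correspondences as follows:

```python
import numpy as np

# Estimate a 2x3 affine "translation calibration matrix" M such that the
# physical coordinate W of the end effector satisfies W ≈ M @ [Px, Py, 1].
def fit_affine(pixel_pts, physical_pts):
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    physical_pts = np.asarray(physical_pts, dtype=float)
    n = len(pixel_pts)
    A = np.hstack([pixel_pts, np.ones((n, 1))])      # n x 3 homogeneous pixels
    M, *_ = np.linalg.lstsq(A, physical_pts, rcond=None)
    return M.T                                       # 2 x 3 affine matrix

# Synthetic check: recover a known pixel -> physical mapping from four
# non-collinear correspondences (values are illustrative).
true_M = np.array([[0.02, 0.001, 5.0],
                   [-0.001, 0.02, -3.0]])
pix = np.array([[100.0, 200.0], [900.0, 180.0], [500.0, 700.0], [120.0, 650.0]])
phys = (true_M @ np.hstack([pix, np.ones((4, 1))]).T).T
M_est = fit_affine(pix, phys)
```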
S302, controlling the end effector to perform multiple times of rotary movement so that the end effector reaches multiple first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera.
It can be understood that the end effector of the motion mechanism can be controlled by the control device to perform multiple times of rotational movement within the movement range of the end effector, after the rotational movement, the end effector can reach the first auxiliary position, the calibration object can reach the position outside the field of view of the camera, and at the position outside the field of view of the camera, the camera cannot photograph the calibration object, so that when the end effector is located at the first auxiliary position, the physical coordinates of the end effector can be acquired, but the pixel coordinates of the calibration object cannot be directly acquired; the rotation angle of the end effector for the rotational movement may be a reasonable value in the movement range of the motion mechanism.
It should be noted that, in a process of one rotation movement, a rotation direction of the end effector performing the rotation movement may be clockwise rotation or counterclockwise rotation, and in a next rotation movement, the rotation movement may be a rotation movement in a direction opposite to a previous rotation movement, or a rotation movement in the same direction as the previous rotation movement, which is not specifically limited in this embodiment of the present application.
S303, when the end effector reaches a first auxiliary position each time, controlling the end effector to perform parallel movement so as to move the end effector to a second auxiliary position, and controlling the camera to take a picture when the end effector moves to each second auxiliary position, so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera.
It can be understood that the control device can control the end effector to move from the first auxiliary position to the second auxiliary position in parallel, and each time the end effector reaches the second auxiliary position, the control device can control the camera to take a picture; because the calibration object is located in the camera view field when the end effector is located at the second auxiliary position, at this time, the camera performs photographing processing to obtain an auxiliary image containing the calibration object, and the pixel coordinates of the calibration object in the auxiliary image can be obtained by using the feature points of the calibration object in the auxiliary image as image points.
S304, determining a plurality of virtual coordinates by using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each first auxiliary position and each second auxiliary position, and the translation calibration matrix; the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position.
It is understood that in the foregoing steps, pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, and a translation calibration matrix may be obtained; each time the displacement from the first auxiliary position to the second auxiliary position can be directly obtained through the physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, or the physical coordinates of the end effector obtained by converting the pixel coordinates of the calibration object by utilizing a translation calibration matrix can be obtained; therefore, under the condition that the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, and the translation calibration matrix are all known, the pixel coordinates of the calibration object when the end effector is at the first auxiliary position, that is, the virtual coordinates of the calibration object outside the field of view can be obtained.
And S305, performing fitting processing about a rotation center on the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position to obtain a mechanism rotation center.
It can be understood that, in the fitting process of the rotation center, a circle can be fitted through a plurality of point coordinates, and the center of the circle is the rotation center; if the point coordinates subjected to the fitting process are the plurality of virtual coordinates and the physical coordinates of the plurality of first auxiliary positions, the center of the obtained fitting circle may be the mechanism rotation center.
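One common way to realize such a fitting process is a least-squares (Kåsa) circle fit; the embodiment does not prescribe a particular algorithm, so the following Python sketch is illustrative only:

```python
import numpy as np

# Kåsa circle fit: solve x^2 + y^2 + D*x + E*y + F = 0 in least squares,
# then read off the center (the rotation center) and radius.
def fit_circle(points):
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Points sampled on a circle of center (3, -2) and radius 5 (illustrative).
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
pts = np.column_stack([3 + 5 * np.cos(theta), -2 + 5 * np.sin(theta)])
center, radius = fit_circle(pts)
```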
S306, determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
It will be appreciated that, using the mechanism rotation center, the translation calibration matrix can be converted into a target matrix that represents, for the field of view of the calibrated camera, the transformation relation between the camera coordinate system and the physical coordinate system; this target matrix can serve as the calibration result.
In one implementation, the mechanism center of rotation includes: a pixel rotation center in a camera coordinate system and a physical rotation center in a physical coordinate system.
It can be understood that, if the point coordinates subjected to the fitting processing are a plurality of virtual coordinates, the center of the obtained fitting circle may be the pixel rotation center; if the point coordinates subjected to the fitting processing are physical coordinates of the plurality of first auxiliary positions, the center of the obtained fitting circle may be a physical rotation center.
The physical rotation center can be used as part of the calibration result, to position and navigate the motion mechanism during the production process.
In addition, the determining the calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix may include:
performing normalization processing on the translation calibration matrix by using the pixel rotation center to obtain the calibration result corresponding to the camera; wherein the normalization processing is used to convert the translation calibration matrix into the matrix obtained when the origin of the physical coordinate system corresponding to the motion mechanism is translated to the physical rotation center.
It will be appreciated that the coordinates of the center of rotation of the pixels may be brought into the translational calibration matrix by a normalization process, and in this case the resulting matrix may be used as a calibration result for the field of view of the camera being calibrated.
Specifically, in an implementation manner, the normalizing the translational calibration matrix by using the pixel rotation center to obtain the calibration result corresponding to the camera may include:
characterizing the translation component in the translation calibration matrix by the pixel rotation center and the rotation angle at which the end effector performs rotational movement, using the equation M × C = 0; wherein C is the coordinate of the pixel rotation center, and M is the translation calibration matrix.
It can be understood that, by using the equation M × C =0, the value of the displacement of the end effector moving in parallel to the target position in the translation calibration process, that is, the translation amount in the translation calibration matrix M, is expressed by the coordinates of the pixel rotation center and the rotation angle of the end effector performing the rotation movement, so as to obtain the target matrix; the target matrix can be used as a calibration result of the field of view of the calibrated camera and can be used for representing a transformation relation between a camera coordinate system and a physical coordinate system.
Illustratively, if the translation calibration matrix is written (keeping its first two rows) as M = [cosθ, −sinθ, a; sinθ, cosθ, b], and the pixel rotation center in homogeneous form is C = (CenterX, CenterY, 1)ᵀ, then, in the process of normalizing the translation calibration matrix, the following equations can be determined by using the equation M × C = 0:
a+(CenterX*cosθ-CenterY*sinθ)=0,b+(CenterX*sinθ+CenterY*cosθ)=0;
thus, the amount of translation in the M matrix can be determined:
a=-(CenterX*cosθ-CenterY*sinθ),b=-(CenterX*sinθ+CenterY*cosθ);
substituting the parameter-containing expressions of a and b into M to obtain an object matrix:
it should be noted that, based on the fact that both the translation and rotation transformation in the 2D projective space belong to homogeneous linear transformation, the matrix processing involved in the present embodiment includes, but is not limited to: the construction of the translation calibration matrix, the acquisition of the generalized calibration matrix, and the acquisition of the target calibration matrix may be processing related to homogeneous coordinate operation.
Based on the above scheme, after the rotational movements, the end effector reaches a plurality of first auxiliary positions at which the calibration object is located outside the field of view of the camera. For each first auxiliary position, the pixel coordinates of the calibration object in the camera coordinate system, that is, the virtual coordinates, are determined; then fitting processing about a rotation center is performed on the plurality of virtual coordinates and on the physical coordinates of the end effector at each first auxiliary position to obtain the mechanism rotation center. In this way, during fitting of the mechanism rotation center, the pixel coordinates of the calibration object may include pixel coordinates outside the field of view of the camera, and the physical coordinates of the end effector may include the physical coordinates of the first auxiliary positions, which is equivalent to enlarging the field of view of the camera. Performing the fitting processing over these enlarged ranges of pixel coordinates and physical coordinates reduces the sensitivity to errors in the physical or pixel coordinates, improves the stability of the obtained mechanism rotation center, reduces the error of the calibration result, and thereby improves the calibration precision of the camera.
In addition, in the embodiment, the range of the pixel coordinate of the calibration object can be expanded by acquiring the virtual coordinate outside the field of view of the camera, which is equivalent to expanding the field of view of the camera, so that the influence of disturbance in a small-field-of-view environment is avoided, the problem of amplification of the calibration error in the small-field-of-view disturbance environment is solved, and high-precision calibration of the camera in the small-field-of-view environment is realized.
In addition, in this embodiment, the physical coordinates of the motion mechanism can be acquired by the control device and the pixel coordinates of the calibration object can be located by using the feature points of the calibration object, so as to achieve the target of camera calibration, without depending on a motion mechanism or a calibration board with higher precision, thereby significantly reducing the cost of camera high-precision calibration.
In addition, in the embodiment, a small number of steps can be added in the conventional camera calibration process to obtain each coordinate of the corresponding larger fitting circle, the operation flow is simple, and the calibration robustness is remarkably improved while the algorithm complexity is almost unchanged.
In addition, in the embodiment, in the process of constructing the translation calibration matrix and the target matrix, two-dimensional pixel coordinates and physical coordinates can be used, so that the method can be compatible with camera calibration of various two-dimensional scenes, and has higher transportability.
In addition, in the embodiment, the translation calibration matrix can be normalized by using the rotation center of the mechanism, so that the complexity of subsequent positioning operation is reduced, and the practicability of the calibration result is improved.
Optionally, in another embodiment, on the basis of the method for calibrating a camera shown in fig. 3, the determining a plurality of virtual coordinates by using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first auxiliary position and the second auxiliary position, and the translation calibration matrix may include the following steps:
determining a plurality of virtual coordinates using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix; when the camera is stationary, the intermediate matrix is the translation calibration matrix, and when the camera moves, the intermediate matrix is a generalized calibration matrix obtained by performing correction processing on the translation calibration matrix, where the correction processing is used to convert the translation calibration matrix into a generalized calibration matrix representing the transformation relation between the rotated camera coordinate system and the physical coordinate system, the rotated camera coordinate system being the camera coordinate system formed after the camera rotates and moves along with the motion mechanism.
It can be understood that, in the case that the camera is stationary, the pixel coordinates of the calibration object in each auxiliary image obtained in the foregoing steps, the physical coordinates of the end effector at each of the first auxiliary position and the second auxiliary position, and the translation calibration matrix may be directly used to determine a plurality of virtual coordinates; and under the condition that the camera moves, determining a plurality of virtual coordinates by utilizing the pixel coordinates of the calibration object in each auxiliary image obtained in the previous step, the physical coordinates of the end effector at each first auxiliary position and each second auxiliary position, and a generalized calibration matrix obtained after the translation calibration matrix needs to be corrected.
For the intermediate matrix, a single matrix is used as the intermediate matrix throughout one camera calibration: in the case of a stationary camera, the intermediate matrix may be the translation calibration matrix; in the case of a moving camera, the intermediate matrix may be the generalized calibration matrix obtained by correcting the translation calibration matrix.
Aiming at the correction processing, the rotation angle of the camera which rotates and moves along with the motion mechanism can be brought into a translation calibration matrix to obtain a generalized calibration matrix representing the transformation relation between the rotated camera coordinate system and the physical coordinate system; the rotation angle may be a rotation angle at which the end effector performs rotational movement.
In one implementation, the determining a plurality of virtual coordinates using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and the intermediate matrix may include:
determining a plurality of virtual coordinates using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix according to a translation transformation equation; the translation transformation equation is obtained based on a plane Euclidean transformation principle and is used for representing a transformation relation among pixel coordinates of the calibration object in the auxiliary image, physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, the intermediate matrix and the virtual coordinates.
It can be understood that the displacement from the first auxiliary position to the second auxiliary position may be directly obtained by the physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, or may be obtained by converting the pixel coordinates of the calibration object into the physical coordinates of the end effector by using a translation calibration matrix; therefore, according to the translation transformation equation, when the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, and the intermediate matrix are known, the pixel coordinates of the calibration object when the end effector is at the first auxiliary position, that is, the virtual coordinates of the calibration object outside the field of view can be obtained.
Wherein the translation transformation equation, with the camera stationary, is: M × P′ − M × P = W1 − W2; and the translation transformation equation, in the case of camera motion, is: M_t × P′ − M_t × P = W2 − W1. Here M is the translation calibration matrix serving as the intermediate matrix, P is the pixel coordinate of the calibration object in the auxiliary image, P′ is the virtual coordinate, W1 is the physical coordinate of the end effector at the first auxiliary position, W2 is the physical coordinate of the end effector at the second auxiliary position, and M_t is the generalized calibration matrix serving as the intermediate matrix.
It should be noted that the transformation of points or coordinate systems in two planes can be represented by a homogeneous linear transformation, that is, the transformation of translation, rotation, scaling, etc. of points or coordinate systems can be represented as a set of homogeneous linear transformations T. When a pair of points or coordinate systems respectively belonging to two planes meet the same linear transformation T, a homogeneous transformation matrix can be utilized to act on the pixel coordinates of the point to be solved, and the physical coordinates of the point to be solved are obtained; wherein, the homogeneous transformation matrix can represent the transformation relation of the homogeneous linear transformation T.
Specifically, in this embodiment, when the camera is stationary, applying the translation calibration matrix M to the virtual coordinate P' yields the physical coordinate W1 of the end effector at the first auxiliary position, and applying M to the pixel coordinate P of the calibration object in the auxiliary image yields the physical coordinate W2 of the end effector at the second auxiliary position. When the camera is moving, applying the generalized calibration matrix Mt to the virtual coordinate P' yields the physical coordinate W1 of the end effector at the first auxiliary position, and applying Mt to the pixel coordinate P of the calibration object in the auxiliary image yields the physical coordinate W2 of the end effector at the second auxiliary position.
For the translation transformation equation M*P' - M*P = W1 - W2: M*P' characterizes the conversion of the pixel coordinates of the calibration object into the physical coordinates of the end effector using the translation calibration matrix, when the end effector is located at the first auxiliary position; M*P characterizes the same conversion when the end effector is located at the second auxiliary position; M*P' - M*P may therefore characterize the displacement of the end effector from the first auxiliary position to the second auxiliary position, and this displacement may equally be obtained by subtracting the physical coordinates of the end effector at the second auxiliary position from the physical coordinates of the end effector at the first auxiliary position, i.e., W1 - W2.
It should be noted that, in the case where the camera is stationary, based on the projective transformation property the translation calibration matrix can be written as M = [ s·R  t ; v  1 ], where s is the scaling scale between the camera coordinate system and the physical coordinate system, R represents the rotation matrix between the camera coordinate system and the physical coordinate system, t represents the translation vector between the camera coordinate system and the physical coordinate system, and v represents the perspective transformation parameter between the camera coordinate system and the physical coordinate system. Thus, the translation transformation equation M*P' - M*P = W1 - W2 can be converted into: s·R·[ Px' - Px ; Py' - Py ] = [ W1x - W2x ; W1y - W2y ], where Px' and Py' are the two-dimensional coordinate values of the virtual coordinate, Px and Py are the two-dimensional coordinates of the pixel coordinate of the calibration object in the auxiliary image, W1x and W1y are the two-dimensional coordinates of the physical coordinate of the end effector at the first auxiliary position, and W2x and W2y are the two-dimensional coordinates of the physical coordinate of the end effector at the second auxiliary position. Based on the above formula, when Px, Py, W1x, W1y, W2x, W2y, s, and R are known, the two-dimensional coordinates Px' and Py' of the virtual coordinate can be obtained.
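As an illustrative numerical sketch of the stationary-camera case (not part of the claimed implementation; all variable names and values are assumptions), the virtual coordinate P' can be solved from the translation transformation equation by inverting the linear part s·R of the translation calibration matrix, the perspective term being neglected:

```python
import numpy as np

def virtual_coord_stationary(M, P, W1, W2):
    """Solve the stationary-camera equation M*P' - M*P = W1 - W2 for the
    virtual coordinate P'. M is a 2x3 matrix [s*R | t]; the translation
    column t cancels in the difference, leaving s*R*(P' - P) = W1 - W2."""
    A = M[:, :2]                            # linear part s*R
    d = np.asarray(W1, float) - np.asarray(W2, float)
    return np.asarray(P, float) + np.linalg.solve(A, d)

# Round-trip check with a hypothetical calibration matrix
theta = np.deg2rad(30.0)
s = 0.05                                    # assumed mm-per-pixel scale
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = np.hstack([s * R, [[10.0], [-4.0]]])
P = np.array([620.0, 480.0])                # pixel coordinate in the auxiliary image
P_true = np.array([250.0, 310.0])           # virtual coordinate to recover
W2 = np.array([100.0, 50.0])
W1 = W2 + M[:, :2] @ (P_true - P)           # consistent with the equation above
P_virtual = virtual_coord_stationary(M, P, W1, W2)
```

The recovered `P_virtual` coincides with `P_true`, illustrating that the virtual coordinate outside the field of view is fully determined once P, W1, W2, s, and R are known.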
For the translation transformation equation Mt*P' - Mt*P = W2 - W1: Mt*P' characterizes the conversion of the pixel coordinates of the calibration object into the physical coordinates of the end effector using the generalized calibration matrix, when the end effector is located at the first auxiliary position; Mt*P characterizes the same conversion when the end effector is located at the second auxiliary position; Mt*P' - Mt*P may characterize the displacement of the end effector from the first auxiliary position to the second auxiliary position. In the case of camera motion, the calibration object is stationary while the camera moves with the motion mechanism, so that, when the camera is taken as the motion reference, the displacement of the calibration object relative to the camera can be regarded as movement in the opposite direction; therefore, the displacement of the end effector from the first auxiliary position to the second auxiliary position is obtained by subtracting the physical coordinates of the end effector at the first auxiliary position from those at the second auxiliary position, i.e., W2 - W1.
In the case of camera motion, based on the projective transformation property, the translation calibration matrix M = [ s·R  t ; v  1 ] is corrected into the generalized calibration matrix Mt = [ s·R(θ)·R  t ; v  1 ], where s is the scaling scale between the camera coordinate system and the physical coordinate system, R represents the rotation matrix between the camera coordinate system and the physical coordinate system, t represents the translation vector between the camera coordinate system and the physical coordinate system, v represents the perspective transformation parameter between the camera coordinate system and the physical coordinate system, θ is the rotation angle of the rotational movement of the end effector, and R(θ) is the rotation component. Thus, the translation transformation equation Mt*P' - Mt*P = W2 - W1 can be converted into: s·R(θ)·R·[ Px' - Px ; Py' - Py ] = [ W2x - W1x ; W2y - W1y ], where Px' and Py' are the two-dimensional coordinate values of the virtual coordinate, Px and Py are the two-dimensional coordinates of the pixel coordinate of the calibration object in the auxiliary image, W1x and W1y are the two-dimensional coordinates of the physical coordinate of the end effector at the first auxiliary position, and W2x and W2y are the two-dimensional coordinates of the physical coordinate of the end effector at the second auxiliary position. Based on the above formula, when Px, Py, W1x, W1y, W2x, W2y, s, R, and R(θ) are known, the two-dimensional coordinates Px' and Py' of the virtual coordinate can be obtained.
In another implementation manner, the manner of performing the correction processing on the translation calibration matrix may include the steps of:
correcting the translation calibration matrix by using the rotation angle of the rotational movement of the end effector and a projective transformation equation to obtain a generalized calibration matrix; the projective transformation equation is an equation obtained based on the projective transformation principle and is used for representing the transformation relation among the rotation angle of the rotational movement of the end effector, the translation calibration matrix, and the generalized calibration matrix.
It can be understood that the rotation angle of the rotational movement of the end effector may be taken as a parameter, expressed as matrix elements, and substituted into the projective transformation equation obtained based on the projective transformation principle, so as to obtain the generalized calibration matrix.
Wherein the projective transformation equation is: [ cosθ  -sinθ  Tx ; sinθ  cosθ  Ty ; 0  0  1 ] * Mt = M (rows separated by semicolons), where θ represents the rotation angle of the rotational movement of the end effector, Tx and Ty are the components of the parallel displacement of the end effector from the first auxiliary position to the second auxiliary position, Mt is the generalized calibration matrix serving as the intermediate matrix, and M is the translation calibration matrix serving as the intermediate matrix.
It can be understood that, in the projective transformation equation, θ, M, Tx, and Ty are known quantities, so Mt, that is, the generalized calibration matrix, can be obtained, thereby completing the correction processing of the translation calibration matrix.
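A minimal sketch of this correction processing, assuming the projective transformation equation takes the 3x3 form with the rotation-and-translation block acting on Mt (the function name and the sign convention of θ are assumptions, not the claimed implementation):

```python
import numpy as np

def generalized_matrix(M, theta, Tx, Ty):
    """Correct the 3x3 translation calibration matrix M into the generalized
    calibration matrix Mt, assuming the projective transformation equation
    G(theta, Tx, Ty) * Mt = M, so that Mt = G^-1 * M (sketch only)."""
    G = np.array([[np.cos(theta), -np.sin(theta), Tx],
                  [np.sin(theta),  np.cos(theta), Ty],
                  [0.0,            0.0,           1.0]])
    return np.linalg.solve(G, M)   # solves G @ Mt = M for Mt

# Hypothetical translation calibration matrix (affine, perspective row [0, 0, 1])
M = np.array([[0.05, 0.00, 10.0],
              [0.00, 0.05, -4.0],
              [0.00, 0.00,  1.0]])
Mt = generalized_matrix(M, np.deg2rad(15.0), 2.0, -1.5)
```

With θ = 0 and Tx = Ty = 0, G is the identity and Mt reduces to M, matching the intuition that no rotational movement requires no correction.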
It should be noted that, in the case where the rotation angle and the mechanism rotation center of each rotational movement of the end effector are the same, each amount of displacement by which the end effector is moved in parallel from each first auxiliary position to each second auxiliary position may be the same; in the case where the rotation angle of each rotational movement or the mechanism rotation center differs, each such displacement may be different.
In this embodiment, in both the stationary-camera case and the moving-camera case, the translation calibration matrix and the generalized calibration matrix can respectively be used to determine the plurality of virtual coordinates, so that the method can be applied to various camera calibration scenarios and has high portability.
For better understanding of the method for calibrating a camera provided in the embodiments of the present application, the following description is provided with reference to another embodiment.
Fig. 4 is a schematic flowchart of a method for calibrating a camera according to an embodiment of the present disclosure, as shown in fig. 4, the method may include steps S401 to S407:
S401, controlling an end effector of a motion mechanism to perform multiple specified parallel movements so that a calibration object performs multiple parallel movements in the field of view of a camera, and controlling the camera to take pictures when the end effector moves to each target position to obtain a target image containing the calibration object.
S402, generating a translation calibration matrix based on the pixel coordinates of the calibration object in each target image and the physical coordinates of the end effector at each target position in the physical coordinate system.
S403, controlling the end effector to perform multiple times of rotary movement so that the end effector reaches multiple first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera.
S404, when the first auxiliary position is reached each time, the end effector is controlled to move in parallel so as to move the end effector to a second auxiliary position, and the camera is controlled to take pictures when the end effector moves to each second auxiliary position so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera.
S405, determining a plurality of virtual coordinates by using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each first auxiliary position and each second auxiliary position, and the translation calibration matrix; and the virtual coordinate is the pixel coordinate of the calibration object in a camera coordinate system when the end effector is positioned at the first auxiliary position.
S406, fitting the plurality of virtual coordinates and the physical coordinates of the end effector at the first auxiliary positions about a rotation center to obtain a mechanism rotation center.
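The fitting processing about a rotation center in S406 can be sketched numerically. As an assumption, a simple algebraic (Kasa) circle fit is used below; the point values and function name are hypothetical and not the claimed implementation:

```python
import numpy as np

def fit_rotation_center(points):
    """Algebraic (Kasa) circle fit: linearizes x^2 + y^2 = 2*cx*x + 2*cy*y + c
    and solves for the center (cx, cy) by least squares. A sketch of the
    'fitting processing about a rotation center'; at least three distinct
    points on the circle are required."""
    pts = np.asarray(points, float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy])

# Virtual coordinates placed on a circle around a known mechanism rotation center
center_true = np.array([320.0, 240.0])      # hypothetical pixel rotation center
angles = np.deg2rad([0.0, 25.0, 60.0, 110.0, 170.0])
pts = center_true + 85.0 * np.c_[np.cos(angles), np.sin(angles)]
center = fit_rotation_center(pts)
```

The same fit can be applied once to the virtual pixel coordinates (yielding the pixel rotation center) and once to the physical coordinates of the first auxiliary positions (yielding the physical rotation center); using many points spread over a wide arc is what reduces sensitivity to individual coordinate errors.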
S407, determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
It is to be understood that the steps of the present embodiment have been described in the foregoing embodiments, and therefore, redundant description is not provided herein.
Based on the above scheme, after the end effector is rotated and moved, the end effector reaches a plurality of first auxiliary positions where the calibration object is located outside the field of view of the camera, and when the end effector is located at each first auxiliary position, pixel coordinates of the calibration object in a camera coordinate system, that is, virtual coordinates, are determined, then, fitting processing about a rotation center is performed on the plurality of virtual coordinates and physical coordinates of the end effector located at each first auxiliary position, and a mechanism rotation center is obtained, so that in the process of fitting the mechanism rotation center, the pixel coordinates of the calibration object may include pixel coordinates outside the field of view of the camera, and the physical coordinates of the end effector may include physical coordinates of the first auxiliary positions, which is equivalent to enlarging the field of view of the camera. Therefore, the method and the device have the advantages that the pixel coordinates and the physical coordinates are subjected to fitting processing, the sensitivity to the errors of the physical coordinates or the pixel coordinates can be reduced, the stability of the obtained mechanism rotation center is improved, the errors of calibration results are reduced, and the calibration precision of the camera is improved.
In order to better understand how the method for calibrating a camera is implemented in the embodiment of the present application in the case that the camera is still, the following description will be made in conjunction with another embodiment, as shown in fig. 5, where the method may include steps S501 to S509:
S501, controlling an end effector of a motion mechanism to perform multiple specified parallel movements so that a calibration object performs multiple parallel movements in the field of view of a camera, and controlling the camera to take pictures when the end effector moves to each target position to obtain a target image containing the calibration object.
S502, acquiring a feature point pixel coordinate set P and a motion mechanism physical coordinate set W based on the multiple parallel movements and the target image; the feature point pixel coordinate set P is a set of pixel coordinates including a plurality of calibration objects, and the motion mechanism physical coordinate set W is a set of physical coordinates including a plurality of end effectors.
It is understood that the pixel coordinates of the calibration object may be pixel coordinates of a feature point in the target image with respect to the calibration object; the physical coordinates of the end effector may be obtained by the control device during the movement of the control motion mechanism.
S503, generating a translation calibration matrix according to the equation M*P = W; where M is the translation calibration matrix, P is the feature point pixel coordinate set, and W is the motion mechanism physical coordinate set.
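The equation M*P = W of S503 can be illustrated with a hedged least-squares sketch: assuming an affine model (perspective term neglected, no lens distortion), the translation calibration matrix is fitted from the pixel/physical correspondences collected in S501 and S502. All names and values below are hypothetical:

```python
import numpy as np

def fit_translation_matrix(P_pix, W_phys):
    """Least-squares fit of M in M*P = W from N >= 3 non-collinear
    correspondences, assuming an affine model with the perspective
    row fixed to [0, 0, 1] (a sketch, not the claimed implementation)."""
    P_pix = np.asarray(P_pix, float)
    W_phys = np.asarray(W_phys, float)
    Ph = np.hstack([P_pix, np.ones((len(P_pix), 1))])   # homogeneous pixel coords
    A, *_ = np.linalg.lstsq(Ph, W_phys, rcond=None)     # (3, 2) stacked [s*R | t]^T
    return np.vstack([A.T, [0.0, 0.0, 1.0]])

# Synthetic correspondences generated by a known (hypothetical) affine map
rng = np.random.default_rng(0)
P_pix = rng.uniform(0.0, 640.0, size=(9, 2))
M_true = np.array([[0.020, -0.005, 12.0],
                   [0.004,  0.021, -7.0],
                   [0.000,  0.000,  1.0]])
W_phys = (M_true @ np.hstack([P_pix, np.ones((9, 1))]).T).T[:, :2]
M_est = fit_translation_matrix(P_pix, W_phys)
```

On exact affine data the fit recovers the generating matrix; with real measurements the least-squares residual absorbs pixel-detection and encoder noise.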
S504, controlling the end effector to rotate and move for multiple times so that the end effector reaches multiple first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera.
S505, when the first auxiliary position is reached each time, the end effector is controlled to move in parallel so as to move the end effector to a second auxiliary position, and the camera is controlled to take pictures when the end effector moves to each second auxiliary position so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera.
S506, acquiring the physical coordinate W1 of the end effector at the first auxiliary position, the physical coordinate W2 when located at the second auxiliary position, and the pixel coordinate P of the calibration object at the second auxiliary position.
S507, when the camera is stationary, determining a plurality of virtual coordinates P' by using the physical coordinate W1, the physical coordinate W2, the pixel coordinate P, the translation calibration matrix M, and the translation transformation equation M*P' - M*P = W1 - W2.
S508, performing fitting processing about the rotation center on the plurality of virtual coordinates P' and the physical coordinates W1 of the end effector at each first auxiliary position to obtain a mechanism rotation center C.
S509, obtaining a target matrix M' by using the mechanism rotation center C, the translation calibration matrix M, and the equation M*C = 0.
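A hedged sketch of the normalization in S509, assuming M is a 3x3 affine calibration matrix and that M'*C = 0 is enforced on the two-dimensional part by rewriting the translation column (names and values below are assumptions):

```python
import numpy as np

def normalize_to_rotation_center(M, C_pixel):
    """Sketch of obtaining the target matrix M' such that M' * C = 0 on the
    2D part: the translation column is rewritten to t' = -(s*R) @ C, which
    amounts to translating the physical origin to the physical rotation
    center while keeping the linear part of M unchanged."""
    Mp = np.array(M, float)
    Mp[:2, 2] = -Mp[:2, :2] @ np.asarray(C_pixel, float)
    return Mp

M = np.array([[0.05, 0.00, 10.0],
              [0.00, 0.05, -4.0],
              [0.00, 0.00,  1.0]])
C = np.array([320.0, 240.0])                # hypothetical pixel rotation center
M_target = normalize_to_rotation_center(M, C)
mapped = M_target @ np.array([C[0], C[1], 1.0])   # the center now maps to the origin
```

After normalization, the pixel rotation center maps to the physical origin while the scaling and rotation of the original calibration are preserved.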
It can be understood that steps S501-S503 of the present embodiment are similar to step S301 of the previous embodiment, step S504 is similar to step S302 of the previous embodiment, step S505 is similar to step S303 of the previous embodiment, steps S506 and S507 have been described in the previous embodiment, and steps S508 and S509 are similar to steps S305 and S306 of the previous embodiment, and thus are not described in detail herein.
Based on the above scheme, after the end effector is rotated and moved, the end effector reaches a plurality of first auxiliary positions where the calibration object is located outside the field of view of the camera, and when the end effector is located at each first auxiliary position, pixel coordinates of the calibration object in a camera coordinate system, that is, virtual coordinates, are determined, then, fitting processing about a rotation center is performed on the plurality of virtual coordinates and physical coordinates of the end effector located at each first auxiliary position, and a mechanism rotation center is obtained, so that in the process of fitting the mechanism rotation center, the pixel coordinates of the calibration object may include pixel coordinates outside the field of view of the camera, and the physical coordinates of the end effector may include physical coordinates of the first auxiliary positions, which is equivalent to enlarging the field of view of the camera. Therefore, the method and the device have the advantages that the pixel coordinates and the physical coordinates are subjected to fitting processing, the sensitivity to errors of the physical coordinates or the pixel coordinates can be reduced, the stability of the obtained mechanism rotation center is improved, errors of a calibration result are reduced, and the calibration precision of the camera is improved.
In order to better understand how to implement the method for calibrating a camera in the case of camera motion according to the embodiment of the present application, the following description is made with reference to another embodiment, and as shown in fig. 6, the method may include steps S601 to S6011:
S601, controlling an end effector of a motion mechanism to perform multiple specified parallel movements so that a calibration object performs multiple parallel movements in the field of view of a camera, and controlling the camera to take pictures when the end effector moves to each target position to obtain a target image containing the calibration object.
S602, acquiring a feature point pixel coordinate set P and a motion mechanism physical coordinate set W based on the multiple parallel movements and the target image; the feature point pixel coordinate set P is a set of pixel coordinates including a plurality of calibration objects, and the motion mechanism physical coordinate set W is a set of physical coordinates including a plurality of end effectors.
S603, generating a translation calibration matrix according to the equation M*P = W; where M is the translation calibration matrix, P is the feature point pixel coordinate set, and W is the motion mechanism physical coordinate set.
S604, controlling the end effector to perform multiple times of rotary movement so that the end effector reaches multiple first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera.
S605, controlling the end effector to perform parallel movement when reaching the first auxiliary position each time so as to move the end effector to the second auxiliary position, and controlling the camera to take pictures when the end effector moves to each second auxiliary position so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera.
S606, acquiring the physical coordinate W1 of the end effector at the first auxiliary position, the physical coordinate W2 when located at the second auxiliary position, and the pixel coordinate P of the calibration object at the second auxiliary position.
S607, constructing a rotation matrix R (theta) based on the rotation angle theta of the end effector for rotating and moving; wherein the rotation matrix R (theta) is used for representing the transformation relation between the camera coordinate system before rotation and the camera coordinate system after rotation.
It will be appreciated that the rotation matrix R(θ) may be the element of the projective transformation equation of the previous embodiment that contains θ, that is, R(θ) = [ cosθ  -sinθ ; sinθ  cosθ ]; accordingly, the projective transformation equation of the foregoing embodiment may be written in the form [ R(θ)  T ; 0  1 ] * Mt = M, where R(θ) may also be referred to as the rotation component, T is the translation vector between the camera coordinate system and the physical coordinate system, and T may include Tx and Ty.
S608, utilizing the rotation matrix R(θ) to obtain a generalized calibration matrix Mt for the camera motion.
It will be appreciated that, based on the equation [ R(θ)  T ; 0  1 ] * Mt = M, one can obtain Mt = [ R(θ)  T ; 0  1 ]^(-1) * M; therefore, the generalized calibration matrix Mt for camera motion can be obtained.
S609, in the case of camera motion, determining a plurality of virtual coordinates P' by using the physical coordinate W1, the physical coordinate W2, the pixel coordinate P, the generalized calibration matrix Mt, and the translation transformation equation Mt*P' - Mt*P = W2 - W1.
S6010, performing fitting processing about the rotation center on the plurality of virtual coordinates P' and the physical coordinates W1 of the end effector at each first auxiliary position to obtain a mechanism rotation center C.
S6011, obtaining a target matrix M' by using the mechanism rotation center C, the translation calibration matrix M, and the equation M*C = 0.
It is understood that steps S601-S603 in this embodiment are similar to S301 in the foregoing embodiment, step S604 is similar to S302 in the foregoing embodiment, step S605 is similar to S303 in the foregoing embodiment, steps S606, S607, S608, and S609 are already described in the foregoing embodiment, and steps S6010 and S6011 are similar to steps S305 and S306 in the foregoing embodiment, and therefore, they are not described herein again.
Based on the above scheme, after the end effector is rotated and moved, the end effector reaches a plurality of first auxiliary positions where the calibration object is located outside the field of view of the camera, and when the end effector is located at each first auxiliary position, pixel coordinates of the calibration object in a camera coordinate system, that is, virtual coordinates, are determined, then, fitting processing about a rotation center is performed on the plurality of virtual coordinates and physical coordinates of the end effector located at each first auxiliary position, and a mechanism rotation center is obtained, so that in the process of fitting the mechanism rotation center, the pixel coordinates of the calibration object may include pixel coordinates outside the field of view of the camera, and the physical coordinates of the end effector may include physical coordinates of the first auxiliary positions, which is equivalent to enlarging the field of view of the camera. Therefore, the method and the device have the advantages that the pixel coordinates and the physical coordinates are subjected to fitting processing, the sensitivity to the errors of the physical coordinates or the pixel coordinates can be reduced, the stability of the obtained mechanism rotation center is improved, the errors of calibration results are reduced, and the calibration precision of the camera is improved.
Fig. 7 is a schematic structural diagram of a camera calibration apparatus provided in an embodiment of the present application, and as shown in fig. 7, the apparatus may include the following modules:
the construction module 710 is configured to construct a translation calibration matrix corresponding to the camera based on a manner of performing multiple specified parallel movements on an end effector of the motion mechanism; wherein the plurality of specified parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera;
a first control module 720, configured to control the end effector to perform a plurality of rotational movements so that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is such that the calibration object is located outside the field of view of the camera;
the second control module 730 is configured to control the end effector to perform parallel movement to move the end effector to the second auxiliary position each time the end effector reaches the first auxiliary position, and control the camera to take a picture when the end effector moves to each second auxiliary position, so as to obtain an auxiliary image including the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
a first determining module 740 for determining a plurality of virtual coordinates using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
a fitting module 750, configured to perform fitting processing on the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position with respect to a rotation center, so as to obtain a mechanism rotation center;
a second determining module 760, configured to determine a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
The first determining module 740 may include:
a first determination unit configured to determine a plurality of virtual coordinates using pixel coordinates of the calibration object in each of the auxiliary images, physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix; the intermediate matrix is the translation calibration matrix when the camera is stationary, and is a generalized calibration matrix obtained by performing correction processing on the translation calibration matrix when the camera is moving, where the correction processing is used to convert the translation calibration matrix into the generalized calibration matrix; the generalized calibration matrix is used for representing the transformation relation between the rotated camera coordinate system and the physical coordinate system, and the rotated camera coordinate system is the camera coordinate system formed after the camera rotates and moves along with the motion mechanism.
The mechanism center of rotation includes: a pixel rotation center in a camera coordinate system and a physical rotation center in a physical coordinate system.
The second determining module 760 may include:
the second determining unit is used for performing normalization processing on the translation calibration matrix by using the pixel rotation center to obtain a calibration result corresponding to the camera; wherein the normalization processing is configured to convert the translational calibration matrix into a matrix when translating an origin of a physical coordinate system corresponding to the motion mechanism to the physical rotation center.
The first determining unit is configured to:
determining a plurality of virtual coordinates using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix according to a translation transformation equation; wherein the translation transformation equation is an equation obtained based on the principle of planar Euclidean transformation, and is used to represent a transformation relationship among the pixel coordinates of the calibration object in the auxiliary image, the physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, the intermediate matrix, and the virtual coordinates.
Wherein, with the camera stationary, the translation transformation equation is: M*P' - M*P = W1 - W2; with the camera moving, the translation transformation equation is: Mt*P' - Mt*P = W2 - W1. Here, M is the translation calibration matrix serving as the intermediate matrix, P is the pixel coordinate of the calibration object in the auxiliary image, P' is the virtual coordinate, W1 is the physical coordinate of the end effector at the first auxiliary position, W2 is the physical coordinate of the end effector at the second auxiliary position, and Mt is the generalized calibration matrix serving as the intermediate matrix.
The method for correcting the translation calibration matrix comprises the following steps:
correcting the translation calibration matrix by using a rotation angle of the end effector for rotating and moving and a projective transformation equation to obtain a generalized calibration matrix;
the projective transformation equation is an equation obtained based on a projective transformation principle and is used for representing a transformation relation among a rotation angle of the end effector for rotating and moving, the translation calibration matrix and the generalized calibration matrix.
The projective transformation equation is: [ cosθ  -sinθ  Tx ; sinθ  cosθ  Ty ; 0  0  1 ] * Mt = M (rows separated by semicolons), where θ represents the rotation angle of the rotational movement of the end effector, Tx and Ty are the components of the parallel displacement of the end effector from the first auxiliary position to the second auxiliary position, Mt is the generalized calibration matrix serving as the intermediate matrix, and M is the translation calibration matrix serving as the intermediate matrix.
The second determining unit may include:
a second determining subunit, configured to characterize, by using the equation M*C = 0, the translation component in the translation calibration matrix through the pixel rotation center and the rotation angle of the rotational movement of the end effector; wherein M is the translation calibration matrix and C is the pixel rotation center.
The building module 710 may include:
the parallel moving unit is used for controlling the end effector of the motion mechanism to perform multiple specified parallel movements so as to enable the calibration object to perform multiple parallel movements in the field of view of the camera;
the photographing unit is used for controlling the camera to photograph when the end effector moves to each target position, so that a target image containing the calibration object is obtained;
and the generation unit is used for generating a translation calibration matrix based on the pixel coordinates of the calibration object in each target image and the physical coordinates of the end effector at each target position in the physical coordinate system.
Based on the above scheme, after the end effector is rotated and moved, the end effector reaches a plurality of first auxiliary positions where the calibration object is located outside the field of view of the camera, and when the end effector is located at each first auxiliary position, pixel coordinates of the calibration object in a camera coordinate system, that is, virtual coordinates, are determined, then, fitting processing about a rotation center is performed on the plurality of virtual coordinates and physical coordinates of the end effector located at each first auxiliary position, and a mechanism rotation center is obtained, so that in the process of fitting the mechanism rotation center, the pixel coordinates of the calibration object may include pixel coordinates outside the field of view of the camera, and the physical coordinates of the end effector may include physical coordinates of the first auxiliary positions, which is equivalent to enlarging the field of view of the camera. Therefore, the method and the device have the advantages that the pixel coordinates and the physical coordinates are subjected to fitting processing, the sensitivity to errors of the physical coordinates or the pixel coordinates can be reduced, the stability of the obtained mechanism rotation center is improved, errors of a calibration result are reduced, and the calibration precision of the camera is improved.
An embodiment of the present application further provides an electronic device, as shown in fig. 8, including:
A memory 801 for storing a computer program;
the processor 802 is configured to implement the following steps when executing the program stored in the memory 801:
constructing a translation calibration matrix corresponding to the camera by performing a plurality of specified parallel movements with an end effector of the motion mechanism; wherein the plurality of specified parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera;
controlling the end effector to make a plurality of rotational movements such that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is a position such that the calibration object is located outside the field of view of the camera;
each time a first auxiliary position is reached, the end effector is controlled to move in parallel so as to move the end effector to a second auxiliary position, and the camera is controlled to take a picture when the end effector moves to each second auxiliary position, so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
determining a plurality of virtual coordinates using the pixel coordinates for the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
fitting the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position about a rotation center to obtain a mechanism rotation center;
and determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
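The virtual-coordinate step above can be sketched for the stationary-camera case. From the translation transformation equation M*P′ − M*P = W_1 − W_2 (see claim 5 below), P′ follows by inverting the matrix; the use of a 2×2 linear part `M_lin` is an assumption about how the calibration matrix is parameterized, since coordinate differences are unaffected by any translation component:

```python
import numpy as np

def virtual_coordinate(p_pixel, w1, w2, M_lin):
    """Solve M_lin @ (p' - p) = w1 - w2 for the virtual pixel
    coordinate p' of the calibration object at the first auxiliary
    position, given its pixel coordinate p in the auxiliary image and
    the end-effector physical coordinates w1, w2."""
    p = np.asarray(p_pixel, dtype=float)
    d_phys = np.asarray(w1, dtype=float) - np.asarray(w2, dtype=float)
    # p' = p + M_lin^{-1} @ (w1 - w2)
    return p + np.linalg.solve(np.asarray(M_lin, dtype=float), d_phys)
```

For example, with a calibration matrix that scales pixels by 2 in physical units, a physical offset of (4, 2) between the auxiliary positions moves the pixel coordinate by (2, 1).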
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-mentioned camera calibration methods.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of camera calibration of any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., Solid State Disks (SSDs)), among others.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (12)
1. A method for calibrating a camera is applied to control equipment, and comprises the following steps:
constructing a translation calibration matrix corresponding to the camera based on a mode of carrying out multiple specified parallel movements on an end effector of the motion mechanism; wherein the plurality of specified parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera;
controlling the end effector to make a plurality of rotational movements such that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is a position such that the calibration object is located outside the field of view of the camera;
when the first auxiliary position is reached each time, the end effector is controlled to move in parallel so as to move the end effector to a second auxiliary position, and the camera is controlled to take pictures when the end effector moves to each second auxiliary position so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
determining a plurality of virtual coordinates using the pixel coordinates for the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
fitting the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position about a rotation center to obtain a mechanism rotation center;
and determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
2. The method of claim 1, wherein determining a plurality of virtual coordinates using pixel coordinates for the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix comprises:
determining a plurality of virtual coordinates using pixel coordinates for the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and the intermediate matrix; the intermediate matrix is the translation calibration matrix when the camera is stationary, and the intermediate matrix is a generalized calibration matrix obtained by correcting the translation calibration matrix when the camera is moving, where the correction processing is used to convert the translation calibration matrix into: and the generalized calibration matrix is used for representing the transformation relation between the rotated camera coordinate system and the physical coordinate system, and the rotated camera coordinate system is a camera coordinate system formed after the camera rotates and moves along with the motion mechanism.
3. The method of claim 1, wherein the mechanism center of rotation comprises: a pixel rotation center in a camera coordinate system and a physical rotation center in a physical coordinate system;
the determining the calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix includes:
performing normalization processing on the translation calibration matrix by using the pixel rotation center to obtain a calibration result corresponding to the camera; wherein the normalization processing is configured to convert the translational calibration matrix into a matrix when translating an origin of a physical coordinate system corresponding to the motion mechanism to the physical rotation center.
4. The method of claim 2, wherein determining a plurality of virtual coordinates using pixel coordinates for the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix comprises:
determining a plurality of virtual coordinates using pixel coordinates of the calibration object in each auxiliary image, physical coordinates of the end effector at each of the first and second auxiliary positions, and an intermediate matrix according to a translation transformation equation; wherein the translation transformation equation is an equation obtained based on the principle of planar euclidean transformation, and is used to represent a transformation relationship among the pixel coordinates of the calibration object in the auxiliary image, the physical coordinates of the end effector at the first auxiliary position and the second auxiliary position, the intermediate matrix, and the virtual coordinates.
5. The method of claim 4, wherein, with the camera stationary, the translation transformation equation is: M*P′ − M*P = W_1 − W_2; and, with the camera moving, the translation transformation equation is: M_t*P′ − M_t*P = W_2 − W_1;

wherein M is the translation calibration matrix serving as the intermediate matrix, P is the pixel coordinate of the calibration object in the auxiliary image, P′ is the virtual coordinate, W_1 is the physical coordinate of the end effector at the first auxiliary position, W_2 is the physical coordinate of the end effector at the second auxiliary position, and M_t is the generalized calibration matrix serving as the intermediate matrix.
6. The method according to claim 2, wherein the correcting of the translation calibration matrix comprises:
correcting the translation calibration matrix by using a rotation angle of the end effector for rotating and moving and a projective transformation equation to obtain a generalized calibration matrix; the projective transformation equation is an equation obtained based on a projective transformation principle and is used for representing a transformation relation among a rotation angle of the end effector for rotating and moving, the translation calibration matrix and the generalized calibration matrix.
7. The method of claim 6, wherein, in the projective transformation equation, θ represents the rotation angle of the rotational movement of the end effector, T_x and T_y are the displacements of the parallel movement of the end effector from the first auxiliary position to the second auxiliary position, M_t is the generalized calibration matrix, and M is the translation calibration matrix serving as the intermediate matrix.
8. The method according to claim 3, wherein the normalizing the translational calibration matrix using the pixel rotation center comprises:
characterizing the translation component in the translation calibration matrix by the pixel rotation center and the rotation angle of the rotational movement of the end effector, using the equation M × C = 0; wherein C is the coordinate of the pixel rotation center and M is the translation calibration matrix.
9. The method according to any one of claims 1-8, wherein constructing a translation calibration matrix corresponding to the camera based on a plurality of specified parallel movements of the end effector of the motion mechanism comprises:
controlling an end effector of the motion mechanism to perform a plurality of specified parallel movements so that the calibration object performs a plurality of parallel movements within the field of view of the camera;
controlling the camera to take pictures when the end effector moves to each target position to obtain a target image containing the calibration object;
a translation calibration matrix is generated based on the pixel coordinates of the calibration object in each target image and the physical coordinates of the end effector at each target position in the physical coordinate system.
10. A camera calibration apparatus, comprising:
the construction module is used for constructing a translation calibration matrix corresponding to the camera based on a mode of carrying out multiple times of specified parallel movement on the end effector of the motion mechanism; wherein the plurality of prescribed parallel movements cause a calibration object to make a plurality of parallel movements within the field of view of the camera;
a first control module for controlling the end effector to perform a plurality of rotational movements such that the end effector reaches a plurality of first auxiliary positions; wherein the first auxiliary position is a position such that the calibration object is located outside the field of view of the camera;
the second control module is used for controlling the end effector to move in parallel when the first auxiliary position is reached each time so as to move the end effector to a second auxiliary position, and controlling the camera to take pictures when the end effector moves to each second auxiliary position so as to obtain an auxiliary image containing the calibration object; wherein the second auxiliary position is a position such that the calibration object is located within the field of view of the camera;
a first determining module for determining a plurality of virtual coordinates using the pixel coordinates of the calibration object in each auxiliary image, the physical coordinates of the end effector at each of the first and second auxiliary positions, and the translational calibration matrix; wherein the virtual coordinates are pixel coordinates of the calibration object in a camera coordinate system when the end effector is located at the first auxiliary position;
the fitting module is used for performing fitting processing about a rotation center on the plurality of virtual coordinates and the physical coordinates of the end effector at each first auxiliary position to obtain a mechanism rotation center;
and the second determination module is used for determining a calibration result corresponding to the camera by using the mechanism rotation center and the translation calibration matrix.
11. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 9 when executing a program stored in a memory.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211456438.1A CN115713563A (en) | 2022-11-21 | 2022-11-21 | Camera calibration method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115713563A true CN115713563A (en) | 2023-02-24 |
Family
ID=85234070
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115994954A (en) * | 2023-03-22 | 2023-04-21 | 浙江伽奈维医疗科技有限公司 | High-precision large-field near infrared optical camera calibration device and calibration method |
CN117173257A (en) * | 2023-11-02 | 2023-12-05 | 安徽蔚来智驾科技有限公司 | 3D target detection and calibration parameter enhancement method, electronic equipment and medium |
CN117173257B (en) * | 2023-11-02 | 2024-05-24 | 安徽蔚来智驾科技有限公司 | 3D target detection and calibration parameter enhancement method, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||