
CN116823960A - Visual system calibration method without overlapping visual fields - Google Patents

Visual system calibration method without overlapping visual fields

Info

Publication number
CN116823960A
CN116823960A
Authority
CN
China
Prior art keywords
calibration
camera
target
cameras
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310549708.1A
Other languages
Chinese (zh)
Inventor
彭涛
陆永刚
徐锟
吴守仓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MCC Baosteel Technology Services Co Ltd
Original Assignee
MCC Baosteel Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MCC Baosteel Technology Services Co Ltd filed Critical MCC Baosteel Technology Services Co Ltd
Priority to CN202310549708.1A priority Critical patent/CN116823960A/en
Publication of CN116823960A publication Critical patent/CN116823960A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The visual system calibration method without overlapping fields of view is used to calibrate vision systems whose measured target information is not expressed in the same coordinate system; all cameras are jointly calibrated so that together they form a multi-camera vision system, yielding a reference camera coordinate system for that system. The method comprises the following steps: S1, designing a three-dimensional target of known size, whose front and back surfaces carry target arrays composed of coded targets bearing characteristic information; each coded target comprises a positioning area, which provides the position information of the target so as to locate it, and a code identification area, which provides the coded numerical information; S2, monocular calibration; S3, multi-camera calibration.

Description

Visual system calibration method without overlapping visual fields
Technical Field
The application relates to the technical field of vision system calibration, in particular to a vision system calibration method without overlapping visual fields.
Background
In metallurgical plants, heavy-load conveyors are needed to transport material. During long feeding campaigns, the harsh site environment severely wears and corrodes the chain, the core transmission component of the conveyor; a broken chain immediately stops the whole conveyor. To overcome the shortcomings of measurement with manual gauges, visual measurement technology is applied to the detection of feature-point spacing, and a four-camera system with non-overlapping fields of view is constructed to monitor the wear of the chains on both sides in real time. Calibrating this four-camera system is a multi-camera problem (more than two cameras) and involves a calibration-plate target decoding algorithm, monocular calibration, and multi-camera system calibration based on a three-dimensional target under non-overlapping fields of view.
in the prior art 1, chinese patent CN202211285876.6 discloses a method for identifying and calibrating kinematic parameters of an unmanned engineering vehicle based on multi-vision, and a method for identifying and calibrating kinematic parameters of an unmanned engineering vehicle based on multi-vision, wherein in the first step, the steps for planning a camera array of a multi-vision system are as follows: (1) Restricting the movement of the tail end of the mechanical arm bucket of the unmanned engineering vehicle on a plane, and determining the movement track of the tail end of the mechanical arm bucket of the unmanned engineering vehicle; (2) Obtaining projection of each camera view field on a motion plane according to a camera projection equation; (3) Encoding pose, namely encoding pose by using a genetic algorithm, wherein the maximum coverage area of the track is taken as an index, ensuring that each point on the track to be detected is at least positioned in the overlapping range of two camera fields of view, reducing the distance between a camera and a projection surface, increasing the overlapping degree of the camera fields of view, improving the measurement precision, and finally obtaining pose information of each camera;
in the prior art 2, chinese patent CN202010131077.8, a binocular vision system is formed by a first measurement camera, a second measurement camera and an auxiliary camera respectively, then a checkerboard calibration board is photographed by the first measurement camera, the auxiliary camera, the second measurement camera and the auxiliary camera simultaneously, after photographing, space feature point matching is performed, after paired space feature points are obtained, an essential matrix is calculated, and a rotation matrix R and a translation vector T in an external reference matrix can be decomposed by the essential matrix;
in the prior art 1, a camera projection equation is utilized to convert the camera projection equation into a plane vector for position correction and splicing, and a 2-dimensional vector is utilized to encode pose by using a genetic algorithm;
in the prior art 2, a binocular vision system is formed by a first measuring camera, a second measuring camera and an auxiliary camera respectively, then the first measuring camera, the auxiliary camera, the second measuring camera and the auxiliary camera are utilized to shoot a checkerboard calibration plate at the same time, namely, the auxiliary cameras respectively participate in shooting of a No. 1 and a No. 2 camera, and the position calibration is carried out by utilizing the overlapped part of the visual fields of the auxiliary cameras;
however, the system calibration of the multi-camera system of the non-overlapping fields of view is still not solved by the prior art 1 and the prior art 2.
Disclosure of Invention
In view of the above, the present application aims, based on the on-line monitoring of the spacing between the chains on the two sides of the activated-carbon conveyor, to construct a four-camera system with non-overlapping fields of view and to complete its calibration by designing a three-dimensional target, thereby obtaining a visual system calibration method without overlapping fields of view.
The present application aims to solve one of the problems in the background art.
The technical scheme adopted by the application is as follows: to achieve the above and other related objects, the present application provides a vision system calibration method without overlapping fields of view;
the visual system calibration method without overlapping visual fields is used for calibrating visual systems of which target information obtained by two visual systems is not in the same coordinate system, and carrying out joint calibration on all cameras, wherein all cameras jointly form a multi-camera visual system, so as to obtain a reference camera coordinate system of the multi-camera visual system;
the method comprises the following steps:
s1, designing a three-dimensional target, wherein the size of the three-dimensional target is known, and the front surface and the back surface of the three-dimensional target are provided with target arrays consisting of coding targets with characteristic information; the coding target of the characteristic information comprises a positioning area and a coding identification area, wherein the positioning area is used for providing the position information of the target so as to realize the positioning of the target; the code identification area is used for providing code numerical value information;
the three-dimensional target is a target detection and positioning tool in the fields of vision and three-dimensional vision, and consists of a group of specially designed plane patterns, wherein the patterns have specific geometric relations and color distribution under different visual angles; in stereoscopic vision, internal and external parameters of a camera can be deduced by observing stereoscopic targets at different angles.
S2, monocular calibration; S3, multi-camera calibration.
Preferably, in S1, a least-squares ellipse fit is used to determine the center coordinates of the small coded circles of the coded marker points; the region of each individual coded marker point is identified and extracted from these center coordinates, and the ID of the coded marker point is then decoded according to its encoding rule.
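The least-squares ellipse fit described above can be sketched in NumPy (an illustrative reconstruction, not the patent's implementation): the conic A·x² + B·xy + C·y² + D·x + E·y = 1 is fitted by linear least squares, and the center is the point where the conic's gradient vanishes.

```python
import numpy as np

def fit_ellipse_center(pts):
    """Least-squares conic fit A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1,
    then return the centre, where the conic gradient vanishes."""
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)[0]
    # gradient = 0  =>  [2A  B; B  2C] [xc, yc]^T = [-D, -E]^T
    xc, yc = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return xc, yc
```

Fitting the full conic rather than a circle makes the center estimate robust to the perspective foreshortening that turns the coded circles into ellipses in the image.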
The technical scheme of the application also comprises the following technical characteristics:
preferably, the calibration algorithm in S2 includes: and calibrating the internal parameters of the camera and solving the distortion coefficient to finish monocular calibration.
The distortion coefficients are a set of parameters describing the imaging distortion of the camera. Distortion is the loss of imaging quality caused by factors such as the camera optics and the image sensor, and includes radial and tangential distortion; the distortion coefficients are used to de-distort the image and eliminate these effects, yielding a more accurate image.
Radial distortion arises because the uneven curvature radius of the camera lens makes the pixel positions at the imaging center and at the edges inaccurate, producing a distortion centered on the imaging center; radial distortion is usually described by the parameters k1 and k2, which reflect the degree of curvature non-uniformity and the deviation from the imaging center.
Tangential distortion arises when the camera lens is not perpendicular to the imaging plane, or the imaging plane is not parallel to the scene plane; it is described by the parameters p1 and p2, which reflect how the perpendicularity and parallelism of the imaging plane and lens affect imaging quality.
The distortion coefficients are provided by the camera manufacturer or measured in a camera calibration experiment; by acquiring and applying them, the camera image can be de-distorted to obtain a more accurate image.
Preferably, the calibration algorithm in S3 includes: solving the relative extrinsic parameters between cameras, optimizing the extrinsics by minimizing the reprojection error to remove the accumulated error caused by chained matrix multiplication, and completing the multi-camera system calibration under non-overlapping fields of view.
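The reprojection error that this optimization minimizes can be illustrated with a small NumPy sketch; `K`, `R`, `t` and the point sets in the test are illustrative assumptions, not values from the patent:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D points X (world frame, shape (n, 3)) into pixel coordinates."""
    Xc = X @ R.T + t                 # world frame -> camera frame
    xy = Xc[:, :2] / Xc[:, 2:3]      # perspective division (normalized plane)
    return xy @ K[:2, :2].T + K[:2, 2]

def reprojection_rmse(K, R, t, X, uv_obs):
    """Root-mean-square distance between predicted and observed pixels."""
    d = project(K, R, t, X) - uv_obs
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```

In the joint calibration, this error is summed over all cameras and target views and minimized over the relative extrinsics, which suppresses the error that would otherwise accumulate when transformation matrices are chained by multiplication.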
Preferably, the monocular calibration in S2 includes a coordinate transformation: the camera intrinsic parameters are solved with Zhang Zhengyou's calibration method to obtain the radial and tangential distortion coefficients of the camera, and the projection coordinates are corrected with these distortion parameters.
The radial and tangential distortion coefficients are the two main expressions of camera lens distortion, describing respectively the radial and tangential distortion of the camera image.
Radial distortion refers to inaccurate pixel positions at the imaging center and edges due to the uneven curvature radius of the camera lens, producing a distortion centered on the imaging center; it is described by the radial distortion coefficients (k1, k2), which reflect the degree of curvature non-uniformity and the deviation from the imaging center.
Tangential distortion refers to imaging distortion caused by a camera lens that is not perpendicular to the imaging plane, or an imaging plane that is not parallel to the scene plane; it is usually described by the tangential distortion coefficients (p1, p2), which reflect how perpendicularity and parallelism affect imaging quality.
After the camera has imaged, the influence of radial and tangential distortion can be eliminated by de-distorting the image, yielding a more accurate image; the de-distortion method is based on the camera model and the distortion coefficients, inversely transforming the image pixels so that the distorted pixel coordinates are mapped back to the pre-distortion coordinate system.
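The inverse mapping used for de-distortion has no closed form, but for mild distortion it can be sketched as a fixed-point iteration over the standard radial/tangential (Brown) model implied by the coefficients k1, k2, p1, p2 in the text; the numeric coefficients in the test are illustrative assumptions:

```python
import numpy as np

def distort(xp, yp, k1, k2, p1, p2):
    """Forward model: normalized undistorted (xp, yp) -> distorted (xd, yd)."""
    r2 = xp * xp + yp * yp
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    yd = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=30):
    """Invert the forward model by fixed-point iteration (mild distortion)."""
    xp, yp = xd, yd
    for _ in range(iters):
        ex, ey = distort(xp, yp, k1, k2, p1, p2)
        xp, yp = xp + (xd - ex), yp + (yd - ey)
    return xp, yp
```

Because the distortion correction is small relative to the coordinates themselves, each iteration shrinks the residual by roughly the local slope of the correction, so a few dozen iterations reach machine precision.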
Preferably, during the monocular calibration in S2, the position and posture of the calibration plate are changed at least once, the coded concentric-circle array images are acquired, the feature-point coordinate pairs are extracted and substituted into the camera model to compute the camera's initial intrinsic and extrinsic parameters and distortion coefficients, completing the monocular calibration.
Preferably, the multi-camera calibration in S3 is: based on the intrinsic and extrinsic parameters and distortion coefficients obtained by the monocular calibration of each camera, each group of binocular cameras is calibrated to obtain the conversion relationship between the camera coordinate systems within each of the two binocular groups.
Preferably, the coordinate conversion of the multi-camera calibration in S3 is: the transformation matrix between the coordinate systems of the targets corresponding to the two groups of binocular cameras is obtained from the geometric structure of the targets; further transformation matrices are obtained by solving the homography (projection) matrix of the target photographs; solving these simultaneously yields the transformation matrices of all cameras with non-overlapping fields of view, i.e. the transformation from every camera coordinate system to the global coordinate system, completing the calibration of the four-camera system;
homography matrices, also known as projection matrices, are used to describe the projection relationship between two planes.
The transformation matrix refers to a matrix that transforms vectors or points in one coordinate system to vectors or points in another coordinate system; in vision and graphics, a transformation matrix is used to effect a transformation from one coordinate system to another, such as translation, rotation, scaling, etc.
Preferably, the monocular calibration in S2 includes monocular camera calibration or monocular vision system calibration; the multi-camera calibration in S3 includes multi-view camera calibration or multi-view vision system calibration.
Because the objects to be measured are the chain shafts on the two sides of the chain, which cannot appear in the same camera field of view at the same time, a group of rigidly connected binocular cameras is erected on each side of the chain to measure the 3D positions of the chain shafts on that side. The two groups of binocular cameras have no overlapping field of view, so the center information of the chain shafts on the two sides cannot be unified in one camera coordinate system. Therefore, based on this non-overlapping-field measurement requirement, a three-dimensional target is designed to jointly calibrate the four cameras, so that they form a four-camera vision system and the chain-shaft center information of both sides is unified in the same camera coordinate system, i.e. the reference camera coordinate system of the multi-camera system. The overall flow of the system is: three-dimensional target design, monocular calibration, and multi-camera calibration;
the size of the adopted three-dimensional target is precisely known, and the front side and the back side are not traditional checkerboards, but target arrays formed by coding targets with characteristic information; the design principle of the single target is as follows: the target comprises a positioning area and a coding recognition area, wherein the positioning area is used for providing position information of the target so as to realize positioning of the target; the code identification area is used for providing code numerical value information; selecting a least square ellipse fitting coding small circle center coordinate of a coding mark point, identifying and extracting a single coding mark point region according to the determined coding mark point coding small circle center coordinate, and decoding the ID of the coding mark point according to a coding rule of the coding mark point;
the calibration algorithm mainly comprises two parts, namely, firstly, calibrating internal parameters of a camera and solving distortion coefficients, namely, monocular calibration is carried out; secondly, solving relative external parameters between cameras, optimizing the external parameters by utilizing the minimization of the reprojection error, and avoiding the accumulated error caused by matrix continuous multiplication, namely, performing multi-objective system calibration under a non-overlapping visual field;
the coordinate conversion related to monocular calibration, specifically, the Zhang Zhengyou internal reference calibration method is adopted to carry out internal reference solution of the camera, the method overcomes the defect of high precision requirement of the traditional calibration method on the calibration object, and simultaneously solves the problem of poor robustness of the self-calibration method; taking the influence of radial distortion and tangential distortion on an image into consideration, obtaining a radial distortion coefficient and a tangential distortion coefficient of a camera, and correcting projection coordinates by using distortion parameters; when in calibration, the pose of the calibration plate is changed for a plurality of times, the coded concentric circle array image is acquired, the characteristic point coordinate pairs are extracted, and the characteristic point coordinate pairs are substituted into a camera model to calculate the initial internal and external parameters and distortion coefficients of the camera, so that monocular calibration is completed;
the system comprises a multi-view system formed by four cameras, wherein two adjacent cameras are binocular cameras which are rigidly connected and have overlapped view fields, the other two adjacent cameras are another group of binocular cameras, the optical axis directions of the two groups of binocular cameras are opposite, and a public view field or a public view field is not arranged between the two groups of binocular cameras due to the shielding of an object to be detected in a field working condition; the pose relation between the two groups of binocular cameras can be obtained by calibrating the common view field between the binocular cameras, and then the two groups of binocular systems are combined by utilizing the geometrical relation of the three-dimensional target, so that the calibration of the multi-view system under the non-overlapping view field is completed; the specific operation of the multi-objective calibration is that firstly, based on the internal and external parameters and distortion coefficients of each camera obtained by the single-objective camera calibration, two groups of double-objective cameras can be calibrated respectively to obtain the conversion relationship between the camera 1 coordinate system and the camera 2 coordinate system and the conversion relationship between the camera 3 coordinate system and the camera 4 coordinate system; secondly, a transformation matrix from a target 2 coordinate system to a target 1 coordinate system can be obtained according to the geometric structure relation of the target; the transformation matrix of the target 1 to the camera 1 coordinate system and the transformation matrix of the target 2 to the camera 3 coordinate system can be obtained by carrying out homography matrix solving when the target is photographed, so that the transformation matrix of the camera 3 coordinate system with non-overlapping fields of view to the camera 1 coordinate system, namely the global coordinate system, can 
be obtained through simultaneous solution, and the four-eye system calibration is completed.
The global coordinate system is a fixed coordinate system used to describe the position, direction, posture and other information of all objects in a system; in three-dimensional space a Cartesian coordinate system is generally used as the global coordinate system, though other systems such as polar or spherical coordinates may also be used.
The global coordinate system is generally used to describe the position and direction of each object in a scene; in three-dimensional modeling it may describe the position, size, shape and other information of all objects in the scene.
The global coordinate system can also describe the position and posture of a robot end effector, facilitating accurate control and motion planning.
The application has the following beneficial effects:
because the targets to be measured cannot appear under the same camera view field at the same time, based on the system measurement requirement of the non-overlapping view field, the three-dimensional targets are designed to jointly calibrate the four cameras, so that the four cameras jointly form a four-eye camera vision system, and the chain shaft center information on two sides is unified under the same camera coordinate system, namely under the reference camera coordinate system of the multi-eye system, a calibration algorithm mainly comprises two parts, firstly, the application calibrates the internal parameters of the cameras and solves distortion coefficients, namely, monocular calibration is carried out; and secondly, solving relative external parameters between cameras, optimizing the external parameters by utilizing the minimization of the reprojection error, and avoiding the accumulated error caused by matrix continuous multiplication, namely, performing multi-objective system calibration under a non-overlapping visual field.
Drawings
FIG. 1 is a system flow diagram of the present application;
FIG. 2 is a schematic diagram of a three-dimensional coding target according to the present application;
FIG. 3 is a schematic diagram of the spatial relationship of the coordinate system according to the present application;
FIG. 4 is a schematic illustration of multi-targeting based on stereo coded targets in non-overlapping fields of view of the present application;
FIG. 5 is a concentric circle calibration plate with encoding function of the present application;
FIG. 6 is a schematic drawing of the feature point extraction of the calibration plate of the present application;
FIG. 7 is a system coordinate system conversion relationship according to the present application;
Detailed Description
The following describes the embodiments of the present application in further detail with reference to the accompanying drawings. These embodiments are merely illustrative of the present application and are not intended to be limiting.
In the description of the present application, it should be noted that the terms "center", "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present application and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
Examples
Because the objects to be measured are the chain shafts on the two sides of the chain, which cannot appear in the same camera field of view at the same time, a group of rigidly connected binocular cameras is erected on each side of the chain to measure the 3D positions of the chain shafts on that side. The two groups of binocular cameras have no overlapping field of view, so the center information of the chain shafts on the two sides cannot be unified in one camera coordinate system. Therefore, based on this non-overlapping-field measurement requirement, a three-dimensional target is designed to jointly calibrate the four cameras, so that they form a four-camera vision system and the chain-shaft center information of both sides is unified in the same camera coordinate system, i.e. the reference camera coordinate system of the multi-camera system. The overall flow chart of the system is shown in figure 1;
the size of the adopted three-dimensional target is precisely known, and the front side and the back side are not traditional checkerboards, but are target arrays formed by coding targets with characteristic information, as shown in figure 2; the design principle of the single target is as follows: the target comprises a positioning area and a coding recognition area, wherein the positioning area is used for providing position information of the target so as to realize positioning of the target; the code identification area is used for providing code numerical value information; selecting a least square ellipse fitting coding small circle center coordinate of a coding mark point, identifying and extracting a single coding mark point region according to the determined coding mark point coding small circle center coordinate, and decoding the ID of the coding mark point according to a coding rule of the coding mark point;
the calibration algorithm mainly comprises two parts, namely, firstly, calibrating internal parameters of a camera and solving distortion coefficients, namely, monocular calibration is carried out; secondly, solving relative external parameters between cameras, optimizing the external parameters by utilizing the minimization of the reprojection error, and avoiding the accumulated error caused by matrix continuous multiplication, namely, performing multi-objective system calibration under a non-overlapping visual field;
the coordinate conversion related to monocular calibration is shown in fig. 3, specifically, a Zhang Zhengyou internal reference calibration method is adopted to carry out internal reference solution of a camera, the method overcomes the defect that the traditional calibration method has high requirement on the precision of a calibration object, and meanwhile, the problem of poor robustness of the self-calibration method is solved; taking the influence of radial distortion and tangential distortion on an image into consideration, obtaining a radial distortion coefficient and a tangential distortion coefficient of a camera, and correcting projection coordinates by using distortion parameters; when in calibration, the pose of the calibration plate is changed for a plurality of times, the coded concentric circle array image is acquired, the characteristic point coordinate pairs are extracted, and the characteristic point coordinate pairs are substituted into a camera model to calculate the initial internal and external parameters and distortion coefficients of the camera, so that monocular calibration is completed;
as shown in fig. 7, four cameras respectively establish a camera coordinate systemTo->Pixel coordinate system +.>To->Since the four cameras are mounted at known fixed positions on the camera frame, the rotation matrix R between their coordinate systems can be derived by calibration 12 、R 34 And translation vector t 12 、t 34
The two columns in FIG. 7 represent the equivalent chain shafts. Two groups of binocular cameras are provided to acquire the bottom-face circle centers P1(XW1, YW1, ZW1), P2(XW2, YW2, ZW2), P3(XW3, YW3, ZW3) and P4(XW4, YW4, ZW4) on the two sides of the chain shafts, projected into the image pixel coordinates. Taking P1(XW1, YW1, ZW1) as an example, it has one set of projected pixel coordinates in camera 1 and another in camera 2.
To make the calibration repeatable and improve its accuracy, this principle is applied, with the aid of the three-dimensional coded targets shown in figs. 2, 5 and 6, to the multi-view system of four cameras shown in fig. 4. Two adjacent cameras C1 and C2 form a binocular pair with overlapping fields of view, and the other two adjacent cameras C3 and C4 form a second binocular pair. The optical axes of the two pairs point in opposite directions, and because the object to be measured occludes the scene under field working conditions, the two pairs share no common field of view, or only a very small one. The pose relation within each binocular pair can be obtained by calibrating over its common field of view; the two binocular systems are then linked through the geometric relation of the three-dimensional target, completing the calibration of the multi-view system under non-overlapping fields of view.
The specific procedure of the multi-view calibration is as follows. First, based on the intrinsic and extrinsic parameters and distortion coefficients obtained from the monocular calibration of each camera, the two binocular pairs are calibrated separately, yielding the transformation between the camera 1 and camera 2 coordinate systems and the transformation between the camera 3 and camera 4 coordinate systems. Second, the transformation matrix from the target 2 coordinate system to the target 1 coordinate system is obtained from the geometric structure of the target. The transformation from target 1 to the camera 1 coordinate system and the transformation from target 2 to the camera 3 coordinate system are obtained by solving the homography matrices when the targets are photographed. Solving these relations simultaneously yields the transformation from the camera 3 coordinate system, whose field of view does not overlap, to the camera 1 coordinate system (i.e., the global coordinate system), completing the four-camera system calibration.
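The chaining of coordinate transformations described above can be sketched with homogeneous 4x4 matrices. The pose values and names below (T_t1_c1 for target 1 to camera 1, etc.) are illustrative assumptions, not data from the patent; this is a minimal sketch assuming each individual pose is already known.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

# Hypothetical known poses (identity rotations, for illustration only):
T_t1_c1 = make_T(np.eye(3), [0.0, 0.0, 1.0])   # target 1 -> camera 1 (homography solve)
T_t2_t1 = make_T(np.eye(3), [0.5, 0.0, 0.0])   # target 2 -> target 1 (target geometry)
T_t2_c3 = make_T(np.eye(3), [0.0, 0.0, 2.0])   # target 2 -> camera 3 (homography solve)

# Camera 3 -> camera 1 (global): chain the target poses, inverting the
# target-2-to-camera-3 transform.
T_c3_c1 = T_t1_c1 @ T_t2_t1 @ np.linalg.inv(T_t2_c3)
print(T_c3_c1[:3, 3])
```

With these placeholder poses, the composed translation places camera 3 at (0.5, 0, -1) in the camera 1 frame; in practice each factor comes from the binocular and target calibration steps.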
Monocular camera calibration:
In the chain-spacing monitoring system, each camera must first be calibrated monocularly to obtain its intrinsic parameter matrix, and its distortion coefficients are obtained at the same time. The calibration plate used for camera calibration is shown in figures 5 and 6.
Converting an image point (x, y) on the imaging plane into a pixel point (u, v) using the camera intrinsic parameter model gives:

u = a_x · x + u_0,    v = a_y · y + v_0    (3.1)

where (u_0, v_0) is the intersection of the optical axis with the imaging plane, and a_x and a_y are the scale factors along the X-axis and Y-axis. Combining this with the perspective projection x = f·X_c/Z_c, y = f·Y_c/Z_c, and writing f_x = a_x·f, f_y = a_y·f, we obtain the intrinsic matrix:

K = | f_x   0    u_0 |
    |  0   f_y   v_0 |    (3.2)
    |  0    0     1  |

When f_x = f_y = f, the model contains 3 parameters (f, u_0, v_0); considering that in practical applications f_x and f_y differ, the intrinsic parameter matrix contains 4 parameters (f_x, f_y, u_0, v_0). Expanding formula (3.2) and substituting the coordinates of a selected point on the object, the intrinsic matrix K of the camera is calculated.
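The intrinsic model above can be sketched as follows; the numeric values of f_x, f_y, u_0, v_0 and the test point are arbitrary illustrations, not values from the patent.

```python
import numpy as np

# Hypothetical intrinsics: focal lengths in pixels, (u0, v0) principal point.
fx, fy, u0, v0 = 1200.0, 1180.0, 640.0, 360.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

# Project a 3D point given in the camera frame: pixel = K @ (Xc, Yc, Zc) / Zc.
Pc = np.array([0.1, -0.05, 2.0])
uv1 = K @ Pc / Pc[2]
u, v = uv1[0], uv1[1]
print(u, v)  # pixel coordinates of the projected point
```

For this point, u = 1200·0.1/2 + 640 = 700 and v = 1180·(−0.05)/2 + 360 = 330.5, matching equation (3.1) with x = X_c/Z_c, y = Y_c/Z_c.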
Considering the influence of lens distortion on the image, the radial and tangential distortion coefficients of the camera must also be obtained. The formula correcting the projection coordinates with the distortion parameters is:

x_d = x_p·(1 + k_1·r^2 + k_2·r^4 + k_3·r^6) + 2·p_1·x_p·y_p + p_2·(r^2 + 2·x_p^2)
y_d = y_p·(1 + k_1·r^2 + k_2·r^4 + k_3·r^6) + p_1·(r^2 + 2·y_p^2) + 2·p_2·x_p·y_p

where r^2 = x_p^2 + y_p^2; (x_d, y_d) are the image coordinates of the distorted point on the imaging plane; (x_p, y_p) are the image coordinates of the corrected point on the normalized image plane; k_1, k_2 and k_3 are the radial distortion coefficients of the lens; and p_1 and p_2 are the tangential distortion coefficients of the lens.
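A minimal sketch of the distortion model above; the coefficient values used in the check are illustrative assumptions.

```python
def distort(xp, yp, k1, k2, k3, p1, p2):
    """Map a corrected normalized point (xp, yp) to its distorted position."""
    r2 = xp * xp + yp * yp
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
    yd = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
    return xd, yd

# With all coefficients zero the mapping reduces to the identity.
xd, yd = distort(0.2, -0.1, 0.0, 0.0, 0.0, 0.0, 0.0)
print(xd, yd)
```

Calibration solves the inverse problem: given observed distorted points, estimate k_1, k_2, k_3, p_1, p_2 so that this forward model matches the observations.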
Binocular camera calibration
The external parameter calibration between the two cameras of the binocular system converts coordinate points in the right camera coordinate system into the left camera coordinate system through a rotation and a translation (taking the left camera coordinate system as the base coordinate system). The rotation-translation transformation matrix is composed of a rotation matrix and a translation vector; images of the same calibration plate are acquired by both cameras, and the external parameters between the two camera coordinate systems are obtained from the feature point coordinates in the calibration plate coordinate system.
Let the three-dimensional coordinates of a corner point on the calibration plate in the left camera coordinate system, the right camera coordinate system and the world coordinate system of the calibration plate be P_a(X_a, Y_a, Z_a), P_b(X_b, Y_b, Z_b) and P_w(X_w, Y_w, Z_w), respectively, with extrinsic parameters (R_1, t_1) for the left camera and (R_2, t_2) for the right camera. Then:

P_a = R_1·P_w + t_1
P_b = R_2·P_w + t_2

Eliminating P_w yields the relative relation between the left and right cameras in binocular stereo vision:

P_a = R_12·P_b + t_12

where [R_12 t_12] is the rotation-translation matrix from the right camera coordinate system to the left camera coordinate system:

R_12 = R_1·R_2^T,    t_12 = t_1 - R_1·R_2^T·t_2

The binocular calibration finally solves for this rotation-translation matrix [R_12 t_12] from the side camera coordinate system to the front camera coordinate system.
The foregoing is merely a preferred embodiment of the present application, and it should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present application, and these modifications and substitutions should also be considered as being within the scope of the present application.

Claims (9)

1. The visual system calibration method without overlapping visual fields is used for calibrating visual systems of which target information obtained by two visual systems is not in the same coordinate system, and is characterized in that all cameras are calibrated in a combined mode, all cameras form a multi-camera visual system together, and a reference camera coordinate system of the multi-camera visual system is obtained;
the method comprises the following steps:
s1, designing a three-dimensional target, wherein the size of the three-dimensional target is known, and the front surface and the back surface of the three-dimensional target are provided with target arrays consisting of coding targets with characteristic information; the coding target of the characteristic information comprises a positioning area and a coding identification area, wherein the positioning area is used for providing the position information of the target so as to realize the positioning of the target; the code identification area is used for providing code numerical value information;
s2, monocular calibration; s3, multi-objective calibration.
2. The visual system calibration method without overlapping fields of view as claimed in claim 1, wherein in S1, a least-squares ellipse fit is used to determine the center coordinates of the small coded circles of the coded marker points, and the individual coded-marker-point region is identified and extracted according to the determined center coordinates, so that the ID of the coded marker point is decoded according to the encoding rule of the coded marker points.
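The least-squares ellipse fit of claim 2 can be sketched by fitting a general conic A·x² + B·xy + C·y² + D·x + E·y + F = 0 linearly (normalizing F = -1, which assumes the conic does not pass through the origin) and recovering its center from the vanishing gradient of the quadratic form. This is one common formulation, not necessarily the exact fit used in the patent.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Least-squares conic fit; returns the fitted ellipse center (xc, yc)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # Solve [x^2, xy, y^2, x, y] . [A,B,C,D,E] = 1 (i.e. F = -1) in least squares.
    M = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(xs), rcond=None)[0]
    # Center of the conic: where the gradient of the quadratic form vanishes.
    xc, yc = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return xc, yc

# Sample points on an ellipse centered at (3, -2) with semi-axes 2 and 1.
t = np.linspace(0.0, 2 * np.pi, 40, endpoint=False)
xc, yc = ellipse_center(3 + 2 * np.cos(t), -2 + np.sin(t))
print(round(xc, 6), round(yc, 6))
```

For noise-free points the recovered center matches (3, -2) to numerical precision; with image noise the same linear system gives the least-squares estimate of the circle-center coordinates used for marker decoding.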
3. The method for calibrating a vision system having no overlapping fields of view as claimed in claim 1, wherein the calibration algorithm in S2 comprises: and calibrating the internal parameters of the camera and solving the distortion coefficient to finish monocular calibration.
4. The method for calibrating a vision system having no overlapping fields of view as claimed in claim 1, wherein the calibration algorithm in S3 comprises: solving relative external parameters between cameras, minimizing and optimizing the external parameters by using the reprojection error, removing accumulated errors caused by matrix continuous multiplication, and completing the multi-objective system calibration under the non-overlapping view field.
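The reprojection error minimized in claim 4 can be sketched as the pixel residual between an observed feature point and the projection of its 3D point; this is only the residual evaluation, not the full nonlinear optimization, and all numeric values are illustrative assumptions.

```python
import numpy as np

def reprojection_error(K, R, t, Pw, uv_observed):
    """Pixel distance between an observed point and the projection of Pw."""
    Pc = R @ Pw + t                  # world -> camera coordinates
    uv = (K @ Pc / Pc[2])[:2]        # perspective projection to pixels
    return np.linalg.norm(uv - uv_observed)

K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
Pw = np.array([0.1, 0.2, 2.0])
uv_obs = np.array([370.0, 340.0])    # matches the projection exactly here
err = reprojection_error(K, R, t, Pw, uv_obs)
print(err)
```

The multi-camera optimization sums this residual over all cameras and points and adjusts the chained extrinsics, which suppresses the accumulated error of the repeated matrix multiplications.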
5. The method of calibrating a vision system having no overlapping fields of view as claimed in claim 1, wherein the monocular calibration in S2 includes coordinate transformation: the camera intrinsic parameters are solved using Zhang Zhengyou's calibration method, the radial and tangential distortion coefficients of the camera are obtained, and the projection coordinates are corrected using the distortion parameters.
6. The vision system calibration method without overlapping fields of view according to claim 1, wherein during the monocular calibration in S2 the calibration plate pose is changed at least once, the coded concentric-circle-array images are acquired, feature point coordinate pairs are extracted and substituted into the camera model to calculate the initial intrinsic and extrinsic parameters and distortion coefficients of the camera, completing the monocular calibration.
7. The method of calibrating a vision system having no overlapping fields of view of claim 1, wherein the multi-target calibration in S3 is: each group of binocular cameras is calibrated based on the intrinsic and extrinsic parameters and distortion coefficients of each camera obtained from the monocular camera calibration, so as to obtain the conversion relation between the camera coordinate systems of the two groups of binocular cameras.
8. The method of calibrating a vision system having no overlapping fields of view as claimed in claim 1, wherein the coordinate conversion of the multi-target calibration in S3 is: the transformation matrix between the coordinate systems of the targets corresponding to the two groups of binocular cameras is obtained from the geometric structure relation of the targets, the target-to-camera transformations are obtained by solving the homography or projection matrix of the target photographs, and the transformation matrices of all cameras with non-overlapping fields of view, i.e., the transformation from each camera coordinate system to the global coordinate system, are obtained by simultaneous solution, completing the four-camera system calibration.
9. The method of calibrating a vision system having no overlapping fields of view according to claim 1, wherein the monocular calibration in S2 includes monocular camera calibration or monocular vision system calibration; the multi-objective calibration in S3 includes a multi-view camera calibration or a multi-view vision system calibration.
CN202310549708.1A 2023-05-16 2023-05-16 Visual system calibration method without overlapping visual fields Pending CN116823960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310549708.1A CN116823960A (en) 2023-05-16 2023-05-16 Visual system calibration method without overlapping visual fields


Publications (1)

Publication Number Publication Date
CN116823960A true CN116823960A (en) 2023-09-29

Family

ID=88111820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310549708.1A Pending CN116823960A (en) 2023-05-16 2023-05-16 Visual system calibration method without overlapping visual fields

Country Status (1)

Country Link
CN (1) CN116823960A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118442973A (en) * 2024-07-05 2024-08-06 天津水运工程勘察设计院有限公司 Offshore jacket construction positioning method based on binocular vision target self-identification tracking


Similar Documents

Publication Publication Date Title
CN109272570B (en) Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
CN110044300B (en) Amphibious three-dimensional vision detection device and detection method based on laser
CN111369630A (en) Method for calibrating multi-line laser radar and camera
CN101581569B (en) Calibrating method of structural parameters of binocular visual sensing system
EP2751521B1 (en) Method and system for alignment of a pattern on a spatial coded slide image
CN103759670B (en) A kind of object dimensional information getting method based on numeral up short
CN109859272B (en) Automatic focusing binocular camera calibration method and device
CN104835158B (en) Based on the three-dimensional point cloud acquisition methods of Gray code structured light and epipolar-line constraint
WO2016037486A1 (en) Three-dimensional imaging method and system for human body
EP1378790A2 (en) Method and device for correcting lens aberrations in a stereo camera system with zoom
CN105043250B (en) A kind of double-visual angle data alignment method based on 1 common indicium points
CN110363838B (en) Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model
CN112985293B (en) Binocular vision measurement system and measurement method for single-camera double-spherical mirror image
CN113129430B (en) Underwater three-dimensional reconstruction method based on binocular structured light
CN109325981B (en) Geometric parameter calibration method for micro-lens array type optical field camera based on focusing image points
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
CN111461963B (en) Fisheye image stitching method and device
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN112489109B (en) Three-dimensional imaging system method and device and three-dimensional imaging system
CN101354796B (en) Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model
CN114283203A (en) Calibration method and system of multi-camera system
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN111854636A (en) Multi-camera array three-dimensional detection system and method
CN105374067A (en) Three-dimensional reconstruction method based on PAL cameras and reconstruction system thereof
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination