
CN114022552A - Target positioning method and related device integrating laser radar and camera - Google Patents

Target positioning method and related device integrating laser radar and camera

Info

Publication number
CN114022552A
CN114022552A (application CN202111294628.3A)
Authority
CN
China
Prior art keywords
camera
radar
vector
initial
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111294628.3A
Other languages
Chinese (zh)
Inventor
吴晖
王杨
施泽宇
张晓晔
谢志文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Electric Power Research Institute of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202111294628.3A
Publication of CN114022552A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application discloses a target positioning method fusing a laser radar and a camera, and a related device. The method comprises the following steps: performing a coarse calibration operation on the selected laser radar and camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector; performing ICP (iterative closest point) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when the matching error is minimum; triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information; performing a relative calculation on the camera image depth information and the preset radar image depth information based on the least square method to obtain an optimal transformation matrix; and performing a positioning calculation on the target object with the optimal transformation matrix to obtain a positioning result. The method and the device solve the technical problem that positioning accuracy is low because existing positioning techniques suffer from limitations of different degrees.

Description

Target positioning method and related device integrating laser radar and camera
Technical Field
The present application relates to the field of target positioning technologies, and in particular, to a target positioning method and related device fusing a laser radar and a camera.
Background
In existing mobile robots and unmanned vehicles, simultaneous localization and mapping is an indispensable key component, and accurate positioning is the first link in all subsequent planning. Existing positioning technology uses several methods, such as single-point GPS positioning, differential GPS positioning, laser radar positioning and camera vision.
However, the quality of single-point GPS positioning depends on the number of visible satellites; differential GPS requires two stations to maintain decimeter- to centimeter-level positioning; camera vision positioning depends on the quality of the camera and is limited by the surrounding environment; and laser radar positioning involves a large amount of computation. Each positioning technique thus has its own limitations, which lowers the accuracy of the final positioning result.
Disclosure of Invention
The application provides a target positioning method and a related device fusing a laser radar and a camera, to solve the technical problem that positioning accuracy is low because existing positioning techniques suffer from limitations of different degrees.
In view of this, the first aspect of the present application provides a target positioning method fusing a laser radar and a camera, including:
performing rough calibration operation on the selected laser radar and the selected camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector;
performing ICP (iterative closest point) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when the matching error is minimum;
triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information;
performing relative calculation on the camera image depth information and preset radar image depth information based on a least square method to obtain an optimal transformation matrix;
and performing positioning calculation on the target object by adopting the optimal transformation matrix to obtain a positioning result.
Preferably, the coarse calibration operation on the selected lidar and the camera to obtain an initial transformation matrix composed of an initial rotation vector and an initial translation vector includes:
acquiring a coarse calibration rotation vector and a coarse calibration translation vector between the selected laser radar and the camera by adopting a preset measurement method;
and based on a nonlinear least square method, performing optimization calculation on the coarse calibration rotation vector and the coarse calibration translation vector according to a preset reference point selected from the laser radar and the camera to obtain an initial transformation matrix consisting of the initial rotation vector and the initial translation vector.
Preferably, triangulating the camera image by the radar rotation vector and the radar translation vector to obtain camera image depth information includes:
normalizing the coordinates of the preset feature points in the camera image through the radar rotation vector and the radar translation vector to obtain a normalization formula;
and calculating the depth information corresponding to the preset feature points based on the normalization formula to obtain the depth information of the camera image.
The second aspect of the present application provides a target positioning device integrating a laser radar and a camera, including:
the rough calibration module is used for performing rough calibration operation on the selected laser radar and the selected camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector;
the matching analysis module is used for carrying out ICP (iterative closest point) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when the matching error is minimum;
the triangulation module is used for triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information;
the relative calculation module is used for carrying out relative calculation on the camera image depth information and preset radar image depth information based on a least square method to obtain an optimal transformation matrix;
and the positioning calculation module is used for performing positioning calculation on the target object by adopting the optimal transformation matrix to obtain a positioning result.
Preferably, the rough calibration module is specifically configured to:
acquiring a coarse calibration rotation vector and a coarse calibration translation vector between the selected laser radar and the camera by adopting a preset measurement method;
and based on a nonlinear least square method, performing optimization calculation on the coarse calibration rotation vector and the coarse calibration translation vector according to a preset reference point selected from the laser radar and the camera to obtain an initial transformation matrix consisting of the initial rotation vector and the initial translation vector.
Preferably, the triangulation module is specifically configured to:
normalizing the coordinates of the preset feature points in the camera image through the radar rotation vector and the radar translation vector to obtain a normalization formula;
and calculating the depth information corresponding to the preset feature points based on the normalization formula to obtain the depth information of the camera image.
A third aspect of the present application provides a target positioning device incorporating a lidar and a camera, the device comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method for object localization by fusing lidar and a camera according to the first aspect, according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing program code for executing the method for target location by fusing a laser radar and a camera according to the first aspect.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for target localization in a fusion lidar and a camera of the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
the application provides a target positioning method fusing a laser radar and a camera, which comprises the following steps: performing rough calibration operation on the selected laser radar and the selected camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector; performing ICP (inductively coupled plasma) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when matching errors are minimum; triangularization processing is carried out on the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information; performing relative calculation on the camera image depth information and the preset radar image depth information based on a least square method to obtain an optimal transformation matrix; and carrying out positioning calculation on the target object by adopting the optimal transformation matrix to obtain a positioning result.
The target positioning method fusing the laser radar and the camera fuses the two positioning technologies: the radar rotation vector and radar translation vector produced by laser radar point cloud matching are used to optimize the camera images and obtain more accurate camera image depth information, so that the optimal transformation matrix calculated from the camera image depth information and the preset radar image depth information is more reliable, improving the accuracy of the positioning result. Therefore, the technical problem that positioning accuracy is low because existing positioning techniques suffer from limitations of different degrees can be solved.
Drawings
Fig. 1 is a schematic flowchart of a target positioning method fusing a laser radar and a camera according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a target positioning device fusing a laser radar and a camera according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For easy understanding, please refer to fig. 1, an embodiment of a target positioning method for integrating a laser radar and a camera provided by the present application includes:
Step 101, performing a coarse calibration operation on the selected laser radar and camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector.
Further, step 101 includes:
acquiring a coarse calibration rotation vector and a coarse calibration translation vector between the selected laser radar and the camera by adopting a preset measurement method;
and based on a nonlinear least square method, performing optimization calculation on the coarse calibration rotation vector and the coarse calibration translation vector according to a preset reference point selected from the laser radar and the camera to obtain an initial transformation matrix consisting of the initial rotation vector and the initial translation vector.
A corresponding coarse calibration rotation vector and coarse calibration translation vector can be obtained between the laser radar and the camera through a preset measurement method; in actual operation the preset measurement method may simply be a measuring tool. The number of coarse calibration rotation vectors is 3 and the number of coarse calibration translation vectors is 1. The preset reference points generally refer to corresponding position points in the laser radar and the camera; performing the optimization calculation against reference points at the same position ensures the accuracy of the initial rotation vector and the initial translation vector, making the initial transformation matrix more reliable. The initial transformation matrix can already achieve an initial positioning of the target, but the positioning calculation still requires further optimization.
The nonlinear least square method is a parameter estimation method that estimates the parameters of a nonlinear static model using the sum of squared errors as its criterion. Because the model is nonlinear, the parameter estimates cannot be obtained by solving for the extremum of a multivariate function as in linear least squares; a more involved optimization algorithm is required. Two types of algorithms are commonly used: search algorithms and iterative algorithms. The nonlinear least square method can be used directly to estimate the parameters of a static nonlinear model, and is also used for time series modeling and for parameter estimation of continuous dynamic models.
The specific optimization calculation is:

$$(R_{\mathrm{cal}},\, T_{\mathrm{cal}}) = \arg\min_{R,\,T} \sum_{i=1}^{n} \left\| R x_i + T - y_i \right\|^2$$

where $n$ is the number of selected preset reference points, $x_i$ and $y_i$ are the coordinates of the $i$-th reference point in the laser radar coordinate system and in the camera coordinate system respectively, and $R_{\mathrm{cal}}$ and $T_{\mathrm{cal}}$ are the coarse calibration rotation vector and coarse calibration translation vector with the minimum error.
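As a concrete illustration of this refinement step, the sketch below feeds the residuals $R x_i + T - y_i$ to an off-the-shelf iterative nonlinear least-squares solver. It is a minimal prototype under stated assumptions, not the patented implementation: the function name refine_extrinsics, the rotation-vector (Rodrigues) parameterization, and the N x 3 array layout of the reference points are choices made for the example.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(rvec0, tvec0, pts_lidar, pts_cam):
    """Refine a coarse lidar-to-camera extrinsic (rotation vector, translation
    vector) by minimizing sum_i ||R x_i + T - y_i||^2 over reference points."""
    def residuals(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        T = params[3:]
        # x_i measured in the lidar frame, y_i in the camera frame
        return ((pts_lidar @ R.T + T) - pts_cam).ravel()

    sol = least_squares(residuals, np.hstack([rvec0, tvec0]))  # iterative NLS
    return sol.x[:3], sol.x[3:]  # refined initial rotation/translation vectors
```

A rotation-vector parameterization keeps the problem at six unknowns and avoids having to constrain an explicit rotation matrix to remain orthonormal during the search.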
Step 102, performing ICP (iterative closest point) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when the matching error is minimum.
The global source point cloud data are all the point cloud data corresponding to the global map, and the point cloud data between two frames of the laser radar reflect a certain change in the target's position, namely a rotation and a translation. The ICP matching algorithm solves a free-form-surface registration problem using a closest-point search. The processing flow is: select a first point set from the target point cloud data and a second point set from the source point cloud data such that the distance between the two point sets (specifically, the Euclidean distance) is minimal; compute the rotation vector and translation vector between the two point sets that minimize the error function; apply the computed rotation vector and translation vector to obtain a new first point set; then compute the average distance between the new first point set and the second point set. If the average distance is smaller than a preset distance value, or the iterative optimization exceeds a preset number of iterations, stop iterating and take the radar rotation vector and radar translation vector at that moment; otherwise return to the step of selecting the second point set from the source point cloud data and continue iterating until convergence.
The specific ICP matching calculation is:

$$E(R_1, t_1) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - (R_1 p_i + t_1) \right\|^2$$

where $E(R_1, t_1)$ is the matching error, $p_i$ is the point cloud data between two adjacent frames, $q_i$ is the global source point cloud data, $R_1$ and $t_1$ are the rotation vector and translation vector between the point clouds, and $n$ is the number of selected points.
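A minimal point-to-point ICP loop in the spirit of the flow described above is sketched below. The KD-tree matching and the closed-form SVD (Kabsch) update are standard choices assumed for the example rather than details taken from the patent, and for simplicity the sketch accumulates the rotation as a matrix; converting it to a rotation vector (e.g. via scipy's Rotation.from_matrix(...).as_rotvec()) is a one-liner.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, max_iters=50, tol=1e-6):
    """Point-to-point ICP: align src (point cloud between two adjacent frames)
    to dst (global source point cloud); returns the transform (R1, t1) at
    minimum matching error."""
    R1, t1 = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)                      # closest-point search structure
    cur, prev_err = src.copy(), np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(cur)         # pair first and second point sets
        matched = dst[idx]
        # closed-form (R, t) between the two point sets via SVD (Kabsch)
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                   # proper rotation (det = +1)
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t                  # new first point set
        R1, t1 = R @ R1, R @ t1 + t          # accumulate the transform
        err = dists.mean()
        if abs(prev_err - err) < tol:        # average distance converged
            break
        prev_err = err
    return R1, t1
```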
Step 103, triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information.
Further, step 103 includes:
normalizing the coordinates of the preset feature points in the camera image through the radar rotation vector and the radar translation vector to obtain a normalization formula;
and calculating the depth information corresponding to the preset feature points based on a normalization formula to obtain the depth information of the camera image.
The optimization is a process of triangulating each point of the image data acquired by the camera and combining the results to obtain the depth information of the camera image. Optimizing the camera's pose estimate with the pose estimate calculated by the laser radar improves positioning accuracy and compensates for the influence of the environment on the images captured by the camera. Triangulation essentially computes the depth information of three-dimensional points from a triangular geometric relation.
Let the two selected preset feature points be $x_1$ and $x_2$, both denoting two-dimensional projection points in normalized coordinates. Then:

$$s_1 x_1 = s_2 R_1 x_2 + t_1$$

where $s_1$ and $s_2$ are the depths corresponding to the two preset feature points. Left-multiplying both sides by $x_1^{\wedge}$, the skew-symmetric matrix of $x_1$, makes the left-hand side vanish and gives:

$$s_2\, x_1^{\wedge} R_1 x_2 + x_1^{\wedge} t_1 = 0$$

from which $s_2$, and then $s_1$, can be solved.
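Numerically, the same two depths can also be recovered by rearranging the relation into the 3x2 linear system $[x_1,\ -R_1 x_2]\,[s_1, s_2]^T = t_1$ and solving it in the least-squares sense. A minimal sketch follows; the helper name triangulate_depths is hypothetical, and the feature points are assumed to be given in normalized homogeneous coordinates.

```python
import numpy as np

def triangulate_depths(x1, x2, R1, t1):
    """Solve s1 * x1 = s2 * (R1 @ x2) + t1 for the two depths (s1, s2).
    x1, x2 are feature points in normalized homogeneous coordinates [u, v, 1]."""
    # Rearranged: s1 * x1 - s2 * (R1 @ x2) = t1, i.e. A @ [s1, s2] = t1
    A = np.column_stack([x1, -(R1 @ x2)])        # 3x2 system, two unknowns
    s, *_ = np.linalg.lstsq(A, t1, rcond=None)   # least-squares depths
    return s[0], s[1]
```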
Step 104, performing a relative calculation on the camera image depth information and the preset radar image depth information based on the least square method to obtain an optimal transformation matrix.
The least square method finds the best functional fit to the data by minimizing the sum of squared errors; with it, unknown quantities can be obtained simply, and the sum of squared errors between the fitted data and the actual data is minimal. In this embodiment, the optimal transformation matrix is obtained on this principle, with the specific calculation:

$$(R, t) = \arg\min_{R,\,t} \sum_{i=1}^{n} \left\| S_{1,i} - (R\, S_{2,i} + t) \right\|^2$$

where $R$ and $t$ are the optimal rotation vector and optimal translation vector between the laser radar and the camera respectively, $S_1$ and $S_2$ are the camera image depth information and the preset radar image depth information respectively, and $n$ is the number of selected feature points.
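When the two sets of depth information are expressed as corresponding 3D points, this least-squares problem has the standard closed-form SVD solution (the same Kabsch step used inside the ICP sketch above). A minimal version, under the assumption that the camera-side and radar-side point correspondences are already established:

```python
import numpy as np

def optimal_transform(pts_cam, pts_radar):
    """Closed-form least-squares fit of (R, t) minimizing
    sum_i || pts_cam[i] - (R @ pts_radar[i] + t) ||^2."""
    mu_c, mu_r = pts_cam.mean(axis=0), pts_radar.mean(axis=0)
    H = (pts_radar - mu_r).T @ (pts_cam - mu_c)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation (det = +1)
    t = mu_c - R @ mu_r
    return R, t
```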
Step 105, performing a positioning calculation on the target object with the optimal transformation matrix to obtain a positioning result.
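For illustration, the positioning calculation then reduces to a single matrix product. The sketch below assembles R and t into a 4x4 homogeneous transform and maps a target point from the radar frame into the camera frame; the radar-to-camera direction is an assumption of this example, not stated in the patent.

```python
import numpy as np

def locate_target(p_radar, R, t):
    """Position a target point given the optimal transform (R, t)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t                     # 4x4 homogeneous transform
    return (T @ np.append(p_radar, 1.0))[:3]       # target in the camera frame
```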
The target positioning method fusing the laser radar and the camera provided by the embodiment of the application fuses the two positioning technologies: the radar rotation vector and radar translation vector produced by laser radar point cloud matching are used to optimize the camera images and obtain more accurate camera image depth information, so that the optimal transformation matrix calculated from the camera image depth information and the preset radar image depth information is more reliable, further improving the accuracy of the positioning result. This solves the technical problem that positioning accuracy is low because existing positioning techniques suffer from limitations of different degrees.
For easy understanding, please refer to fig. 2, the present application provides an embodiment of a target positioning apparatus integrating a laser radar and a camera, comprising:
a rough calibration module 201, configured to perform rough calibration on the selected laser radar and the selected camera to obtain an initial transformation matrix formed by an initial rotation vector and an initial translation vector;
the matching analysis module 202 is used for performing ICP (inductively coupled plasma) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when matching errors are minimum;
the triangulation module 203 is used for triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information;
the relative calculation module 204 is used for performing relative calculation on the camera image depth information and the preset radar image depth information based on a least square method to obtain an optimal transformation matrix;
and the positioning calculation module 205 is configured to perform positioning calculation on the target object by using the optimal transformation matrix to obtain a positioning result.
Further, the rough calibration module 201 is specifically configured to:
acquiring a coarse calibration rotation vector and a coarse calibration translation vector between the selected laser radar and the camera by adopting a preset measurement method;
and based on a nonlinear least square method, performing optimization calculation on the coarse calibration rotation vector and the coarse calibration translation vector according to a preset reference point selected from the laser radar and the camera to obtain an initial transformation matrix consisting of the initial rotation vector and the initial translation vector.
Further, the triangulation module 203 is specifically configured to:
normalizing the coordinates of the preset feature points in the camera image through the radar rotation vector and the radar translation vector to obtain a normalization formula;
and calculating the depth information corresponding to the preset feature points based on a normalization formula to obtain the depth information of the camera image.
For facilitating understanding, the application also provides target positioning equipment fusing the laser radar and the camera, and the equipment comprises a processor and a memory;
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is used for executing the target positioning method fusing the laser radar and the camera in the above method embodiment according to the instructions in the program code.
The present application further provides a computer-readable storage medium for storing program codes for executing the method for positioning a target by fusing a laser radar and a camera in the above method embodiments.
The present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of fusing lidar and camera target location in the above-described method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (9)

1. A target positioning method fusing a laser radar and a camera is characterized by comprising the following steps:
performing rough calibration operation on the selected laser radar and the selected camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector;
performing ICP (iterative closest point) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when the matching error is minimum;
triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information;
performing relative calculation on the camera image depth information and preset radar image depth information based on a least square method to obtain an optimal transformation matrix;
and performing positioning calculation on the target object by adopting the optimal transformation matrix to obtain a positioning result.
2. The method of claim 1, wherein the coarse calibration of the selected lidar and the camera to obtain an initial transformation matrix comprising an initial rotation vector and an initial translation vector comprises:
acquiring a coarse calibration rotation vector and a coarse calibration translation vector between the selected laser radar and the camera by adopting a preset measurement method;
and based on a nonlinear least square method, performing optimization calculation on the coarse calibration rotation vector and the coarse calibration translation vector according to a preset reference point selected from the laser radar and the camera to obtain an initial transformation matrix consisting of the initial rotation vector and the initial translation vector.
3. The method for positioning a target by fusing lidar and a camera according to claim 1, wherein triangulating the camera image by the radar rotation vector and the radar translation vector to obtain the camera image depth information comprises:
normalizing the coordinates of the preset feature points in the camera image through the radar rotation vector and the radar translation vector to obtain a normalization formula;
and calculating the depth information corresponding to the preset feature points based on the normalization formula to obtain the depth information of the camera image.
4. A target positioning device integrating a laser radar and a camera, comprising:
the rough calibration module is used for performing rough calibration operation on the selected laser radar and the selected camera to obtain an initial transformation matrix consisting of an initial rotation vector and an initial translation vector;
the matching analysis module is used for carrying out ICP (iterative closest point) matching analysis according to the point cloud data between two adjacent frames and the global source point cloud data to obtain a radar rotation vector and a radar translation vector when the matching error is minimum;
the triangulation module is used for triangulating the camera image through the radar rotation vector and the radar translation vector to obtain camera image depth information;
the relative calculation module is used for carrying out relative calculation on the camera image depth information and preset radar image depth information based on a least square method to obtain an optimal transformation matrix;
and the positioning calculation module is used for performing positioning calculation on the target object by adopting the optimal transformation matrix to obtain a positioning result.
5. The lidar and camera integrated target positioning device of claim 4, wherein the rough calibration module is specifically configured to:
acquiring a coarse calibration rotation vector and a coarse calibration translation vector between the selected laser radar and the camera by adopting a preset measurement method;
and based on a nonlinear least square method, performing optimization calculation on the coarse calibration rotation vector and the coarse calibration translation vector according to a preset reference point selected from the laser radar and the camera to obtain an initial transformation matrix consisting of the initial rotation vector and the initial translation vector.
6. The lidar and camera integrated target positioning device of claim 4, wherein the triangulation module is specifically configured to:
normalizing the coordinates of the preset feature points in the camera image through the radar rotation vector and the radar translation vector to obtain a normalization formula;
and calculating the depth information corresponding to the preset feature points based on the normalization formula to obtain the depth information of the camera image.
7. An object locating device incorporating a lidar and a camera, the device comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method for object localization in a fusion lidar and camera of any of claims 1-3 according to instructions in the program code.
8. A computer-readable storage medium for storing a program code for executing the method for object localization fusing a lidar and a camera according to any one of claims 1 to 3.
9. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of fusing lidar and camera target location of any of claims 1-3.
CN202111294628.3A 2021-11-03 2021-11-03 Target positioning method and related device integrating laser radar and camera Pending CN114022552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111294628.3A CN114022552A (en) 2021-11-03 2021-11-03 Target positioning method and related device integrating laser radar and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111294628.3A CN114022552A (en) 2021-11-03 2021-11-03 Target positioning method and related device integrating laser radar and camera

Publications (1)

Publication Number Publication Date
CN114022552A (en) 2022-02-08

Family

ID=80060237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111294628.3A Pending CN114022552A (en) 2021-11-03 2021-11-03 Target positioning method and related device integrating laser radar and camera

Country Status (1)

Country Link
CN (1) CN114022552A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785702A (en) * 2020-12-31 2021-05-11 华南理工大学 SLAM method based on tight coupling of 2D laser radar and binocular camera
CN113298941A (en) * 2021-05-27 2021-08-24 广州市工贸技师学院(广州市工贸高级技工学校) Map construction method, device and system based on laser radar aided vision

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529615A (en) * 2022-04-21 2022-05-24 南京隼眼电子科技有限公司 Radar calibration method, device and storage medium
CN114529615B (en) * 2022-04-21 2022-07-08 南京隼眼电子科技有限公司 Radar calibration method, device and storage medium
CN115236690A (en) * 2022-09-20 2022-10-25 图达通智能科技(武汉)有限公司 Data fusion method and device for laser radar system and readable storage medium
CN115236690B (en) * 2022-09-20 2023-02-10 图达通智能科技(武汉)有限公司 Data fusion method and device for laser radar system and readable storage medium

Similar Documents

Publication Publication Date Title
CN102472609B (en) Position and orientation calibration method and apparatus
JP4650751B2 (en) Method and apparatus for aligning 3D shape data
Muñoz-Bañón et al. Targetless camera-LiDAR calibration in unstructured environments
CN104019799A (en) Relative orientation method by using optimization of local parameter to calculate basis matrix
CN112444246B (en) Laser fusion positioning method in high-precision digital twin scene
US20190221000A1 (en) Depth camera 3d pose estimation using 3d cad models
CN114022552A (en) Target positioning method and related device integrating laser radar and camera
Zhou A closed-form algorithm for the least-squares trilateration problem
US20170108338A1 (en) Method for geolocating a carrier based on its environment
CN114004894A (en) Method for determining space relation between laser radar and binocular camera based on three calibration plates
JP2014216813A (en) Camera attitude estimation device and program therefor
Armangué et al. A review on egomotion by means of differential epipolar geometry applied to the movement of a mobile robot
JP6673504B2 (en) Information processing device, database generation device, method, program, and storage medium
Hao et al. Camera Calibration Error Modeling and Its Impact on Visual Positioning
CN113658260B (en) Robot pose calculation method, system, robot and storage medium
CN115100287A (en) External reference calibration method and robot
CN115222815A (en) Obstacle distance detection method, obstacle distance detection device, computer device, and storage medium
CN117433511B (en) Multi-sensor fusion positioning method
CN108230377B (en) Point cloud data fitting method and system
Shahraki et al. Introducing free-function camera calibration model for central-projection and omni-directional lenses
Pagel Robust monocular egomotion estimation based on an iekf
Marko et al. Automatic Stereo Camera Calibration in Real-World Environments Without Defined Calibration Objects
CN118518009B (en) Calibration parameter determining method, calibration method, medium and equipment
Ravindranath et al. 3D-3D Self-Calibration of Sensors Using Point Cloud Data
US20240112363A1 (en) Position estimation system, position estimation method, and program

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination