
CN116660923B - Unmanned agricultural machinery library positioning method and system integrating vision and laser radar - Google Patents

Unmanned agricultural machinery library positioning method and system integrating vision and laser radar

Info

Publication number
CN116660923B
CN116660923B (application CN202310957228.9A)
Authority
CN
China
Prior art keywords
coordinate system
positioning
vehicle
point cloud
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310957228.9A
Other languages
Chinese (zh)
Other versions
CN116660923A (en)
Inventor
李子申
尹心彤
王亮亮
常坤
汪亮
王宁波
刘振耀
蔚科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Aerospace Information Research Institute
Aerospace Information Research Institute of CAS
Original Assignee
Qilu Aerospace Information Research Institute
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Aerospace Information Research Institute, Aerospace Information Research Institute of CAS filed Critical Qilu Aerospace Information Research Institute
Priority to CN202310957228.9A priority Critical patent/CN116660923B/en
Publication of CN116660923A publication Critical patent/CN116660923A/en
Application granted granted Critical
Publication of CN116660923B publication Critical patent/CN116660923B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Guiding Agricultural Machines (AREA)

Abstract

The invention discloses a positioning method and system, fusing vision and laser radar, for the library (garage) of an unmanned agricultural machine, and belongs to the field of unmanned agricultural machinery library positioning. The invention improves the robustness and precision of positioning inside the unmanned agricultural machinery library.

Description

Unmanned agricultural machinery library positioning method and system integrating vision and laser radar
Technical Field
The invention belongs to the field of unmanned agricultural machinery positioning, and particularly relates to an unmanned agricultural machinery library positioning method and system fusing vision and laser radar.
Background
Unmanned agricultural machinery is agricultural equipment that operates without manual control, performing environmental perception, planning, decision-making and control through external sensors and an intelligent computer system. It is safer and more convenient, and has become a current research hotspot. Accurate positioning is a precondition for path planning and control of unmanned agricultural machinery, and the basis of autonomous driving capability. Global navigation satellite systems (GNSS) can provide position, heading and other information for a carrier and are widely used in vehicle positioning. Laser radar is not easily disturbed by illumination; it can acquire high-precision, high-resolution environmental information with rich structure, and positioning can be performed by matching key features of the three-dimensional point cloud against a known map. Combining satellite positioning with point-cloud matching positioning has two benefits: on the one hand, GNSS provides an absolute initial position for laser radar positioning and corrects the position when the accumulated error grows too large; on the other hand, the laser radar can make full use of environmental features, guarantee the output of results when the GNSS position is lost, and improve the accuracy and reliability of positioning.
However, in heavily occluded closed places such as indoor areas, tunnels and underground spaces, the received GNSS signals are weak and of poor quality and cannot meet the requirements of accurate and stable positioning, so the positioning of unmanned agricultural machinery inside its garage faces this problem. Cameras are low-cost, capture rich information, and apply to a wide range of scenes; in a known environment they can estimate pose by extracting image features without relying on external signals, making up for the absence of GNSS signals.
Positioning technology based on laser radar maintains good accuracy in a structured environment such as a hangar, but mismatching or even loss of positioning can occur when the surrounding environment is self-similar or has changed, and an absolute position is needed as a reference for the pose transformation. A visual sensor can assist by providing relocalization information. At present, indoor relocalization is mainly performed by visually detecting fiducial targets, but several markers must be detected simultaneously, which places high demands on the site, requires regular maintenance, and increases site requirements and labor cost.
Disclosure of Invention
Aiming at the problem that no GNSS signal is available in the library to assist positioning, the invention provides a method and system for positioning an unmanned agricultural machine in its library by fusing vision and laser radar. The laser radar's point-cloud matching positioning is combined with auxiliary positioning from the stable visual features provided by the specific hangar scene, environmental information is fully utilized, and the robustness and accuracy of positioning inside the unmanned agricultural machinery library are improved.
The technical scheme adopted by the invention for achieving the purpose is as follows:
A positioning method for an unmanned agricultural machinery library fusing vision and laser radar comprises the following steps:
Step (1): a vehicle-mounted camera collects images containing indication boards, and a target detection algorithm outputs the labels of the indication boards, from which the world coordinates of the indication boards are obtained; visual features are extracted by an image processing algorithm, and the transformation of the indication board relative to the vehicle-mounted camera is obtained by a pose estimation algorithm;
Step (2): the laser radar scans the area point cloud, the real-time area point cloud is matched with a pre-established point-cloud map, the frame pose of the point-cloud data is estimated, and the laser point-cloud matching positioning result is output;
Step (3): the camera coordinate system, the laser radar coordinate system, the carrier coordinate system and the indication board coordinate system are associated, and the positions of the carrier and the laser radar are calculated; the camera coordinate system is a three-dimensional rectangular coordinate system with the focal center of the camera as the origin and the optical axis as the z-axis;
Step (4): when the vehicle is started, the initial information provided by vision supplies an initial value for point-cloud positioning; during operation, the vehicle-mounted camera positioning node is started periodically, the current position information is updated, and the accumulated error of laser positioning is corrected, achieving repositioning.
The invention also provides an unmanned agricultural machinery library positioning system fusing vision and laser radar, which comprises:
a conversion relation module, used for acquiring an image containing an indication board with the vehicle-mounted camera and outputting the label of the indication board with a target detection algorithm, thereby obtaining the world coordinates of the indication board; visual features are extracted by an image processing algorithm, and the transformation of the indication board relative to the vehicle-mounted camera is obtained by a pose estimation algorithm;
a matching and positioning module, used for scanning the area point cloud with the laser radar, matching the real-time area point cloud with a pre-established point-cloud map, estimating the frame pose of the point-cloud data, and outputting the laser point-cloud matching positioning result;
a position calculation module, used for associating the camera coordinate system, the laser radar coordinate system, the carrier coordinate system and the indication board coordinate system and calculating the positions of the carrier and the laser radar;
a repositioning module, used for supplying an initial value for point-cloud positioning from the initial information provided by vision when the vehicle is started; during operation, the vehicle-mounted camera positioning node is started periodically, the current position information is updated, and the accumulated error of laser positioning is corrected, achieving repositioning.
The invention has the beneficial effects that:
the invention combines the stable circular characteristic of the hangar, and on the basis of laser point cloud matching positioning, the fusion visual positioning method provides initial value and repositioning for point cloud positioning, thereby being capable of keeping stable, reliable and high-precision positioning when GNSS signals in the hangar are weak and environmental information is greatly changed.
Drawings
Fig. 1 is a schematic flow chart of a positioning method of an unmanned agricultural machinery base integrating vision and laser radar.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
As shown in fig. 1, the unmanned agricultural machinery library positioning method integrating vision and laser radar of the invention specifically comprises the following steps:
Step (1): the vehicle-mounted camera acquires an image containing an indication board, and the target detection algorithm outputs the label of the indication board, from which the world coordinates of the indication board are obtained. Visual features are extracted by an image processing algorithm, and the transformation of the indication board relative to the camera is obtained by a pose estimation algorithm. This step specifically comprises:
1) Several circular indication boards of fixed radius R are mounted vertically in the hangar, each carrying a pattern or number with a distinct meaning. A sign library is built and a target-detection deep learning model is trained; the trained model outputs the label of a board, which is placed in one-to-one correspondence with the pre-measured absolute coordinates of all circular boards, so that when the vehicle-mounted camera detects a given board, a unique position can be determined. The indication board coordinate system is defined with its origin at the circle center, the z-axis pointing perpendicularly out of the board surface, the x-y plane lying on the board surface, the y-axis pointing vertically upward, and the x-axis direction determined by the right-hand rule. The origin of the laser radar coordinate system is the laser-pulse emission point, with the axis directions defined by the laser radar manufacturer. The carrier coordinate system is fixed to the unmanned agricultural machine, with the center of the IMU as the origin, the x-axis pointing in the direction of travel, and the z-axis vertically upward.
2) The sign edge on the camera imaging plane is fitted by the direct least-squares method. The general equation of the projected ellipse on the pixel plane is $au^2 + buv + cv^2 + du + ev + f = 0$, and the coefficients of the equation are the parameters to be solved. The data points to be fitted on the image are $(u_i, v_i)$, $i = 1, 2, \ldots, n$, where $(u_i, v_i)$ are the pixel coordinates of the data points. The parameters are solved by minimizing the algebraic sum of squares of the distances between the discrete points and the points on the ellipse, $\min \sum_{i=1}^{n}(au_i^2 + bu_iv_i + cv_i^2 + du_i + ev_i + f)^2$, subject to the ellipse constraint $4ac - b^2 = 1$.
Solving the resulting generalized eigenvalue problem $Sp = \lambda Cp$ yields the eigenvalues and generalized eigenvectors, where the eigenvector corresponding to the positive eigenvalue is the optimal solution of the fitting equation,
wherein $S = D^T D$ is the scatter matrix of the design matrix $D$ whose $i$-th row is $(u_i^2,\ u_iv_i,\ v_i^2,\ u_i,\ v_i,\ 1)$, and $C$ is the constant constraint matrix expressing $4ac - b^2 = 1$.
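As a concrete illustration of the direct least-squares fit described above, the following sketch solves the constrained minimization through the generalized eigenvalue formulation of Fitzgibbon et al.; the function name and the test geometry are our own illustrative choices, not taken from the patent.

```python
import numpy as np

def fit_ellipse_direct(u, v):
    """Direct least-squares ellipse fit (Fitzgibbon et al.).

    Minimizes the algebraic residual of a*u^2 + b*u*v + c*v^2 + d*u + e*v + f
    subject to 4ac - b^2 = 1, via the generalized eigenproblem S p = t C p.
    Returns the coefficient vector (a, b, c, d, e, f), normalized.
    """
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    # Design matrix: one row (u^2, uv, v^2, u, v, 1) per edge pixel.
    D = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # constraint matrix for 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Eigen-decompose inv(S) C; exactly one eigenvalue is positive and its
    # eigenvector is the constrained minimizer.
    w, V = np.linalg.eig(np.linalg.solve(S, C))
    p = V[:, np.argmax(w.real)].real
    return p / np.linalg.norm(p)
```

The fitted coefficient vector is defined only up to scale and sign; downstream quantities such as the ellipse center, obtained by setting the gradient of the conic to zero, are invariant to that ambiguity.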
3) The intrinsic parameter matrix of the vehicle-mounted camera obtained by calibration is $K = \begin{pmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix}$, wherein $f_x$ and $f_y$ are the focal lengths and $(u_0, v_0)$ are the principal-point coordinate values.
Let $X = (u - u_0)/f_x$ and $Y = (v - v_0)/f_y$; substituting into the pixel-plane equation gives the general equation of the ellipse in the image coordinate system, $aX^2 + bXY + cY^2 + dX + eY + f = 0$, with correspondingly updated coefficients $a, \ldots, f$.
In the vehicle-mounted camera coordinate system, the general expression of the elliptic cone formed by the ellipse and the origin of the coordinate system is $aX^2 + bXY + cY^2 + dXZ + eYZ + fZ^2 = 0$, wherein $(X, Y, Z)$ are coordinates in the vehicle-mounted camera coordinate system.
The quadratic form matrix of the expression is established as $Q = \begin{pmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & f \end{pmatrix}$; its eigenvalues are $\lambda_1$, $\lambda_2$, $\lambda_3$ with $\lambda_1 \ge \lambda_2 > 0 > \lambda_3$, i.e. $Qv_i = \lambda_i v_i$. The eigenvectors corresponding to the three eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$ are $v_1$, $v_2$ and $v_3$ respectively; then $V = (v_1\ v_2\ v_3)$.
The center position $c$ and surface normal vector $n$ of the circular indication board are represented in the vehicle-mounted camera coordinate system as:
$c = \dfrac{R}{\sqrt{-\lambda_1\lambda_3}}\, V \begin{pmatrix} s_1\lambda_3\sqrt{\frac{\lambda_1-\lambda_2}{\lambda_1-\lambda_3}} \\ 0 \\ -s_2\lambda_1\sqrt{\frac{\lambda_2-\lambda_3}{\lambda_1-\lambda_3}} \end{pmatrix}, \qquad n = V \begin{pmatrix} s_1\sqrt{\frac{\lambda_1-\lambda_2}{\lambda_1-\lambda_3}} \\ 0 \\ s_2\sqrt{\frac{\lambda_2-\lambda_3}{\lambda_1-\lambda_3}} \end{pmatrix}, \quad s_1, s_2 \in \{-1, +1\},$
where R is the radius of the circular indication board and the sign combinations placing the center in front of the camera are retained.
Two groups of solutions are thus obtained from one picture. The vehicle-mounted camera obtains the approximate pixel coordinates of the fitted edge, computes the vector products of several groups of pairwise non-parallel straight lines on the board plane, and compares them with the two groups of normal vectors; the group with the smaller difference is the correct pose of the circular indication board in the vehicle-mounted camera frame.
Step (2): the laser radar scans the area point cloud, the real-time point cloud is matched with a pre-established point-cloud map, the frame pose of the point-cloud data is estimated, and the laser point-cloud matching positioning result is output. The point-cloud matching is based on the normal distributions transform (NDT) and comprises the following steps:
1) Load the pre-established point-cloud map and divide it into cubic grids of fixed size. For the points $x_k$, $k = 1, \ldots, m$, within a grid, compute the probability density function $f(x) \sim \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)$, wherein the mean vector is $\mu = \tfrac{1}{m}\sum_{k=1}^{m} x_k$ and the covariance matrix of all points in the grid is $\Sigma = \tfrac{1}{m-1}\sum_{k=1}^{m}(x_k-\mu)(x_k-\mu)^T$.
2) Let the current scan point cloud set be $X = \{x_1, \ldots, x_n\}$. A point $x_i$ is converted via the pose transformation parameters $p$ by the spatial conversion function $x_i' = T(p, x_i)$. Newton's method is used to optimize and maximize the objective function $\Psi(p) = \prod_{i=1}^{n} f(T(p, x_i))$ to find the optimal pose transformation parameters $p^*$, wherein $\Psi$ represents the product of the probability density function values of each point and $f$ is the probability density function of the grid containing the transformed point.
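A minimal 2D sketch of the NDT steps above, assuming the common practical surrogate that sums per-point Gaussian scores rather than taking the product of densities; the cell size, covariance regularization, and function names are illustrative assumptions, not the patent's.

```python
import numpy as np

def build_ndt_grid(map_pts, cell):
    """Voxelize the 2D map and fit a Gaussian (mean, inverse covariance) per cell."""
    buckets = {}
    for p in map_pts:
        buckets.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    grid = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue                               # too few points for a stable covariance
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-3 * np.eye(2)     # regularize degenerate cells
        grid[key] = (mu, np.linalg.inv(cov))
    return grid

def ndt_score(pose, scan, grid, cell):
    """Negative NDT objective for a 2D pose (tx, ty, theta); lower is better."""
    tx, ty, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = scan @ R.T + np.array([tx, ty])
    total = 0.0
    for p in moved:
        entry = grid.get(tuple(np.floor(p / cell).astype(int)))
        if entry is not None:
            mu, icov = entry
            d = p - mu
            total += np.exp(-0.5 * d @ icov @ d)   # per-point Gaussian score
    return -total
```

The optimal pose would then come from feeding `ndt_score` to a numerical optimizer (the patent uses Newton's method on its objective), e.g. `scipy.optimize.minimize(ndt_score, x0, args=(scan, grid, cell))`.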
Step (3): the camera coordinate system, the laser radar coordinate system, the carrier coordinate system and the indication board coordinate system are associated, and the position of the carrier is then calculated.
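The frame association in step (3) can be read as a chain of homogeneous transforms. The sketch below assumes the convention that $T_{ab}$ maps coordinates expressed in frame b into frame a; the helper names are our own. With the board's pose in the world (from the label lookup) and its pose in the camera (from the circle-pose estimation), the carrier pose follows by composition.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_T(T):
    """Closed-form inverse of a rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ t)

def carrier_pose_in_world(T_w_sign, T_cam_sign, T_cam_body):
    """T_w_sign: board pose in the world; T_cam_sign: board pose in the camera;
    T_cam_body: carrier body expressed in the camera frame (fixed extrinsics)."""
    T_w_cam = T_w_sign @ inv_T(T_cam_sign)   # camera pose in the world
    return T_w_cam @ T_cam_body              # carrier pose in the world
```

Chaining inverses this way is why a single detected board, whose world coordinates are known, suffices to anchor the whole chain.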
Step (4): when the vehicle is started, the initial information provided by vision supplies an initial value for point-cloud positioning; during operation, the vehicle-mounted camera positioning node is started periodically to update the current position information and correct the accumulated error of laser positioning, achieving the repositioning effect.
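The scheduling policy of step (4) can be sketched as a small state machine: vision seeds the pose at startup and periodically overwrites the drifting point-cloud estimate. The class name, interfaces, and relocalization period below are hypothetical; the patent specifies the behavior, not an API.

```python
import numpy as np

class FusedLocalizer:
    """Illustrative sketch of the fusion logic in step (4): vision provides the
    initial value and, at a fixed period, resets the accumulated lidar drift."""

    def __init__(self, vision_fix, lidar_step, reloc_period=10.0):
        self.vision_fix = vision_fix      # () -> absolute pose from sign detection
        self.lidar_step = lidar_step      # (prev_pose, scan) -> refined pose
        self.reloc_period = reloc_period  # seconds between visual relocalizations
        self.pose = None
        self._last_reloc = None

    def update(self, t, scan):
        if self.pose is None:                              # startup: visual init
            self.pose = self.vision_fix()
            self._last_reloc = t
        elif t - self._last_reloc >= self.reloc_period:    # periodic repositioning
            self.pose = self.vision_fix()
            self._last_reloc = t
        else:                                              # normal NDT matching step
            self.pose = self.lidar_step(self.pose, scan)
        return self.pose
```

A design point worth noting: the visual fix overwrites rather than filters the lidar estimate here; a production system might instead fuse the two in a filter, which the patent leaves open.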
The invention also provides an unmanned agricultural machinery library positioning system fusing vision and laser radar, which comprises:
a conversion relation module, used for acquiring an image containing an indication board with the vehicle-mounted camera and outputting the label of the indication board with a target detection algorithm, thereby obtaining the world coordinates of the indication board; visual features are extracted by an image processing algorithm, and the transformation of the indication board relative to the vehicle-mounted camera is obtained by a pose estimation algorithm;
a matching and positioning module, used for scanning the area point cloud with the laser radar, matching the real-time point cloud with a pre-established point-cloud map, estimating the frame pose of the point-cloud data, and outputting the laser point-cloud matching positioning result;
a position calculation module, used for associating the camera coordinate system, the laser radar coordinate system, the carrier coordinate system and the indication board coordinate system and calculating the positions of the carrier and the laser radar; the camera coordinate system is a three-dimensional rectangular coordinate system with the focal center of the camera as the origin and the optical axis as the z-axis;
a repositioning module, used for supplying an initial value for point-cloud positioning from the initial information provided by vision when the vehicle is started; during operation, the vehicle-mounted camera positioning node is started periodically to update the current position information and correct the accumulated error of laser positioning, achieving the repositioning effect.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (4)

1. An unmanned agricultural machinery library positioning method fusing vision and laser radar, characterized by comprising the following steps:
step (1): a vehicle-mounted camera collects images containing indication boards, and a target detection algorithm outputs the labels of the indication boards, from which the world coordinates of the indication boards are obtained; visual features are extracted by an image processing algorithm, and the transformation of the indication board relative to the vehicle-mounted camera is obtained by a pose estimation algorithm;
step (2): the laser radar scans the area point cloud, the real-time area point cloud is matched with a pre-established point-cloud map, the frame pose of the point-cloud data is estimated, and the laser point-cloud matching positioning result is output;
step (3): the camera coordinate system, the laser radar coordinate system, the carrier coordinate system and the indication board coordinate system are associated, and the positions of the carrier and the laser radar are calculated; the camera coordinate system is a three-dimensional rectangular coordinate system with the focal center of the camera as the origin and the optical axis as the z-axis;
step (4): when the vehicle is started, the initial information provided by vision supplies an initial value for point-cloud positioning; during operation, the vehicle-mounted camera positioning node is started periodically, the current position information is updated, and the accumulated error of laser positioning is corrected, achieving repositioning.
2. The vision-and-laser-radar-fused unmanned agricultural machinery library positioning method according to claim 1, wherein the step (1) comprises:
1) Several circular indication boards of fixed radius R are mounted vertically in the hangar, each carrying a pattern or number with a distinct meaning; a sign library is built and a target-detection deep learning model is trained; the trained model outputs the labels of the circular indication boards, which are placed in one-to-one correspondence with the pre-measured absolute coordinates of all circular indication boards, so that when the vehicle-mounted camera detects a given circular indication board, a unique position can be determined;
2) Fitting the circular indication board edge on the camera imaging plane by direct least squares; the general equation of the projected ellipse on the pixel plane is $au^2 + buv + cv^2 + du + ev + f = 0$, the coefficients of the equation being the parameters to be solved; the data points to be fitted on the image are $(u_i, v_i)$, $i = 1, 2, \ldots, n$, wherein $(u_i, v_i)$ are the pixel coordinates of the data points to be fitted; the parameters are solved by minimizing the algebraic sum of squares of the distances between the discrete points and the points on the ellipse, $\min \sum_{i=1}^{n}(au_i^2 + bu_iv_i + cv_i^2 + du_i + ev_i + f)^2$, subject to $4ac - b^2 = 1$; solving the generalized eigenvalue problem $Sp = \lambda Cp$ yields the eigenvalues and generalized eigenvectors, wherein the eigenvector corresponding to the positive eigenvalue is the optimal solution of the fitting equation;
wherein $S = D^T D$ is the scatter matrix of the design matrix $D$ with rows $(u_i^2,\ u_iv_i,\ v_i^2,\ u_i,\ v_i,\ 1)$, and $C$ is the constant constraint matrix expressing $4ac - b^2 = 1$;
3) The camera intrinsic matrix obtained by calibration is $K = \begin{pmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix}$, wherein $f_x$ and $f_y$ are the focal lengths and $(u_0, v_0)$ are the principal-point coordinate values;
let $X = (u - u_0)/f_x$ and $Y = (v - v_0)/f_y$; the general equation of the pixel representation under the image coordinate system is $aX^2 + bXY + cY^2 + dX + eY + f = 0$; wherein X, Y are the coordinate axes of the image coordinate system, and a, b, c, d, e and f are equation coefficients;
under the camera coordinate system, the general expression of the elliptic cone formed by the ellipse and the origin of the coordinate system is:
$aX^2 + bXY + cY^2 + dXZ + eYZ + fZ^2 = 0$;
the quadratic form matrix of the expression is established as $Q = \begin{pmatrix} a & b/2 & d/2 \\ b/2 & c & e/2 \\ d/2 & e/2 & f \end{pmatrix}$, with eigenvalues $\lambda_1 \ge \lambda_2 > 0 > \lambda_3$, i.e. $Qv_i = \lambda_i v_i$;
the eigenvectors corresponding to the three eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_3$ are $v_1$, $v_2$ and $v_3$ respectively, and $V = (v_1\ v_2\ v_3)$;
the center position $c$ and surface normal vector $n$ of the circular indication board are represented in the camera coordinate system as follows:
$c = \dfrac{R}{\sqrt{-\lambda_1\lambda_3}}\, V \begin{pmatrix} s_1\lambda_3\sqrt{\frac{\lambda_1-\lambda_2}{\lambda_1-\lambda_3}} \\ 0 \\ -s_2\lambda_1\sqrt{\frac{\lambda_2-\lambda_3}{\lambda_1-\lambda_3}} \end{pmatrix}, \qquad n = V \begin{pmatrix} s_1\sqrt{\frac{\lambda_1-\lambda_2}{\lambda_1-\lambda_3}} \\ 0 \\ s_2\sqrt{\frac{\lambda_2-\lambda_3}{\lambda_1-\lambda_3}} \end{pmatrix}, \quad s_1, s_2 \in \{-1, +1\},$
the sign combinations placing the center in front of the camera being retained, which gives two groups of solutions;
wherein R is the radius of the round indication board;
the vehicle-mounted camera obtains the approximate pixel coordinates of the fitted edge of the circular indication board, computes the vector products of several groups of pairwise non-parallel straight lines on the image plane, and compares them with the two groups of normal vectors; the group with the smaller difference gives the correct pose of the circular indication board in the vehicle-mounted camera frame.
3. The vision-and-laser-radar-fused unmanned agricultural machinery library positioning method according to claim 2, wherein in the step (2) the laser point-cloud matching is based on the normal distributions transform (NDT) and specifically comprises the following steps:
1) Loading a pre-established point-cloud map and dividing the map into cubic grids of fixed size; for the points $x_k$, $k = 1, \ldots, m$, within a cubic grid, computing the probability density function $f(x) \sim \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1}(x-\mu)\right)$, wherein the mean vector is $\mu = \tfrac{1}{m}\sum_{k=1}^{m} x_k$ and the covariance matrix of all points in the cubic grid is $\Sigma = \tfrac{1}{m-1}\sum_{k=1}^{m}(x_k-\mu)(x_k-\mu)^T$; the superscript T denotes a transpose;
2) The current scan point cloud set is $X = \{x_1, \ldots, x_n\}$; a point $x_i$ in the set is converted via the pose transformation parameters $p$ by the spatial conversion function $x_i' = T(p, x_i)$; Newton's method optimizes and maximizes the objective function $\Psi(p) = \prod_{i=1}^{n} f(T(p, x_i))$ to find the optimal pose transformation parameters $p^*$, wherein $\Psi$ represents the product of the probability density function values of each point and $f$ is the probability density function of the points within the grid.
4. A system for implementing the vision-and-laser-radar-fused unmanned agricultural machinery library positioning method according to any one of claims 1-3, comprising:
a conversion relation module, used for acquiring an image containing an indication board with the vehicle-mounted camera and outputting the label of the indication board with a target detection algorithm, thereby obtaining the world coordinates of the indication board; visual features are extracted by an image processing algorithm, and the transformation of the indication board relative to the vehicle-mounted camera is obtained by a pose estimation algorithm;
a matching and positioning module, used for scanning the area point cloud with the laser radar, matching the real-time area point cloud with a pre-established point-cloud map, estimating the frame pose of the point-cloud data, and outputting the laser point-cloud matching positioning result;
a position calculation module, used for associating the camera coordinate system, the laser radar coordinate system, the carrier coordinate system and the indication board coordinate system and calculating the positions of the carrier and the laser radar;
a repositioning module, used for supplying an initial value for point-cloud positioning from the initial information provided by vision when the vehicle is started; during operation, the vehicle-mounted camera positioning node is started periodically, the current position information is updated, and the accumulated error of laser positioning is corrected, achieving repositioning.
CN202310957228.9A 2023-08-01 2023-08-01 Unmanned agricultural machinery library positioning method and system integrating vision and laser radar Active CN116660923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310957228.9A CN116660923B (en) 2023-08-01 2023-08-01 Unmanned agricultural machinery library positioning method and system integrating vision and laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310957228.9A CN116660923B (en) 2023-08-01 2023-08-01 Unmanned agricultural machinery library positioning method and system integrating vision and laser radar

Publications (2)

Publication Number Publication Date
CN116660923A CN116660923A (en) 2023-08-29
CN116660923B true CN116660923B (en) 2023-09-29

Family

ID=87715731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310957228.9A Active CN116660923B (en) 2023-08-01 2023-08-01 Unmanned agricultural machinery library positioning method and system integrating vision and laser radar

Country Status (1)

Country Link
CN (1) CN116660923B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109709801A (en) * 2018-12-11 2019-05-03 智灵飞(北京)科技有限公司 A kind of indoor unmanned plane positioning system and method based on laser radar
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN110490114A (en) * 2019-08-13 2019-11-22 西北工业大学 Target detection barrier-avoiding method in a kind of unmanned plane real-time empty based on depth random forest and laser radar
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112881995A (en) * 2021-02-02 2021-06-01 中国十七冶集团有限公司 Laser radar range camera
CN114689035A (en) * 2022-03-25 2022-07-01 中国科学院计算技术研究所 Long-range farmland map construction method and system based on multi-sensor fusion
KR102441103B1 (en) * 2021-03-18 2022-09-07 순천향대학교 산학협력단 Unmanned aearial vehicle for identifying objects and method for identifying objects of unmanned aearial vehicle
WO2022232913A1 (en) * 2021-05-03 2022-11-10 AIRM Consulting Ltd. Computer vision system and method for agriculture
CN115451948A (en) * 2022-08-09 2022-12-09 中国科学院计算技术研究所 Agricultural unmanned vehicle positioning odometer method and system based on multi-sensor fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on mobile robot SLAM based on the fusion of laser ranging and binocular vision information; Du Zhaojun; Wu Huaiyu; Computer Measurement & Control, No. 01, pp. 180-183 *

Also Published As

Publication number Publication date
CN116660923A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN104197928B (en) Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
CN105930819B (en) Real-time city traffic lamp identifying system based on monocular vision and GPS integrated navigation system
CN104215239B (en) Guidance method using vision-based autonomous unmanned plane landing guidance device
CN110244284B (en) Calibration plate for calibrating multi-line laser radar and GPS\INS and method thereof
KR20200126141A (en) System and method for multiple object detection using multi-LiDAR
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
AU2018282302A1 (en) Integrated sensor calibration in natural scenes
CN112577517A (en) Multi-element positioning sensor combined calibration method and system
CN106548173A (en) A kind of improvement no-manned plane three-dimensional information getting method based on classification matching strategy
WO2019208101A1 (en) Position estimating device
CN113534184B (en) Laser-perceived agricultural robot space positioning method
CN113848931B (en) Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN113327296A (en) Laser radar and camera online combined calibration method based on depth weighting
CN112068567A (en) Positioning method and positioning system based on ultra-wideband and visual image
CN116202489A (en) Method and system for co-locating power transmission line inspection machine and pole tower and storage medium
CN113608556A (en) Multi-robot relative positioning method based on multi-sensor fusion
CN111538008A (en) Transformation matrix determining method, system and device
CN115144879B (en) Multi-machine multi-target dynamic positioning system and method
CN109764864B (en) Color identification-based indoor unmanned aerial vehicle pose acquisition method and system
CN114777768A (en) High-precision positioning method and system for satellite rejection environment and electronic equipment
CN116660923B (en) Unmanned agricultural machinery library positioning method and system integrating vision and laser radar
CN111964681B (en) Real-time positioning system of inspection robot
Han et al. Multiple targets geolocation using SIFT and stereo vision on airborne video sequences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant