
CN118261982B - Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology - Google Patents

Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology Download PDF

Info

Publication number
CN118261982B
CN118261982B (granted from application CN202410511588.0A)
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
data
point cloud
laser point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410511588.0A
Other languages
Chinese (zh)
Other versions
CN118261982A (en)
Inventor
丁远洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lianyungang Air Patrol Intelligent Technology Co ltd
Original Assignee
Lianyungang Air Patrol Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lianyungang Air Patrol Intelligent Technology Co ltd filed Critical Lianyungang Air Patrol Intelligent Technology Co ltd
Priority to CN202410511588.0A priority Critical patent/CN118261982B/en
Publication of CN118261982A publication Critical patent/CN118261982A/en
Application granted granted Critical
Publication of CN118261982B publication Critical patent/CN118261982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

The invention relates to the technical field of three-dimensional modeling and discloses a method and a system for constructing a three-dimensional model of an unmanned aerial vehicle using laser point cloud scanning technology. Laser point cloud data of a target unmanned aerial vehicle in flight are collected together with its attitude data and GPS position data; the attitude and GPS data are fused with the laser point cloud data to obtain laser point cloud data in a geodetic coordinate system, from which an initial three-dimensional model of the target unmanned aerial vehicle is established. Image data acquired by a camera device are then used to construct a motion model, and the initial three-dimensional model is updated in real time through the motion model, achieving real-time three-dimensional modeling of the target unmanned aerial vehicle. In addition, an automatic clock synchronization mechanism applied during data acquisition and data processing ensures the time consistency of all data, further improving the accuracy of real-time modeling of the unmanned aerial vehicle.

Description

Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to a method and a system for constructing a three-dimensional model of an unmanned aerial vehicle by utilizing a laser point cloud scanning technology.
Background
In rescue operations in which an unmanned aerial vehicle performs aerial rescue after natural disasters and emergency accidents, the real-time flight state of the unmanned aerial vehicle generally needs to be known so that the current rescue situation can be grasped in real time. However, current three-dimensional modeling technology for unmanned aerial vehicles mainly achieves static modeling, which cannot be applied to scenarios in which the real-time flight state of the unmanned aerial vehicle must be acquired.
Therefore, how to implement real-time three-dimensional modeling of the unmanned aerial vehicle so as to obtain its real-time flight state is a problem urgently awaiting a solution.
Disclosure of Invention
In view of the above, the present invention aims to provide a method and a system for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology, which can model the unmanned aerial vehicle in real time and accurately obtain the current flight state of the unmanned aerial vehicle.
The first aspect of the invention discloses a method for constructing a three-dimensional model of an unmanned aerial vehicle by utilizing a laser point cloud scanning technology, which comprises the following steps:
collecting first laser point cloud data of a target unmanned aerial vehicle, wherein the first laser point cloud data are established in a laser radar coordinate system;
acquiring attitude data of the target unmanned aerial vehicle, converting the first laser point cloud data from the laser radar coordinate system to an unmanned aerial vehicle coordinate system according to the attitude data, and taking the laser point cloud data converted into the unmanned aerial vehicle coordinate system as second laser point cloud data;
acquiring GPS position data of the target unmanned aerial vehicle, converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to a geodetic coordinate system according to the attitude data and the GPS position data, and taking the laser point cloud data converted into the geodetic coordinate system as third laser point cloud data;
and performing real-time three-dimensional modeling of the target unmanned aerial vehicle according to the third laser point cloud data.
Further, the first laser point cloud data of the target unmanned aerial vehicle are collected by a laser radar device, which is mounted on the target unmanned aerial vehicle together with a vibration-damping device.
Further, attitude data of the target unmanned aerial vehicle are acquired through an inertial measurement unit, the attitude data comprising the roll, pitch, yaw, angular velocity and linear acceleration of the target unmanned aerial vehicle.
Further, when the first laser point cloud data, the attitude data and the GPS position data of the target unmanned aerial vehicle are collected, the target unmanned aerial vehicle flies in a preset area along a preset route.
Further, timestamp information is added to the first laser point cloud data, the attitude data and the GPS position data;
before data collection begins, the laser radar device, the inertial measurement unit and the GPS module that acquires the GPS position data of the target unmanned aerial vehicle perform initial clock synchronization calibration according to an initial clock synchronization instruction;
the laser radar device, the inertial measurement unit and the GPS module carry out data acquisition after the initial clock synchronization calibration is completed, and perform periodic clock synchronization calibration according to periodic clock synchronization instructions;
a time synchronization monitoring mechanism is set up, through which the time synchronization states of the laser radar device, the inertial measurement unit and the GPS module are monitored;
the time synchronization state includes timestamp deviation; a reference clock is set, it is judged whether the monitored timestamp deviation between the clocks of the laser radar device, the inertial measurement unit and the GPS module and the reference clock exceeds a preset timestamp threshold, and if so, an automatic clock adjustment mechanism is executed.
Further, the converting of the first laser point cloud data from the laser radar coordinate system to the unmanned aerial vehicle coordinate system according to the attitude data specifically includes:
constructing a first rotation matrix from the roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the first rotation matrix represents the rotation from the laser radar coordinate system to the unmanned aerial vehicle coordinate system;
obtaining an angular velocity vector from the angular velocity in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the first rotation matrix to obtain the rate of change of the rotation matrix, and updating the first rotation matrix according to this rate of change to obtain a first updated rotation matrix;
calculating the velocity change of the target unmanned aerial vehicle from the linear acceleration in its attitude data, and setting a translation vector according to this velocity change;
multiplying the first laser point cloud data in the laser radar coordinate system by the first updated rotation matrix, and adding the result to the translation vector to obtain the laser point cloud data in the unmanned aerial vehicle coordinate system.
Further, the converting of the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and the GPS position data of the target unmanned aerial vehicle specifically includes:
constructing a second rotation matrix from the roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the second rotation matrix represents the rotation from the unmanned aerial vehicle coordinate system to the geodetic coordinate system;
obtaining an angular velocity vector from the angular velocity data in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the second rotation matrix to obtain the rate of change of the second rotation matrix, and updating the second rotation matrix according to this rate of change to obtain a second updated rotation matrix;
and constructing a position vector from the GPS position data of the target unmanned aerial vehicle, multiplying the second laser point cloud data in the unmanned aerial vehicle coordinate system by the second updated rotation matrix, and adding the result to the position vector to obtain the laser point cloud data in the geodetic coordinate system.
Further, the method also comprises equipping the target unmanned aerial vehicle with a camera device, which is used for collecting environment images of the preset area while the target unmanned aerial vehicle flies within it;
the performing of real-time three-dimensional modeling of the target unmanned aerial vehicle according to the third laser point cloud data specifically includes:
performing normal estimation on each coordinate point in the third laser point cloud data, wherein the normal estimation vector of each coordinate point is obtained by calculating the average normal of its adjacent points, and, after the normal estimation vector of every coordinate point has been calculated, establishing a normal field of the third laser point cloud data from these normal estimation vectors;
reconstructing the surface of the target unmanned aerial vehicle through the Poisson equation according to the normal field of the third laser point cloud data to obtain an initial three-dimensional model of the target unmanned aerial vehicle;
performing feature extraction on the preset-area environment images through a SLAM algorithm to obtain first environment image features, and establishing a motion model of the target unmanned aerial vehicle according to the first environment image features, the attitude data and the GPS position data, wherein the timestamp information of the first environment image features, the attitude data and the GPS position data is consistent;
updating the initial three-dimensional model of the target unmanned aerial vehicle in real time through the motion model, thereby achieving real-time three-dimensional modeling of the target unmanned aerial vehicle, the real-time three-dimensional model comprising the real-time attitude of the target unmanned aerial vehicle and the real-time flight environment information under that attitude.
Further, the method also comprises visualizing the real-time three-dimensional model of the target unmanned aerial vehicle, wherein a first visualization area is set to display the real-time flight state of the target unmanned aerial vehicle combined with the preset flight-area environment, and a second visualization area is set to display the real-time flight state of the target unmanned aerial vehicle combined with the preset flight-area environment and a map.
A second aspect of the invention discloses a system for constructing a three-dimensional model of an unmanned aerial vehicle using laser point cloud scanning technology, comprising a data processing unit and a central processing unit, wherein the data processing unit comprises a first collection subunit, a second collection subunit, an acquisition subunit and a conversion subunit;
the first collection subunit is used for collecting first laser point cloud data of the target unmanned aerial vehicle, the first laser point cloud data being established in the laser radar coordinate system;
the second collection subunit is used for collecting attitude data of the target unmanned aerial vehicle, and the conversion subunit is used for converting the first laser point cloud data from the laser radar coordinate system to the unmanned aerial vehicle coordinate system according to the attitude data, taking the laser point cloud data converted into the unmanned aerial vehicle coordinate system as second laser point cloud data;
the acquisition subunit is used for acquiring the GPS position data of the target unmanned aerial vehicle, and the conversion subunit is used for converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and the GPS position data, taking the laser point cloud data converted into the geodetic coordinate system as third laser point cloud data;
the central processing unit is used for receiving the third laser point cloud data and performing real-time three-dimensional modeling of the target unmanned aerial vehicle according to the third laser point cloud data.
Compared with the prior art, the invention has the beneficial effects that:
According to the invention, laser point cloud data, attitude data and GPS position data of the target unmanned aerial vehicle in flight are collected; the attitude and GPS data are fused with the laser point cloud data to obtain laser point cloud data in the geodetic coordinate system, from which an initial three-dimensional model of the target unmanned aerial vehicle is built. A motion model is then constructed from image data acquired by a camera device, and the initial three-dimensional model of the target unmanned aerial vehicle is updated in real time through the motion model, achieving real-time three-dimensional modeling of the target unmanned aerial vehicle. In addition, an automatic clock synchronization mechanism applied during data acquisition and data processing ensures the time consistency of all data, further improving the accuracy of real-time modeling of the unmanned aerial vehicle.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings:
Fig. 1 is a schematic flow chart of a method for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a system for constructing a three-dimensional model of an unmanned aerial vehicle using a laser point cloud scanning technique according to still another embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of a method for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology, which includes:
collecting first laser point cloud data of a target unmanned aerial vehicle, wherein the first laser point cloud data are established in a laser radar coordinate system;
acquiring attitude data of the target unmanned aerial vehicle, converting the first laser point cloud data from the laser radar coordinate system to an unmanned aerial vehicle coordinate system according to the attitude data, and taking the laser point cloud data converted into the unmanned aerial vehicle coordinate system as second laser point cloud data;
acquiring GPS position data of the target unmanned aerial vehicle, converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to a geodetic coordinate system according to the attitude data and the GPS position data, and taking the laser point cloud data converted into the geodetic coordinate system as third laser point cloud data;
and performing real-time three-dimensional modeling of the target unmanned aerial vehicle according to the third laser point cloud data.
Further, the first laser point cloud data of the target unmanned aerial vehicle are collected by a laser radar device, which is mounted on the target unmanned aerial vehicle together with a vibration-damping device.
Specifically, in the embodiment of the invention, the unmanned aerial vehicle is laser-scanned using a laser radar device with high resolution, a wide measurement range and a high scanning speed; the device generates the first laser point cloud data by emitting laser beams towards the unmanned aerial vehicle and measuring their return time.
To accommodate the multi-angle variation during unmanned aerial vehicle flight, the mounting position of the laser radar device must ensure that the position and orientation of the device provide comprehensive coverage capturing all external details of the unmanned aerial vehicle; this mounting position is determined through multiple tests before the unmanned aerial vehicle flies.
In addition, unlike the static case, vibration occurs during flight; therefore, in this embodiment a vibration-damping device is mounted at the laser radar installation position so that stable and accurate laser point cloud data can be acquired.
Further, attitude data of the target unmanned aerial vehicle are acquired through the inertial measurement unit, the attitude data comprising the roll, pitch, yaw, angular velocity and linear acceleration of the target unmanned aerial vehicle. The attitude data provide real-time information on the flight attitude and movement of the unmanned aerial vehicle; in this embodiment, the attitude data are used mainly to position the first laser point cloud data in the unmanned aerial vehicle coordinate system.
The laser radar coordinate system in the embodiment of the invention is the local coordinate system of the laser radar device: the first laser point cloud data are the raw data collected by the laser radar device, established in a coordinate system that takes the device as its reference. The unmanned aerial vehicle coordinate system takes the unmanned aerial vehicle as its reference; in general, the position of the unmanned aerial vehicle is defined as the origin of this coordinate system. The geodetic coordinate system, also called the geographic coordinate system, describes positions on the earth: it takes the earth's center as origin and determines a point on the earth by longitude, latitude and altitude.
Further, when the first laser point cloud data, the attitude data and the GPS position data of the target unmanned aerial vehicle are collected, the target unmanned aerial vehicle flies in a preset area along a preset route. To ensure safety, the unmanned aerial vehicle in the embodiment of the invention flies along a preset flight path within a preset area.
Further, timestamp information is added to each of the first laser point cloud data, the attitude data and the GPS position data.
In the embodiment of the invention, real-time three-dimensional modeling of the target unmanned aerial vehicle requires converting the first laser point cloud data, the attitude data and the GPS position data collected at the same moment into the third laser point cloud data of the geodetic coordinate system; the three data streams must therefore be temporally consistent, and this consistency is ensured by adding timestamp information to each of them.
Before data collection begins, the laser radar device, the inertial measurement unit and the GPS module that acquires the GPS position data of the target unmanned aerial vehicle perform initial clock synchronization calibration according to an initial clock synchronization instruction. After the initial clock synchronization calibration is completed, the laser radar device, the inertial measurement unit and the GPS module carry out data acquisition, and perform periodic clock synchronization calibration according to periodic clock synchronization instructions.
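The pairing of the three timestamped streams can be sketched as follows. This is an illustrative sketch in NumPy: the function name, the nearest-neighbour matching rule and the `max_skew` tolerance are assumptions, since the patent does not specify how samples are matched.

```python
import numpy as np

def align_by_timestamp(lidar_ts, imu_ts, gps_ts, max_skew=0.005):
    """For each lidar timestamp, find the nearest IMU and GPS samples.

    Returns index triples (i_lidar, i_imu, i_gps) whose timestamps all
    fall within max_skew seconds of the lidar timestamp; lidar frames
    without close enough companions are dropped as temporally
    inconsistent.
    """
    lidar_ts = np.asarray(lidar_ts)
    imu_ts = np.asarray(imu_ts)
    gps_ts = np.asarray(gps_ts)
    triples = []
    for i, t in enumerate(lidar_ts):
        j = int(np.argmin(np.abs(imu_ts - t)))   # nearest IMU sample
        k = int(np.argmin(np.abs(gps_ts - t)))   # nearest GPS sample
        if abs(imu_ts[j] - t) <= max_skew and abs(gps_ts[k] - t) <= max_skew:
            triples.append((i, j, k))
    return triples
```

A lidar frame whose closest GPS fix is, say, 0.1 s away would be discarded rather than fused with stale position data.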
In the embodiment of the invention, the third laser point cloud data is processed by the central processing unit to complete the three-dimensional modeling of the unmanned aerial vehicle. Further optionally, the laser radar device, the inertial measurement unit and the GPS module are integrated in a data acquisition system, wherein the inertial measurement unit and the GPS module are both mounted on the target unmanned aerial vehicle, and the specific mounting location is not limited in this regard. The central processing unit is in communication connection with the data acquisition system, and a user sends related instructions to a control center of the data acquisition system through the central processing unit so as to control laser radar equipment, an inertial measurement unit and a GPS module in the data acquisition system to execute the related instructions.
Further, a time synchronization monitoring mechanism is provided, and the time synchronization states of the laser radar device, the inertial measurement unit and the GPS module are monitored through the time synchronization monitoring mechanism. The time synchronization state comprises time stamp deviation, whether the time stamp deviation between the monitored clocks of the laser radar device, the inertial measurement unit and the GPS module and the reference clock is larger than a preset time stamp threshold value is judged by setting the reference clock, and under the condition that the judgment is yes, an automatic clock adjustment mechanism is executed.
As a preferred implementation of the embodiment of the invention, the adjustment value for any clock whose timestamp deviation exceeds the preset timestamp threshold is calculated through a PID control algorithm; furthermore, an adaptive mechanism is introduced so that the parameters of the PID control algorithm can be adjusted dynamically according to the environment and the system, adapting to different flight conditions and environmental changes. In addition, a prediction model is set up through an MPC algorithm: a time-series model is built from historical clock-drift data to predict the future drift of each clock, so that the MPC algorithm can anticipate the drift trend and adjust in advance.
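A minimal sketch of the PID-based clock adjustment follows. The gains are illustrative placeholders and the class name is hypothetical; the patent specifies neither the gains nor the adaptive or MPC details.

```python
class ClockPID:
    """Minimal PID loop driving a device clock's offset toward a
    reference clock; gains are illustrative, not from the patent."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, offset, dt):
        """offset = device clock minus reference clock, in seconds.

        Returns the correction to subtract from the device clock
        this cycle.
        """
        self.integral += offset * dt
        derivative = (offset - self.prev_error) / dt
        self.prev_error = offset
        return self.kp * offset + self.ki * self.integral + self.kd * derivative
```

An adaptive variant would retune `kp`, `ki`, `kd` at run time; an MPC variant would replace `step` with a prediction over a horizon of expected drift.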
Furthermore, a clock synchronization network is constructed, and time information is mutually communicated and shared among the laser radar device, the inertial measurement unit and the GPS module through the clock synchronization network. Through network synchronization, the laser radar device, the inertial measurement unit and the GPS module can update accurate clock synchronization according to the timestamp information of each other, so that clock drift of the whole system is reduced.
Further, converting the first laser point cloud data from the laser radar coordinate system to the unmanned aerial vehicle coordinate system according to the attitude data specifically includes:
constructing a first rotation matrix from the roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the first rotation matrix represents the rotation from the laser radar coordinate system to the unmanned aerial vehicle coordinate system;
obtaining an angular velocity vector from the angular velocity in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the rotation matrix to obtain the rate of change of the rotation matrix, and updating the first rotation matrix according to this rate of change to obtain a first updated rotation matrix;
calculating the velocity change of the target unmanned aerial vehicle from the linear acceleration in its attitude data, and setting a translation vector according to this velocity change;
multiplying the first laser point cloud data in the laser radar coordinate system by the first updated rotation matrix, and adding the result to the translation vector to obtain the laser point cloud data in the unmanned aerial vehicle coordinate system.
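The steps above can be sketched as follows. The ZYX (yaw–pitch–roll) Euler convention, the first-order integration step and the SVD re-orthonormalisation are common implementation choices, not details fixed by the patent.

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix from roll, pitch, yaw (ZYX convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def skew(w):
    """Skew-symmetric matrix [w]_x of an angular velocity vector."""
    wx, wy, wz = w
    return np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])

def update_rotation(R, omega, dt):
    """Rdot = [omega]_x R; one first-order integration step, then
    re-orthonormalisation via SVD so R stays a valid rotation."""
    R_new = R + skew(omega) @ R * dt
    u, _, vt = np.linalg.svd(R_new)
    return u @ vt

def lidar_to_body(points, R, t):
    """Apply p' = R p + t to an (N, 3) array of lidar points."""
    return points @ R.T + t
```

The translation vector `t` would be set from the velocity change integrated from the linear acceleration, as the description states.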
Further, converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and the GPS position data of the target unmanned aerial vehicle specifically includes:
constructing a second rotation matrix from the roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the second rotation matrix represents the rotation from the unmanned aerial vehicle coordinate system to the geodetic coordinate system;
obtaining an angular velocity vector from the angular velocity data in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the second rotation matrix to obtain the rate of change of the second rotation matrix, and updating the second rotation matrix according to this rate of change to obtain a second updated rotation matrix;
and constructing a position vector from the GPS position data of the target unmanned aerial vehicle, multiplying the second laser point cloud data in the unmanned aerial vehicle coordinate system by the second updated rotation matrix, and adding the result to the position vector to obtain the laser point cloud data in the geodetic coordinate system.
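The construction of the position vector from GPS data can be sketched as below. The flat-earth East-North-Up approximation and the reference-point convention are illustrative assumptions: the patent does not specify which geodetic conversion is used, and a full WGS-84 ECEF transform would replace this in practice.

```python
import numpy as np

def gps_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Flat-earth approximation: convert a GPS fix (degrees, metres)
    to a local East-North-Up position vector relative to a reference
    point, adequate over a small preset flight area."""
    R_EARTH = 6378137.0  # WGS-84 equatorial radius, metres
    d_lat = np.radians(lat - lat0)
    d_lon = np.radians(lon - lon0)
    north = d_lat * R_EARTH
    east = d_lon * R_EARTH * np.cos(np.radians(lat0))
    up = alt - alt0
    return np.array([east, north, up])

def body_to_geodetic(points, R2, p_gps):
    """Apply p'' = R2 p' + p_gps to an (N, 3) array of body-frame
    points, where p_gps is the position vector built from GPS data."""
    return points @ R2.T + p_gps
```

With the drone 10 m above the reference point, a point at the body-frame origin lands 10 m up in the local geodetic frame.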
Further, the target unmanned aerial vehicle is provided with a camera device, which is used for collecting environment images of the preset area while the target unmanned aerial vehicle flies within it.
In this embodiment, the preset area environment image is an image that includes the target unmanned aerial vehicle and is combined with the flight environment.
The real-time three-dimensional modeling of the target unmanned aerial vehicle according to the third laser point cloud data specifically comprises the following steps:
Normal estimation is performed on each coordinate point in the third laser point cloud data: the normal estimation vector of each coordinate point is obtained by calculating the average normal of its adjacent points. After the normal estimation vector of every coordinate point has been calculated, the normal field of the third laser point cloud data is established from these vectors.
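A per-point normal can be estimated from local neighbours as sketched below. This uses PCA over the k nearest neighbours (the normal is the smallest-eigenvalue direction of the local covariance), a standard stand-in for the neighbour-averaging rule the patent describes without spelling out; k and the brute-force neighbour search are illustrative.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point normal via PCA over the k nearest neighbours."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    normals = np.zeros_like(points)
    # pairwise squared distances: O(n^2), fine for a sketch
    # (a KD-tree would be used at scale)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        idx = np.argsort(d2[i])[:k + 1]   # neighbours incl. the point itself
        nbrs = points[idx]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]        # smallest-eigenvalue direction
    return normals
```

On a planar patch the estimated normals align with the plane's normal (up to sign); a consistent orientation step would follow before building the normal field.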
And reconstructing the surface of the target unmanned aerial vehicle through the Poisson equation according to the normal field of the third laser point cloud data to obtain an initial three-dimensional model of the target unmanned aerial vehicle.
And carrying out feature extraction on the environment image of the preset area through a SLAM algorithm to obtain a first environment image feature, and establishing a motion model of the target unmanned aerial vehicle according to the first environment image feature, the attitude data and the GPS position data, wherein the timestamp information of the first environment image feature, the attitude data and the GPS position data is consistent.
The initial three-dimensional model of the target unmanned aerial vehicle is updated in real time through the motion model, so that real-time three-dimensional modeling of the target unmanned aerial vehicle is realized, and the real-time three-dimensional model of the target unmanned aerial vehicle comprises the real-time attitude of the target unmanned aerial vehicle and the real-time flight environment information of the target unmanned aerial vehicle under that attitude.
It can be understood that the initial three-dimensional model of the target unmanned aerial vehicle in the embodiment of the present invention is a three-dimensional model of the unmanned aerial vehicle that is relatively stationary in the flight state. The first environment image features, the attitude data and the GPS position data used for constructing the motion model are real-time data with consistent timestamp information. Because the motion model is built from real-time data comprising the real-time attitude, real-time position and environment information of the target unmanned aerial vehicle in flight, and the real-time position and the environment information are expressed in the same coordinate system, the target unmanned aerial vehicle is fused with its flight environment; updating the initial three-dimensional model of the target unmanned aerial vehicle through the motion model therefore yields the real-time three-dimensional model of the target unmanned aerial vehicle.
Further, in the embodiment of the invention, the real-time three-dimensional model of the target unmanned aerial vehicle is also visualized through a UI interface: a first visualization area is set on the UI interface for displaying the real-time flight state of the target unmanned aerial vehicle combined with the preset flight area environment, and a second visualization area is set for displaying the real-time flight state of the target unmanned aerial vehicle combined with the preset flight area environment and a map.
Example two
The second aspect of the present invention discloses a system for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology, please refer to fig. 2, and fig. 2 is a schematic structural diagram of a system for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology according to still another embodiment of the present invention. The system comprises a data processing unit and a central processing unit, wherein the data processing unit comprises a first acquisition subunit, a second acquisition subunit, an acquisition subunit and a conversion subunit;
the first acquisition subunit is used for acquiring first laser point cloud data of the target unmanned aerial vehicle, and the first laser point cloud data is established by a laser radar coordinate system;
The second acquisition subunit is used for acquiring attitude data of the target unmanned aerial vehicle, and the conversion subunit is used for converting the first laser point cloud data from the laser radar coordinate system to the unmanned aerial vehicle coordinate system according to the attitude data and taking the laser point cloud data converted into the unmanned aerial vehicle coordinate system as second laser point cloud data;
The conversion subunit is used for converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and GPS position data of the target unmanned aerial vehicle, and taking the laser point cloud data converted into the geodetic coordinate system as third laser point cloud data;
and the central processing unit is used for receiving the third laser point cloud data and carrying out real-time three-dimensional modeling on the target unmanned aerial vehicle according to the third laser point cloud data.
Further, the first laser point cloud data of the target unmanned aerial vehicle is collected by laser radar equipment serving as the first acquisition subunit, and the laser radar equipment is mounted on the target unmanned aerial vehicle together with a damping device.
Further, the inertial measurement unit is used as a second acquisition subunit to acquire attitude data of the target unmanned aerial vehicle, wherein the attitude data comprise roll, pitch, yaw, angular velocity and linear acceleration of the target unmanned aerial vehicle.
Further, when the first laser point cloud data, the attitude data and the GPS position data of the target unmanned aerial vehicle are collected, the target unmanned aerial vehicle flies in a preset area along a preset route.
Further, timestamp information is added to the first laser point cloud data, the attitude data and the GPS position data;
the central processing unit is also used for issuing a data acquisition instruction, a clock initial synchronization instruction and a clock periodic synchronization instruction to the data processing unit;
The data processing unit is also provided with a judging subunit; before the judging subunit determines that data acquisition has started, the laser radar equipment, the inertial measurement unit and the GPS module for acquiring the GPS position data of the target unmanned aerial vehicle perform initial clock synchronization calibration according to a clock initial synchronization instruction;
The laser radar equipment, the inertial measurement unit and the GPS module execute data acquisition after the initial clock synchronization calibration is completed, and execute periodic clock synchronization calibration according to clock periodic synchronization instructions;
setting a time synchronization monitoring mechanism, and monitoring the time synchronization states of the laser radar equipment, the inertial measurement unit and the GPS module through the time synchronization monitoring mechanism;
the time synchronization state comprises timestamp deviation; a reference clock is set, whether the timestamp deviation between the monitored clocks of the laser radar equipment, the inertial measurement unit and the GPS module and the reference clock is larger than a preset timestamp threshold value is judged, and if so, an automatic clock adjustment mechanism is executed.
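The monitoring step amounts to comparing each device clock against the reference clock. A minimal sketch follows, in which the device names, the use of seconds as the unit, and the threshold value are illustrative assumptions:

```python
def find_drifted_devices(device_timestamps, reference, threshold):
    # Return the devices whose timestamp deviates from the reference clock by
    # more than the preset threshold; these would trigger the automatic
    # clock adjustment mechanism.
    return [name for name, ts in device_timestamps.items()
            if abs(ts - reference) > threshold]

drifted = find_drifted_devices(
    {"lidar": 100.0040, "imu": 100.0005, "gps": 99.9990},
    reference=100.0,
    threshold=0.002)
```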
Further, the conversion subunit converting the first laser point cloud data from the laser radar coordinate system to the unmanned aerial vehicle coordinate system according to the attitude data specifically includes:
Constructing a first rotation matrix according to roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the first rotation matrix represents rotation from a laser radar coordinate system to the unmanned aerial vehicle coordinate system;
Obtaining an angular velocity vector according to the angular velocity in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the first rotation matrix to obtain the rate of change of the first rotation matrix, and updating the first rotation matrix according to its rate of change to obtain a first updated rotation matrix;
calculating the speed change of the target unmanned aerial vehicle according to the linear acceleration in the attitude data of the target unmanned aerial vehicle, and setting a translation vector according to the speed change of the target unmanned aerial vehicle;
Multiplying the first laser point cloud data in the laser radar coordinate system by the first updated rotation matrix, and adding the multiplication result to the translation vector to obtain the laser point cloud data in the unmanned aerial vehicle coordinate system.
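The translation-vector step can be sketched as one integration step of the measured linear acceleration. Treating the acceleration as constant over the step, and the particular time step and initial velocity used below, are assumptions for illustration:

```python
def integrate_step(velocity, accel, dt):
    # Constant-acceleration step: v1 = v0 + a*dt, and the displacement
    # over the step (the translation vector) is t = v0*dt + 0.5*a*dt^2.
    new_velocity = [velocity[i] + accel[i] * dt for i in range(3)]
    translation = [velocity[i] * dt + 0.5 * accel[i] * dt * dt
                   for i in range(3)]
    return new_velocity, translation

# Hypothetical values: 1 m/s forward velocity, gravity on z, 0.1 s step.
new_v, trans = integrate_step([1.0, 0.0, 0.0], [0.0, 0.0, -9.8], 0.1)
```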
Further, the converting subunit converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and the GPS position data of the target unmanned aerial vehicle specifically includes:
Constructing a second rotation matrix according to the roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the second rotation matrix represents the rotation from the unmanned aerial vehicle coordinate system to the geodetic coordinate system;
Obtaining an angular velocity vector according to the angular velocity data in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the second rotation matrix to obtain the rate of change of the second rotation matrix, and updating the second rotation matrix according to its rate of change to obtain a second updated rotation matrix;
And constructing a position vector according to the GPS position data of the target unmanned aerial vehicle, multiplying the second laser point cloud data in the unmanned aerial vehicle coordinate system by the second updated rotation matrix, and adding the multiplication result to the position vector to obtain the laser point cloud data in the geodetic coordinate system.
Further, the target unmanned aerial vehicle is equipped with camera equipment, and the camera equipment is used for collecting environment images of the preset area while the target unmanned aerial vehicle flies in the preset area;
the method for carrying out real-time three-dimensional modeling on the target unmanned aerial vehicle by the central processing unit according to the third laser point cloud data specifically comprises the following steps:
Performing normal estimation on each coordinate point in the third laser point cloud data, wherein the normal estimation vector of each coordinate point is obtained by calculating the average normal of its neighboring points; after the normal estimation vector of every coordinate point in the third laser point cloud data has been calculated, a normal field of the third laser point cloud data is established from these normal estimation vectors;
Reconstructing the surface of the target unmanned aerial vehicle through the Poisson equation according to the normal field of the third laser point cloud data to obtain an initial three-dimensional model of the target unmanned aerial vehicle;
performing feature extraction on the preset area environment image through a SLAM algorithm to obtain a first environment image feature, and establishing a motion model of the target unmanned aerial vehicle according to the first environment image feature, the attitude data and the GPS position data, wherein the timestamp information of the first environment image feature, the attitude data and the GPS position data is consistent;
the initial three-dimensional model of the target unmanned aerial vehicle is updated in real time through the motion model, so that real-time three-dimensional modeling of the target unmanned aerial vehicle is achieved, and the real-time three-dimensional model of the target unmanned aerial vehicle comprises the real-time attitude of the target unmanned aerial vehicle and the real-time flight environment information of the target unmanned aerial vehicle under that attitude.
Further, the system further comprises a display unit, wherein the display unit is used for visualizing the real-time three-dimensional model of the target unmanned aerial vehicle, a first visualization area is arranged for displaying the real-time flight state of the target unmanned aerial vehicle combined with the preset flight area environment, and a second visualization area is arranged for displaying the real-time flight state of the target unmanned aerial vehicle combined with the preset flight area environment and the map.
It should be noted that, the implementation process of the second embodiment is similar to that of the first embodiment, and will not be described in detail in the second embodiment.
Finally, it should be noted that the method and system for constructing a three-dimensional model of an unmanned aerial vehicle by utilizing a laser point cloud scanning technology disclosed above are only preferred embodiments of the invention and are intended to illustrate, not limit, the technical scheme of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes recorded in the various embodiments can still be modified, or some of their technical features can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. A method for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology, the method comprising:
Collecting first laser point cloud data of a target unmanned aerial vehicle, wherein the first laser point cloud data is established by a laser radar coordinate system;
acquiring attitude data of a target unmanned aerial vehicle, converting the first laser point cloud data from a laser radar coordinate system to an unmanned aerial vehicle coordinate system according to the attitude data, and taking the laser point cloud data converted to the unmanned aerial vehicle coordinate system as second laser point cloud data;
Acquiring position data of a GPS of the target unmanned aerial vehicle, converting the second laser point cloud data from an unmanned aerial vehicle coordinate system to a geodetic coordinate system according to the attitude data and the GPS position data of the target unmanned aerial vehicle, and taking the laser point cloud data converted to the geodetic coordinate system as third laser point cloud data;
performing real-time three-dimensional modeling on the target unmanned aerial vehicle according to the third laser point cloud data;
The converting the first laser point cloud data from a laser radar coordinate system to an unmanned aerial vehicle coordinate system according to the attitude data specifically includes:
Constructing a first rotation matrix according to roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the first rotation matrix represents rotation from a laser radar coordinate system to the unmanned aerial vehicle coordinate system;
Obtaining an angular velocity vector according to the angular velocity in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the first rotation matrix to obtain the rate of change of the first rotation matrix, and updating the first rotation matrix according to its rate of change to obtain a first updated rotation matrix;
calculating the speed change of the target unmanned aerial vehicle according to the linear acceleration in the attitude data of the target unmanned aerial vehicle, and setting a translation vector according to the speed change of the target unmanned aerial vehicle;
Multiplying the first laser point cloud data under the laser radar coordinate system by the first updated rotation matrix, and adding the multiplication result and the translation vector to obtain the laser point cloud data under the unmanned aerial vehicle coordinate system;
the converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and the GPS position data of the target unmanned aerial vehicle specifically comprises:
Constructing a second rotation matrix according to the roll, pitch and yaw data in the attitude data of the target unmanned aerial vehicle, wherein the second rotation matrix represents the rotation from the unmanned aerial vehicle coordinate system to the geodetic coordinate system;
Obtaining an angular velocity vector according to the angular velocity data in the attitude data of the target unmanned aerial vehicle, multiplying the skew-symmetric matrix of the angular velocity vector by the second rotation matrix to obtain the rate of change of the second rotation matrix, and updating the second rotation matrix according to its rate of change to obtain a second updated rotation matrix;
Constructing a position vector according to the GPS position data of the target unmanned aerial vehicle, multiplying the second laser point cloud data under the unmanned aerial vehicle coordinate system by the second updated rotation matrix, and adding the multiplication result and the position vector to obtain the laser point cloud data under the geodetic coordinate system;
the method further comprises that the target unmanned aerial vehicle is equipped with camera equipment, and the camera equipment is used for collecting environment images of the preset area while the target unmanned aerial vehicle flies in the preset area;
The performing real-time three-dimensional modeling on the target unmanned aerial vehicle according to the third laser point cloud data specifically includes:
Performing normal estimation on each coordinate point in the third laser point cloud data, wherein the normal estimation vector of each coordinate point is obtained by calculating the average normal of its neighboring points; after the normal estimation vector of every coordinate point in the third laser point cloud data has been calculated, a normal field of the third laser point cloud data is established from these normal estimation vectors;
Reconstructing the surface of the target unmanned aerial vehicle through the Poisson equation according to the normal field of the third laser point cloud data to obtain an initial three-dimensional model of the target unmanned aerial vehicle;
performing feature extraction on the preset area environment image through a SLAM algorithm to obtain a first environment image feature, and establishing a motion model of the target unmanned aerial vehicle according to the first environment image feature, the attitude data and the GPS position data, wherein the timestamp information of the first environment image feature, the attitude data and the GPS position data is consistent;
the initial three-dimensional model of the target unmanned aerial vehicle is updated in real time through the motion model, so that real-time three-dimensional modeling of the target unmanned aerial vehicle is achieved, and the real-time three-dimensional model of the target unmanned aerial vehicle comprises the real-time attitude of the target unmanned aerial vehicle and the real-time flight environment information of the target unmanned aerial vehicle under that attitude.
2. The method for constructing the three-dimensional model of the unmanned aerial vehicle by utilizing the laser point cloud scanning technology according to claim 1, wherein the laser radar equipment is used for acquiring first laser point cloud data of the target unmanned aerial vehicle and is carried on the target unmanned aerial vehicle in cooperation with the damping device.
3. The method for constructing a three-dimensional model of a drone using laser point cloud scanning techniques according to claim 2, wherein the attitude data of the target drone is collected by an inertial measurement unit, the attitude data including roll, pitch, yaw, angular velocity, and linear acceleration of the target drone.
4. The method for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology according to claim 3, wherein the target unmanned aerial vehicle flies in a preset area according to a preset route when acquiring first laser point cloud data, attitude data and position data of a GPS of the target unmanned aerial vehicle.
5. The method for constructing a three-dimensional model of an unmanned aerial vehicle using laser point cloud scanning technology as claimed in claim 4, wherein timestamp information is added to the first laser point cloud data, the attitude data and the GPS position data;
before it is judged that data collection has started, the laser radar equipment, the inertial measurement unit and the GPS module for acquiring the GPS position data of the target unmanned aerial vehicle perform initial clock synchronization calibration according to a clock initial synchronization instruction;
The laser radar equipment, the inertial measurement unit and the GPS module execute data acquisition after the initial clock synchronization calibration is completed, and execute periodic clock synchronization calibration according to clock periodic synchronization instructions;
Setting a time synchronization monitoring mechanism, and monitoring the time synchronization states of the laser radar equipment, the inertial measurement unit and the GPS module through the time synchronization monitoring mechanism;
The time synchronization state comprises timestamp deviation; a reference clock is set, whether the timestamp deviation between the monitored clocks of the laser radar equipment, the inertial measurement unit and the GPS module and the reference clock is larger than a preset timestamp threshold value is judged, and if so, an automatic clock adjustment mechanism is executed.
6. The method for constructing a three-dimensional model of a drone using laser point cloud scanning techniques of claim 5, further comprising visualizing the real-time three-dimensional model of the target drone, setting a first visualization area for showing a real-time flight status of the target drone in combination with a preset flight area environment, and setting a second visualization area for showing the real-time flight status of the target drone in combination with the preset flight area environment and a map.
7. A system for constructing a three-dimensional model of an unmanned aerial vehicle by using a laser point cloud scanning technology, which is based on the method for constructing the three-dimensional model of the unmanned aerial vehicle by using the laser point cloud scanning technology according to any one of claims 1 to 6, wherein the system comprises a data processing unit and a central processing unit, wherein the data processing unit comprises a first acquisition subunit, a second acquisition subunit, an acquisition subunit and a conversion subunit;
The first acquisition subunit is used for acquiring first laser point cloud data of the target unmanned aerial vehicle, and the first laser point cloud data is established by a laser radar coordinate system;
The second acquisition subunit is used for acquiring attitude data of the target unmanned aerial vehicle, and the conversion subunit is used for converting the first laser point cloud data from the laser radar coordinate system to the unmanned aerial vehicle coordinate system according to the attitude data and taking the laser point cloud data converted into the unmanned aerial vehicle coordinate system as second laser point cloud data;
The acquisition subunit is used for acquiring the position data of the GPS of the target unmanned aerial vehicle, the conversion subunit is used for converting the second laser point cloud data from the unmanned aerial vehicle coordinate system to the geodetic coordinate system according to the attitude data and the GPS position data of the target unmanned aerial vehicle, and the laser point cloud data converted into the geodetic coordinate system is used as third laser point cloud data;
the central processing unit is used for receiving the third laser point cloud data and carrying out real-time three-dimensional modeling on the target unmanned aerial vehicle according to the third laser point cloud data.
CN202410511588.0A 2024-04-26 2024-04-26 Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology Active CN118261982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410511588.0A CN118261982B (en) 2024-04-26 2024-04-26 Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology

Publications (2)

Publication Number Publication Date
CN118261982A (en) 2024-06-28
CN118261982B (en) 2024-09-17

Family

ID=91613120


Country Status (1)

Country Link
CN (1) CN118261982B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063465A (en) * 2022-06-15 2022-09-16 华南理工大学 Unmanned vehicle driving road condition modeling method based on laser radar
CN115272452A (en) * 2022-06-30 2022-11-01 深圳市镭神智能系统有限公司 Target detection positioning method and device, unmanned aerial vehicle and storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN111198576A (en) * 2020-01-09 2020-05-26 哈尔滨工程大学 Control method, medium and unit for particle unmanned aerial vehicle under artificial intelligence big data
US12014642B2 (en) * 2020-05-14 2024-06-18 Raytheon Company Navigation based on earth centered earth fixed (ECEF) frame of reference
CN111427061A (en) * 2020-06-15 2020-07-17 北京云迹科技有限公司 Robot mapping method and device, robot and storage medium
US11367204B1 (en) * 2021-12-16 2022-06-21 Ecotron LLC Multi-sensor spatial data auto-synchronization system and method
CN114527777A (en) * 2022-01-12 2022-05-24 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle flight control method based on adaptive neural network
CN114637329A (en) * 2022-03-18 2022-06-17 西北工业大学 Airplane multi-machine intensive formation form reconstruction method and system
CN115556111B (en) * 2022-10-26 2023-08-18 哈尔滨工业大学 Flight mechanical arm coupling disturbance control method based on variable inertia parameter modeling
CN115902930A (en) * 2022-11-15 2023-04-04 上海大学 Unmanned aerial vehicle room built-in map and positioning method for ship detection


Non-Patent Citations (1)

Title
Analysis and Correction of Motion Distortion of Airborne Laser Imaging Radar; Wang Weiran; Journal of University of Electronic Science and Technology of China; 1998-12-31 (No. 06); Part 2 *


Similar Documents

Publication Publication Date Title
CN112567201B (en) Distance measuring method and device
US10322819B2 (en) Autonomous system for taking moving images from a drone, with target tracking and improved target location
JP2022502784A (en) Real-time mapping systems, methods, and computer programs in a moving object environment
US10437260B2 (en) Systems and methods for controlling aerial vehicles
WO2022247498A1 (en) Unmanned aerial vehicle monitoring
JP6087712B2 (en) DISTRIBUTION DATA DISPLAY DEVICE, METHOD, AND PROGRAM
US20160259043A1 (en) Method for determining trajectories of moving physical objects in a space on the basis of sensor data of a plurality of sensors
US20220113421A1 (en) Online point cloud processing of lidar and camera data
WO2021065543A1 (en) Information processing device, information processing method, and program
CN111784837A (en) High-precision map generation method and device
CN113887400B (en) Obstacle detection method, model training method and device and automatic driving vehicle
US20190033863A1 (en) Systems and methods for controlling aerial vehicles
US20190033886A1 (en) Systems and methods for controlling aerial vehicles
US20210229810A1 (en) Information processing device, flight control method, and flight control system
CN114111776A (en) Positioning method and related device
CN117685953A (en) UWB and vision fusion positioning method and system for multi-unmanned aerial vehicle co-positioning
CN115556769A (en) Obstacle state quantity determination method and device, electronic device and medium
CN118261982B (en) Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology
US20210404840A1 (en) Techniques for mapping using a compact payload in a movable object environment
US10437259B2 (en) Systems and methods for controlling aerial vehicles
WO2022126085A1 (en) Camera triggering and multi-camera photogrammetry
WO2020264528A1 (en) Calibration of inertial measurement units of vehicles using localization
CN114993306B (en) Scale self-recovery visual inertial integrated navigation method and device
JP7004374B1 (en) Movement route generation method and program of moving object, management server, management system
WO2021064982A1 (en) Information processing device and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant