
CN112504261B - Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points - Google Patents

Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points

Info

Publication number
CN112504261B
CN112504261B (application CN202011237125.8A)
Authority
CN
China
Prior art keywords: unmanned aerial vehicle, current moment, pose, anchor point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011237125.8A
Other languages
Chinese (zh)
Other versions
CN112504261A (en)
Inventor
相晓嘉
周晗
唐邓清
常远
闫超
黄依新
陈紫叶
兰珍
刘兴宇
李子杏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202011237125.8A
Publication of CN112504261A
Application granted
Publication of CN112504261B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Feedback Control In General (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a visual anchor point-based unmanned aerial vehicle landing pose filter estimation method and system. Oriented to the spatial position and attitude estimation requirements of the unmanned aerial vehicle landing process, the method constructs an unmanned aerial vehicle pose estimation extended Kalman filter model based on visual anchor point measurements and, relying on extended Kalman filter theory, realizes optimal estimation of the unmanned aerial vehicle pose under the minimum sum of squared error 2-norms criterion. This effectively reduces the influence of ground-based vision observation errors on pose estimation accuracy during landing and, compared with traditional methods, greatly improves the accuracy and robustness of pose estimation in the unmanned aerial vehicle landing process.

Description

Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points
Technical Field
The invention relates to the technical field of autonomous landing of unmanned aerial vehicles, and in particular to a visual anchor point-based unmanned aerial vehicle landing pose filtering estimation method and system.
Background
During autonomous take-off and landing, acquiring the position and attitude of the unmanned aerial vehicle in real time by combining an onboard global positioning system with an inertial navigation system is currently the main means by which an unmanned aerial vehicle senses its own state. However, environmental factors such as magnetic fields and temperature easily interfere with the onboard positioning system, so the onboard system alone cannot guarantee stable and accurate pose information throughout the landing process. By observing the landing process with a ground-based monocular vision system, computer vision techniques can estimate the spatial position and attitude of the unmanned aerial vehicle in real time, assisting the onboard positioning system and providing more accurate and stable real-time spatial pose information. At present, conventional methods that estimate target position and attitude from two-dimensional images, such as the binocular ranging principle and PnP problem solving, still suffer from insufficient accuracy and robustness. Therefore, there is an urgent need for a high-precision, robust unmanned aerial vehicle pose estimation method based on monocular vision.
Disclosure of Invention
The invention provides a visual anchor point-based unmanned aerial vehicle landing pose filtering estimation method and system to overcome the defects of low precision, low robustness and the like in the prior art.
In order to achieve the above purpose, the invention provides an unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points, comprising the following steps:
according to the measurement conditions of visual anchor points in the unmanned aerial vehicle landing process, constructing an unmanned aerial vehicle pose estimation extended Kalman filter model; the model comprises a system state prediction equation and a system observation equation;
defining the visual anchor point measurement of the unmanned aerial vehicle according to the generalized image features and application scenario characteristics of the unmanned aerial vehicle;
acquiring a space pose of the unmanned aerial vehicle at the previous moment, and acquiring a predicted value of the space pose of the unmanned aerial vehicle at the current moment by utilizing a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the previous moment;
according to the predicted value of the spatial pose of the unmanned aerial vehicle at the current moment and other system observables, a predicted value of a measured value of a visual anchor point of the unmanned aerial vehicle at the current moment is obtained by utilizing a system observation equation, and an image position predicted value of the visual anchor point of the unmanned aerial vehicle at the current moment is obtained according to the predicted value of the measured value;
acquiring the image position measurements of the unmanned aerial vehicle visual anchor points at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, using the predicted spatial pose of the unmanned aerial vehicle, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment.
In order to achieve the above purpose, the invention further provides an unmanned aerial vehicle landing pose filtering estimation system based on visual anchor points, comprising:
the model construction module is used for constructing the unmanned aerial vehicle pose estimation extended Kalman filter model according to the measurement conditions of the visual anchor points in the unmanned aerial vehicle landing process; the model comprises a system state prediction equation and a system observation equation; and for defining the visual anchor point measurement of the unmanned aerial vehicle according to its generalized image features and application scenario characteristics;
the unmanned aerial vehicle space pose prediction module is used for acquiring the unmanned aerial vehicle space pose at the last moment, and acquiring a predicted value of the unmanned aerial vehicle space pose at the current moment by utilizing a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the unmanned aerial vehicle space pose at the last moment;
the unmanned aerial vehicle visual anchor point image position prediction module is used for obtaining the predicted measurement of the unmanned aerial vehicle visual anchor points at the current moment by using the system observation equation, and obtaining the predicted image positions of the visual anchor points at the current moment from the predicted measurement;
the unmanned aerial vehicle spatial pose acquisition module is used for acquiring the image position measurements of the unmanned aerial vehicle visual anchor points at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, using the predicted spatial pose, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment.
To achieve the above object, the invention also proposes a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
Compared with the prior art, the invention has the beneficial effects that:
The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method is oriented to the spatial position and attitude estimation requirements of the unmanned aerial vehicle landing process. It constructs an unmanned aerial vehicle pose estimation extended Kalman filter model based on visual anchor point measurements and, relying on extended Kalman filter theory, realizes optimal estimation of the pose under the minimum sum of squared error 2-norms criterion, effectively reducing the influence of ground-based vision observation errors on pose estimation accuracy during landing and greatly improving accuracy and robustness compared with traditional methods.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below represent only some embodiments of the present invention; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of the visual anchor point-based unmanned aerial vehicle landing pose filtering estimation method;
Fig. 2 is a diagram of the physical coordinate systems involved when the ground-based vision system estimates the unmanned aerial vehicle pose in real time during the landing process;
Fig. 3 is a graph of the unmanned aerial vehicle landing trajectories and the positioning and attitude-determination errors produced by the method provided by the invention and by the conventional method;
Fig. 4 is a root-mean-square error diagram of target pose estimation at three stages of unmanned aerial vehicle landing, using the method provided by the invention and the conventional method.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In addition, the technical solutions of the embodiments of the present invention may be combined with each other, but only on the basis that a person skilled in the art can implement them; when a combination of technical solutions is contradictory or cannot be implemented, the combination should be considered not to exist and does not fall within the scope of protection claimed by the present invention.
The invention provides an unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points, as shown in fig. 1, comprising the following steps:
101: according to the measurement conditions of visual anchor points in the unmanned aerial vehicle landing process, constructing an unmanned aerial vehicle pose estimation extended Kalman filter model; the model comprises a system state prediction equation and a system observation equation;
during the unmanned aerial vehicle landing process, ground-based vision has observation errors that directly affect the accuracy of the pose estimate; the unmanned aerial vehicle pose estimation extended Kalman filter model is therefore constructed to reduce the influence of these observation errors.
102: defining the visual anchor point measurement of the unmanned aerial vehicle according to the generalized image features and application scenario characteristics of the unmanned aerial vehicle;
the generalized image features of the unmanned aerial vehicle are mainly divided into 3 categories of points, lines and planes.
The application scenario characteristics mainly refer to the differences in unmanned aerial vehicle landing trajectories and in environmental factors such as weather and time of day: features such as lines and surfaces, whose imaging conditions are more demanding, are difficult to form stably in the unmanned aerial vehicle's imagery, and the target's range of motion is wide.
103: acquiring a space pose of the unmanned aerial vehicle at the previous moment, and acquiring a predicted value of the space pose of the unmanned aerial vehicle at the current moment by utilizing a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the previous moment;
the unmanned aerial vehicle spatial pose includes a position and a pose euler angle in a world coordinate system.
The input of the unmanned aerial vehicle system at the current moment is mainly an acceleration item.
104: according to the predicted value of the spatial pose of the unmanned aerial vehicle at the current moment and other system observables, a predicted value of a measured value of a visual anchor point of the unmanned aerial vehicle at the current moment is obtained by utilizing a system observation equation, and an image position predicted value of the visual anchor point of the unmanned aerial vehicle at the current moment is obtained according to the predicted value of the measured value;
105: acquiring the image position measurements of the unmanned aerial vehicle visual anchor points at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, using the predicted spatial pose of the unmanned aerial vehicle, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment.
The image position measurements of the unmanned aerial vehicle visual anchor points at the current moment are obtained with existing visual anchor detection methods, for example point features such as SIFT or Harris corners, or blob features such as LoG and DoG, detected in the image.
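As a rough illustration of such anchor detection (not the patent's own implementation), the sketch below uses off-the-shelf OpenCV detectors and stacks the detected anchor image positions into the measurement vector z defined in step 102; the file name, parameter values and choice of detector are assumptions of this sketch.

```python
import cv2
import numpy as np

# Hypothetical input frame from the ground camera.
img = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)

# Corner-like point features (Harris corners via goodFeaturesToTrack).
corners = cv2.goodFeaturesToTrack(
    img, maxCorners=20, qualityLevel=0.01, minDistance=10,
    useHarrisDetector=True, k=0.04)

# Blob-like features (SIFT keypoints, detected internally with DoG).
sift = cv2.SIFT_create(nfeatures=20)
keypoints = sift.detect(img, None)

# Stack the M anchor image positions into the measurement vector
# z = [x_1^T, ..., x_M^T]^T (here built from the SIFT keypoints).
anchors = np.array([kp.pt for kp in keypoints], dtype=np.float64)  # (M, 2)
z = anchors.reshape(-1)  # (2M,)
```

In practice the detections would still have to be matched to the known anchor layout on the airframe before they can serve as the measurement z.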
The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method is oriented to the spatial position and attitude estimation requirements of the landing process. It constructs an unmanned aerial vehicle pose estimation extended Kalman filter model based on visual anchor point measurements and, relying on extended Kalman filter theory, realizes optimal estimation of the pose under the minimum sum of squared error 2-norms criterion, effectively reducing the influence of ground-based vision observation errors on pose estimation accuracy during landing and greatly improving accuracy and robustness compared with traditional methods.
In one embodiment, for step 101, the unmanned aerial vehicle pose estimation extended Kalman filter model comprises a system state prediction equation $f_s$ and a system observation equation $h$. For the unmanned aerial vehicle pose estimation problem, the unmanned aerial vehicle state $x$ is composed of the position $p^w$, velocity $v^w$, attitude Euler angles $\Theta^w$ and corresponding angular velocity $\omega^w$ of the unmanned aerial vehicle in the world coordinate system:

$$x = \left[(p^w)^T, (v^w)^T, (\Theta^w)^T, (\omega^w)^T\right]^T$$
in another embodiment, for step 102, defining the unmanned visual anchor measurement from the image generalized characteristics and application field Jing Tedian of the unmanned includes:
according to the generalized image characteristics and the application field Jing Tedian of the unmanned aerial vehicle, defining the visual anchor point measured value of the unmanned aerial vehicle as follows:
wherein z is a visual anchor point measured value of the unmanned aerial vehicle; m is the number of visual anchor points;is the image location of the mth visual anchor.
In the unmanned aerial vehicle pose estimation extended Kalman filter model, the visual anchor point measurement $z$ is composed of generalized image features of the unmanned aerial vehicle. Common generalized features fall mainly into 3 categories: points, lines and faces. These features tend to have relatively intuitive physical meanings that are easy to understand. Line and face features are common in unmanned aerial vehicle imagery, but their integrity is easily damaged by occlusion, which reduces the accuracy of feature detection. Point features often correspond to inflection points of the target contour, line intersections, or areas differing from their surroundings in the image, and they image more stably than line or face features. In addition, in the landing application scenario, considering the differences in landing trajectories and in environmental factors such as weather and time of day, features such as lines and faces, whose imaging conditions are more demanding, are difficult to form stably in the unmanned aerial vehicle's imagery. In summary, the measurement $z$ is defined by taking the visual inflection points of the unmanned aerial vehicle as the target's generalized anchor point features.
In a next embodiment, for step 103, according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the previous moment, a system state prediction equation is utilized to obtain a predicted value of the space pose of the unmanned aerial vehicle at the current moment, including:
301: according to the acceleration term input to the unmanned aerial vehicle system at the current moment and the spatial pose of the unmanned aerial vehicle at the previous moment, the predicted value of the unmanned aerial vehicle spatial pose at the current moment is obtained with the system state prediction equation as:

$$x_{k|k-1} = f_s(x_{k-1|k-1}, u_k)$$

wherein $f_s(\cdot)$ is the system state prediction equation; $u_k$ is the input of the unmanned aerial vehicle system at the current moment;
302: according to the unmanned aerial vehicle application scenario, neglecting the dynamic part of the unmanned aerial vehicle motion, the predicted value of the unmanned aerial vehicle spatial pose at the current moment is obtained as:

$$x_{k|k-1} = F_k x_{k-1|k-1}, \qquad F_k = \begin{bmatrix} I_{3\times3} & \Delta t_{k|k-1} & 0 & 0 \\ 0 & I_{3\times3} & 0 & 0 \\ 0 & 0 & I_{3\times3} & \Delta t_{k|k-1} \\ 0 & 0 & 0 & I_{3\times3} \end{bmatrix}$$

wherein $x_{k|k-1}$ is the predicted value of the unmanned aerial vehicle spatial pose at the current moment; $F_k$ is the state transition matrix; $x_{k-1|k-1}$ is the unmanned aerial vehicle spatial pose at the previous moment; $I_{3\times3}$ is the identity matrix; $\Delta t_{k|k-1}$ is a $3\times3$ diagonal matrix whose diagonal elements are $\Delta t$, the time difference between the current and previous moments; $p^w$ is the position; $v^w$ is the velocity; $\Theta^w$ is the attitude Euler angles; $\omega^w$ is the angular velocity.
Based on the kinematics and dynamics of object motion, the motion of the object from time k-1 to time k is determined by the velocity and angular velocity at time k-1 and by the acceleration and angular acceleration over that period. In the landing application scenario, the interval between two consecutive pose estimates is on the order of milliseconds, i.e., the time interval $\Delta t$ between moments k-1 and k is small, so the dynamic part of the unmanned aerial vehicle motion can be neglected; that is, within $\Delta t$ the unmanned aerial vehicle is assumed to translate and rotate uniformly.
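A minimal numpy sketch of this constant-velocity, constant-angular-rate prediction step, assuming the 12-dimensional state ordering $[p^w, v^w, \Theta^w, \omega^w]$ reconstructed above (the function name and interface are illustrative, not the patent's):

```python
import numpy as np

def predict_state(x_prev, dt):
    """Constant-velocity / constant-angular-rate prediction x_{k|k-1} = F_k x_{k-1|k-1}.

    x_prev: 12-vector [p (3), v (3), Theta (3), omega (3)] in the world frame.
    Returns (x_pred, F) so that F can be reused for covariance prediction.
    """
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    Dt = dt * I3  # the 3x3 diagonal block Delta-t_{k|k-1}
    F = np.block([
        [I3, Dt, Z3, Z3],   # p_k     = p_{k-1} + dt * v_{k-1}
        [Z3, I3, Z3, Z3],   # v_k     = v_{k-1}      (dynamics neglected)
        [Z3, Z3, I3, Dt],   # Theta_k = Theta_{k-1} + dt * omega_{k-1}
        [Z3, Z3, Z3, I3],   # omega_k = omega_{k-1}
    ])
    return F @ x_prev, F
```

Treating the angular-velocity state as the Euler-angle rate over $\Delta t$ is part of the same small-$\Delta t$ approximation the text describes.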
In a next embodiment, for step 104, according to the predicted value of the spatial pose of the unmanned aerial vehicle at the current time and other system observables, a predicted value of the measured value of the visual anchor of the unmanned aerial vehicle at the current time is obtained by using a system observation equation, including:
according to the predicted spatial pose of the unmanned aerial vehicle at the current moment and the other system observables, the predicted measurement of the visual anchor points at the current moment is obtained with the system observation equation as:

$$z_{k|k-1} = h(x_{k|k-1}) = \frac{1}{s} K' \, T^c_g \, T^g_{g'} \, T^{g'}_w \, T^w_b \, P^b, \qquad K' = \begin{bmatrix} f/d_x & 0 & c_x \\ 0 & f/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $z_{k|k-1}$ is the predicted measurement of the unmanned aerial vehicle visual anchor points at the current moment; $h(\cdot)$ is the system observation equation; $s$ is the image projection normalization factor; $K'$ is the camera intrinsic parameter matrix, $f$ the camera focal length, $d_x$ and $d_y$ the physical width and height of each pixel, and $(c_x, c_y)$ the pixel coordinates of the image center. Following the physical coordinate systems of the ground-based vision system and the unmanned aerial vehicle system illustrated in Fig. 2, $T^c_g$ is the homogeneous transformation matrix from the gimbal coordinate system g to the camera coordinate system c; $T^g_{g'}$ from the gimbal base coordinate system g' to the gimbal coordinate system g; $T^{g'}_w$ from the world coordinate system w to the gimbal base coordinate system g'; and $T^w_b$ from the unmanned aerial vehicle body coordinate system b to the world coordinate system w, which can be derived directly from $x_{k|k-1}$ via the relations among Euler angles, translation vectors and homogeneous transformation matrices. $P^b$ is the matrix of homogeneous spatial positions of all visual anchor points in the target body coordinate system.
The other system observables include the homogeneous transformation matrices $T^c_g$, $T^g_{g'}$ and $T^{g'}_w$, and the anchor position matrix $P^b$.
The system observation equation $h$ can be regarded as transforming all visual anchor points from the target body coordinate system to the image coordinate system; along the way, multiple coordinate-system transformations and the image projection of the spatial points are involved.
In this embodiment, the predicted image positions of the unmanned aerial vehicle visual anchor points at the current moment are obtained from the predicted measurement $z_{k|k-1}$.
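A sketch of this projection chain follows, under stated assumptions: the 4x4 homogeneous transforms, the anchor matrix $P^b$ and the intrinsic matrix $K'$ are supplied externally (by calibration and gimbal encoders), and the ZYX Euler convention in T_from_pose is an assumption of this sketch, since the patent does not spell one out.

```python
import numpy as np

def T_from_pose(p, theta):
    """Homogeneous T^w_b from position p (3,) and Euler angles theta = (roll, pitch, yaw).
    The ZYX (yaw-pitch-roll) rotation convention is assumed here."""
    cr, sr = np.cos(theta[0]), np.sin(theta[0])
    cp, sp = np.cos(theta[1]), np.sin(theta[1])
    cy, sy = np.cos(theta[2]), np.sin(theta[2])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = p
    return T

def observe_anchors(T_cg, T_gg1, T_g1w, T_wb, P_b, K_prime):
    """Observation equation h: project all body-frame anchors to pixel positions.

    T_cg, T_gg1, T_g1w: 4x4 transforms camera<-gimbal, gimbal<-base, base<-world.
    T_wb   : 4x4 transform body -> world, derived from the predicted pose.
    P_b    : 4xM homogeneous anchor positions in the body frame.
    K_prime: 3x3 intrinsics [[f/dx, 0, cx], [0, f/dy, cy], [0, 0, 1]].
    Returns the 2M-vector [u1, v1, ..., uM, vM].
    """
    P_c = T_cg @ T_gg1 @ T_g1w @ T_wb @ P_b   # anchors in the camera frame, 4xM
    uv_h = K_prime @ P_c[:3, :]               # pinhole projection, 3xM
    uv = uv_h[:2, :] / uv_h[2, :]             # divide by the normalization factor s
    return uv.T.reshape(-1)
```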
In another embodiment, for step 105, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, using the predicted spatial pose, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment, includes:
501: taking the system state prediction equation as a linear equation, predicting the error covariance matrix at the current moment from the error covariance matrix at the previous moment;
502: according to the predicted current time error covariance matrix, acquiring a current time Kalman gain by utilizing the unmanned aerial vehicle space pose predicted value and the image position predicted value of the unmanned aerial vehicle visual anchor point at the current time;
503: and updating the unmanned aerial vehicle state and the error covariance matrix according to the Kalman gain at the current moment, and obtaining the spatial pose of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation by utilizing the image position measured value of the visual anchor point of the unmanned aerial vehicle at the current moment.
In one embodiment, for step 501, taking the system state prediction equation as a linear equation, the error covariance matrix at the current moment is predicted from the error covariance matrix at the previous moment as:

$$P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k$$

wherein $P_{k|k-1}$ is the predicted error covariance matrix at the current moment; $P_{k-1|k-1}$ is the error covariance matrix at the previous moment; $F_k$ is the state transition matrix; $Q_k$ is the covariance matrix of the state prediction noise.
In a next embodiment, for step 502, according to the predicted error covariance matrix at the current moment, the Kalman gain at the current moment is obtained using the predicted spatial pose $x_{k|k-1}$ and the predicted anchor image positions $h(x_{k|k-1})$ as:

$$S_k = H_k P_{k|k-1} H_k^T + R_k, \qquad K_k = P_{k|k-1} H_k^T S_k^{-1}$$

wherein $K_k$ is the Kalman gain at the current moment; $H_k$ is the measurement matrix; $S_k$ is the innovation (residual) covariance matrix; $R_k$ is the observation noise covariance matrix.
In another embodiment, for step 503, the unmanned aerial vehicle state and the error covariance matrix are updated according to the Kalman gain at the current moment, and the spatial pose of the unmanned aerial vehicle at the current moment is obtained through the unmanned aerial vehicle state update equation using the measured image positions of the visual anchor points at the current moment:

$$x_{k|k} = x_{k|k-1} + K_k (z_k - z_{k|k-1})$$

$$P_{k|k} = (I - K_k H_k) P_{k|k-1}$$

wherein $x_{k|k}$ is the spatial pose of the unmanned aerial vehicle at the current moment; $z_k$ is the visual anchor point measurement of the unmanned aerial vehicle at the current moment; $P_{k|k}$ is the error covariance matrix at the current moment.
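Steps 501-503 combine into one predict-update cycle, sketched below; using a finite-difference Jacobian for the measurement matrix $H_k$ is an assumption of this sketch (an analytic Jacobian of h would serve equally):

```python
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    """Finite-difference Jacobian of the observation function h at x."""
    z0 = h(x)
    J = np.zeros((z0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (h(x + dx) - z0) / eps
    return J

def ekf_step(x_prev, P_prev, F, Q, R, z_k, h):
    """One cycle of the pose-estimation EKF (steps 501-503)."""
    # 501: state prediction and error covariance prediction.
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q
    # 502: Kalman gain from the innovation covariance S_k.
    H = numerical_jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # 503: state and covariance update with the measured anchor positions z_k.
    x_new = x_pred + K @ (z_k - h(x_pred))
    P_new = (np.eye(x_pred.size) - K @ H) @ P_pred
    return x_new, P_new
```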
The visual anchor point-based unmanned aerial vehicle landing pose filtering estimation method is explained with a specific application example: a ground-based vision physical system is constructed, and the method of the invention is used to estimate the spatial pose of the unmanned aerial vehicle in real time during landing. To verify the advantages of the method over conventional methods, the conventional PnP target pose solution algorithm is adopted for comparison. Fig. 3 shows the unmanned aerial vehicle landing trajectories generated by the method of the invention (denoted FP) and by the conventional PnP method (denoted NP); the insets show the unmanned aerial vehicle images captured by the ground camera and the detected visual anchor points when the unmanned aerial vehicle is located at points E, F and G. Fig. 4 shows the root-mean-square error (RMSE) of target pose estimation for FP and NP at three stages of landing, in two sets of physical experiments. Overall, FP exhibits higher estimation accuracy than conventional NP in both position and attitude. The estimation errors of both methods decrease gradually as the unmanned aerial vehicle approaches the camera: the closer the vehicle, the larger its imaging scale, so a visual anchor point observation error of the same number of pixels occupies a smaller proportion of the target area and has less influence on pose estimation accuracy. According to the practical requirements of landing, the unmanned aerial vehicle should remain above the runway as much as possible throughout the landing process, so the tolerance for positioning errors in the runway plane is determined by the runway's length and width. Typically, the positioning error along the runway should be below 70 m and the positioning error perpendicular to the runway below 20 m, i.e., the root-mean-square error in the Y direction below 70 m and in the X direction below 20 m. According to the statistical results, FP meets these requirements at every stage of landing. Second, the accuracy of the estimated height above ground in the near-ground stage is critical, reflected mainly in the Z-direction root-mean-square error during the pull-drift stage; FP achieved a Z-direction RMSE of less than 1 m in both sets of experiments. Finally, the accuracy of the roll and pitch angles is also critical during the floating stage; the roll and pitch RMSE of the FP algorithm in the floating stage are below 5° and 2°, respectively. Overall, compared with the conventional NP method, FP exhibits higher positioning and attitude accuracy and stronger robustness to observation errors.
In summary, oriented to the spatial position and attitude estimation requirements of the unmanned aerial vehicle landing process, the method constructs an unmanned aerial vehicle pose estimation extended Kalman filter model based on visual anchor point measurements; compared with traditional methods it markedly improves pose estimation accuracy and robustness, provides strong technical support for building ground-based vision assistance systems for autonomous landing, and has high practical value.
The invention also provides an unmanned aerial vehicle landing pose filtering estimation system based on visual anchor points, comprising:
the model construction module is used for constructing an extended Kalman filtering model for estimating the pose of the unmanned aerial vehicle according to the measurement condition of the visual anchor point in the landing process of the unmanned aerial vehicle; the model comprises a system state prediction equation and a system observation equation;
the unmanned aerial vehicle space pose prediction module is used for acquiring the unmanned aerial vehicle space pose at the last moment, and acquiring a predicted value of the unmanned aerial vehicle space pose at the current moment by utilizing a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the unmanned aerial vehicle space pose at the last moment;
the unmanned aerial vehicle visual anchor point image position prediction module is used for defining the visual anchor point measurement of the unmanned aerial vehicle according to its generalized image features and application scenario characteristics; obtaining the predicted measurement of the visual anchor points at the current moment by using the system observation equation according to the predicted spatial pose at the current moment; and obtaining the predicted image positions of the visual anchor points at the current moment from the predicted measurement;
the unmanned aerial vehicle spatial pose acquisition module is used for acquiring the image position measurements of the unmanned aerial vehicle visual anchor points at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment, using the predicted spatial pose, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment.
The invention also proposes a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
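For illustration only, the modules above could be wired into a per-frame loop as follows, reusing the helper functions sketched earlier; every name and the interface here are hypothetical, and the detect callback is assumed to return anchor positions matched and ordered consistently with the columns of P_b:

```python
def track_landing(frames, x0, P0, Q, R, dt,
                  T_cg, T_gg1, T_g1w, P_b, K_prime, detect):
    """Per-frame EKF loop: detect anchors, predict, update, yield the pose."""
    x, P = x0, P0
    for img in frames:
        _, F = predict_state(x, dt)          # state prediction module
        def h(state):                        # observation equation h for this frame
            T_wb = T_from_pose(state[0:3], state[6:9])
            return observe_anchors(T_cg, T_gg1, T_g1w, T_wb, P_b, K_prime)
        z_k = detect(img)                    # measured anchor positions, shape (2M,)
        x, P = ekf_step(x, P, F, Q, R, z_k, h)  # ekf_step re-applies F internally
        yield x                              # spatial pose estimate for this frame
```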
The foregoing description covers only preferred embodiments of the present invention and does not limit its scope; all equivalent structural changes made using the description and drawings of the present invention, or their direct/indirect application in other related technical fields, are likewise included in the scope of the invention.

Claims (10)

1. An unmanned aerial vehicle landing pose filtering estimation method based on visual anchor points, characterized by comprising the following steps:
according to the measurement conditions of visual anchor points in the unmanned aerial vehicle landing process, constructing an unmanned aerial vehicle pose estimation extended Kalman filter model; the model comprises a system state prediction equation and a system observation equation;
defining the visual anchor point measurement of the unmanned aerial vehicle according to the generalized image features and application scenario characteristics of the unmanned aerial vehicle;
acquiring a space pose of the unmanned aerial vehicle at the previous moment, and acquiring a predicted value of the space pose of the unmanned aerial vehicle at the current moment by utilizing a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the space pose of the unmanned aerial vehicle at the previous moment;
according to the predicted value of the spatial pose of the unmanned aerial vehicle at the current moment and other system observables, a predicted value of a measured value of a visual anchor point of the unmanned aerial vehicle at the current moment is obtained by utilizing a system observation equation, and an image position predicted value of the visual anchor point of the unmanned aerial vehicle at the current moment is obtained according to the predicted value of the measured value;
acquiring the image position measurements of the unmanned aerial vehicle visual anchor points at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, using the predicted spatial pose of the unmanned aerial vehicle, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment.
2. The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method of claim 1, wherein obtaining the predicted value of the unmanned aerial vehicle spatial pose at the current moment by using the system state prediction equation, according to the input of the unmanned aerial vehicle system at the current moment and the spatial pose at the previous moment, comprises:
according to the acceleration term input to the unmanned aerial vehicle system at the current moment and the spatial pose of the unmanned aerial vehicle at the previous moment, obtaining the predicted value of the unmanned aerial vehicle spatial pose at the current moment with the system state prediction equation as:

$$x_{k|k-1} = f_s(x_{k-1|k-1}, u_k)$$

wherein $f_s(\cdot)$ is the system state prediction equation; $u_k$ is the input of the unmanned aerial vehicle system at the current moment;
according to the unmanned aerial vehicle application scenario, neglecting the dynamic part of the unmanned aerial vehicle motion, obtaining the predicted value of the unmanned aerial vehicle spatial pose at the current moment as:

$$x_{k|k-1} = F_k x_{k-1|k-1}, \qquad F_k = \begin{bmatrix} I_{3\times3} & \Delta t_{k|k-1} & 0 & 0 \\ 0 & I_{3\times3} & 0 & 0 \\ 0 & 0 & I_{3\times3} & \Delta t_{k|k-1} \\ 0 & 0 & 0 & I_{3\times3} \end{bmatrix}$$

wherein $x_{k|k-1}$ is the predicted value of the unmanned aerial vehicle spatial pose at the current moment; $F_k$ is the state transition matrix; $x_{k-1|k-1}$ is the unmanned aerial vehicle spatial pose at the previous moment; $I_{3\times3}$ is the identity matrix; $\Delta t_{k|k-1}$ is a $3\times3$ diagonal matrix whose diagonal elements are $\Delta t$, the time difference between the current and previous moments; $p^w$ is the position; $v^w$ is the velocity; $\Theta^w$ is the attitude Euler angles; $\omega^w$ is the angular velocity.
3. The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method of claim 1, wherein defining the unmanned aerial vehicle visual anchor point measurement according to the generalized image features and application scenario characteristics of the unmanned aerial vehicle comprises:
according to the generalized image features and application scenario characteristics of the unmanned aerial vehicle, defining the visual anchor point measurement of the unmanned aerial vehicle as:

$$z = \left[(x^I_1)^T, (x^I_2)^T, \ldots, (x^I_M)^T\right]^T$$

wherein $z$ is the visual anchor point measurement of the unmanned aerial vehicle; $M$ is the number of visual anchor points; $x^I_m$ is the image position of the $m$-th visual anchor point.
4. The visual anchor point-based unmanned aerial vehicle landing pose filtering estimation method according to claim 1, wherein obtaining the predicted measurement of the unmanned aerial vehicle visual anchor points at the current moment by using the system observation equation, according to the predicted spatial pose at the current moment and other system observables, comprises the following steps:
according to the predicted spatial pose of the unmanned aerial vehicle at the current moment and the other system observables, obtaining the predicted measurement of the visual anchor points at the current moment with the system observation equation as:

$$z_{k|k-1} = h(x_{k|k-1}) = \frac{1}{s} K' \, T^c_g \, T^g_{g'} \, T^{g'}_w \, T^w_b \, P^b, \qquad K' = \begin{bmatrix} f/d_x & 0 & c_x \\ 0 & f/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $z_{k|k-1}$ is the predicted measurement of the unmanned aerial vehicle visual anchor points at the current moment; $h(\cdot)$ is the system observation equation; $s$ is the image projection normalization factor; $K'$ is the camera intrinsic parameter matrix, $f$ the camera focal length, $d_x$ and $d_y$ the physical width and height of each pixel, and $(c_x, c_y)$ the pixel coordinates of the image center; $T^c_g$ is the homogeneous transformation matrix from the gimbal coordinate system g to the camera coordinate system c; $T^g_{g'}$ from the gimbal base coordinate system g' to the gimbal coordinate system g; $T^{g'}_w$ from the world coordinate system w to the gimbal base coordinate system g'; $T^w_b$ from the unmanned aerial vehicle body coordinate system b to the world coordinate system w; $P^b$ is the matrix of homogeneous spatial positions of all visual anchor points in the target body coordinate system.
5. The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method according to claim 1, wherein obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, according to the unmanned aerial vehicle pose estimation extended Kalman filter model and using the predicted spatial pose, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment, comprises:
taking the system state prediction equation as a linear equation, predicting the error covariance matrix at the current moment from the error covariance matrix at the previous moment;
according to the predicted current time error covariance matrix, acquiring a current time Kalman gain by utilizing the unmanned aerial vehicle space pose predicted value and the image position predicted value of the unmanned aerial vehicle visual anchor point at the current time;
and updating the unmanned aerial vehicle state and the error covariance matrix according to the Kalman gain at the current moment, and obtaining the spatial pose of the unmanned aerial vehicle at the current moment through an unmanned aerial vehicle state updating equation by utilizing the image position measured value of the visual anchor point of the unmanned aerial vehicle at the current moment.
6. The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method of claim 5, wherein taking the system state prediction equation as a linear equation and predicting the error covariance matrix at the current moment from the error covariance matrix at the previous moment comprises:

predicting, with the system state prediction equation taken as linear, the error covariance matrix at the current moment from the error covariance matrix at the previous moment as:

$$P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k$$

wherein $P_{k|k-1}$ is the predicted error covariance matrix at the current moment; $P_{k-1|k-1}$ is the error covariance matrix at the previous moment; $F_k$ is the state transition matrix; $Q_k$ is the covariance matrix of the state prediction noise.
7. The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method of claim 5, wherein obtaining the Kalman gain at the current moment according to the predicted error covariance matrix, using the predicted spatial pose and the predicted image positions of the visual anchor points at the current moment, comprises:

according to the predicted error covariance matrix at the current moment, obtaining the Kalman gain at the current moment using the predicted spatial pose $x_{k|k-1}$ and the predicted anchor image positions $h(x_{k|k-1})$ as:

$$S_k = H_k P_{k|k-1} H_k^T + R_k, \qquad K_k = P_{k|k-1} H_k^T S_k^{-1}$$

wherein $K_k$ is the Kalman gain at the current moment; $H_k$ is the measurement matrix; $S_k$ is the innovation (residual) covariance matrix; $R_k$ is the observation noise covariance matrix.
8. The visual anchor point-based unmanned aerial vehicle landing pose filter estimation method of claim 5, wherein updating the unmanned aerial vehicle state and the error covariance matrix according to the Kalman gain at the current moment, and obtaining the spatial pose of the unmanned aerial vehicle at the current moment using the measured image positions of the visual anchor points at the current moment, comprises:

according to the Kalman gain at the current moment, updating the unmanned aerial vehicle state and the error covariance matrix, and obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation using the measured image positions of the visual anchor points at the current moment as:

$$x_{k|k} = x_{k|k-1} + K_k (z_k - z_{k|k-1})$$

$$P_{k|k} = (I - K_k H_k) P_{k|k-1}$$

wherein $x_{k|k}$ is the spatial pose of the unmanned aerial vehicle at the current moment; $z_k$ is the visual anchor point measurement of the unmanned aerial vehicle at the current moment; $P_{k|k}$ is the error covariance matrix at the current moment.
9. An unmanned aerial vehicle landing pose filtering estimation system based on visual anchor points, characterized by comprising:
the model construction module, used for constructing the unmanned aerial vehicle pose estimation extended Kalman filter model according to the measurement conditions of the visual anchor points in the unmanned aerial vehicle landing process; the model comprises a system state prediction equation and a system observation equation; and for defining the visual anchor point measurement of the unmanned aerial vehicle according to its generalized image features and application scenario characteristics;
the unmanned aerial vehicle space pose prediction module is used for acquiring the unmanned aerial vehicle space pose at the last moment, and acquiring a predicted value of the unmanned aerial vehicle space pose at the current moment by utilizing a system state prediction equation according to the input of the unmanned aerial vehicle system at the current moment and the unmanned aerial vehicle space pose at the last moment;
the unmanned aerial vehicle visual anchor point image position prediction module, used for obtaining the predicted measurement of the unmanned aerial vehicle visual anchor points at the current moment by using the system observation equation, and obtaining the predicted image positions of the visual anchor points at the current moment from the predicted measurement;
the unmanned aerial vehicle spatial pose acquisition module, used for acquiring the image position measurements of the unmanned aerial vehicle visual anchor points at the current moment and, according to the unmanned aerial vehicle pose estimation extended Kalman filter model, obtaining the spatial pose of the unmanned aerial vehicle at the current moment through the unmanned aerial vehicle state update equation, using the predicted spatial pose, the predicted image positions of the visual anchor points at the current moment and the measured image positions of the visual anchor points at the current moment.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1-7 when the computer program is executed.
CN202011237125.8A 2020-11-09 2020-11-09 Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points Active CN112504261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011237125.8A CN112504261B (en) 2020-11-09 2020-11-09 Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011237125.8A CN112504261B (en) 2020-11-09 2020-11-09 Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points

Publications (2)

Publication Number Publication Date
CN112504261A (en) 2021-03-16
CN112504261B (en) 2024-02-09

Family

ID=74955547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011237125.8A Active CN112504261B (en) 2020-11-09 2020-11-09 Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points

Country Status (1)

Country Link
CN (1) CN112504261B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420606B (en) * 2021-05-31 2022-06-14 华南理工大学 Method for realizing autonomous navigation of robot based on natural language and machine vision
CN114237300A (en) * 2021-12-31 2022-03-25 西安富沃德光电科技有限公司 Unmanned aerial vehicle accurate landing method based on differential barometer
CN114812513A (en) * 2022-05-10 2022-07-29 北京理工大学 Unmanned aerial vehicle positioning system and method based on infrared beacon
CN115690205B (en) * 2022-10-09 2023-12-05 北京自动化控制设备研究所 Visual relative pose measurement error estimation method based on point-line comprehensive characteristics
CN116627154B (en) * 2023-06-09 2024-04-30 上海大学 Unmanned aerial vehicle guiding landing method based on pose prediction and track optimization and unmanned aerial vehicle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108036785A (en) * 2017-11-24 2018-05-15 浙江大学 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion
CN109298389A (en) * 2018-08-29 2019-02-01 东南大学 Indoor pedestrian based on multiparticle group optimization combines position and orientation estimation method
CN110865650A (en) * 2019-11-19 2020-03-06 武汉工程大学 Unmanned aerial vehicle pose self-adaptive estimation method based on active vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vision-based Online Localization and Trajectory Smoothing for Fixed-wing UAV Tracking a Moving Target; Zhou Yong et al.; 2019 IEEE/CVF International Conference on Computer Vision Workshops; full text *
A Survey of Multi-Robot Cooperative Navigation Technology (多机器人协同导航技术综述); Zhang Chen; Zhou Lelai; Li Yibin; Unmanned Systems Technology (02); full text *

Also Published As

Publication number Publication date
CN112504261A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112504261B (en) Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor points
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
US8666661B2 (en) Video navigation
CN110081881B (en) Carrier landing guiding method based on unmanned aerial vehicle multi-sensor information fusion technology
CN107727079A (en) The object localization method of camera is regarded under a kind of full strapdown of Small and micro-satellite
Cai et al. Mobile robot localization using gps, imu and visual odometry
CN112254729B (en) Mobile robot positioning method based on multi-sensor fusion
CN116182837A (en) Positioning and mapping method based on visual laser radar inertial tight coupling
CN112179357B (en) Monocular camera-based visual navigation method and system for plane moving target
CN111426320A (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
Wang et al. A visual navigation framework for the aerial recovery of UAVs
CN108645408B (en) Unmanned aerial vehicle autonomous recovery target prediction method based on navigation information
CN111238469B (en) Unmanned aerial vehicle formation relative navigation method based on inertia/data chain
CN113155126B (en) Visual navigation-based multi-machine cooperative target high-precision positioning system and method
CN108961319A (en) Analysis method of the twin-line array TDI space camera to dynamic airplane kinetic characteristic
CN108731683B (en) Unmanned aerial vehicle autonomous recovery target prediction method based on navigation information
Zheng et al. Integrated navigation system with monocular vision and LIDAR for indoor UAVs
CN116681733A (en) Near-distance real-time pose tracking method for space non-cooperative target
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar
CN113689501A (en) Double-machine cooperative target machine positioning and tracking control method based on convergence point
CN108897029B (en) Non-cooperative target short-distance relative navigation vision measurement system index evaluation method
Yang et al. Inertial-aided vision-based localization and mapping in a riverine environment with reflection measurements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant