CN109764864B - Color identification-based indoor unmanned aerial vehicle pose acquisition method and system - Google Patents
- Publication number
- CN109764864B (application CN201910039933.4A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- camera
- image
- propeller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention provides an indoor unmanned aerial vehicle pose acquisition method based on color recognition. An image of the unmanned aerial vehicle acquired by a camera fixed on an indoor wall or the ground is preprocessed; the rotating propellers of the unmanned aerial vehicle are identified and their circular or elliptical contours are extracted; the color distribution of the propellers is then obtained and the contour parameters of the propeller image are calculated; the orientation and distance of the aircraft are judged from the color distribution of the propeller image and the propeller contour parameters; and the position and attitude of the unmanned aerial vehicle are calculated from the propeller image, the distance and orientation information, and a pre-established relation model between the unmanned aerial vehicle pose and the elliptic parameters of the unmanned aerial vehicle propellers. The invention thereby achieves accurate calculation of the pose information of the unmanned aerial vehicle.
Description
Technical Field
The invention relates to an indoor unmanned aerial vehicle pose acquisition method and system based on color recognition, and belongs to the field of unmanned aerial vehicle calculation.
Background
With the development of unmanned aerial vehicle technology, unmanned aerial vehicles are being applied ever more widely. Besides applications in outdoor space, the free flight of drones in indoor space has received increasing attention. Position information can be acquired with GPS in an outdoor environment, but it is difficult to obtain the position of an unmanned aerial vehicle indoors or in other enclosed environments. An unmanned aerial vehicle positioning technology that replaces GPS indoors is therefore the key to indoor applications of unmanned aerial vehicles.
Existing pose estimation methods for indoor unmanned aerial vehicles include the following. [1] Pravitra, C.; Chowdhary, G.; Johnson, E. A compact exploration strategy for indoor flight vehicles [C]. Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, pp. 3572-3577, 12-15 Dec. 2011. [2] Mustafah Y M, Azman A W, Akbar F. Indoor UAV positioning using stereo vision sensor [J]. Procedia Engineering, 2012, 41.
Document [1] proposes a method for estimating the pose of an unmanned aerial vehicle using wall-tracking logic, in which a laser range finder maps the position of the vehicle relative to the surrounding area. The method uses 2D range information from a laser range scanner to generate velocity commands by fusing a sensor-based random-tree boundary detector with a wall-following velocity field generator. The effective boundary-guided detection helps maintain good laser-scanning geometry and thereby improves the pose estimation accuracy of the unmanned aerial vehicle. Document [2] realizes position and pose estimation of an indoor unmanned aerial vehicle with a binocular camera and achieves accurate real-time positioning with a fast localization algorithm. Document [3] uses a monocular camera and a laser scanner for robust estimation of the velocity and position of an indoor unmanned aerial vehicle, and the whole system requires no remote sensing information or off-line computing capability. Document [4] proposes a combined monocular-vision/inertial-navigation pose estimation method: absolute scale information for monocular vision is obtained with an ultrasonic sensor based on a least-squares method, and a visual pose estimate is calculated; a system equation is established from the kinematic model of an inertial measurement unit (IMU) for the inertial navigation solution, and an error state equation is derived to model the system error state; the visual pose estimate is then taken as the observation, and the optimal estimate of the system error state is obtained by extended Kalman filtering to correct the inertial navigation solution. For an outdoor scene with a 150 m range, this method achieves a root mean square error (RMSE) below 0.995 m for the position estimate, an RMSE of 2.235 for the heading-angle estimate, and an RMSE below 1.915 for the horizontal attitude-angle estimate.
Prior patents related to unmanned aerial vehicle pose estimation include: (1) an unmanned aerial vehicle pose estimation method based on a combined trapezoidal and circular landmark (publication number: CN108122255A); (2) a multi-rotor unmanned aerial vehicle pose acquisition method based on markers in the shape of large and small squares (publication number: CN102967305A); (3) an unmanned aerial vehicle pose estimation method based on cooperative-target characteristic lines (publication number: CN101833761A); (4) a monocular camera pose estimation and optimization method and system based on point and line features (publication number: CN107871327A). The methods proposed in (1) and (2) need landmarks or markers of a specific form, the method proposed in (3) needs a cooperative target with characteristic lines, and the method proposed in (4) requires obvious point and line features in the target image. These methods therefore place specific requirements on the target environment and are not suited to general target environments.
Disclosure of Invention
The invention aims to provide an indoor unmanned aerial vehicle pose acquisition method based on color recognition, which performs video acquisition, feature extraction and key-parameter calculation for an unmanned aerial vehicle in indoor flight, thereby estimating the three-dimensional position and attitude of the indoor unmanned aerial vehicle. Because the cameras are fixed on the indoor wall or the ground, the load on the aircraft is reduced, space and energy are saved, and imaging problems caused by unstable flight of the aircraft are reduced; moreover, no landmark or marker needs to be arranged in the environment, which improves the convenience of the method.
The technical scheme adopted by the invention is as follows:
an indoor unmanned aerial vehicle pose acquisition method based on color identification comprises the following steps,
s1, building an indoor unmanned aerial vehicle pose acquisition experiment platform, installing a plurality of cameras on an indoor wall or the ground, labeling propellers of the unmanned aerial vehicle, and coating different colors on different propellers;
s2, collecting unmanned aerial vehicle image frames and preprocessing the image frames: performing target extraction on the image acquired by the camera, extracting an elliptical contour of the propeller of the unmanned aerial vehicle, and performing accumulation processing on the acquired image to obtain an elliptical contour image of the propeller;
s3, calculating image parameters of the elliptical outline images of the four propellers, wherein the image parameters comprise color distribution, the perimeter of the elliptical outline and the eccentricity of an ellipse; calculating to obtain a yaw angle psi of the unmanned aerial vehicle according to the image color distribution of the elliptical outline image of the propeller; calculating a pitch angle theta and a roll angle phi of the unmanned aerial vehicle and a distance between the unmanned aerial vehicle and the camera according to the perimeter of the elliptical outline and the eccentricity of the ellipse;
and S4, calculating the position of the unmanned aerial vehicle according to the elliptical contour image of the propeller and the distance between the unmanned aerial vehicle and the camera to obtain the posture and the position of the unmanned aerial vehicle.
The step S2 specifically comprises the following steps,
s21, setting camera parameters;
s22, reading the video stream from the camera, extracting image frames from the video stream, extracting an elliptical outline image of the propeller from the image frames, and performing accumulation calculation on the propeller images of m continuous frames.
The step S3 specifically includes the following steps:
s31, carrying out color distribution calculation on the elliptical image of the propeller extracted in S2, and calculating the yaw angle psi of the unmanned aerial vehicle according to the color distribution;
s32, through calibration experiments, measuring the major-axis length, minor-axis length, perimeter, area and eccentricity of the propeller ellipse for multiple sets of distances between the unmanned aerial vehicle and the camera and for the unmanned aerial vehicle at different positions or attitudes, and establishing a mathematical model relating the distance between the unmanned aerial vehicle and the camera to the aircraft parameters; and estimating the pitch angle theta and the roll angle phi of the unmanned aerial vehicle and the distance between the unmanned aerial vehicle and the camera from the established mathematical model and the measured ellipse parameters.
Step S4 specifically includes the following steps:
s41, projecting the unmanned aerial vehicle in three-dimensional space onto the physical imaging plane of the camera according to the pinhole imaging model; O-X-Y-Z is the camera coordinate system, O is the optical center of the camera, the Z axis points to the front of the camera, and the directions of the X axis and the Y axis are determined by the right-hand rule; after passing through the pinhole O, an object in three-dimensional space is imaged on the physical imaging coordinate plane O'-X'-Y'; in the camera, a two-dimensional image consisting of pixels is obtained, and the plane in which this two-dimensional image lies is the pixel coordinate plane o-u-v; the conversion relation from the camera coordinate system to the pixel coordinate system, obtained from the similar-triangle relation, is shown as formula (1):
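The image of formula (1) does not appear in this text. Under the pinhole model and with the symbols defined below, formula (1) is presumably the standard projection relation (a reconstruction, not a verbatim copy of the original drawing):

```latex
Z \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} =
\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
\qquad (1)
```

Equivalently, u = f_x X/Z + c_x and v = f_y Y/Z + c_y.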
wherein, (X, Y, Z)^T are the coordinates of the unmanned aerial vehicle in the camera coordinate system, and Z equals the distance D from the camera to the unmanned aerial vehicle;
f_x denotes the normalized focal length of the camera along the X-axis direction of the camera coordinate system;
f_y denotes the normalized focal length of the camera along the Y-axis direction of the camera coordinate system;
(c_x, c_y) denotes the coordinates of the camera optical center;
s42, processing the propeller image of the unmanned aerial vehicle in real time to obtain real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system;
comparing the current propeller image with that of the previous frame to obtain the pixel coordinates (u, v) of the center point of the current propeller image, i.e. the pixel coordinates of the unmanned aerial vehicle, and obtaining the conversion relation from the pixel coordinate system to the camera coordinate system, shown as formula (2), from the conversion relation from the camera coordinate system to the pixel coordinate system;
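The image of formula (2) is likewise not reproduced here; inverting formula (1) with the depth fixed by the measured distance, formula (2) presumably reads:

```latex
X = \frac{(u - c_x)\,Z}{f_x}, \qquad
Y = \frac{(v - c_y)\,Z}{f_y}, \qquad
Z = D
\qquad (2)
```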
and obtaining real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system, namely coordinates of the current propeller center point in the camera coordinate system, according to the conversion relation from the pixel coordinate system to the camera coordinate system.
An indoor unmanned aerial vehicle pose acquisition system based on color recognition comprises an unmanned aerial vehicle pose acquisition experiment platform, an image frame preprocessing unit, an image parameter calculation unit and an unmanned aerial vehicle position calculation unit;
the unmanned aerial vehicle pose acquisition experiment platform comprises a plurality of cameras, the cameras are arranged on indoor walls or the ground, propellers of the unmanned aerial vehicle are labeled in the unmanned aerial vehicle pose acquisition experiment platform, and different propellers are coated with different colors;
the image frame preprocessing unit is used for extracting a target of an image acquired by the camera, extracting elliptical outlines of four propellers of the unmanned aerial vehicle, and accumulating the acquired image to obtain an elliptical outline image of the propellers;
the image parameter calculating unit calculates the image parameters of the elliptical outline image of the propeller, wherein the image parameters comprise color distribution, the perimeter of the elliptical outline and the eccentricity of an ellipse; calculating to obtain a yaw angle psi of the unmanned aerial vehicle according to the image color distribution of the elliptical outline image of the propeller; calculating a pitch angle theta and a roll angle phi of the unmanned aerial vehicle and a distance between the unmanned aerial vehicle and the camera according to the perimeter of the elliptical outline and the eccentricity of the ellipse;
the unmanned aerial vehicle position calculation unit calculates the position of the unmanned aerial vehicle according to the elliptical profile image of the propeller and the distance between the unmanned aerial vehicle and the camera, and the posture and the position of the unmanned aerial vehicle are obtained.
The image frame preprocessing unit is used for preprocessing the image frame and specifically comprises the following steps:
s21, setting camera parameters;
s22, reading the video stream from the camera, extracting image frames from the video stream, extracting an elliptical outline image of the propeller from the image frames, and performing accumulation calculation on the propeller images of m continuous frames.
The image parameter calculation by the image parameter calculation unit specifically comprises the following steps:
s31, carrying out color distribution calculation on the elliptical image of the propeller, and calculating the yaw angle psi of the unmanned aerial vehicle according to the color distribution;
s32, through calibration experiments, measuring the major-axis length, minor-axis length, perimeter, area and eccentricity of the propeller ellipse for multiple sets of distances between the unmanned aerial vehicle and the camera and for the unmanned aerial vehicle at different positions or attitudes, and establishing a mathematical model relating the distance between the unmanned aerial vehicle and the camera to the aircraft parameters; and estimating the pitch angle theta and the roll angle phi of the unmanned aerial vehicle and the distance between the unmanned aerial vehicle and the camera from the established mathematical model and the measured ellipse parameters.
The unmanned aerial vehicle position calculation unit carries out unmanned aerial vehicle position calculation and specifically comprises the following steps:
s41, projecting the unmanned aerial vehicle in three-dimensional space onto the physical imaging plane of the camera according to the pinhole imaging model; O-X-Y-Z is the camera coordinate system, O is the optical center of the camera, the Z axis points to the front of the camera, and the directions of the X axis and the Y axis are determined by the right-hand rule; after passing through the pinhole O, an object in three-dimensional space is imaged on the physical imaging coordinate plane O'-X'-Y'; in the camera, a two-dimensional image consisting of pixels is obtained, and the plane in which this two-dimensional image lies is the pixel coordinate plane o-u-v; the conversion relation from the camera coordinate system to the pixel coordinate system, obtained from the similar-triangle relation, is shown as formula (1):
wherein, (X, Y, Z)^T are the coordinates of the unmanned aerial vehicle in the camera coordinate system, and Z equals the distance D from the camera to the unmanned aerial vehicle;
f_x denotes the normalized focal length of the camera along the X-axis direction of the camera coordinate system;
f_y denotes the normalized focal length of the camera along the Y-axis direction of the camera coordinate system;
(c_x, c_y) denotes the coordinates of the camera optical center;
s42, processing the propeller image of the unmanned aerial vehicle in real time to obtain real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system;
comparing the current propeller image with that of the previous frame to obtain the pixel coordinates (u, v) of the center point of the current propeller image, i.e. the pixel coordinates of the unmanned aerial vehicle, and obtaining the conversion relation from the pixel coordinate system to the camera coordinate system, shown as formula (2), from the conversion relation from the camera coordinate system to the pixel coordinate system;
and obtaining real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system, namely coordinates of the current propeller center point in the camera coordinate system, according to the conversion relation from the pixel coordinate system to the camera coordinate system.
Compared with the prior art, the invention has the following beneficial effects:
according to the color recognition-based indoor unmanned aerial vehicle pose acquisition method and the existing method, the propellers of the unmanned aerial vehicle are coated with different colors, the flight direction of the unmanned aerial vehicle is judged according to the color distribution of the collected images of the propellers, large-size markers do not need to be additionally placed on the unmanned aerial vehicle, and the problems of extra wind resistance, asymmetric structure and the like are avoided;
the invention shoots the flying unmanned aerial vehicle by the fixed camera, and estimates the position and the attitude of the unmanned aerial vehicle by utilizing the acquired image color information, gray information and the like. The method avoids the setting of landmarks or markers or cooperative targets in the environment, and simultaneously avoids the processes of feature extraction and tracking of the landmarks or markers or cooperative targets.
According to the invention, the cameras are fixed on an indoor wall or the ground, which avoids mounting cameras and other measuring devices on the unmanned aerial vehicle as in conventional pose estimation methods; this reduces the load on the aircraft, saves space and greatly reduces the energy consumption of the unmanned aerial vehicle, so that its endurance can be improved;
according to the invention, the camera is fixed on the indoor wall or the ground, and compared with the situation that the camera is additionally arranged on the unmanned aerial vehicle, the problem of imaging jitter caused by the instability of the flight of the unmanned aerial vehicle on the camera is reduced, so that the imaging quality of the camera is improved, and convenience and high quality data are brought to the subsequent image processing;
the direction of the unmanned aerial vehicle is judged according to the color distribution of the propeller image of the unmanned aerial vehicle, the pose estimation algorithm of the unmanned aerial vehicle is simple and reliable, the cost is low, and the realizability of the method is improved.
Drawings
FIG. 1 is a flow chart of an indoor unmanned aerial vehicle pose acquisition method based on color identification according to the invention;
fig. 2 is a schematic view of four propellers of a quad-rotor drone.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
As shown in fig. 1, a method for acquiring the pose of an indoor unmanned aerial vehicle based on color recognition comprises the following steps,
s1, building an indoor unmanned aerial vehicle pose acquisition experiment platform: installing a plurality of cameras on an indoor wall or the ground for collecting images of the indoor unmanned aerial vehicle; labeling the propellers of the unmanned aerial vehicle (taking a quad-rotor unmanned aerial vehicle as an example) and coating different propellers with different colors (for example, propeller 1 red, propeller 2 yellow, propeller 3 blue and propeller 4 green, as shown in fig. 2);
s2, collecting unmanned aerial vehicle image frames and preprocessing the image frames: and extracting a target from the image acquired by the camera, and extracting the elliptical outlines of the four propellers of the unmanned aerial vehicle. In order to improve the stability of extracting the propeller elliptical contour, accumulating the acquired images to obtain an elliptical contour image of the propeller;
s3, calculating image parameters of the elliptical outline images of the four propellers, wherein the image parameters comprise color distribution, the perimeter of the elliptical outline and the eccentricity of an ellipse; calculating to obtain a yaw angle psi of the unmanned aerial vehicle according to image color distribution information of the elliptical contour images of the four propellers; calculating a pitch angle theta and a roll angle phi of the unmanned aerial vehicle and a distance between the unmanned aerial vehicle and the camera according to the perimeter of the elliptical outline and the eccentricity of the ellipse;
the attitude of the unmanned aerial vehicle refers to a yaw angle psi, a pitch angle theta and a roll angle phi of the unmanned aerial vehicle;
and S4, calculating the position of the unmanned aerial vehicle according to the elliptical contour image of the propeller and the distance between the unmanned aerial vehicle and the camera.
The step S2 specifically comprises the following steps,
s21, setting camera parameters, opening a camera, and preparing for reading a video stream;
and S22, reading the video stream from the camera and extracting image frames from the video stream; extracting an elliptical outline image of the propeller from each image frame, and performing accumulation calculation on the propeller images of m consecutive frames.
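As an illustration only, a minimal OpenCV-style sketch of S21-S22 is given below; the camera index, the accumulation window m and the HSV range for one propeller color are assumed values, not parameters fixed by the invention.

```python
# Hedged sketch of S21-S22: read m frames, mask one propeller by its paint color,
# accumulate the mask over the frames, and fit an ellipse to the stable contour.
import cv2
import numpy as np

M_FRAMES = 10                                    # assumed accumulation window m
RED_LO, RED_HI = (0, 120, 70), (10, 255, 255)    # assumed HSV range for propeller 1 (red)

cap = cv2.VideoCapture(0)                        # fixed wall/ground camera (assumed index)
acc = None
for _ in range(M_FRAMES):
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(RED_LO), np.array(RED_HI))
    acc = mask.astype(np.uint16) if acc is None else acc + mask
cap.release()

if acc is not None:
    # keep pixels that were red in at least half of the m frames, then fit the ellipse
    stable = np.uint8(acc >= (M_FRAMES // 2) * 255) * 255
    contours, _ = cv2.findContours(stable, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blade = max(contours, key=cv2.contourArea)
        if len(blade) >= 5:                      # cv2.fitEllipse needs at least 5 points
            (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(blade)
```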
The step S3 specifically includes the following steps:
s31, calculating the color distribution of the elliptical propeller image (RGB image) extracted in S2, and calculating the yaw angle psi of the unmanned aerial vehicle according to the color distribution;
assuming that the initial positions of the profiles of the four propellers are as shown in fig. 1, the profile center line of the propeller 1 (the center line is a line o-o1 from the center point o of the drone to the profile center o1 of the propeller 1) points to the true north direction (the yaw angle ψ of the drone is 0 degrees), the center line o-o2 of the propeller 2 points to the true west direction, the center line o-o3 of the propeller 3 points to the true south direction, and the center line o-o4 of the propeller 4 points to the true east direction. By analogy, when the contour center line o-o1 of the propeller 1 points to the east direction, the yaw angle psi of the unmanned aerial vehicle is 90 degrees, at the moment, the center line o-o2 of the propeller 2 points to the north direction, the center line o-o3 of the propeller 3 points to the west direction, and the center line o-o4 of the propeller 4 points to the south direction;
and S32, calculating the perimeter of the elliptical contour of the rotating propeller extracted in S2 and the eccentricity of the ellipse, and calculating the pitch angle theta, the roll angle phi and the distance of the unmanned aerial vehicle according to the perimeter of the elliptical contour and the eccentricity of the ellipse.
Through calibration experiments, the major-axis length, minor-axis length, perimeter, area and eccentricity of the propeller ellipse are measured for multiple sets of distances (the distance refers to the distance between the propeller center point o of the aircraft and the center point of the camera) and for the aircraft at different positions and different attitudes, and a mathematical model between the distance and the aircraft parameters (the aircraft parameters include the aircraft attitude angles and the image shape parameters of the propeller) is established. The pitch angle theta and the roll angle phi of the unmanned aerial vehicle and the distance between the unmanned aerial vehicle and the camera are then estimated from the established mathematical model and the measured ellipse parameters.
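The patent does not fix the form of this mathematical model; the sketch below assumes a simple least-squares fit from (perimeter, eccentricity) to distance, pitch and roll over the calibration samples, and the file name and feature choice are purely illustrative.

```python
# Hedged sketch of S32: fit linear least-squares models mapping ellipse shape
# parameters to camera-to-propeller distance, pitch and roll.
import numpy as np

# assumed CSV columns: perimeter_px, eccentricity, distance_m, pitch_deg, roll_deg
calib = np.loadtxt("calibration_samples.csv", delimiter=",")
perim, ecc = calib[:, 0], calib[:, 1]
features = np.column_stack([perim, ecc, perim * ecc, np.ones(len(calib))])

coef_dist, *_ = np.linalg.lstsq(features, calib[:, 2], rcond=None)
coef_pitch, *_ = np.linalg.lstsq(features, calib[:, 3], rcond=None)
coef_roll, *_ = np.linalg.lstsq(features, calib[:, 4], rcond=None)

def estimate(perimeter, eccentricity):
    """Return (distance, theta, phi) predicted from the measured ellipse."""
    f = np.array([perimeter, eccentricity, perimeter * eccentricity, 1.0])
    return f @ coef_dist, f @ coef_pitch, f @ coef_roll
```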
Step S4 specifically includes the following steps:
s41, projecting the unmanned aerial vehicle in three-dimensional space onto the physical imaging plane of the camera according to the pinhole imaging model; O-X-Y-Z is the camera coordinate system, O is the optical center of the camera, the Z axis points to the front of the camera, and the directions of the X axis and the Y axis are determined by the right-hand rule; after passing through the pinhole O, an object in three-dimensional space is imaged on the physical imaging coordinate plane O'-X'-Y'; in the camera, a two-dimensional image consisting of pixels is obtained, and the plane in which this two-dimensional image lies is the pixel coordinate plane o-u-v; the conversion relation from the camera coordinate system to the pixel coordinate system, obtained from the similar-triangle relation, is shown as formula (1):
wherein, (X, Y, Z)^T are the coordinates of the unmanned aerial vehicle in the camera coordinate system, and Z equals the distance D from the camera to the unmanned aerial vehicle;
f_x denotes the normalized focal length of the camera along the X-axis direction of the camera coordinate system;
f_y denotes the normalized focal length of the camera along the Y-axis direction of the camera coordinate system;
(c_x, c_y) denotes the coordinates of the camera optical center;
s42, processing the propeller image of the unmanned aerial vehicle in real time to obtain real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system;
comparing the current propeller image with that of the previous frame to obtain the pixel coordinates (u, v) of the center point of the current propeller image, i.e. the pixel coordinates of the unmanned aerial vehicle, and obtaining the conversion relation from the pixel coordinate system to the camera coordinate system, shown as formula (2), from the conversion relation from the camera coordinate system to the pixel coordinate system;
and obtaining real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in the camera coordinate system, namely coordinates of the current propeller center point in the camera coordinate system according to the conversion relation from the pixel coordinate system to the camera coordinate system.
Since the camera is fixed in the system, the camera coordinate system is also fixed and can be used as a world coordinate system. The position of the drone is thus obtained (using the position coordinates of the central point of the propeller as the position of the drone).
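A minimal sketch of the S41-S42 back-projection under the pinhole model follows; the intrinsic parameters used in the example call are placeholders that would come from camera calibration, not values given by the patent.

```python
# Hedged sketch of S42: recover the drone position in the camera (world) frame
# from the propeller-center pixel (u, v) and the distance D estimated in S3,
# using the inverse pinhole relation of formula (2).
def pixel_to_camera(u, v, D, fx, fy, cx, cy):
    Z = D                        # depth taken as the camera-to-drone distance
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# example with placeholder intrinsics (fx, fy in pixels, principal point at image center)
X, Y, Z = pixel_to_camera(u=700, v=300, D=3.2, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
```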
An indoor unmanned aerial vehicle pose acquisition system based on color recognition comprises an unmanned aerial vehicle pose acquisition experiment platform, an image frame preprocessing unit, an image parameter calculation unit and an unmanned aerial vehicle position calculation unit;
the unmanned aerial vehicle pose acquisition experiment platform comprises a plurality of cameras, the cameras are arranged on indoor walls or the ground, propellers of the unmanned aerial vehicle are labeled in the unmanned aerial vehicle pose acquisition experiment platform, and different propellers are coated with different colors;
the image frame preprocessing unit is used for extracting a target of an image acquired by the camera, extracting elliptical outlines of four propellers of the unmanned aerial vehicle, and accumulating the acquired image to obtain an elliptical outline image of the propellers;
the image parameter calculating unit calculates image parameters of the elliptical outline images of the four propellers, wherein the image parameters comprise color distribution, the perimeter of the elliptical outline and the eccentricity of an ellipse; calculating to obtain a yaw angle psi of the unmanned aerial vehicle according to the image color distribution of the elliptical contour images of the four propellers; calculating a pitch angle theta and a roll angle phi of the unmanned aerial vehicle and a distance between the unmanned aerial vehicle and the camera according to the perimeter of the elliptical outline and the eccentricity of the ellipse;
the unmanned aerial vehicle position calculation unit calculates the position of the unmanned aerial vehicle according to the elliptical profile image of the propeller and the distance between the unmanned aerial vehicle and the camera, and the posture and the position of the unmanned aerial vehicle are obtained.
The image frame preprocessing unit is used for preprocessing the image frame and specifically comprises the following steps:
s21, setting camera parameters;
s22, reading the video stream from the camera, extracting image frames from the video stream, extracting an elliptical outline image of the propeller from the image frames, and performing accumulation calculation on the propeller images of m continuous frames.
The image parameter calculation by the image parameter calculation unit specifically comprises the following steps:
s31, calculating color distribution of the elliptical image of the propeller, and calculating the yaw angle psi of the unmanned aerial vehicle according to the color distribution;
s32, through calibration experiments, measuring the major-axis length, minor-axis length, perimeter, area and eccentricity of the propeller ellipse for multiple sets of distances between the unmanned aerial vehicle and the camera and for the unmanned aerial vehicle at different positions or attitudes, and establishing a mathematical model relating the distance between the unmanned aerial vehicle and the camera to the aircraft parameters; and estimating the pitch angle theta and the roll angle phi of the unmanned aerial vehicle and the distance between the unmanned aerial vehicle and the camera from the established mathematical model and the measured ellipse parameters.
The unmanned aerial vehicle position calculation unit carries out unmanned aerial vehicle position calculation and specifically comprises the following steps:
s41, projecting the unmanned aerial vehicle in three-dimensional space onto the physical imaging plane of the camera according to the pinhole imaging model; O-X-Y-Z is the camera coordinate system, O is the optical center of the camera, the Z axis points to the front of the camera, and the directions of the X axis and the Y axis are determined by the right-hand rule; after passing through the pinhole O, an object in three-dimensional space is imaged on the physical imaging coordinate plane O'-X'-Y'; in the camera, a two-dimensional image consisting of pixels is obtained, and the plane in which this two-dimensional image lies is the pixel coordinate plane o-u-v; the conversion relation from the camera coordinate system to the pixel coordinate system, obtained from the similar-triangle relation, is shown as formula (1):
wherein, (X, Y, Z)^T are the coordinates of the unmanned aerial vehicle in the camera coordinate system, and Z equals the distance D from the camera to the unmanned aerial vehicle;
f_x denotes the normalized focal length of the camera along the X-axis direction of the camera coordinate system;
f_y denotes the normalized focal length of the camera along the Y-axis direction of the camera coordinate system;
(c_x, c_y) denotes the coordinates of the camera optical center;
s42, processing the propeller image of the unmanned aerial vehicle in real time to obtain real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system;
comparing the current propeller image with that of the previous frame to obtain the pixel coordinates (u, v) of the center point of the current propeller image, i.e. the pixel coordinates of the unmanned aerial vehicle, and obtaining the conversion relation from the pixel coordinate system to the camera coordinate system, shown as formula (2), from the conversion relation from the camera coordinate system to the pixel coordinate system;
and obtaining real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system, namely coordinates of the current propeller center point in the camera coordinate system, according to the conversion relation from the pixel coordinate system to the camera coordinate system.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or groups of devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may additionally be divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. Modules or units or groups in embodiments may be combined into one module or unit or group and, in addition, may be divided into sub-modules or sub-units or sub-groups. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that although some embodiments described herein include some features included in other embodiments, not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the means for performing the functions performed by the elements for the purpose of carrying out the invention.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the invention according to instructions in said program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
The above is only a preferred embodiment of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (6)
1. An indoor unmanned aerial vehicle pose acquisition method based on color identification, characterized by comprising the following steps:
s1, building an indoor unmanned aerial vehicle pose acquisition experiment platform, installing a plurality of cameras on an indoor wall or the ground, labeling propellers of the unmanned aerial vehicle, and coating different propellers with different colors;
s2, collecting an image frame of the unmanned aerial vehicle, extracting a target of the image collected by the camera, extracting an elliptical contour of a propeller of the unmanned aerial vehicle, and accumulating the collected image to obtain an elliptical contour image of the propeller;
s3, calculating image parameters of an elliptical contour image of the propeller, wherein the image parameters comprise color distribution, the perimeter of the elliptical contour and the eccentricity of an ellipse; calculating to obtain a yaw angle psi of the unmanned aerial vehicle according to the image color distribution of the elliptical outline image of the propeller; calculating a pitch angle theta and a roll angle phi of the unmanned aerial vehicle and a distance between the unmanned aerial vehicle and the camera according to the perimeter of the elliptical contour and the eccentricity of the ellipse;
s4, calculating the position of the unmanned aerial vehicle according to the elliptical contour image of the propeller and the distance between the unmanned aerial vehicle and the camera;
the step S3 specifically includes the following steps:
s31, carrying out color distribution calculation on the elliptical image of the propeller extracted in S2, and calculating the yaw angle psi of the unmanned aerial vehicle according to the color distribution;
s32, through calibration experiments, measuring the major-axis length, minor-axis length, perimeter, area and eccentricity of the propeller ellipse for multiple sets of distances between the unmanned aerial vehicle and the camera and for the unmanned aerial vehicle at different positions or attitudes, and establishing a mathematical model relating the distance between the unmanned aerial vehicle and the camera to the aircraft parameters; and estimating the pitch angle theta and the roll angle phi of the unmanned aerial vehicle and the distance between the unmanned aerial vehicle and the camera from the established mathematical model and the measured ellipse parameters.
2. The method for acquiring the pose of the indoor unmanned aerial vehicle based on color recognition according to claim 1, characterized in that: the step S2 specifically comprises the following steps,
s21, setting camera parameters;
s22, reading the video stream from the camera, extracting image frames from the video stream, extracting an elliptical outline image of the propeller from the image frames, and performing accumulation calculation on the propeller images of m continuous frames.
3. The color identification-based indoor unmanned aerial vehicle pose acquisition method according to claim 1, characterized in that:
the step S4 specifically includes the following steps:
s41, projecting the unmanned aerial vehicle in three-dimensional space onto the physical imaging plane of the camera according to the pinhole imaging model; O-X-Y-Z is the camera coordinate system, O is the optical center of the camera, the Z axis points to the front of the camera, and the directions of the X axis and the Y axis are determined by the right-hand rule; after passing through the pinhole O, an object in three-dimensional space is imaged on the physical imaging coordinate plane O'-X'-Y'; in the camera, a two-dimensional image consisting of pixels is obtained, and the plane in which this two-dimensional image lies is the pixel coordinate plane o-u-v; the conversion relation from the camera coordinate system to the pixel coordinate system, obtained from the similar-triangle relation, is shown as formula (1):
wherein, (X, Y, Z)^T are the coordinates of the unmanned aerial vehicle in the camera coordinate system, and Z equals the distance D from the camera to the unmanned aerial vehicle;
(u, v)^T are the pixel coordinates of the unmanned aerial vehicle;
f_x denotes the normalized focal length of the camera along the X-axis direction of the camera coordinate system;
f_y denotes the normalized focal length of the camera along the Y-axis direction of the camera coordinate system;
(c_x, c_y) denotes the coordinates of the camera optical center;
s42, processing the propeller image of the unmanned aerial vehicle in real time to obtain real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system;
comparing the image of the propeller of the current frame with the previous frame to obtain pixel coordinates (u, v) of the unmanned aerial vehicle, and obtaining a conversion relation from a pixel coordinate system to a camera coordinate system as shown in a formula (2) according to the conversion relation from the camera coordinate system to the pixel coordinate system;
and obtaining real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in the camera coordinate system according to the conversion relation from the pixel coordinate system to the camera coordinate system.
4. An indoor unmanned aerial vehicle pose acquisition system based on color recognition, characterized in that:
the system comprises an unmanned aerial vehicle pose acquisition experiment platform, an image frame preprocessing unit, an image parameter calculation unit and an unmanned aerial vehicle position calculation unit;
the unmanned aerial vehicle pose acquisition experiment platform comprises a plurality of cameras, and the cameras are arranged on indoor walls or the ground; marking propellers of the unmanned aerial vehicle in the unmanned aerial vehicle pose acquisition experiment platform, and coating different colors on different propellers;
the image frame preprocessing unit is used for extracting a target from an image acquired by the camera, extracting an elliptical contour of the propeller of the unmanned aerial vehicle, and accumulating the acquired image to obtain an elliptical contour image of the propeller;
the image parameter calculating unit calculates the image parameters of the elliptical outline image of the propeller, wherein the image parameters comprise color distribution, the perimeter of the elliptical outline and the eccentricity of the ellipse; calculating to obtain a yaw angle psi of the unmanned aerial vehicle according to the image color distribution of the elliptical outline image of the propeller; calculating a pitch angle theta and a roll angle phi of the unmanned aerial vehicle and a distance between the unmanned aerial vehicle and the camera according to the perimeter of the elliptical contour and the eccentricity of the ellipse;
the unmanned aerial vehicle position calculating unit calculates the position of the unmanned aerial vehicle according to the elliptical profile image of the propeller and the distance between the unmanned aerial vehicle and the camera;
the image parameter calculation by the image parameter calculation unit specifically comprises the following steps:
s31, calculating color distribution of the elliptical image of the propeller, and calculating the yaw angle psi of the unmanned aerial vehicle according to the color distribution;
s32, through calibration experiments, measuring the major-axis length, minor-axis length, perimeter, area and eccentricity of the propeller ellipse for multiple sets of distances between the unmanned aerial vehicle and the camera and for the unmanned aerial vehicle at different positions or attitudes, and establishing a mathematical model relating the distance between the unmanned aerial vehicle and the camera to the aircraft parameters; and estimating the pitch angle theta and the roll angle phi of the unmanned aerial vehicle and the distance between the unmanned aerial vehicle and the camera from the established mathematical model and the measured ellipse parameters.
5. The color identification based indoor unmanned aerial vehicle pose acquisition system of claim 4, wherein: the image frame preprocessing unit is used for preprocessing the image frame and specifically comprises the following steps:
s21, setting camera parameters;
s22, reading the video stream from the camera, extracting image frames from the video stream, extracting an elliptical outline image of the propeller from the image frames, and performing accumulation calculation on the propeller images of m continuous frames.
6. The color identification based indoor unmanned aerial vehicle pose acquisition system of claim 4, wherein:
the unmanned aerial vehicle position calculation by the unmanned aerial vehicle position calculation unit specifically comprises the following steps:
s41, projecting the unmanned aerial vehicle in three-dimensional space onto the physical imaging plane of the camera according to the pinhole imaging model; O-X-Y-Z is the camera coordinate system, O is the optical center of the camera, the Z axis points to the front of the camera, and the directions of the X axis and the Y axis are determined by the right-hand rule; after passing through the pinhole O, an object in three-dimensional space is imaged on the physical imaging coordinate plane O'-X'-Y'; in the camera, a two-dimensional image consisting of pixels is obtained, and the plane in which this two-dimensional image lies is the pixel coordinate plane o-u-v; the conversion relation from the camera coordinate system to the pixel coordinate system, obtained from the similar-triangle relation, is shown as formula (1):
wherein, (X, Y, Z)^T are the coordinates of the unmanned aerial vehicle in the camera coordinate system, and Z equals the distance D from the camera to the unmanned aerial vehicle;
f_x denotes the normalized focal length of the camera along the X-axis direction of the camera coordinate system;
f_y denotes the normalized focal length of the camera along the Y-axis direction of the camera coordinate system;
(c_x, c_y) denotes the coordinates of the camera optical center;
s42, processing the propeller image of the unmanned aerial vehicle in real time to obtain real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in a camera coordinate system;
comparing the image of the current propeller with the previous frame to obtain the pixel coordinates (u, v) of the center point of the current propeller image, and obtaining the conversion relation from the pixel coordinate system to the camera coordinate system as shown in a formula (2) according to the conversion relation from the camera coordinate system to the pixel coordinate system;
and obtaining real-time position coordinates (X, Y, Z) of the unmanned aerial vehicle in the camera coordinate system according to the conversion relation from the pixel coordinate system to the camera coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910039933.4A CN109764864B (en) | 2019-01-16 | 2019-01-16 | Color identification-based indoor unmanned aerial vehicle pose acquisition method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910039933.4A CN109764864B (en) | 2019-01-16 | 2019-01-16 | Color identification-based indoor unmanned aerial vehicle pose acquisition method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109764864A CN109764864A (en) | 2019-05-17 |
CN109764864B true CN109764864B (en) | 2022-10-21 |
Family
ID=66452359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910039933.4A Active CN109764864B (en) | 2019-01-16 | 2019-01-16 | Color identification-based indoor unmanned aerial vehicle pose acquisition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109764864B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112558595A (en) * | 2019-09-06 | 2021-03-26 | 苏州科瓴精密机械科技有限公司 | Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium |
CN112308900B (en) * | 2020-10-22 | 2022-10-21 | 大连理工大学 | Four-rotor unmanned aerial vehicle relative pose estimation method based on LED (light emitting diode) ring detection |
CN114137975B (en) * | 2021-11-26 | 2024-07-19 | 南京工程学院 | Unmanned vehicle navigation deviation correcting method based on ultrasonic-assisted fusion positioning |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833761A (en) * | 2010-04-20 | 2010-09-15 | 南京航空航天大学 | Unmanned aerial vehicle (UAV) position and orientation estimation method based on cooperative target characteristic lines |
JP2013097154A (en) * | 2011-10-31 | 2013-05-20 | Olympus Corp | Distance measurement device, imaging apparatus, and distance measurement method |
CN104049641A (en) * | 2014-05-29 | 2014-09-17 | 深圳市大疆创新科技有限公司 | Automatic landing method and device and air vehicle |
CN105847207A (en) * | 2016-06-08 | 2016-08-10 | 金陵科技学院 | Optical wave information-based unmanned aerial vehicle identity recognition device and information modulation and demodulation method |
JP2018119852A (en) * | 2017-01-25 | 2018-08-02 | 株式会社トプコン | Position specification device, position specification method, position specification system, program for position specification, unmanned aircraft, and target for identifying unmanned aircraft |
CN107871327A (en) * | 2017-10-23 | 2018-04-03 | 武汉大学 | The monocular camera pose estimation of feature based dotted line and optimization method and system |
Non-Patent Citations (3)
Title |
---|
Quadrotor control using dual camera visual feedback; Erdinç Altuğ, et al.; Proceedings of the 2003 IEEE International Conference on Robotics & Automation; 2003-12-31; full text *
Machine-vision-based body pose detection system for a boom-type roadheader (基于机器视觉的悬臂式掘进机机身位姿检测系统); Du Yuxin et al.; Journal of China Coal Society (煤炭学报); 2016-11-15 (No. 11); full text *
Research on vision-based air-ground robot cooperation methods (基于视觉的空地机器人协作方法研究); Li Dan; China Masters' Theses Full-text Database, Information Science and Technology series (中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑); 2013-12-15 (No. S1); full text *
Also Published As
Publication number | Publication date |
---|---|
CN109764864A (en) | 2019-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8797400B2 (en) | Apparatus and method for generating an overview image of a plurality of images using an accuracy information | |
JP6496323B2 (en) | System and method for detecting and tracking movable objects | |
CN102967305B (en) | Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square | |
EP2423873B1 (en) | Apparatus and Method for Generating an Overview Image of a Plurality of Images Using a Reference Plane | |
CN106548173B (en) | A kind of improvement no-manned plane three-dimensional information acquisition method based on classification matching strategy | |
CN104197928B (en) | Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle | |
CN106326892B (en) | Visual landing pose estimation method of rotary wing type unmanned aerial vehicle | |
CN106127683B (en) | A kind of real-time joining method of unmanned aerial vehicle SAR image | |
CN113850126A (en) | Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle | |
CN108955685B (en) | Refueling aircraft taper sleeve pose measuring method based on stereoscopic vision | |
CN108665499B (en) | Near distance airplane pose measuring method based on parallax method | |
CN114332360A (en) | Collaborative three-dimensional mapping method and system | |
Yahyanejad et al. | Incremental mosaicking of images from autonomous, small-scale uavs | |
CN114004977B (en) | Method and system for positioning aerial data target based on deep learning | |
CN110570463B (en) | Target state estimation method and device and unmanned aerial vehicle | |
CN109460046B (en) | Unmanned aerial vehicle natural landmark identification and autonomous landing method | |
CN109764864B (en) | Color identification-based indoor unmanned aerial vehicle pose acquisition method and system | |
EP2166375A2 (en) | System and method of extracting plane features | |
Mondragón et al. | Omnidirectional vision applied to Unmanned Aerial Vehicles (UAVs) attitude and heading estimation | |
CN108122255A (en) | It is a kind of based on trapezoidal with circular combination terrestrial reference UAV position and orientation method of estimation | |
CN110083177A (en) | A kind of quadrotor and control method of view-based access control model landing | |
CN112947526B (en) | Unmanned aerial vehicle autonomous landing method and system | |
CN111812978B (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles | |
CN111402324B (en) | Target measurement method, electronic equipment and computer storage medium | |
CN115144879A (en) | Multi-machine multi-target dynamic positioning system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||