
CN117119287A - Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium - Google Patents


Info

Publication number
CN117119287A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
shooting
target object
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311085070.7A
Other languages
Chinese (zh)
Inventor
贾国忠
王劲
董继鹏
董杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huku Technology Co ltd
Zhejiang Geely Holding Group Co Ltd
Original Assignee
Shenzhen Huku Technology Co ltd
Zhejiang Geely Holding Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huku Technology Co ltd, Zhejiang Geely Holding Group Co Ltd filed Critical Shenzhen Huku Technology Co ltd
Priority to CN202311085070.7A
Publication of CN117119287A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device and a medium for determining the shooting angle of an unmanned aerial vehicle, applicable to the technical field of unmanned aerial vehicles. Whether the target object is at the preferred position within the shooting field of view is determined from the relation between the actual coordinate data and preset coordinate data; the unmanned aerial vehicle then flies around the preferred shooting position and shoots images containing the target object from different angles, saving the time and labor otherwise spent searching for the best angle during shooting. Based on the images containing the target object shot from different angles, a photography algorithm model determines a target shot image among the images of the plurality of shooting angles, and the angle at which that image was shot is taken as the preferred shooting angle. Image information shot from the preferred angle is thus obtained from the multi-angle images to meet the user's requirements, without a person controlling the unmanned aerial vehicle to hunt for the best angle, which reduces the difficulty of shooting.

Description

Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a method, a device and a medium for determining shooting angles of unmanned aerial vehicles.
Background
During a journey or an outing, different shooting angles produce different framing effects and image quality. To obtain a good-looking photo, a user either takes it in selfie mode, or a professional photographer searches for the best shooting spot based on the current environment, the light and the candidate angles in order to determine the best shooting angle.
With the combination of photography and unmanned aerial vehicle technology, a person controls the unmanned aerial vehicle through its remote control equipment, trying different angles until the unmanned aerial vehicle is adjusted to the best shooting angle. The whole shooting process requires this manual search for the best angle, the operator's hands are never freed, and shooting with an unmanned aerial vehicle is therefore time-consuming and laborious.
How to free the operator from manually searching for the shooting angle, so that the unmanned aerial vehicle adjusts its shooting angle automatically, is therefore a problem that those skilled in the art need to solve.
Disclosure of Invention
The invention aims to provide a method, a device and a medium for determining the shooting angle of an unmanned aerial vehicle, so as to solve the problems that an existing unmanned aerial vehicle requires the best shooting angle to be found manually, manual operation cannot be dispensed with, and the shooting process is time-consuming and laborious.
In order to solve the above technical problems, the present invention provides a method for determining a shooting angle of an unmanned aerial vehicle, which is applied to an unmanned aerial vehicle with a camera shooting function, and includes:
Acquiring coordinate data of a target object acquired by a camera of the unmanned aerial vehicle in a preset coordinate system to which the unmanned aerial vehicle belongs;
determining a preferable shooting position of the target object according to the coordinate data and preset coordinate data;
flying around the preferred shooting position of the target object to shoot images containing the target object at a plurality of shooting angles;
and determining a target shooting image in the images of the shooting angles through a shooting algorithm model so as to take the shooting angle of shooting the target shooting image as a preferable shooting angle.
Preferably, the determining the preferred shooting position of the target object according to the coordinate data and preset coordinate data includes:
judging whether the coordinate data are located in a coordinate preset range of the preset coordinate data or not;
if yes, taking the coordinate data of the target object as the preferable shooting position;
if not, returning to the step of acquiring the coordinate data of the target object acquired by the unmanned aerial vehicle camera in the preset coordinate system to which the unmanned aerial vehicle belongs until the coordinate data is positioned in the coordinate preset range of the preset coordinate data.
Preferably, the coordinate data includes position information and size data, and determining that the coordinate data is located in a coordinate preset range of the preset coordinate data includes:
judging whether the position information is located in a position coordinate preset range of the preset coordinate data, wherein the position information represents two-dimensional position information of the unmanned aerial vehicle under a preset coordinate system;
if the position information is located in a position coordinate preset range of the preset coordinate data, judging whether the size data is located in a size proportion preset range of the preset coordinate data, wherein the size data represents the size proportion of the target object under a preset coordinate system of the unmanned aerial vehicle;
and if the size data is positioned in the size proportion preset range of the preset coordinate data, determining that the coordinate data is positioned in the coordinate preset range of the preset coordinate data.
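The two-stage check above (position first, then size ratio) can be sketched as follows; the concrete thresholds (a 200-280 pixel window in a 480 x 480 coordinate system and a 0.2-0.4 size ratio) are illustrative assumptions, not values taken from the patent:

```python
def coords_in_preset_range(position, size_ratio,
                           pos_range=((200, 280), (200, 280)),
                           size_range=(0.2, 0.4)):
    """Two-stage check: first the 2-D position of the target in the
    preset coordinate system, then its size ratio. All thresholds are
    illustrative, assuming a 480 x 480 camera coordinate system."""
    (x_min, x_max), (y_min, y_max) = pos_range
    x, y = position
    if not (x_min <= x <= x_max and y_min <= y <= y_max):
        return False  # position outside the preset range: size is not checked
    s_min, s_max = size_range
    return s_min <= size_ratio <= s_max
```

Only when both checks pass is the coordinate data treated as lying in the preset range, matching the order of the two judgments in the claim.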
Preferably, determining that the unmanned aerial vehicle starts the camera includes:
receiving a first behavioral action of the target object;
analyzing the first behavior action to determine a camera opening instruction of the unmanned aerial vehicle;
starting the camera according to the camera starting instruction;
Wherein the first behavior is in a form of at least one of:
controlling a touch operation instruction of the unmanned aerial vehicle;
and controlling the voice control instruction of the unmanned aerial vehicle.
Preferably, the unmanned aerial vehicle and the target object both carry equipment with a magnetic field positioning function, and determining that the unmanned aerial vehicle starts a camera includes:
determining the actual distance between the unmanned aerial vehicle and the target object based on a magnetic field positioning function according to equipment carried by the unmanned aerial vehicle and the target object;
and in the preset time, if the actual distance is within the preset range of the distance deviation, determining that the relative distance between the unmanned aerial vehicle and the target object is kept unchanged, and starting the camera of the unmanned aerial vehicle.
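A minimal sketch of this stability test, assuming the distances have already been sampled over the preset time; the 0.5-metre deviation limit is a hypothetical value, since the claim only specifies a "preset range" of distance deviation:

```python
def distance_stable(distance_samples, deviation=0.5):
    """Return True if every magnetic-field distance sample over the
    preset window stays within `deviation` metres of the first sample,
    i.e. the relative UAV-target distance is effectively unchanged."""
    if not distance_samples:
        return False
    ref = distance_samples[0]
    return all(abs(d - ref) <= deviation for d in distance_samples)
```

When this returns True over the preset time, the camera of the unmanned aerial vehicle would be started, as the claim describes.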
Preferably, before the unmanned aerial vehicle starts the camera, the method further comprises:
pre-acquiring object characteristics of the target object to be stored in the unmanned aerial vehicle;
and identifying the target object according to the object characteristics so that the unmanned aerial vehicle can track the target object, and entering the step of starting the camera by the unmanned aerial vehicle.
Preferably, the device carried by the target object has a magnetic field positioning function, and the tracking process of the unmanned aerial vehicle for tracking the target object includes:
Receiving magnetic field intensity information sent by equipment carried by the target object;
determining the actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between the magnetic field intensity information and the distance;
and maintaining the actual distance within a preset range to track the target object.
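The claim leaves the strength-to-distance mapping unspecified; the sketch below assumes a hypothetical inverse-cube falloff (roughly how near-field magnetic dipole strength behaves), with the constant `k` and the tracking band both made up for illustration:

```python
def field_strength_to_distance(strength, k=1000.0):
    """Map received magnetic field strength to a distance, assuming a
    hypothetical inverse-cube model: strength = k / distance**3."""
    return (k / strength) ** (1.0 / 3.0)

def within_tracking_range(strength, d_min=3.0, d_max=8.0):
    """Keep the actual distance inside the preset band to keep tracking."""
    distance = field_strength_to_distance(strength)
    return d_min <= distance <= d_max
```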
Preferably, the terminal carried by the target object has a magnetic field positioning function and a GPS positioning function, and the tracking process of the unmanned aerial vehicle for tracking the target object includes:
receiving magnetic field intensity information and GPS positioning information sent by a terminal carried by the target object;
determining a first actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between the magnetic field intensity information and the distance;
determining a second actual distance between the unmanned aerial vehicle and the target object according to the GPS positioning information;
if the deviation between the first actual distance and the second actual distance is within a deviation preset range, carrying out mean value processing on the first actual distance and the second actual distance to obtain the actual distance between the unmanned aerial vehicle and the target object;
and if the deviation between the first actual distance and the second actual distance is not within the deviation preset range, taking the first actual distance as the actual distance between the unmanned aerial vehicle and the target object so as to track the target object.
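This magnetic/GPS fusion rule maps directly to a small function; only the deviation limit is an assumed value:

```python
def fuse_distances(d_magnetic, d_gps, deviation_limit=1.0):
    """Fuse the magnetic-field distance and the GPS distance as the claim
    describes: average them when they agree within the preset deviation,
    otherwise trust the magnetic-field distance alone."""
    if abs(d_magnetic - d_gps) <= deviation_limit:
        return (d_magnetic + d_gps) / 2.0
    return d_magnetic
```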
Preferably, after determining the preferred shooting position of the target object according to the coordinate data and preset coordinate data, before the unmanned aerial vehicle flies around, the method further comprises:
receiving a second behavioral action of the target object;
analyzing the second behavior action to determine a shooting instruction for the target object so as to facilitate surrounding flight shooting of the unmanned aerial vehicle;
wherein the second behavioral action is in a form of at least one of:
controlling a touch operation instruction of the unmanned aerial vehicle;
controlling a voice control instruction of the unmanned aerial vehicle;
the position information of the target object is positioned at the preferable shooting position;
and controlling an object feature matching instruction of the target object to which the unmanned aerial vehicle belongs.
Preferably, the photography algorithm model is built on a professional photography knowledge base, is trained on photographic lighting and target object position parameters, and comprises at least one of a machine learning model, a deep learning model and an artificial intelligence model, or a combination thereof.
Preferably, the photography algorithm model stores the user's shooting behavior habit parameters and preference parameters in advance, is built on a professional photography knowledge base, is trained on photographic lighting and target object position parameters, and comprises at least one of a machine learning model, a deep learning model and an artificial intelligence model, or a combination thereof.
Preferably, the determining, by the photography algorithm model, a target photographed image in the images of the plurality of photographing angles includes:
acquiring a photographing behavior habit parameter of the user;
invoking the photographic algorithm model to input the shooting behavior habit parameters and a plurality of images of shooting angles;
and obtaining output parameters of the photographic algorithm model as the target photographed image.
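A sketch of this selection step, with the trained photography algorithm model replaced by a hypothetical scoring function (`brightness_score` below stands in for the real model, which the patent does not specify):

```python
def select_target_image(images, habit_params, score_fn):
    """Feed the user's shooting-habit parameters and the multi-angle
    images to a stand-in for the photography algorithm model and return
    the highest-scoring image as the target shot image."""
    return max(images, key=lambda img: score_fn(img, habit_params))

def brightness_score(img, habit_params):
    # Hypothetical scorer: prefer images whose brightness is closest to
    # the user's preferred brightness.
    return -abs(img["brightness"] - habit_params["preferred_brightness"])
```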
Preferably, after determining the preferred shooting angle, the method further comprises:
marking a shooting-point prompt line, based on SLAM technology and AR enhancement technology, in the image shot at the preferred shooting angle;
and prompting the user to adjust the shooting angle and shooting position of the target object according to the shooting-point prompt line.
Preferably, after determining the target captured image, the method further includes:
receiving a third behavioral action of the target object;
analyzing and processing the third behavior action to determine a camera closing instruction of the unmanned aerial vehicle;
and closing the camera of the unmanned aerial vehicle according to the camera closing instruction.
Preferably, after determining the preferred shooting position, the method further includes:
the user is reminded to shoot in a voice reminding mode.
Preferably, after determining the preferred shooting position, the method further includes:
Determining a motion state of the target object;
if the motion state is a static state, entering into the step of surrounding flight taking the preferable shooting position of the target object as a center to shoot images containing the target object at a plurality of shooting angles;
and if the motion state is a moving state, returning to the step of acquiring coordinate data of the target object acquired by the unmanned aerial vehicle camera in a preset coordinate system to which the unmanned aerial vehicle belongs, and tracking the target object until the motion state of the target object is the static state.
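The static/moving branch can be sketched as follows; judging motion from drift in recent coordinate samples, and the 5-pixel tolerance, are assumptions for illustration:

```python
def is_static(coord_history, tolerance=5.0):
    """Treat the target as static if no recent coordinate sample drifts
    more than `tolerance` pixels from the first sample."""
    x0, y0 = coord_history[0]
    return all(abs(x - x0) <= tolerance and abs(y - y0) <= tolerance
               for x, y in coord_history)

def next_step(coord_history):
    # Static: proceed to the surround-flight shooting step;
    # moving: return to coordinate acquisition and keep tracking.
    return "surround_shoot" if is_static(coord_history) else "keep_tracking"
```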
Preferably, after determining the preferred shooting angle, the method further includes:
screening the target shooting images based on personal shooting image preference of the user to obtain specific target shooting images with the characteristics of the user;
and sending the specific target shooting image to a terminal so as to facilitate the user to screen and obtain a final shooting image.
In order to solve the technical problem, the invention also provides an unmanned aerial vehicle, which has a camera shooting function and comprises:
an acquisition module for acquiring coordinate data of the target object, collected by the camera of the unmanned aerial vehicle, in a preset coordinate system to which the unmanned aerial vehicle belongs;
a first determining module for determining the preferred shooting position of the target object according to the coordinate data and preset coordinate data;
a shooting module for flying around the preferred shooting position of the target object to shoot images containing the target object at a plurality of shooting angles;
and a second determining module for determining a target shot image among the images of the plurality of shooting angles through a photography algorithm model, so as to take the angle at which the target shot image was shot as the preferred shooting angle.
In order to solve the technical problem, the invention further provides a device for determining the shooting angle of the unmanned aerial vehicle, which comprises the following steps:
a memory for storing a computer program;
and the processor is used for realizing the steps of the unmanned aerial vehicle shooting angle determining method when executing the computer program.
In order to solve the above technical problem, the present invention further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the steps of the method for determining a shooting angle of an unmanned aerial vehicle.
The method is applied to an unmanned aerial vehicle with a camera function. Coordinate data of a target object, collected by the camera of the unmanned aerial vehicle in a preset coordinate system to which the unmanned aerial vehicle belongs, is acquired, and the preferred shooting position of the target object in the camera's shooting interface is determined according to the relation between the coordinate data and preset coordinate data. Whether the target object is at the preferred position within the shooting field of view can thus be determined from that relation; the unmanned aerial vehicle then flies around the preferred shooting position and shoots images containing the target object from different angles, saving the time and labor cost of searching for the best angle before shooting. Based on the images containing the target object shot from different angles, a photography algorithm model determines a target shot image among the images of the plurality of shooting angles, and the angle at which it was shot is taken as the preferred shooting angle; image information shot from the preferred angle is obtained from the multi-angle images to meet the user's requirements, so that no person needs to control the unmanned aerial vehicle to find the best angle, and the difficulty of shooting is reduced.
In addition, the invention also provides a device and a medium for determining the shooting angle of the unmanned aerial vehicle, and the device and the medium have the same beneficial effects as the method for determining the shooting angle of the unmanned aerial vehicle.
Drawings
For a clearer description of embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described, it being apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
Fig. 1 is a flowchart of a method for determining a shooting angle of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 2 is a flowchart of another method for determining a shooting angle of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 3 is a structural diagram of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 4 is a block diagram of a device for determining a shooting angle of an unmanned aerial vehicle according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present invention.
The invention aims to provide a method, a device and a medium for determining the shooting angle of an unmanned aerial vehicle, so as to solve the problems that the existing unmanned aerial vehicle needs to find the optimal shooting angle manually, manual operation cannot be released, and the shooting process is time-consuming and labor-consuming.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
It should be noted that with existing unmanned aerial vehicles, shooting the target object requires manually controlling the unmanned aerial vehicle through its remote control equipment to try different angles; the whole shooting process cannot dispense with manual operation, so it is time-consuming and laborious. The method for determining the shooting angle of an unmanned aerial vehicle provided by the invention solves this technical problem: the shooting angle is found without manual operation.
Fig. 1 is a flowchart of a method for determining a shooting angle of an unmanned aerial vehicle according to an embodiment of the present invention, where, as shown in fig. 1, the method is applied to an unmanned aerial vehicle with a camera shooting function, and includes:
S11: acquiring coordinate data of a target object, collected by the camera of the unmanned aerial vehicle, in a preset coordinate system to which the unmanned aerial vehicle belongs;
S12: determining a preferred shooting position of the target object according to the coordinate data and preset coordinate data;
S13: flying around the preferred shooting position of the target object to shoot images containing the target object at a plurality of shooting angles;
S14: determining a target shot image among the images of the plurality of shooting angles through a photography algorithm model, and taking the angle at which the target shot image was shot as the preferred shooting angle.
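The four steps can be strung together as a short control loop; the four callables are hypothetical stand-ins for the drone operations the flowchart names, not an API from the patent:

```python
def determine_preferred_angle(coordinate_stream, in_preset_range,
                              surround_shoot, select_target):
    """High-level sketch of steps S11-S14."""
    # S11/S12: keep acquiring coordinates until the target lies in the
    # preset range; that position is the preferred shooting position.
    coords = next(coordinate_stream)
    while not in_preset_range(coords):
        coords = next(coordinate_stream)
    # S13: surround flight centred on the preferred shooting position.
    images = surround_shoot(coords)
    # S14: the photography algorithm model picks the target shot image;
    # its shooting angle is the preferred shooting angle.
    angle, _image = select_target(images)
    return angle
```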
Specifically, coordinate data of the target object collected by the camera is acquired in a preset coordinate system to which the unmanned aerial vehicle belongs; this occurs only in the target-following mode entered after the camera is started. The preset coordinate system may be a coordinate system defined on the operation interface of the unmanned aerial vehicle, a coordinate system defined by the maximum pixel resolution of the camera, or a virtual coordinate system used to simulate and record the coordinate data of the target object; no limitation is imposed here. For example, in a coordinate system determined by a camera resolution of 480 x 480 pixels, the upper-left corner is (0, 0) and the maximum abscissa and ordinate are each 480. A corresponding coordinate system may be set according to the actual situation.
The target object may be a user, or another animal or object, etc., without limitation. The coordinate data of the target object in the preset coordinate system is obtained by mapping the coordinates of the target object, tracked in real time by the camera of the unmanned aerial vehicle, into the preset coordinate system. It can be understood that the target object may be moving all the time, so its coordinate data changes in real time; the method may acquire the coordinate data in real time or at intervals.
The preferred shooting position of the target object is determined according to the coordinate data and the preset coordinate data, and it should be noted that the preset coordinate data may be a center origin of a preset coordinate system, or may be any point in the preset coordinate system, where the point may be set according to an actual situation.
The specific position of the target object in the preset coordinate system of the unmanned aerial vehicle is determined from the coordinate data and the preset coordinate data. When the coordinate data lies within the preset range of the preset coordinate data, the preferred shooting position of the target object in the preset coordinate system can be determined: the relative distance and the offset of the target object from the unmanned aerial vehicle are derived from the specific position and the size contained in the coordinate data, and the preferred shooting position is determined from that relative distance and offset.
As for the specific position and the size, once both satisfy their respective conditions, the relative distance between the target object and the unmanned aerial vehicle can be regarded as unchanged, and with the relative distance kept unchanged, the position corresponding to the current coordinate data of the target object is taken as the preferred shooting position. The specific value of the relative distance is preset by the user at the control end according to need.
After the preferred shooting position is determined, the unmanned aerial vehicle flies around it to shoot images containing the target object at a plurality of shooting angles. The surround flight may use a preset surround speed, i.e. a lateral speed, or the surround speed may be calculated from the current flight parameters of the unmanned aerial vehicle according to some algorithm. The plurality of shooting angles may divide the 360-degree circle equally; alternatively, relative to a preset orientation of the target object, a small angular interval may be used within the angular range around that orientation to obtain dense shooting angles, and a large interval outside it to obtain sparse shooting angles. No limitation is imposed here; shooting may be arranged according to the actual situation.
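The dense/sparse angle schedule described here can be sketched as below; every number (preferred heading, 45-degree dense half-width, 5-degree and 30-degree steps) is illustrative:

```python
def surround_angles(preferred_heading=0.0, dense_half_width=45.0,
                    dense_step=5.0, sparse_step=30.0):
    """Generate shooting angles for the surround flight: a small angular
    interval (dense shots) near the preferred heading, a large interval
    (sparse shots) elsewhere. Angles are in degrees."""
    angles = []
    a = 0.0
    while a < 360.0:
        angles.append(a)
        # Angular distance from the preferred heading, wrapped to [0, 180].
        diff = abs((a - preferred_heading + 180.0) % 360.0 - 180.0)
        a += dense_step if diff <= dense_half_width else sparse_step
    return angles
```

Setting `dense_step == sparse_step` recovers the equal-division case the description also allows.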
The photography algorithm model is trained from the perspective of a professional photographer: feature vectors are extracted from pre-imported sample data and trained, so that the resulting model reflects a professional photographer's judgment; the feature vectors may take into account parameters such as the lighting of the picture and the surrounding environment. It should be noted that in the embodiments of the invention the photography algorithm model may be processed by the unmanned aerial vehicle itself or by another terminal in communication with it, such as a vehicle terminal, a mobile phone or other ground equipment, without limitation; since image processing occupies considerable resources, processing on another terminal is a preferred embodiment.
The photography algorithm model determines the target shot image among the images of the plurality of shooting angles; the target shot image may be determined by the terminal and then sent to the unmanned aerial vehicle, or determined by the unmanned aerial vehicle itself. The number of target shot images is not limited and may be one or several.
Taking the angle at which the target shot image was shot as the preferred shooting angle, the shooting angle of the unmanned aerial vehicle when shooting the target object, and the distance between the unmanned aerial vehicle and the target object, can be known from the target shot image.
The method is applied to an unmanned aerial vehicle with a camera function. Coordinate data of a target object, collected by the camera of the unmanned aerial vehicle in a preset coordinate system to which the unmanned aerial vehicle belongs, is acquired, and the preferred shooting position of the target object in the camera's shooting interface is determined according to the relation between the coordinate data and preset coordinate data. Whether the target object is at the preferred position within the shooting field of view can thus be determined from that relation; the unmanned aerial vehicle then flies around the preferred shooting position and shoots images containing the target object from different angles, saving the time and labor cost of searching for the best angle before shooting. Based on the images containing the target object shot from different angles, a photography algorithm model determines a target shot image among the images of the plurality of shooting angles, and the angle at which it was shot is taken as the preferred shooting angle; image information shot from the preferred angle is obtained from the multi-angle images to meet the user's requirements, so that no person needs to control the unmanned aerial vehicle to find the best angle, and the difficulty of shooting is reduced.
On the basis of the above embodiment, determining the preferred shooting position of the target object according to the coordinate data and the preset coordinate data in step S12 includes:
judging whether the coordinate data are located in a coordinate preset range of preset coordinate data or not;
if yes, taking coordinate data of the target object as a preferable shooting position;
if not, returning to the step of acquiring the coordinate data of the target object acquired by the unmanned aerial vehicle camera in the preset coordinate system of the unmanned aerial vehicle until the coordinate data is positioned in the coordinate preset range of the preset coordinate data.
Specifically, since the unmanned aerial vehicle is in a real-time flight state, its coordinate data is constantly changing. Requiring strict matching between the coordinate data and the preset coordinate data, that is, requiring the two to be completely identical, is impractical because interference factors introduce certain errors during flight. Therefore, if the coordinate data of the target object falls within the coordinate preset range of the preset coordinate data, the coordinate data of the target object can be taken as the preferred shooting position. If the coordinate data is not within the coordinate preset range of the preset coordinate data, the method returns to step S11 and continues tracking until the coordinate data falls within the coordinate preset range of the preset coordinate data.
Alternatively, in order to select the shooting position more precisely, the condition may be tightened so that the coordinate data must match the preset coordinate data exactly; this can be set according to the actual situation.
In determining the preferred shooting position of the target object, the error caused by interference is taken into account, so the candidate positions are expanded from a single point to the coordinate preset range of the preset coordinate data, which broadens the set of possible preferred shooting positions.
On the basis of the above embodiment, the coordinate data includes position information and size data, and determining that the coordinate data is located within the coordinate preset range of the preset coordinate data includes:
judging whether the position information is located within a position coordinate preset range of the preset coordinate data, wherein the position information represents two-dimensional position information of the target object in the preset coordinate system of the unmanned aerial vehicle;
if the position information is located within the position coordinate preset range of the preset coordinate data, judging whether the size data is located within a size proportion preset range of the preset coordinate data, wherein the size data represents the size proportion of the target object in the preset coordinate system of the unmanned aerial vehicle;
if the size data is located within the size proportion preset range of the preset coordinate data, determining that the coordinate data is located within the coordinate preset range of the preset coordinate data.
If the coordinate data is within the coordinate preset range of the preset coordinate data, the coordinate data of the target object is determined as the preferred shooting position. If it is not within the coordinate preset range, the current distance between the unmanned aerial vehicle and the target object is either too far or too near, the relative distance cannot be kept constant, and tracking continues until the relative distance is stable. It should be noted that the coordinate data includes position information and size data. The position information is the specific two-dimensional position of the target object in the preset coordinate system of the unmanned aerial vehicle. The size data characterizes the relative distance between the target object and the unmanned aerial vehicle: during tracking, it is the size to which the target object maps in the preset coordinate system of the unmanned aerial vehicle. If the size data is large and exceeds the preset size data, the target object is currently too close to the unmanned aerial vehicle. Only when both kinds of data meet their corresponding preset ranges can the coordinate data be determined to be located within the coordinate preset range of the preset coordinate data.
Since the judgment that the coordinate data is located within the coordinate preset range of the preset coordinate data is made on the basis of both the position information and the size data, the accuracy of the subsequently determined preferred shooting position is ensured.
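The two-stage check described above can be expressed as a short sketch. This is purely illustrative: the normalized-coordinate convention, the preset center, the position tolerance, and the size-ratio bounds are all assumed values, not taken from the embodiment.

```python
# Illustrative sketch of the two-stage range check. The preset center,
# tolerance, and size-ratio bounds are hypothetical values.

def within_preset_range(position, size_ratio,
                        preset_position=(0.5, 0.5),
                        position_tolerance=0.1,
                        size_bounds=(0.2, 0.4)):
    """Return True when the target's 2-D position and size proportion
    both satisfy their preset ranges, so its coordinate data may serve
    as the preferred shooting position."""
    px, py = position
    cx, cy = preset_position
    # Stage 1: position check against the position coordinate preset range.
    if abs(px - cx) > position_tolerance or abs(py - cy) > position_tolerance:
        return False
    # Stage 2: size-ratio check -- the target is neither too near (ratio
    # too large) nor too far (ratio too small).
    low, high = size_bounds
    return low <= size_ratio <= high
```

With these assumed bounds, a target roughly centered in the frame at a moderate size passes, while an off-center or oversized target fails and tracking would continue.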
On the basis of the above embodiment, determining that the unmanned aerial vehicle starts the camera includes:
receiving a first behavior action of the target object;
analyzing the first behavior action to determine a camera opening instruction of the unmanned aerial vehicle;
starting the camera according to the camera opening instruction;
wherein the first behavior action takes at least one of the following forms:
a touch operation instruction for controlling the unmanned aerial vehicle;
a voice control instruction for controlling the unmanned aerial vehicle.
Specifically, behavior actions refer to the various actions or responses that a human or animal can produce, including instinctive reactions and conscious activities. Instinctive reactions are actions that rely on intrinsic or natural conditions, such as blinking or sneezing; conscious activities are activities of the mind, emotion, and thought in the higher nervous system that give rise to deliberate, autonomously controlled actions, such as walking, eating, or dancing. The first behavior action of the target object is received on the basis of instruction control from the control end of the target object. The instruction control is generally a touch operation instruction, a voice control instruction, or the like. The touch operation instruction may be a key operation or a touch operation. The key operation may be triggered by pressing a key, such as a camera key, on an operation platform (for example a handle) that controls the unmanned aerial vehicle, or by a specific key operation on a terminal that establishes remote communication with the unmanned aerial vehicle, such as a wearable device, a vehicle-mounted terminal, a mobile phone, or a computer. The corresponding touch operation may be performed on a touch screen of the display interface on the operation platform of the unmanned aerial vehicle, or by touching virtual keys or triggering a certain instruction on a terminal that establishes remote communication with the unmanned aerial vehicle, such as a wearable device, a vehicle-mounted terminal, a mobile phone, or a computer. The present invention is not limited in this respect, and the form may be set according to actual conditions.
The voice control instruction is a control operation instruction obtained by collecting voice through a voice sensor, converting the voice into a voice signal, and performing signal conversion processing. After the voice signal is collected by a terminal device that is remotely connected with the unmanned aerial vehicle and equipped with a voice sensor, such as a wearable device, a vehicle-mounted terminal, a computer, or a mobile phone, it can be converted into a corresponding control operation instruction. The process of collecting and processing the voice signal may follow existing signal processing procedures, or a proprietary signal processing method may be set; this is not limited herein.
Both of the above control instructions are instructions for controlling the unmanned aerial vehicle; the control may be remote control or direct control, which is not limited herein. The first behavior action is analyzed to determine the camera opening instruction of the unmanned aerial vehicle. It should be noted that the analysis may be performed algorithmically or on the basis of a preset instruction convention, for example, a high level signals that the camera is to be opened, and a low level signals that the current camera state is to be maintained. The unmanned aerial vehicle is then controlled according to the camera opening instruction to open the camera.
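A minimal sketch of this dispatch is given below. The action names are invented for illustration; the high/low-level convention follows the example in the paragraph above.

```python
# Hypothetical mapping from a received first behavior action to a camera
# instruction; the action names and the high/low signal convention are
# illustrative assumptions.

OPEN_CAMERA = "open_camera"
KEEP_STATE = "keep_current_state"

def parse_first_behavior(action=None, signal_level=None):
    """Determine the camera instruction either from a named touch/voice
    action or from a preset level convention (high opens the camera,
    low keeps the current state)."""
    touch_actions = {"camera_key_press", "virtual_key_touch"}
    voice_actions = {"open camera", "start camera"}
    if action in touch_actions or action in voice_actions:
        return OPEN_CAMERA
    if signal_level is not None:
        return OPEN_CAMERA if signal_level == 1 else KEEP_STATE
    return KEEP_STATE
```

Either trigger path (a recognized touch/voice action, or a high-level signal) resolves to the same camera opening instruction, mirroring the text's point that the analysis may be algorithmic or convention-based.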
Starting the camera of the unmanned aerial vehicle on the basis of the first behavior action, as provided by this embodiment of the invention, takes the power consumption of the unmanned aerial vehicle into account: the camera is started only after the first behavior action occurs, which saves energy to a certain extent and improves the endurance of the unmanned aerial vehicle.
On the basis of the above embodiment, for following and positioning between the unmanned aerial vehicle and the target object, both the unmanned aerial vehicle and the target object carry equipment with a magnetic field positioning function, so that the unmanned aerial vehicle can follow the target object through magnetic field positioning; that is, the following distance and position information between the unmanned aerial vehicle and the target object can be obtained from the change of the magnetic field intensity.
As one embodiment, where the unmanned aerial vehicle and the target object both carry equipment with a magnetic field positioning function, determining that the unmanned aerial vehicle starts the camera includes:
determining the actual distance between the unmanned aerial vehicle and the target object based on the magnetic field positioning function according to equipment carried by the unmanned aerial vehicle and the target object;
and in the preset time, if the actual distance is within the preset range of the distance deviation, determining that the relative distance between the unmanned aerial vehicle and the target object is kept unchanged, and starting a camera of the unmanned aerial vehicle.
Specifically, positioning between the unmanned aerial vehicle and the target object is realized with a magnetic field positioning function used in place of a Global Positioning System (GPS) positioning function. The equipment carried by the unmanned aerial vehicle and the target object must have the magnetic field positioning function so that the actual distance between them can be determined. Within a certain preset time, if the actual distance stays within the preset range of the distance deviation, the relative distance between the unmanned aerial vehicle and the target object is considered unchanged, and the camera is started. Keeping the relative distance unchanged means that the actual distance between the unmanned aerial vehicle and the target object does not change, whether the target object is moving or stationary.
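The start condition above can be sketched as follows; the expected distance, deviation bound, and sample count are hypothetical stand-ins for the preset time window and the distance-deviation range.

```python
# Sketch: start the camera only when every distance sample taken over the
# preset time stays within the deviation range, i.e. the relative distance
# between drone and target is effectively unchanged. All numbers are
# illustrative assumptions.

def should_start_camera(distance_samples, expected_distance=5.0,
                        max_deviation=0.5, min_samples=10):
    """distance_samples: actual distances (meters) measured via magnetic
    field positioning during the preset time window."""
    if len(distance_samples) < min_samples:
        return False  # preset time window not yet covered
    return all(abs(d - expected_distance) <= max_deviation
               for d in distance_samples)
```

A single sample outside the deviation range, or too short an observation window, keeps the camera off and tracking continues.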
For the magnetic field positioning, any existing magnetic field positioning method can be adopted to determine the actual distance between the unmanned aerial vehicle and the target object; this embodiment is not particularly limited. The magnetic field intensity used in magnetic field positioning varies only slowly (for example seasonally), so compared with GPS positioning signals, which are strongly affected by interfering signals, the magnetic field positioning signal is stable.
The camera-starting scheme provided by this embodiment of the invention is based on magnetic field positioning: the camera is started when the unmanned aerial vehicle tracks the target object with the relative distance kept unchanged, and the magnetic field positioning signal is stable compared with GPS positioning signals, which suffer greater interference from interfering signals.
On the basis of the above embodiment, as an embodiment, before the unmanned aerial vehicle starts the camera, the method further includes:
obtaining the object characteristics of the target object in advance and storing them in the unmanned aerial vehicle;
identifying the target object according to the object characteristics so that the unmanned aerial vehicle can track the target object, and entering the step of starting the camera of the unmanned aerial vehicle.
Specifically, this embodiment is combined with the triggering instruction that triggers the unmanned aerial vehicle to start the camera: the object characteristics of the target object are stored in the unmanned aerial vehicle in advance, and the target object is identified according to the stored object characteristics.
That is, the feature recognition in this embodiment of the invention can match and identify parameters such as the size and model of the object against the stored object characteristics while the camera is not yet started; only the tracking process is carried out, which prevents situations such as the unmanned aerial vehicle losing its follow target. Once the unmanned aerial vehicle is tracking the target object, the method proceeds to step S11 and the camera is started to facilitate subsequent shooting.
The tracking condition provided by this embodiment of the invention is that the target object is identified through its object characteristics, which facilitates subsequently starting the camera for shooting.
On the basis of the above embodiment, as one embodiment, the device carried by the target object has a magnetic field positioning function, and the tracking process of tracking the target object by the unmanned aerial vehicle includes:
receiving magnetic field intensity information sent by equipment carried by a target object;
determining the actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between the magnetic field intensity information and the distance;
the actual distance is maintained within a preset range to achieve tracking of the target object.
In this embodiment, tracking is realized through magnetic field positioning alone. Specifically, the unmanned aerial vehicle receives the magnetic field intensity information sent by the target object, determines the actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between magnetic field intensity and distance, and tracking of the target object is achieved if the actual distance is maintained within the preset range.
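The mapping relation between magnetic field intensity and distance is left open by the embodiment; one plausible realization, assumed here purely for illustration, is a calibrated lookup table with linear interpolation (the table values below are invented).

```python
# Hypothetical strength-to-distance mapping via a calibration table of
# (field_strength, distance) pairs; the calibration values are invented.

def strength_to_distance(strength, table):
    """table: (strength, distance) pairs; the field weakens as distance
    grows, so larger strength means smaller distance."""
    points = sorted(table)  # ascending by strength
    if strength <= points[0][0]:
        return points[0][1]   # weaker than weakest sample: farthest known
    if strength >= points[-1][0]:
        return points[-1][1]  # stronger than strongest sample: nearest known
    for (s0, d0), (s1, d1) in zip(points, points[1:]):
        if s0 <= strength <= s1:
            t = (strength - s0) / (s1 - s0)
            return d0 + t * (d1 - d0)  # linear interpolation

# Invented calibration: 1.0 units of field strength at 10 m, 9.0 at 1 m.
calibration = [(1.0, 10.0), (4.0, 5.0), (9.0, 1.0)]
```

In practice the table would come from a calibration pass with the actual carried equipment; any monotone strength-to-distance model could be substituted.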
As another embodiment, a terminal carried by a target object has a magnetic field positioning function and a GPS positioning function, and a tracking process for tracking the target object by an unmanned aerial vehicle includes:
receiving magnetic field intensity information and GPS positioning information sent by a terminal carried by a target object;
determining a first actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between the magnetic field intensity information and the distance;
determining a second actual distance between the unmanned aerial vehicle and the target object according to the GPS positioning information;
if the deviation between the first actual distance and the second actual distance is within the deviation preset range, carrying out average value processing on the first actual distance and the second actual distance to obtain the actual distance between the unmanned aerial vehicle and the target object;
and if the deviation between the first actual distance and the second actual distance is not within the deviation preset range, taking the first actual distance as the actual distance between the unmanned aerial vehicle and the target object so as to track the target object.
Specifically, tracking in this embodiment combines the two functions of magnetic field positioning and GPS positioning, with GPS positioning serving as a backup reference. The first actual distance and the second actual distance between the unmanned aerial vehicle and the target object are determined by the respective positioning functions, so that the accuracy of the positioning can be verified.
If the deviation between the first actual distance and the second actual distance is within the deviation preset range, the two actual distances are averaged to determine the actual distance between the unmanned aerial vehicle and the target object. If the deviation is not within the deviation preset range, the first actual distance, that is, the distance obtained by the magnetic field positioning function, is taken as the final actual distance.
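The fusion rule in this embodiment reduces to a few lines; the deviation threshold here is an assumed placeholder.

```python
# Sketch of the dual-positioning fusion rule: average the magnetic-field
# distance (first actual distance) and the GPS distance (second actual
# distance) when they agree within the preset deviation, otherwise fall
# back to the magnetic-field distance. The threshold is illustrative.

def fuse_distances(d_magnetic, d_gps, max_deviation=1.0):
    if abs(d_magnetic - d_gps) <= max_deviation:
        return (d_magnetic + d_gps) / 2.0
    return d_magnetic
```

The fallback direction reflects the embodiment's premise that the magnetic field signal is the more interference-resistant of the two.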
In this embodiment of the invention, when both positioning functions are available, the actual distance between the unmanned aerial vehicle and the target object is determined accurately on the basis of the first and second actual distances obtained by the two positioning functions and the deviation-based processing between them.
On the basis of the above embodiment, the two-dimensional coordinate size data of the target object in the shooting field of view is matched against the preset coordinate size data. Once the preferred shooting position has been determined, two situations can occur: the target object is photographed or filmed directly, or the target object continues to move. For these cases, the subsequent shooting instruction of the target object may also be determined according to a behavior action. Therefore, as one embodiment, after the preferred shooting position of the target object is determined according to the coordinate data and the preset coordinate data, and before the unmanned aerial vehicle flies around the target, the method further includes:
receiving a second behavior action of the target object;
analyzing the second behavior action to determine a shooting instruction of the target object, so that the unmanned aerial vehicle can shoot while flying around;
wherein the second behavior action takes at least one of the following forms:
a touch operation instruction for controlling the unmanned aerial vehicle;
a voice control instruction for controlling the unmanned aerial vehicle;
the position information of the target object being located at the preferred shooting position;
an object feature matching instruction of the target object associated with the unmanned aerial vehicle.
Specifically, a second behavior action of the target object is received. The second behavior action is distinguished from the first behavior action in the above embodiment: the specific actions may be the same, but the unmanned aerial vehicle control instruction subsequently determined from them differs. The first behavior action yields an instruction to start the camera of the unmanned aerial vehicle, while the second behavior action yields an instruction controlling the unmanned aerial vehicle to shoot the target object; the purpose following the second behavior action in this embodiment is to have the unmanned aerial vehicle shoot while flying around.
The specific forms of the second behavior action, namely the touch operation instruction and the voice control instruction, are the same as those of the first behavior action; for these two control instructions, reference may be made to the above embodiment, and details are not repeated here. Triggering may also be based on the target object itself, for example, the position information of the target object being located at the preferred shooting position, or an object feature matching instruction of the target object associated with the unmanned aerial vehicle, where the feature matching instruction is based on the behavior shape of the target object and the key point data of the target object. For the behavior shape of the target object, information such as the shape and outline of the target object in an image or video is recognized and detected by a recognition algorithm and matched against outline information pre-stored in a database to identify the target object. For the key point data of the target object, key point labeling refers to manually marking key points, such as facial feature points or human skeleton joints, at designated positions, and is commonly used for training face recognition models and statistical models. Key points can represent various aspects of an image, such as corners, edges, or specific features: in facial recognition they may label the eyes, nose, and mouth, while in human pose estimation they may represent the joints of the body. Key points are among the most accurate labeling methods and can be used to prepare training data for fields such as facial expression recognition, human and animal pose estimation, navigation and driver behavior analysis, livestock behavior tracking, gesture recognition, activity recognition, robotics and manufacturing, video surveillance, motion analysis, and 3D reconstruction.
Key point labeling is used for some challenging computer vision tasks. For example, key points and key point skeletons are critical for human pose estimation or gesture recognition, since these tasks require more precise and detailed data, down to predicting the coordinates of key points in an image or video frame: a key point model predicts the exact location of a particular key point in the image or frame, a technique often used together with key point detection for motion tracking. Key point labels are also well suited for analyzing spatial relationships between multiple objects or particles, such as football players on a field. Key points provide high-quality data, but they require a large amount of manual annotation. Bounding boxes and polygon labeling are generally easier to produce and are typically used for simpler computer vision tasks such as basic object detection.
By recognizing the instruction for the unmanned aerial vehicle to shoot the target object on the basis of the second behavior action, this embodiment of the invention facilitates shooting while flying around. In addition to passive triggering of the unmanned aerial vehicle, the recognition process allows automatic triggering based on the characteristics of the photographed target object, which improves the user experience.
Based on the above embodiments, as one embodiment, the photographic algorithm model is built on the basis of a professional photography knowledge base, is obtained by training with photographic light and the position parameters of the target object, and comprises at least one model, or a combination of models, from machine learning, deep learning, and/or artificial intelligence.
Specifically, the photographic algorithm model provided by this embodiment of the invention is built on the basis of a professional photography knowledge base to take the place of a professional photographer, and is obtained by training with the photographic light of the surrounding environment and the position parameters of the target object. The photographic light determines the setting of the shooting angle and the combination of shooting poses; the position parameters of the target object may, in this embodiment, be set through parameters such as the size data and the distance of a specific position. The specific model is based on one or more models from machine learning, deep learning, artificial intelligence, and the like; this is not limited here and may be determined with reference to existing model algorithms.
Because the photographic algorithm model provided by this embodiment of the invention takes both the professional photography knowledge base and the specific shooting scene into account, it can improve the user experience.
As another embodiment, the photographic algorithm model is pre-stored with the habit parameters and preference parameters of the user's photographing behavior, is built on the basis of a professional photography knowledge base, is obtained by training with photographic light and the position parameters of the target object, and comprises at least one model, or a combination of models, from machine learning, deep learning, and/or artificial intelligence.
Specifically, both models provided by this embodiment of the invention are built on the basis of a professional photography knowledge base to take the place of a professional photographer, and are obtained by training with the photographic light of the surrounding environment, the user's photographing preferences and behavior habits, and the position parameters of the target object. The photographic light determines the setting of the shooting angle and the combination of shooting poses, supplemented by the user's personal behavior; for example, if the user's left profile is more photogenic than the right, the shooting pose needs to be matched to that habit. The position parameters of the target object may, in this embodiment, be set through parameters such as the size data and the distance of a specific position. The specific model is based on one or more models from machine learning, deep learning, artificial intelligence, and the like; this is not limited here and may be determined with reference to existing model algorithms.
The photographic algorithm model provided by this embodiment of the invention considers the professional photography knowledge base and the specific shooting scene while also incorporating the user's behavior habits; compared with the previous embodiment, it can better personalize the user's photographs and thus improve the user experience.
On the basis of the above embodiment, determining the target shot image from among the images of the plurality of shooting angles through the photographic algorithm model in step S14 includes:
acquiring the photographing behavior habit parameters of the user;
invoking the photographic algorithm model and inputting the photographing behavior habit parameters and the images of the plurality of shooting angles;
obtaining the output of the photographic algorithm model as the target shot image.
Specifically, the images of the plurality of shooting angles and the user's photographing behavior habit parameters are input into the original photographic algorithm model. The photographing behavior habit parameters in this embodiment may be changed by the user at any time or periodically; this is not limited here. For example, the photographing behavior habit parameters input to the photographic algorithm model the first time continue to be used for subsequent inputs of images of the plurality of shooting angles if they are not later modified. The model may be updated in real time before each input, or the display interface may ask the user whether modification is needed. The output of the photographic algorithm model is obtained as the target shot image, which may be one image or several images; this is not limited here and may be set separately for each of the user's photographing behavior habit parameters.
The user's photographing behavior habit parameters mainly cover the user's specific photographing habits, such as exposure, macro distance, and aperture, or the more flattering aspects of the user's face and standing posture.
Adding the user's photographing behavior habit parameters to the output that determines the target shot image incorporates the user's individuality into the target image and thus improves the user experience.
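The embodiment does not fix the model's interface. Assuming a scoring-style model, the selection step might look like the following sketch, where `score_fn` and the toy scorer are hypothetical stand-ins for the trained photographic algorithm model.

```python
# Illustrative selection of the target shot image: feed the user's
# photographing behavior habit parameters together with the candidate
# images of several shooting angles, and keep the best-scoring candidate.
# `score_fn` is a hypothetical stand-in for the trained model.

def select_target_image(angle_images, habit_params, score_fn):
    """angle_images: list of (angle_degrees, image) pairs.
    Returns the (angle, image) pair the model rates highest."""
    return max(angle_images,
               key=lambda pair: score_fn(pair[1], habit_params))

# Toy scorer: prefer sharper images, weighted by a habit parameter.
def toy_score(image, habit_params):
    return image["sharpness"] * habit_params.get("sharpness_weight", 1.0)
```

The angle of the winning pair then serves as the preferred shooting angle; returning the top-k pairs instead of one would cover the case where several target shot images are wanted.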
On the basis of the above embodiment, as one embodiment, after the preferred shooting angle is determined, the method further includes:
marking a shooting point prompt line in the image shot at the preferred shooting angle based on SLAM technology and AR enhancement technology;
prompting the user to adjust the shooting angle and the position of the target object according to the shooting point prompt line.
Specifically, after the preferred shooting angle and preferred shooting position are determined, shooting point prompt lines may be marked in the image. Lines create visual points of interest through pattern, rhythm, and flow, and draw attention to things. One of the most important lines in photography is the leading line: a line in the composition that guides the viewer's eyes from one part of the picture to another. It is typically used to point to a subject or region of interest in a picture and is a very effective tool for controlling how a viewer reads a picture. Leading lines are used in all types of photography to guide the viewer and organize the frame.
Simultaneous Localization and Mapping (SLAM) mainly solves the problems of positioning and map building while moving in an unknown environment. Here, SLAM technology is combined with AR technology, whose principle is to digitize the surrounding environment into a 3D model and render virtual images superimposed on it. The positioning and tracking technology can effectively handle position drift and scene confusion, and on the basis of visual tracking including SLAM, depth can be combined with a geographic coordinate system to construct a realistic virtual space.
Through these two technologies, the prompt line is marked in the projected image of the display interface, and the user is prompted to adjust the shooting angle and the position of the target object according to the marked prompt line.
Based on SLAM technology and AR enhancement technology, the shooting point and shooting pose prompt lines can conveniently be marked in the projected image to guide the user in adjusting the shooting angle and the position of the target object, which improves the user's shooting efficiency.
On the basis of the above embodiment, considering the power consumption of the unmanned aerial vehicle, as one embodiment, after the target shot image is determined, the method further includes:
receiving a third behavior action of the target object;
analyzing the third behavior action to determine a camera closing instruction of the unmanned aerial vehicle;
closing the camera of the unmanned aerial vehicle according to the camera closing instruction.
Specifically, the third behavior action is distinguished from the first and second behavior actions in the above embodiments. The specific actions of the three may be the same, but the unmanned aerial vehicle control instruction subsequently determined from them differs: the first behavior action yields an instruction to start the camera, the second yields an instruction controlling the unmanned aerial vehicle to shoot the target object, and the third yields an instruction controlling the unmanned aerial vehicle to close the camera. The purpose following the third behavior action in this embodiment is to close the camera of the unmanned aerial vehicle.
The form of the third behavior action may be the same as or different from the forms of the first and second behavior actions, and the corresponding analysis method may likewise be the same or different; these may be set according to the actual situation and are not limited here.
Recognizing the instruction to close the camera of the unmanned aerial vehicle on the basis of the third behavior action saves energy and extends endurance.
On the basis of the above embodiment, as one embodiment, after the preferred shooting position is determined, the method further includes:
reminding the user to shoot by means of a voice reminder.
It should be noted that when the preferred shooting position is obtained, the target object is in motion, and the user may miss the subsequent shooting because of other distractions during the shooting process. A reminder is therefore added. This could be a prompt displayed on the interface, but given the user's actual circumstances, an on-screen prompt is easily overlooked outdoors, so this embodiment reminds the user to shoot by means of a voice reminder.
The voice reminding mode may include a ring tone, a vibration-with-sound reminder, a voice call, and the like, which is not limited herein.
After the preferred shooting position is determined, the user is reminded to shoot by voice, which improves the user's shooting efficiency and prevents a shot from being missed while the target object is moving.
On the basis of the above embodiment, two cases may occur once the preferred shooting position has been determined: either the target object is photographed (or recorded on video) directly, or the target object continues to move. As one embodiment, after the preferred shooting position is determined, the method further includes:
Determining the motion state of a target object;
if the motion state is a stationary state, entering the step of flying around with the preferred shooting position of the target object as a center to shoot images containing the target object at a plurality of shooting angles;
and if the motion state is a moving state, returning to the step of acquiring coordinate data of the target object acquired by the unmanned aerial vehicle camera in a preset coordinate system to which the unmanned aerial vehicle belongs, and tracking the target object until the motion state of the target object is a static state.
Specifically, after the preferred shooting position is determined, the motion state of the target object must be determined. When the motion state is a stationary state, a shooting command is issued and the method proceeds to the step of flying around the target object, that is, orbiting with the preferred shooting position of the target object as the center, so as to shoot images of the target object at a plurality of shooting angles.
If the motion state of the target object is a moving state, the unmanned aerial vehicle continues to track the target to determine the relative positional relationship between the two, until the motion state of the target object becomes stationary.
By determining the motion state of the target object, the unmanned aerial vehicle orbits the target object only when the target is stationary, so that images at a plurality of shooting angles can be captured and the accuracy of the shooting angle is improved.
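The decision loop described above (orbit only when the target is stationary, otherwise re-acquire coordinates and keep tracking) can be sketched as follows; all callables are hypothetical placeholders for the steps in the text:

```python
# Illustrative sketch of the post-positioning decision loop. The four
# callables stand in for the steps described above; none of their names
# or behaviors come from the patent itself.

def shoot_when_stationary(get_motion_state, acquire_coordinates, track,
                          orbit_and_shoot, max_iterations=100):
    """Orbit and shoot once the target is stationary; track it until then."""
    for _ in range(max_iterations):
        if get_motion_state() == "stationary":
            return orbit_and_shoot()   # fly around the preferred position
        acquire_coordinates()          # re-acquire the target's coordinate data
        track()                        # keep following the moving target
    raise TimeoutError("target never became stationary")
```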
On the basis of the above embodiment, as one embodiment, after determining the preferred shooting angle, the method further includes:
screening the target shooting images based on personal shooting image preference of the user to obtain specific target shooting images with the characteristics of the user;
and sending the specific target shooting image to the terminal so as to facilitate the screening of the user to obtain a final shooting image.
After the preferred photographing angle is determined, there may be a plurality of candidate shooting angles. The target photographed images are therefore screened again based on the user's personal photographed-image preference to obtain a specific target photographed image that retains the user's own characteristics. It should be noted that the user's personal photographed-image preference can be understood as the kind of images the user prefers of himself or herself, such as profile, upper-body, or whole-body shots.
It can be understood that, in the embodiment of the present invention, the screening process for obtaining the specific target photographed image may be based on a specific screening rule or on a screening model, which is not limited herein. The screening in this embodiment may be performed in the unmanned aerial vehicle itself, or by a terminal or operation platform connected to the unmanned aerial vehicle.
The specific target photographed image is sent to a terminal so that the user can screen it to obtain a final photographed image, where the terminal is a terminal device or wearable device carried by the user. The screening at this stage may use the same method as the previous stage or a different one, set according to the actual situation. The wearable device in the embodiment of the present invention is preferably Augmented Reality (AR) glasses, Virtual Reality (VR) glasses, or a wearable device based on Extended Reality (XR) technology; it may also be a wearable watch or the like, which is not limited herein.
The target photographed images are screened based on the user's personal photographed-image preference to obtain specific target photographed images with the user's characteristics, and the specific target photographed image is sent to the terminal so that the user can conveniently select a final photographed image. The screening criterion is a photo-selection preference preset by the user in advance or chosen by the user each time, which saves the user time in selecting photos.
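A minimal sketch of the preference-based screening, assuming each candidate image carries a framing tag and the user's preference is a single preferred framing; the rule-based filter here stands in for the screening rule or screening model mentioned above:

```python
# Illustrative sketch of screening candidate images by a user's personal
# preference. The "framing" tag and its values are assumptions; a trained
# screening model could replace this rule-based filter.

def screen_by_preference(images, preferred_framing):
    """Return images whose framing tag matches the user's stated preference.

    `images` is a list of dicts like {"id": ..., "framing": "upper_body"}.
    """
    matches = [img for img in images if img.get("framing") == preferred_framing]
    # Fall back to all candidates if nothing matches, so the user still
    # receives images from which to select a final shot.
    return matches if matches else list(images)
```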
Fig. 2 is a flowchart of another method for determining a shooting angle of an unmanned aerial vehicle according to an embodiment of the present invention. As shown in Fig. 2, the method includes:
S21: the unmanned aerial vehicle follows the target object;
s22: waking up the unmanned aerial vehicle to start the camera;
s23: the unmanned aerial vehicle frames the scene to find the target object and acquire its two-dimensional coordinate information;
s24: taking a target object as a center point to obtain a plurality of optimal shooting angles;
s25: the target object and the optimal shooting point are marked on the map based on the optimal shooting angle.
Specifically, the following distance and the relative positions of the unmanned aerial vehicle and the target object can be obtained from changes in magnetic-field strength. Compared with following by GPS alone or by image-recognition framing alone, this allows the unmanned aerial vehicle to follow regardless of whether its GPS is accurate or its camera is started; since the camera function need not be enabled while following, energy is saved to a certain extent and the endurance of the unmanned aerial vehicle is increased. The unmanned aerial vehicle follows the target object until it is woken by the user (through a button or touch on a wearable device, voice control, and the like), whereupon it starts the camera. The camera then searches for the target object to acquire its current two-dimensional coordinate data, frames the target object through the automatic gimbal camera, and matches the two-dimensional coordinate size data of the target object in the viewfinder against preset coordinate size data. The unmanned aerial vehicle circles the target object to find the optimal shooting point and captures a plurality of images at optimal shooting angles. Through SLAM technology, the coordinate information of the optimal shooting position and the position of the target object are displayed in an AR-enhanced view, and the optimal shooting point is marked, so that once the optimal position is recorded, other users can directly take the best photos based on the displayed position.
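The magnetic-field following above relies on a mapping from field strength to distance. A minimal sketch, assuming a small dipole-like beacon whose field magnitude falls off with roughly the cube of distance; the calibration constant and control thresholds are illustrative assumptions, not values from this disclosure:

```python
# Illustrative sketch: estimating follow distance from magnetic-field
# strength by inverting a dipole-style falloff B = k / r**3, then deciding
# how to adjust the following behavior. k and the tolerance are assumed
# calibration values.

def distance_from_field(field_strength, k=1.0):
    """Invert B = k / r**3 to recover the distance r (field_strength > 0)."""
    if field_strength <= 0:
        raise ValueError("field strength must be positive")
    return (k / field_strength) ** (1.0 / 3.0)

def follow_command(distance, target_distance, tolerance=0.5):
    """Return a coarse velocity command to keep the follow distance steady."""
    if distance > target_distance + tolerance:
        return "advance"   # fallen behind: close in on the target
    if distance < target_distance - tolerance:
        return "retreat"   # too close: back off
    return "hold"          # within tolerance: distance is unchanged
```

Holding `follow_command` at "hold" over a preset time window corresponds to the text's condition that the relative distance remains within a preset deviation range.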
The method for determining the shooting angle of the unmanned aerial vehicle is applied to an unmanned aerial vehicle with a camera function. The unmanned aerial vehicle circles the target to shoot images containing the target object from different angles, saving the time and labor otherwise spent searching for the optimal angle before shooting. Based on these images at different angles, a photography algorithm model determines a target photographed image among the images at the plurality of shooting angles and takes the angle at which it was shot as the preferred shooting angle, so that the captured image information meets the user's requirements. Controlling the unmanned aerial vehicle to find the optimal angle and letting the photography algorithm model select the preferred shooting angle among the candidates reduces the difficulty of shooting.
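The final selection by the photography algorithm model can be sketched as a scoring-and-selection step; the scoring function below is a stand-in for the trained model (which the text says may combine machine learning, deep learning, or artificial intelligence), not an implementation from this disclosure:

```python
# Illustrative sketch: score each candidate image captured during the orbit
# and take the angle of the best-scoring image as the preferred shooting
# angle. `score_fn` is a placeholder for the trained photography model.

def select_preferred_angle(images, score_fn):
    """images: list of (angle_degrees, image_data) pairs.

    Returns (preferred_angle, target_photographed_image).
    """
    if not images:
        raise ValueError("no candidate images to select from")
    best_angle, best_image = max(images, key=lambda pair: score_fn(pair[1]))
    return best_angle, best_image
```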
On the basis of the detailed embodiments of the method for determining the shooting angle of the unmanned aerial vehicle, the invention further discloses an unmanned aerial vehicle corresponding to the method. Fig. 3 is a structural diagram of the unmanned aerial vehicle provided by an embodiment of the invention. As shown in fig. 3, the unmanned aerial vehicle includes:
an acquisition module 11, configured to acquire coordinate data of a target object acquired by a camera of the unmanned aerial vehicle in a preset coordinate system to which the unmanned aerial vehicle belongs;
A first determining module 12, configured to determine a preferred shooting position of the target object according to the coordinate data and preset coordinate data;
a photographing module 13 for flying around the preferable photographing position of the target object as a center to photograph images including the target object at a plurality of photographing angles;
the second determining module 14 is configured to determine, by using a photography algorithm model, a target photographed image from among the images of the plurality of photographed angles, so as to take the photographed angle of the photographed target photographed image as a preferred photographed angle.
Since the embodiments of the device portion correspond to the above embodiments, the embodiments of the device portion are described with reference to the embodiments of the method portion, and are not described herein.
Fig. 4 is a block diagram of a device for determining a shooting angle of an unmanned aerial vehicle according to an embodiment of the present invention, as shown in fig. 4, where the device includes:
a memory 21 for storing a computer program;
the processor 22 is configured to implement the steps of the method for determining the shooting angle of the unmanned aerial vehicle when executing the computer program.
Processor 22 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 22 may be implemented in hardware as at least one of a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or a programmable logic array (Programmable Logic Array, PLA). The processor 22 may also include a main processor and a coprocessor: the main processor, also referred to as a central processing unit (Central Processing Unit, CPU), processes data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 22 may be integrated with a graphics processor (Graphics Processing Unit, GPU) responsible for rendering the content to be displayed on the display screen. In some embodiments, the processor 22 may also include an artificial intelligence (Artificial Intelligence, AI) processor for handling computing operations related to machine learning.
Memory 21 may include one or more computer-readable storage media, which may be non-transitory. Memory 21 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 21 is at least used for storing a computer program 211, which, when loaded and executed by the processor 22, implements the relevant steps of the method for determining the shooting angle of the unmanned aerial vehicle disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 21 may further include an operating system 212, data 213, and the like, stored transiently or permanently. The operating system 212 may include Windows, Unix, Linux, and the like. The data 213 may include, but is not limited to, data related to the method for determining the shooting angle of the unmanned aerial vehicle.
In some embodiments, the device for determining the shooting angle of the unmanned aerial vehicle may further include a display screen 23, an input/output interface 24, a communication interface 25, a power supply 26, and a communication bus 27.
It will be appreciated by those skilled in the art that the configuration shown in fig. 4 does not constitute a limitation of the device for determining the shooting angle of the unmanned aerial vehicle, which may include more or fewer components than those illustrated.
The processor 22 invokes the instructions stored in the memory 21 to implement the method for determining the shooting angle of the unmanned aerial vehicle provided in any of the above embodiments.
For the description of the device for determining the shooting angle of the unmanned aerial vehicle provided by the invention, refer to the embodiment of the method, the invention is not repeated herein, and the device has the same beneficial effects as the method for determining the shooting angle of the unmanned aerial vehicle.
Further, the present invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by the processor 22 implements the steps of the method for determining a shooting angle of a drone as described above.
It will be appreciated that the methods of the above embodiments, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored on a computer readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium for performing all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
For the introduction of the computer readable storage medium provided by the present invention, please refer to the above method embodiment, the present invention is not described herein, and the method has the same advantages as the above method for determining the shooting angle of the unmanned aerial vehicle.
The method for determining the shooting angle of the unmanned aerial vehicle, the device for determining the shooting angle of the unmanned aerial vehicle, and the medium provided by the invention are described in detail above. In the description, each embodiment is described in a progressive manner, each embodiment focusing on its differences from the others, so identical or similar parts among the embodiments may be referred to one another. For the device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively brief, and relevant points can be found in the description of the method. It should be noted that various modifications and adaptations of the invention can be made by those skilled in the art without departing from the principles of the invention, and such modifications and adaptations are intended to fall within the scope of the invention as defined in the following claims.
It should also be noted that in this specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (20)

1. The method for determining the shooting angle of the unmanned aerial vehicle is characterized by being applied to the unmanned aerial vehicle with a camera shooting function and comprising the following steps of:
acquiring coordinate data of a target object acquired by a camera of the unmanned aerial vehicle in a preset coordinate system to which the unmanned aerial vehicle belongs;
determining a preferable shooting position of the target object according to the coordinate data and preset coordinate data;
surrounding the target object around the preferable shooting position to shoot images containing the target object at a plurality of shooting angles;
and determining a target shooting image in the images of the shooting angles through a shooting algorithm model so as to take the shooting angle of shooting the target shooting image as a preferable shooting angle.
2. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 1, wherein the determining the preferred shooting position of the target object according to the coordinate data and preset coordinate data comprises:
judging whether the coordinate data are located in a coordinate preset range of the preset coordinate data or not;
if yes, taking the coordinate data of the target object as the preferable shooting position;
if not, returning to the step of acquiring the coordinate data of the target object acquired by the unmanned aerial vehicle camera in the preset coordinate system to which the unmanned aerial vehicle belongs until the coordinate data is positioned in the coordinate preset range of the preset coordinate data.
3. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 2, wherein the coordinate data includes position information and size data, and determining that the coordinate data is located in a coordinate preset range of the preset coordinate data includes:
judging whether the position information is located in a position coordinate preset range of the preset coordinate data, wherein the position information represents two-dimensional position information of the unmanned aerial vehicle under a preset coordinate system;
if the position information is located in a position coordinate preset range of the preset coordinate data, judging whether the size data is located in a size proportion preset range of the preset coordinate data, wherein the size data represents the size proportion of the target object under a preset coordinate system of the unmanned aerial vehicle;
and if the size data is positioned in the size proportion preset range of the preset coordinate data, determining that the coordinate data is positioned in the coordinate preset range of the preset coordinate data.
4. The method of determining a shooting angle of an unmanned aerial vehicle according to claim 1, wherein determining that the unmanned aerial vehicle starts the camera comprises:
receiving a first behavioral action of the target object;
Analyzing the first behavior action to determine a camera opening instruction of the unmanned aerial vehicle;
starting the camera according to the camera starting instruction;
wherein the first behavior is in a form of at least one of:
controlling a touch operation instruction of the unmanned aerial vehicle;
and controlling the voice control instruction of the unmanned aerial vehicle.
5. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 1, wherein the unmanned aerial vehicle and the target object each carry a device having a magnetic field positioning function, and determining that the unmanned aerial vehicle starts a camera comprises:
determining the actual distance between the unmanned aerial vehicle and the target object based on a magnetic field positioning function according to equipment carried by the unmanned aerial vehicle and the target object;
and in the preset time, if the actual distance is within the preset range of the distance deviation, determining that the relative distance between the unmanned aerial vehicle and the target object is kept unchanged, and starting the camera of the unmanned aerial vehicle.
6. The method of determining a shooting angle of a drone according to claim 1, further comprising, before the drone starts the camera:
pre-acquiring object characteristics of the target object to be stored in the unmanned aerial vehicle;
And identifying the target object according to the object characteristics so that the unmanned aerial vehicle can track the target object, and entering the step of starting the camera by the unmanned aerial vehicle.
7. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 6, wherein the device carried by the target object has a magnetic field positioning function, and the tracking process of the unmanned aerial vehicle for tracking the target object comprises:
receiving magnetic field intensity information sent by equipment carried by the target object;
determining the actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between the magnetic field intensity information and the distance;
and maintaining the actual distance within a preset range to track the target object.
8. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 6, wherein the terminal carried by the target object has a magnetic field positioning function and a GPS positioning function, and the tracking process of the unmanned aerial vehicle for tracking the target object comprises:
receiving magnetic field intensity information and GPS positioning information sent by a terminal carried by the target object;
determining a first actual distance between the unmanned aerial vehicle and the target object according to the mapping relation between the magnetic field intensity information and the distance;
Determining a second actual distance between the unmanned aerial vehicle and the target object according to the GPS positioning information;
if the deviation between the first actual distance and the second actual distance is within a deviation preset range, carrying out mean value processing on the first actual distance and the second actual distance to obtain the actual distance between the unmanned aerial vehicle and the target object;
and if the deviation between the first actual distance and the second actual distance is not within the deviation preset range, taking the first actual distance as the actual distance between the unmanned aerial vehicle and the target object so as to track the target object.
9. The method according to claim 2, characterized in that after said determining a preferred shooting position of said target object based on said coordinate data and preset coordinate data, before said unmanned aerial vehicle flies around, further comprising:
receiving a second behavioral action of the target object;
analyzing the second behavior action to determine a shooting instruction for the target object so as to facilitate surrounding flight shooting of the unmanned aerial vehicle;
wherein the second behavioral action is in a form of at least one of:
Controlling a touch operation instruction of the unmanned aerial vehicle;
controlling a voice control instruction of the unmanned aerial vehicle; the position information of the target object is positioned at the preferable shooting position;
and controlling an object feature matching instruction of the target object to which the unmanned aerial vehicle belongs.
10. The unmanned aerial vehicle shooting angle determining method according to claim 1, wherein the shooting algorithm model is built based on a shooting expertise base, is trained by combining shooting rays and the target object position parameters, and at least comprises one or more model combinations of machine learning, deep learning and/or artificial intelligence.
11. The method for determining the shooting angle of the unmanned aerial vehicle according to claim 1, wherein the shooting algorithm model is pre-stored with shooting behavior habit parameters and hobby parameters of a user, is built based on a shooting expertise base, is obtained by training in combination with shooting light and the target object position parameters, and at least comprises one or more model combinations of machine learning, deep learning and/or artificial intelligence.
12. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 11, wherein determining a target shooting image from among a plurality of images of the shooting angle by a shooting algorithm model comprises:
Acquiring a photographing behavior habit parameter of the user;
invoking the photographic algorithm model to input the shooting behavior habit parameters and a plurality of images of shooting angles;
and obtaining output parameters of the photographic algorithm model as the target photographed image.
13. The method for determining a shooting angle of an unmanned aerial vehicle according to any one of claims 1 to 12, further comprising, after determining the preferred shooting angle:
marking a shooting point position prompt line in an image shot by the optimal shooting angle based on SLAM technology and AR enhancement technology;
and prompting a user to adjust the shooting angle and the shooting position of the target object according to the shooting point position prompting line.
14. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 13, further comprising, after determining the target shooting image:
receiving a third behavioral action of the target object;
analyzing and processing the third behavior action to determine a camera closing instruction of the unmanned aerial vehicle;
and closing the camera of the unmanned aerial vehicle according to the camera closing instruction.
15. The method of determining a shooting angle of a drone according to claim 2, further comprising, after determining the preferred shooting position:
The user is reminded to shoot in a voice reminding mode.
16. The method of determining a shooting angle of a drone according to claim 2, further comprising, after determining the preferred shooting position:
determining a motion state of the target object;
if the motion state is a static state, entering into the step of surrounding flight taking the preferable shooting position of the target object as a center to shoot images containing the target object at a plurality of shooting angles;
and if the motion state is a moving state, returning to the step of acquiring coordinate data of the target object acquired by the unmanned aerial vehicle camera in a preset coordinate system to which the unmanned aerial vehicle belongs, and tracking the target object until the motion state of the target object is the static state.
17. The method for determining a shooting angle of an unmanned aerial vehicle according to claim 1, further comprising, after determining the preferred shooting angle:
screening the target shooting images based on personal shooting image preference of the user to obtain specific target shooting images with the characteristics of the user;
and sending the specific target shooting image to a terminal so as to facilitate the user to screen and obtain a final shooting image.
18. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle has a camera function and comprises:
the acquisition module is used for acquiring coordinate data of the target object acquired by the unmanned aerial vehicle camera in a preset coordinate system to which the unmanned aerial vehicle belongs;
the first determining module is used for determining the preferable shooting position of the target object according to the coordinate data and preset coordinate data;
a photographing module for performing round-robin flight centering on a preferred photographing position of the target object to photograph images including the target object at a plurality of photographing angles;
and the second determining module is used for determining a target shooting image in the images of the shooting angles through a shooting algorithm model so as to take the shooting angle of shooting the target shooting image as a preferable shooting angle.
19. A device for determining a shooting angle of an unmanned aerial vehicle, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method for determining a shooting angle of a drone according to any one of claims 1 to 17 when executing the computer program.
20. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method of determining a shooting angle of a drone according to any one of claims 1 to 17.
CN202311085070.7A 2023-08-25 2023-08-25 Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium Pending CN117119287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311085070.7A CN117119287A (en) 2023-08-25 2023-08-25 Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311085070.7A CN117119287A (en) 2023-08-25 2023-08-25 Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium

Publications (1)

Publication Number Publication Date
CN117119287A true CN117119287A (en) 2023-11-24

Family

ID=88807158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311085070.7A Pending CN117119287A (en) 2023-08-25 2023-08-25 Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium

Country Status (1)

Country Link
CN (1) CN117119287A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117636166A (en) * 2023-11-30 2024-03-01 南宁职业技术学院 Unmanned aerial vehicle-based aerial photo fruit counting method and system
CN117636166B (en) * 2023-11-30 2024-08-20 南宁职业技术学院 Unmanned aerial vehicle-based aerial photo fruit counting method and system

Similar Documents

Publication Publication Date Title
US11509817B2 (en) Autonomous media capturing
US10057485B2 (en) Imaging apparatus and methods for generating a guide display using imaging height posture information
KR101964223B1 (en) System and method for augmented and virtual reality
US9460340B2 (en) Self-initiated change of appearance for subjects in video and images
TWI615776B (en) Method and system for creating virtual message onto a moving object and searching the same
JP2019145108A (en) Electronic device for generating image including 3d avatar with facial movements reflected thereon, using 3d avatar for face
US20130177296A1 (en) Generating metadata for user experiences
WO2017160370A1 (en) Visualization of image themes based on image content
CN108600632B (en) Photographing prompting method, intelligent glasses and computer readable storage medium
US20200371535A1 (en) Automatic image capturing method and device, unmanned aerial vehicle and storage medium
US20240290009A1 (en) Augmented reality map curation
US20220398775A1 (en) Localization processing service
WO2022227393A1 (en) Image photographing method and apparatus, electronic device, and computer readable storage medium
KR20150126938A (en) System and method for augmented and virtual reality
CN109905593A (en) A kind of image processing method and device
EP3087727B1 (en) An emotion based self-portrait mechanism
CN117119287A (en) Unmanned aerial vehicle shooting angle determining method, unmanned aerial vehicle shooting angle determining device and unmanned aerial vehicle shooting angle determining medium
US20200250498A1 (en) Information processing apparatus, information processing method, and program
US11875080B2 (en) Object sharing method and apparatus
US12100229B2 (en) Object scanning for subsequent object detection
CN112655021A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110084306B (en) Method and apparatus for generating dynamic image
CN117061857A (en) Unmanned aerial vehicle automatic shooting method and device, unmanned aerial vehicle and medium
JP7559758B2 (en) Image processing device, image processing method, and program
US12149819B2 (en) Autonomous media capturing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination