CN114666505A - Method and system for controlling unmanned aerial vehicle to shoot and unmanned aerial vehicle system - Google Patents
- Publication number: CN114666505A
- Application number: CN202210298220.1A
- Authority: CN (China)
- Prior art keywords: shooting scene, unmanned aerial vehicle, scene model, shooting
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Abstract
The invention provides a method for controlling shooting by an unmanned aerial vehicle equipped with a pan-tilt camera, the method comprising the following steps: constructing a plurality of shooting scene models, each having corresponding preset shooting parameters; selecting one shooting scene model from the plurality of shooting scene models according to user input; determining whether the image returned by the unmanned aerial vehicle matches the selected shooting scene model; and, when the image returned by the unmanned aerial vehicle matches the selected shooting scene model, setting the shooting parameters of the pan-tilt camera to the preset shooting parameters corresponding to the currently selected shooting scene model. The invention also relates to a system for controlling a drone, a drone system, and a computer-readable storage medium. This technical scheme makes videos shot by the unmanned aerial vehicle more intelligent and professional, improves the quality and efficiency of video shooting and video editing, keeps operation simple and engaging, and thereby improves the user experience.
Description
Technical Field
The application belongs to the technical field of unmanned aerial vehicles, and particularly relates to a method for controlling shooting of an unmanned aerial vehicle, a system for controlling the unmanned aerial vehicle, an unmanned aerial vehicle system and a computer readable storage medium.
Background
With the rapid development of intelligent hardware, artificial intelligence, and internet streaming media technologies, innovative combinations of these technologies bring users a better experience; such combinations are not only a trend but also what users expect. Unmanned aerial vehicle (UAV) aerial photography uses a UAV as an aerial platform and airborne remote sensing equipment, such as a high-resolution CCD digital camera, a lightweight optical camera, an infrared scanner, a laser scanner, or a magnetometer, to acquire information; a computer then processes the image information and produces images that satisfy given precision requirements. The whole system has outstanding characteristics in design and optimal combination, and is a novel application technology integrating high-altitude photography, remote control, telemetry, microwave video transmission, and computer image processing.
UAV aerial images offer high definition, large scale, and coverage of small areas, and are particularly suitable for acquiring aerial images of strip-shaped regions (roads, railways, rivers, reservoirs, coastlines, and the like). The UAV is also a convenient aerial photography platform whose shooting location is easy to change: it is subject to few site restrictions for takeoff and landing, can take off and land on playgrounds, highways, or other open ground, has high stability and good safety, and can easily move between shooting locations. However, in most current UAV shooting, an operator must manually select the object to shoot on the display device of the ground station, manually set the video parameters and shooting mode, and then control the shooting. Such a control mode demands a certain level of photographic skill from the user, and it is difficult to capture images with the ideal effect. Even when different users shoot videos in the same scene, the video quality is uneven and hard to guarantee, especially for novices. In addition, when editing the captured video, a user without professional video processing skills or an understanding of video editing will produce edited videos of varying quality and effect, and high-quality video works are difficult to produce.
Disclosure of Invention
In view of one or more drawbacks of the prior art, the present invention provides a method of controlling the filming of a drone, the drone including a pan-tilt camera, the method comprising:
S101: constructing a plurality of shooting scene models, wherein each shooting scene model has corresponding preset shooting parameters;
S102: selecting one shooting scene model from the plurality of shooting scene models according to user input;
S103: determining whether the image returned by the unmanned aerial vehicle matches the selected shooting scene model; and
S104: when the image returned by the unmanned aerial vehicle matches the selected shooting scene model, setting the shooting parameters of the pan-tilt camera to the preset shooting parameters corresponding to the currently selected shooting scene model.
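Steps S101 to S104 above can be sketched as a small control flow. This is a hedged illustration only: the names `ShootingSceneModel` and `control_shooting`, and the parameter keys, are assumptions for the sketch and do not appear in the patent.

```python
from dataclasses import dataclass

@dataclass
class ShootingSceneModel:
    name: str
    preset_params: dict  # e.g. focal length, aperture, shutter speed, ISO

def control_shooting(models, user_choice, image_matches):
    """S102: select a scene model; S103: check the returned image against it;
    S104: return the preset parameters to apply to the pan-tilt camera,
    or None (S105: do not shoot / pause shooting) on a mismatch."""
    selected = models[user_choice]          # S102
    if image_matches(selected):             # S103
        return selected.preset_params       # S104
    return None                             # S105

models = {
    "mountain_river": ShootingSceneModel(
        "mountain_river",
        {"focal_length_mm": 90, "aperture": "F10",
         "shutter_s": "1/250", "iso": 100}),
}
params = control_shooting(models, "mountain_river", lambda m: True)
```

The matching predicate is passed in as a callable so that either the matching-degree method or the element-comparison method described later can be plugged in.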
According to an aspect of the present invention, wherein the step S102 comprises: displaying a plurality of options corresponding to the plurality of shooting scene models on an electronic device in communication with the drone, and receiving a user selection of the plurality of options.
According to an aspect of the present invention, wherein the step S103 comprises:
receiving a high-definition image returned by the real-time image transmission of the unmanned aerial vehicle;
decoding and compressing the high-definition image;
calculating the matching degree of the decoded and compressed image and the selected shooting scene model, wherein when the matching degree is higher than a threshold value, the image returned by the unmanned aerial vehicle is determined to be matched with the selected shooting scene model.
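The matching-degree computation above can be illustrated with a toy metric. The patent only specifies a matching degree compared against a threshold; the normalized histogram intersection used here, and the `matching_degree`/`matches_scene` names, are assumptions for the sketch.

```python
def histogram(pixels, bins=8):
    """Bucket 0-255 intensities into `bins` normalized counts."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def matching_degree(frame, sample, bins=8):
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    h1, h2 = histogram(frame, bins), histogram(sample, bins)
    return sum(min(a, b) for a, b in zip(h1, h2))

def matches_scene(frame, sample, threshold=0.75):
    """Declare a match when the matching degree exceeds the threshold."""
    return matching_degree(frame, sample) >= threshold

frame = [10, 20, 200, 210] * 100   # toy "decoded and compressed" frame
sample = [12, 18, 205, 215] * 100  # toy sample image from the scene model
degree = matching_degree(frame, sample)
```

A real system would compare feature embeddings of the downscaled frame rather than raw intensity histograms, but the threshold logic is the same.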
According to an aspect of the present invention, the step S104 comprises: when the image returned by the unmanned aerial vehicle matches the selected shooting scene model, transmitting, by the electronic device via wireless communication, the preset shooting parameters corresponding to the currently selected shooting scene model to the unmanned aerial vehicle.
According to an aspect of the present invention, wherein the step S103 further comprises:
acquiring elements in the image;
comparing the image with the selected shooting scene model according to the elements in the image, and judging whether the image returned by the unmanned aerial vehicle matches the selected shooting scene model.
According to an aspect of the invention, further comprising:
S105: when the image returned by the unmanned aerial vehicle does not match the selected shooting scene model, controlling the unmanned aerial vehicle not to shoot or to pause shooting.
According to one aspect of the invention, the method further comprises the following steps:
S106: receiving a video shot by the unmanned aerial vehicle;
S107: receiving the video clip requirements of a user;
S108: selecting one video clip model from a plurality of preset video clip models according to the video and the video clip requirements;
S109: processing and outputting the video according to the selected video clip model.
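Steps S106 to S109 can be sketched as follows. The selection rule (scoring preset clip models by keyword overlap with the user's requirement) is invented for illustration; the patent leaves the selection criterion open, and the model names and tags are hypothetical.

```python
def select_clip_model(requirement, clip_models):
    """S108: pick the preset clip model whose tags best match the user's request."""
    def score(model):
        # Count how many of the model's tags appear in the requirement text.
        return sum(tag in requirement for tag in model["tags"])
    return max(clip_models, key=score)

clip_models = [
    {"name": "highlight_reel", "tags": ["highlight", "short"]},
    {"name": "travel_montage", "tags": ["travel", "scenery", "music"]},
]
chosen = select_clip_model("a short highlight video of the flight", clip_models)
```

A production implementation would also analyze the video content itself (S108 selects "according to the video and the video clip requirements"), not just the textual request.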
The invention also relates to a system for controlling a drone, comprising:
a construction unit configured to construct a plurality of shooting scene models, each shooting scene model having corresponding preset shooting parameters;
a user input unit configured to select one shooting scene model from the plurality of shooting scene models;
a matching unit configured to determine whether the image returned by the unmanned aerial vehicle matches the selected shooting scene model; and
a transmission unit configured to transmit the preset shooting parameters corresponding to the currently selected shooting scene model to the unmanned aerial vehicle when the image returned by the unmanned aerial vehicle matches the selected shooting scene model.
The invention also relates to an unmanned aerial vehicle system comprising:
an unmanned aerial vehicle comprising a pan-tilt camera disposed thereon, the pan-tilt camera configured to capture an image;
an electronic control device in wireless communication with the drone and configured to perform the method as described above.
According to an aspect of the invention, further comprising:
a server in communication with the electronic control device and configured to provide the plurality of shooting scene models to the electronic control device.
The invention also relates to a computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the method as described above.
By adopting the technical scheme of the invention, the videos shot by the unmanned aerial vehicle become more intelligent and professional, the quality and efficiency of video shooting and video editing are improved, and the operation is simple, convenient, and engaging, thereby improving the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 shows a flow chart of a method of controlling drone filming according to one embodiment of the invention;
fig. 2 shows a schematic diagram of a drone system 200 according to one embodiment of the invention;
FIG. 3A shows a schematic diagram of an electronic control device 203 according to one embodiment of the invention;
fig. 3B shows a schematic diagram of an electronic control device 203 according to another embodiment of the invention;
fig. 3C shows a schematic diagram of an electronic control device 203 according to a preferred embodiment of the invention;
fig. 4 shows a schematic diagram of determining whether the image returned by the drone 201 matches the selected shooting scene model; and
fig. 5 shows a schematic diagram of a system 300 for controlling a drone according to one embodiment of the invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations and positional relationships based on those shown in the drawings, are merely for convenience and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted", "connected", and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical connection, an electrical connection, or mutual communication; a direct connection, or an indirect connection through an intervening medium; or an internal communication between two elements or an interaction between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the first and second features are in direct contact, or that the first and second features are in indirect contact via another feature between them. Moreover, the first feature being "on", "above", or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. The first feature being "under", "below", or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Moreover, the present invention may repeat reference numerals and/or reference letters in the various examples, which have been repeated for purposes of simplicity and clarity and do not in themselves dictate a relationship between the various embodiments and/or configurations discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 shows a flow chart of a method of controlling drone filming according to one embodiment of the invention. The unmanned aerial vehicle includes a pan-tilt camera disposed thereon, the pan-tilt camera configured to capture images. The various steps of method 100 are described in detail below.
In step S101, a plurality of shooting scene models are constructed, each having corresponding preset shooting parameters.
When shooting with an unmanned aerial vehicle, there can be various shooting requirements; common examples include shooting city buildings, people streams, grass and forest, people portraits, animal portraits, still scene portraits, motion capture, night scenes, and mountains and rivers. For each shooting scene there may be corresponding preset shooting parameters. For example, night scene shooting requires a longer exposure time and a higher sensitivity, while animal portraits require a shorter exposure time and a lower sensitivity. Therefore, for each shooting scene, corresponding shooting parameters can be set according to the specific conditions of the scene so as to construct a shooting scene model. According to a preferred embodiment of the present invention, a shooting scene model may include the parameters of the scene, one or more example pictures, and the preset shooting parameters; or, optionally, a shooting scene model may include the scene name (or number) and the preset shooting parameters.
According to a preferred embodiment of the invention, based on the various shooting requirements, common shooting scenes can be intelligently classified by an AI deep-learning intelligent framing and shooting algorithm, and each class is trained with the deep-learning algorithm to generate a corresponding shooting scene model.
According to one embodiment of the invention, a shooting scene model of a grass forest is constructed, for example, as follows. The unmanned aerial vehicle shoots multiple frames in the grass-forest scene, and the captured images are marked according to elements (key objects) in the images, such as grass, trees, and the color green. Training samples are generated from the marked images, training is performed on newly input images, and a shooting scene model of the grass forest is generated. Each shooting scene model has corresponding preset shooting parameters, including but not limited to, for example, focal length, aperture, shutter speed, sensitivity (ISO), and white balance. The preset shooting parameters are the shooting parameters corresponding to the best shooting effect of each shooting scene model. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
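The grass-forest example above can be illustrated as a data structure plus a labeling step. The field names, parameter values, and the `label_frame` helper are hypothetical; the patent does not specify a storage format or concrete parameter values for this scene.

```python
GRASS_FOREST_MODEL = {
    "name": "grass_forest",
    "key_objects": {"grass", "trees", "green"},  # labels from the marked frames
    "preset_params": {                           # illustrative values only
        "focal_length_mm": 35,
        "aperture": "F5.6",
        "shutter_s": "1/500",
        "iso": 200,
    },
}

def label_frame(detected_objects, model):
    """Mark a captured frame as a training sample: keep the detected
    objects that belong to the model's element (key-object) set."""
    return set(detected_objects) & model["key_objects"]

sample_labels = label_frame({"grass", "sky", "trees"}, GRASS_FOREST_MODEL)
```

In the patent's workflow these labeled frames would feed the deep-learning training that produces the deployed scene model.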
In step S102, one shooting scene model is selected from a plurality of shooting scene models according to a user input.
Fig. 2 shows a schematic diagram of a drone system 200 according to one embodiment of the invention. The system 200 includes: a drone 201 comprising a pan-tilt camera 202 disposed thereon, the pan-tilt camera 202 configured to capture an image; an electronic control device 203, said electronic control device 203 being in wireless communication with said drone 201 and configured to execute the method 100 of controlling the filming of the drone 201; a server 204, said server 204 being in communication with said electronic control device 203 and configured to provide a plurality of photographic scene models to said electronic control device 203.
The electronic control device 203 may be a dedicated drone controller, or a mobile phone with a drone-control APP installed, a PAD, a computer, or another type of electronic device, and communicates with the drone through Bluetooth, Wi-Fi, or a 3G, 4G, or 5G wireless network. An AI deep-learning intelligent framing and shooting algorithm can run on the server 204, which generates shooting scene models from input pictures or videos shot in each scene and provides them to the electronic control device 203. The electronic control device 203 may communicate with the server 204 in real time to obtain the shooting scene models, or may store the shooting scene models locally in advance and periodically communicate with the server 204 to update them; both approaches fall within the protection scope of the present invention. Although the electronic control device 203 and the server 204 are shown as separate devices in fig. 2, the server 204 may also be integrated into the electronic control device 203, which, however, places higher demands on the power consumption and computing power of the electronic control device 203.
Fig. 3A shows a schematic diagram of an electronic control device 203 according to an embodiment of the invention. As shown in fig. 3A, the electronic control device 203 includes a plurality of mechanical keys/knobs 2032, which correspond one to one to the plurality of shooting scene models. Specifically, as shown in fig. 3A, the mechanical key/knob 2032-1 corresponds to the shooting scene "city building", 2032-2 to "people stream", 2032-3 to "grass forest", 2032-4 to "people portrait", 2032-5 to "animal portrait", 2032-6 to "still scene portrait", 2032-7 to "motion capture", 2032-8 to "night scene shooting", and 2032-9 to "mountain river". The user can directly select the desired shooting scene through these mechanical keys/knobs. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention; that is, the shooting scenes are not limited to these, and neither are the arrangement, type, and number of the mechanical keys/knobs 2032.
Fig. 3B shows a schematic diagram of an electronic control device 203 according to another embodiment of the invention. As shown in fig. 3B, the electronic control device 203 includes a display screen 2031 and a plurality of mechanical keys/knobs 2032. The display screen 2031 is configured to display a plurality of options corresponding to a plurality of shooting scene models, and the plurality of mechanical keys/knobs 2032 are in one-to-one correspondence with the plurality of shooting scene models displayed by the display screen 2031, and are configured to receive a selection of a user for an option corresponding to a shooting scene model. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention. That is, the present invention is not limited to the specific type, number, and arrangement of the display 2031 and the mechanical keys/knobs 2032.
Fig. 3C shows a schematic view of the electronic control device 203 according to a preferred embodiment of the invention. As shown in fig. 3C, the electronic control apparatus 203 includes a display screen 2031. The display screen 2031 is used for displaying a plurality of options corresponding to a plurality of shooting scene models. Preferably, the display screen 2031 is a liquid crystal touch screen, and a user can select a desired shooting scene model on the display screen 2031 directly by using a finger. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention. In other words, the present invention does not limit the specific model of the liquid crystal touch screen.
In step S103, it is determined whether the image returned by the drone 201 matches the selected shooting scene model.
Fig. 4 shows a schematic diagram of determining whether the image returned by the drone 201 matches the selected shooting scene model. In step S102, the user selects one of the plurality of shooting scene models through the electronic control device 203 according to his or her own shooting requirements. After receiving the input instruction of the shooting scene model selected by the user, the electronic control device 203 communicates wirelessly with the unmanned aerial vehicle 201 to perform a test shot, receives a high-definition image returned by the real-time image transmission of the unmanned aerial vehicle 201 in step S1031, and decodes and compresses the high-definition image in step S1032. According to a preferred embodiment of the invention, the high-definition image is processed into an image with a resolution of 224 x 224. Then, in step S1033, the matching degree between the compressed image and a sample image in the shooting scene model selected by the user is calculated; when the matching degree is higher than a threshold value (for example, 75%), it is determined that the image returned by the drone matches the selected shooting scene model. Specifically, elements in the decoded and compressed image are obtained and compared with the elements in a sample image of the selected shooting scene model, so as to judge whether the image returned by the unmanned aerial vehicle matches the selected shooting scene model. The elements described herein are the key objects in the image. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
According to a preferred embodiment of the present invention, suppose the shooting scene model selected by the user is "motion capture". After the high-definition image returned by the real-time image transmission of the drone 201 is received, decoded, and compressed, a plurality of elements (key objects) in the image are extracted; specifically, for example, colors, contours, and the like can be extracted, and the present invention is not limited to a specific extraction method. If the extracted elements (key objects) include a badminton court, a player, and a badminton racket, it can be judged that what the unmanned aerial vehicle is shooting is a "motion capture" scene, and the image returned by the drone 201 matches the "motion capture" shooting scene model selected by the user. It is to be understood that this embodiment is by way of illustration only and is not to be construed as limiting the invention.
According to another preferred embodiment of the present invention, suppose the shooting scene selected by the user is "mountain river". After the high-definition image returned by the real-time image transmission of the drone 201 is received, decoded, and compressed to the target resolution, a plurality of elements (key objects) in the image are extracted. If the extracted elements (key objects) include a mountain or a river, it can be determined that the scene shot by the drone 201 is "mountain river", which matches the "mountain river" shooting scene model selected by the user. Similarly, if the user selects another shooting scene model, such as city building, traffic flow, grass forest, people portrait, animal portrait, still scene portrait, or night scene shooting, whether the image returned by the drone 201 matches the shooting scene model selected by the user can be determined by a similar method. It should be understood that, in principle, the more elements (key objects) are extracted, the more accurate the determination result is, but extracting too many elements (key objects) increases the amount of calculation. Therefore, balancing the accuracy of the result against the amount of calculation, it is preferable to extract 3 elements (key objects) and compare them with the sample image corresponding to the selected shooting scene. According to another preferred embodiment of the present invention, the matching degree between the decoded and compressed image and a sample image corresponding to the shooting scene model selected by the user may also be calculated; when the matching degree is higher than a threshold (for example, 75%), the image returned by the drone 201 may be considered to match the shooting scene model selected by the user.
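The element (key-object) comparison described above can be sketched as a set-intersection check over the extracted objects. The detection step itself is mocked here, and the `judge_scene` name and the `required` threshold are assumptions; a real system would obtain the key objects from a trained detector.

```python
def judge_scene(extracted, scene_elements, required=2):
    """Match if at least `required` of the extracted key objects belong
    to the selected scene model's element set."""
    return len(set(extracted) & set(scene_elements)) >= required

motion_capture_elements = {"badminton court", "player", "badminton racket"}
extracted = ["badminton court", "player", "sky"]  # top-3 key objects (mocked)
matched = judge_scene(extracted, motion_capture_elements)
```

The choice of three extracted elements mirrors the accuracy-versus-computation trade-off discussed above.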
It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
According to another embodiment of the present invention, an AI image classification model may be built into the electronic control device 203. The model is trained in advance and can classify an input image into one of a plurality of preset types, including but not limited to one or more of the above-mentioned types such as "city building", "people portrait", "motion capture", "people stream", "animal portrait", "night scene shot", "grass forest", "still scene portrait", and "mountain river". According to a preferred embodiment of the invention, the model may be generated in the server 204 and provided to the electronic control device 203, saving computation and power consumption on the electronic control device 203.
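How the built-in classifier could confirm the scene can be sketched as follows; `classify_scene` is a stub standing in for the pre-trained model described above, and all names here are illustrative assumptions rather than the patent's implementation.

```python
# Preset scene classes, as enumerated in the embodiment.
SCENE_CLASSES = [
    "city building", "people portrait", "motion capture", "people stream",
    "animal portrait", "night scene shot", "grass forest",
    "still scene portrait", "mountain river",
]

def classify_scene(image_bytes):
    """Stub for the pre-trained AI classifier (generated on server 204 and
    deployed to electronic control device 203). A real model would decode
    the frame and return one label from SCENE_CLASSES."""
    raise NotImplementedError("stand-in for the trained model")

def scene_confirmed(image_bytes, selected_scene, classifier=classify_scene):
    # The returned frame matches when the classifier's predicted class
    # equals the shooting scene model the user selected.
    return classifier(image_bytes) == selected_scene
```

The `classifier` parameter makes the matching logic testable independently of any particular model.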
In step S104, when the image returned by the drone 201 matches the selected shooting scene model, the shooting parameters of the pan-tilt camera 202 are set to the preset shooting parameters corresponding to the currently selected shooting scene model.
When it is determined that the image returned by the drone 201 matches the shooting scene model selected by the user, the electronic control device 203 transmits the preset shooting parameters corresponding to the currently selected shooting scene model to the drone 201 over wireless communication, sets the shooting parameters of the pan-tilt camera 202 to those preset values, and controls the drone 201 to start formal shooting. According to a preferred embodiment of the present invention, if the user selects the "mountain river" shooting scene model, the image to be shot generally needs a large in-focus range, i.e. a large depth of field, so the f-number needs to be increased and the aperture narrowed. The preset shooting parameters corresponding to the "mountain river" shooting scene model may therefore be: focal length 90 mm, aperture F10, shutter speed 1/250 s, sensitivity ISO 100. If the user selects the "night scene shot" shooting scene model, the corresponding preset shooting parameters may be: shutter speed 1/30 s, aperture F10, sensitivity ISO 100. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
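The presets in this embodiment can be represented as a simple lookup table. The sketch below uses the example values from the text; the field names are illustrative assumptions, and the night-scene focal length is left unset because the embodiment does not specify it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ShootingParams:
    focal_length_mm: Optional[float]  # None where the text gives no value
    aperture: str                     # f-number; larger number = smaller aperture, deeper depth of field
    shutter_speed_s: str
    iso: int

# Preset values taken from the embodiment's examples; other scene models
# would carry their own presets.
PRESETS = {
    "mountain river":   ShootingParams(90, "F10", "1/250", 100),
    "night scene shot": ShootingParams(None, "F10", "1/30", 100),
}

def params_for_scene(scene):
    """Look up the preset shooting parameters for a selected scene model
    (the values that would be transmitted to the drone in step S104)."""
    return PRESETS[scene]
```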
Each shooting scene model may be further subdivided. According to a preferred embodiment of the present invention, the "mountain river" shooting scene model may be further divided into sub-models such as "shoot silky running water", "shoot lake panorama", "shoot light-measuring mountain", and so on. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
By shooting with the preset shooting parameters of a shooting scene model, the user can obtain high-quality images of various shooting scenes without having to configure complicated shooting parameters to control the drone 201. The approach is simple, intelligent, and very friendly to photography novices. Of course, if a professional photographer has more refined shooting requirements, he or she can still set the various shooting parameters manually, optimize a default shooting scene model or create a new one, and share it on social platforms to exchange experience.
As shown in fig. 1, if the image returned by the drone does not match the selected shooting scene model, the method proceeds to step S105, in which the drone is controlled not to shoot or to pause shooting.
When the drone 201 has completed shooting according to the shooting scene model selected by the user, the shot image or video can be clipped. With continued reference to fig. 1, clipping the video captured by the drone 201 comprises steps S106-S109, described in detail below.
In step S106, a video captured by the drone 201 is received. After the unmanned aerial vehicle 201 finishes shooting according to the shooting scene model selected by the user, the video shot by the unmanned aerial vehicle 201 is received.
In step S107, a video clip requirement of the user is received. In step S108, one of a plurality of preset video clip models is selected according to the video and the video clip requirement. Each video clip model is obtained by training a deep learning algorithm on various types of videos, and the video clip models correspond to the constructed shooting scene models. The user can select a desired video clip model from among the plurality of preset video clip models through the display screen 2031 and/or the mechanical keys/knobs 2032 on the electronic control device 203.
According to an embodiment of the present invention, when a video shot by the drone 201 using the "animal portrait" shooting scene model is received, the "animal portrait video clip" option is selected; for a video shot using the "people stream" shooting scene model, the "people stream video clip" option is selected; for a video shot using the "grass forest" shooting scene model, the "grass forest video clip" option is selected; and for a video shot using the "people portrait" shooting scene model, the "people portrait video clip" option is selected. Video clip models are selected in the same way for videos shot with the other shooting scene models, which is not repeated here. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
According to a preferred embodiment of the invention, a video clip model is constructed for a scene such as "mountain river". The video clip model provides the editing logic for the corresponding scene, for example key editing elements such as the lake panorama, important scenic spots in the lake, people, trees beside the lake, buildings around the lake, and boats on the lake. By searching the video content, segments containing these key editing elements are clipped and stored, repeated images are removed, a title and credits are added, fade-in and fade-out effects are applied, corresponding scene subtitles are added, transitions are inserted between spliced segments, the clipped key segments are composited into a new video, and background music is added. It should be understood that the present embodiment is illustrative only and is not to be construed as limiting the invention.
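The clip-model pipeline just described can be sketched as follows. The key-element list comes from the text, while every function and data structure here is an illustrative stand-in for real video-processing code; the later stages (titles, fades, subtitles, transitions, music) are shown only as comments.

```python
# Sketch of the "mountain river" clip model's editing pipeline (S108-S109).
# Key editing elements per the embodiment; the segment representation is a
# hypothetical simplification: (labels found in segment, segment payload).

KEY_ELEMENTS = ["lake panorama", "scenic spot", "person", "lakeside tree",
                "lakeside building", "boat"]

def edit_video(segments):
    """segments: list of (elements_in_segment, frames) tuples."""
    # 1. Keep only segments containing at least one key editing element.
    kept = [s for s in segments if set(s[0]) & set(KEY_ELEMENTS)]
    # 2. Remove repeated images: drop segments whose element set was
    #    already seen, preserving order.
    seen, deduped = set(), []
    for elements, frames in kept:
        key = tuple(sorted(elements))
        if key not in seen:
            seen.add(key)
            deduped.append((elements, frames))
    # 3. Remaining stages of the described pipeline, not modeled here:
    #    add title/credits, fade in/out, add scene subtitles, insert
    #    transitions, composite into a new video, add background music.
    return deduped
```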
In step S109, the video is processed and output according to the selected video clip model. According to one embodiment of the invention, when the video clip model selected by the user is, for example, the "people portrait clip model", the AI video clip algorithm module analyzes the content of the video to be clipped in combination with the selected video clip model and outputs the finished video. The finished video can be played directly and shared to various social platforms.
By using a video clip model to clip the video or images shot by the drone 201, the user can obtain high-quality videos of various shooting scenes without possessing any editing skills. The approach is simple, intelligent, and very friendly to novices. Of course, if a professional user has more refined editing requirements, he or she can also clip the video manually, optimize a default video clip model or create a new one, and share it on social platforms to exchange experience. For novices and professionals alike, not only images or videos shot by the drone 201 can be clipped with a video clip model; videos shot by other devices can also be imported into a video clip model for clipping. The invention does not limit the source of the image or video.
By adopting the technical solution of the invention, the intelligence and professionalism of images (videos) shot by the drone are improved, the quality is good, the efficiency is high, and the operation is simple, convenient, and enjoyable, greatly improving the user experience.
The invention also relates to a system 300 for controlling a drone. Fig. 5 shows a schematic diagram of a system 300 for controlling a drone according to one embodiment of the invention. As shown in fig. 5, the system 300 includes:
a construction unit 301, configured to construct a plurality of shooting scene models, where each shooting scene model has corresponding preset shooting parameters;
a user input unit 302 for selecting one photographing scene model from the plurality of photographing scene models;
a matching unit 303, configured to determine whether the image returned by the drone matches the selected shooting scene model;
a transmitting unit 304, configured to transmit, to the drone, a preset shooting parameter corresponding to the currently selected shooting scene model when the image returned by the drone matches the selected shooting scene model.
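The four units above can be wired together as a minimal control loop; all class, method, and callback names below are illustrative assumptions, not part of the patent.

```python
class ControlSystem:
    """Illustrative wiring of units 301-304 of system 300 (fig. 5)."""

    def __init__(self, scene_models, matcher, transmitter):
        self.scene_models = scene_models   # construction unit 301's output:
                                           # scene name -> preset parameters
        self.matcher = matcher             # matching unit 303
        self.transmitter = transmitter     # transmitting unit 304
        self.selected = None

    def select_scene(self, name):
        """User input unit 302: pick one of the constructed scene models."""
        if name not in self.scene_models:
            raise ValueError(f"unknown scene model: {name}")
        self.selected = name

    def on_frame(self, frame):
        """Handle one image returned by the drone (steps S103-S105)."""
        params = self.scene_models[self.selected]
        if self.matcher(frame, self.selected):
            self.transmitter(params)       # S104: send presets, start formal shooting
            return True
        return False                       # S105: do not shoot / pause shooting
```

Injecting `matcher` and `transmitter` as callables keeps the loop independent of any particular matching algorithm or radio link.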
The invention also relates to a drone system 200. With reference to fig. 2, the system 200 comprises:
a drone 201 comprising a pan-tilt camera 202 disposed thereon, the pan-tilt camera configured to capture images;
an electronic control device 203, the electronic control device 203 being in wireless communication with the drone 201 and configured to perform the method 100 as described above; and
a server 204, the server 204 being in communication with the electronic control device 203 and configured to provide the plurality of shooting scene models to the electronic control device 203.
The invention also relates to a computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the method 100 as described above.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made to the embodiments and/or their equivalents without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (11)
1. A method of controlling shooting by a drone, the drone comprising a pan-tilt camera, the method comprising:
s101, constructing a plurality of shooting scene models, wherein each shooting scene model has corresponding preset shooting parameters;
s102: selecting one shooting scene model from the plurality of shooting scene models according to user input;
s103: determining whether the image returned by the unmanned aerial vehicle is matched with the selected shooting scene model; and
s104: and when the image returned by the unmanned aerial vehicle is matched with the selected shooting scene model, setting the shooting parameters of the holder camera to be preset shooting parameters corresponding to the currently selected shooting scene model.
2. The method of claim 1, wherein the step S102 comprises: displaying a plurality of options corresponding to the plurality of shooting scene models on an electronic control device in communication with the drone, and receiving a user selection of the plurality of options.
3. The method of claim 1, wherein the step S103 comprises:
receiving a high-definition image returned by the real-time image transmission of the unmanned aerial vehicle;
decoding and compressing the high-definition image;
calculating the matching degree of the decoded and compressed image and the selected shooting scene model, wherein when the matching degree is higher than a threshold value, the image returned by the unmanned aerial vehicle is determined to be matched with the selected shooting scene model.
4. The method of claim 2, wherein the step S104 comprises: when the image returned by the unmanned aerial vehicle is matched with the selected shooting scene model, the preset shooting parameters corresponding to the currently selected shooting scene model are transmitted to the unmanned aerial vehicle through the electronic control equipment by utilizing wireless communication.
5. The method according to any of claims 1-3, wherein the step S103 further comprises:
acquiring elements in the image;
and comparing the image with the selected shooting scene model according to the elements in the image, and judging whether the image returned by the unmanned aerial vehicle is matched with the selected shooting scene model.
6. The method of any of claims 1-3, further comprising:
s105: and when the image returned by the unmanned aerial vehicle is not matched with the selected shooting scene model, controlling the unmanned aerial vehicle not to shoot or to pause shooting.
7. The method of claim 1, further comprising:
s106: receiving a video shot by the unmanned aerial vehicle;
s107: receiving a video clip requirement of a user;
s108: selecting one of a plurality of preset video clip models according to the video and the video clip requirements;
s109: and processing and outputting the video according to the selected video clip model.
8. A system for controlling a drone, comprising:
the device comprises a construction unit, a storage unit and a processing unit, wherein the construction unit is used for constructing a plurality of shooting scene models, and each shooting scene model is provided with corresponding preset shooting parameters;
a user input unit for selecting one photographing scene model from the plurality of photographing scene models;
the matching unit is used for determining whether the image returned by the unmanned aerial vehicle is matched with the selected shooting scene model or not;
and the transmission unit is used for transmitting the preset shooting parameters corresponding to the currently selected shooting scene model to the unmanned aerial vehicle when the image returned by the unmanned aerial vehicle is matched with the selected shooting scene model.
9. An unmanned aerial vehicle system comprising:
an unmanned aerial vehicle comprising a pan-tilt camera disposed thereon, the pan-tilt camera configured to capture an image;
an electronic control device in wireless communication with the drone and configured to perform the method of any one of claims 1-7.
10. The drone system of claim 9, further comprising:
a server in communication with the electronic control device and configured to provide the plurality of shooting scene models to the electronic control device.
11. A computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210298220.1A CN114666505A (en) | 2022-03-24 | 2022-03-24 | Method and system for controlling unmanned aerial vehicle to shoot and unmanned aerial vehicle system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114666505A true CN114666505A (en) | 2022-06-24 |
Family
ID=82031775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210298220.1A Pending CN114666505A (en) | 2022-03-24 | 2022-03-24 | Method and system for controlling unmanned aerial vehicle to shoot and unmanned aerial vehicle system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114666505A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107465855A (en) * | 2017-08-22 | 2017-12-12 | 上海歌尔泰克机器人有限公司 | Image pickup method and device, the unmanned plane of image |
CN107566529A (en) * | 2017-10-18 | 2018-01-09 | 维沃移动通信有限公司 | A kind of photographic method, mobile terminal and cloud server |
CN108521866A (en) * | 2017-12-29 | 2018-09-11 | 深圳市大疆创新科技有限公司 | A kind of video acquiring method, control terminal, aircraft and system |
CN108965689A (en) * | 2017-05-27 | 2018-12-07 | 昊翔电能运动科技(昆山)有限公司 | Unmanned plane image pickup method and device, unmanned plane and ground control unit |
CN109479090A (en) * | 2017-12-22 | 2019-03-15 | 深圳市大疆创新科技有限公司 | Information processing method, unmanned plane, remote control equipment and non-volatile memory medium |
CN111243632A (en) * | 2020-01-02 | 2020-06-05 | 北京达佳互联信息技术有限公司 | Multimedia resource generation method, device, equipment and storage medium |
CN111416940A (en) * | 2020-03-31 | 2020-07-14 | 维沃移动通信(杭州)有限公司 | Shooting parameter processing method and electronic equipment |
CN112289347A (en) * | 2020-11-02 | 2021-01-29 | 李宇航 | Stylized intelligent video editing method based on machine learning |
CN112565599A (en) * | 2020-11-27 | 2021-03-26 | Oppo广东移动通信有限公司 | Image shooting method and device, electronic equipment, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20220624 |