
CN113905175A - Video generation method and device, electronic equipment and readable storage medium


Info

Publication number
CN113905175A
CN113905175A
Authority
CN
China
Prior art keywords
target
image
video
camera
parameter information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111141301.2A
Other languages
Chinese (zh)
Inventor
马常耀
孙亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111141301.2A priority Critical patent/CN113905175A/en
Publication of CN113905175A publication Critical patent/CN113905175A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video generation method and device, electronic equipment and a readable storage medium, belonging to the technical field of photography. The method includes: displaying a target video image on a shooting preview interface, where the target video image includes a first target image and a second target image, the first target image being an image of a first target area in a first video image acquired by a first camera, and the second target image being an image of a second target area in a second video image acquired by a second camera; and generating a first target video according to the target video image.

Description

Video generation method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of photography, and particularly relates to a video generation method and device, electronic equipment and a readable storage medium.
Background
At present, using electronic devices (e.g., mobile phones) to produce videos is increasingly popular among users. As the number and types of cameras supported by mobile phones grow, users enjoy better technical support for video production.
In the conventional technology, a single camera or multiple cameras are mainly used to shoot multiple videos, which are then edited to obtain the desired target video. Editing videos in this way after shooting involves cumbersome operation steps and takes a long time.
Disclosure of Invention
An embodiment of the present application provides a video generation method, an apparatus, an electronic device, and a readable storage medium, which can solve the problem in the related art that post-shooting video editing involves complicated operations.
In a first aspect, an embodiment of the present application provides a video generation method, where the method includes:
displaying a target video image on a shooting preview interface, wherein the target video image comprises a first target image and a second target image, the first target image is an image of a first target area in a first video image acquired by a first camera, and the second target image is an image of a second target area in a second video image acquired by a second camera;
and generating a first target video according to the target video image.
In a second aspect, an embodiment of the present application provides a video generating apparatus, including:
the first display module is used for displaying a target video image on a shooting preview interface, wherein the target video image comprises a first target image and a second target image, the first target image is an image of a first target area in a first video image acquired by a first camera, and the second target image is an image of a second target area in a second video image acquired by a second camera;
and the first generation module is used for generating a first target video according to the target video image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or instructions are stored, which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, an image of a first target area in a first video image captured by a first camera (i.e., a first target image) and an image of a second target area in a second video image captured by a second camera (i.e., a second target image) can be displayed on the shooting preview interface, and a first target video can then be generated from the displayed target video images. Each frame of target video image in the generated first target video contains images of partial or entire areas of the video images acquired by the two cameras, so the images acquired by the two cameras are edited during video shooting itself; the shot video does not need to be edited and clipped afterwards, which simplifies the operation steps of video clipping, saves operation cost, and shortens the time spent on clipping.
Drawings
Fig. 1 is a flowchart of a video generation method provided by an embodiment of the present application;
Fig. 2A is the first schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 2B is the second schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 2C is the third schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 3A is the first schematic diagram of a parameter adjustment control provided by an embodiment of the present application;
Fig. 3B is the second schematic diagram of a parameter adjustment control provided by an embodiment of the present application;
Fig. 3C is the third schematic diagram of a parameter adjustment control provided by an embodiment of the present application;
Fig. 4A is the fourth schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 4B is the fifth schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 4C is the sixth schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 4D is the seventh schematic diagram of a video generation interface provided by an embodiment of the present application;
Fig. 5 is a block diagram of a video generation apparatus provided by an embodiment of the present application;
Fig. 6 is the first schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;
Fig. 7 is the second schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one class, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The video generation method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flow chart of a video generation method according to an embodiment of the present application is shown, which may be applied to a video generation apparatus configured with at least two cameras.
Optionally, considering the high requirement on the stability of the shooting scene, the video generation device may be mounted on a pan-tilt for shooting and recording video, so as to expand the angle range of the shooting scene, perform positioning and tracking shooting of a target object, and prevent the target object from moving out of the frame.
For example, if the first camera is an optical zoom lens and the second camera is a digital zoom lens, then shooting a video with both cameras allows a hybrid zoom that combines optical and digital zoom. Together with a zoom processing algorithm, this compensates for the insufficient definition of pure digital zoom and overcomes the hardware limitation that prevents a video generation device (e.g., a mobile phone) from achieving high-power optical zoom.
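As a rough illustration of the hybrid zoom idea mentioned above, the Python sketch below splits a requested magnification into an optical stage and a digital stage. The split rule and the function name are assumptions introduced here for clarity; the patent does not specify the zoom processing algorithm.

```python
def split_zoom(requested_zoom: float, max_optical_zoom: float):
    """Hypothetical hybrid-zoom split: drive the optical lens as far as it
    can go and make up the remainder digitally, so that digital upscaling
    (and the sharpness loss it causes) is kept to a minimum."""
    optical = min(requested_zoom, max_optical_zoom)
    digital = requested_zoom / optical
    return optical, digital

# e.g. a 10x request on a 5x optical lens -> 5x optical + 2x digital
print(split_zoom(10.0, 5.0))  # (5.0, 2.0)
```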
Specifically, as shown in fig. 1, the method may specifically include the following steps:
step 101, displaying a target video image on a shooting preview interface, where the target video image includes a first target image and a second target image, the first target image is an image of a first target area in a first video image acquired by a first camera, and the second target image is an image of a second target area in a second video image acquired by a second camera.
Optionally, the target video image may further include more than two target images, such as a third target image and a fourth target image, where the third target image may be an image of a third target region in a third video image acquired by a third camera, and the fourth target image may be an image of a fourth target region in a fourth video image acquired by a fourth camera; the principles are similar and are not described again one by one.
The target area may be a partial image area in the video image or an area of the entire video image.
The target video image may be an image obtained by fusing the first target image and the second target image.
And 102, generating a first target video according to the target video image.
Wherein the display effect of the first target video is the effect of the previewed target video image.
Wherein the first target video comprises each frame of the target video image generated in a video recording process.
In the embodiment of the application, an image of a first target area in a first video image captured by a first camera (i.e., a first target image) and an image of a second target area in a second video image captured by a second camera (i.e., a second target image) can be displayed on the shooting preview interface, and the first target video can then be generated from the displayed target video images. Each frame of target video image in the generated first target video contains images of partial or entire areas of the video images acquired by the two cameras, so the images acquired by the two cameras are edited during video shooting itself; the shot video does not need to be edited and clipped afterwards, which simplifies the operation steps of video clipping, saves operation cost, and shortens the time spent on clipping.
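To make the per-frame flow of steps 101 and 102 concrete, here is a minimal Python sketch of the recording loop. The frame sources, the rectangular target regions, and the crop/fuse helpers are assumptions for illustration only; they stand in for whatever matting and fusion processing the device actually performs.

```python
import numpy as np

def crop(frame, region):
    # region = (x, y, width, height): the target area within the video image
    x, y, w, h = region
    return frame[y:y + h, x:x + w]

def fuse(first_target, second_target, alpha=0.5):
    # Simple alpha blend; assumes both target images have the same size.
    blended = alpha * first_target + (1 - alpha) * second_target
    return blended.astype(np.uint8)

def record(first_frames, second_frames, first_region, second_region):
    # One fused target video image per pair of simultaneously captured
    # frames; the sequence of fused frames forms the first target video.
    return [fuse(crop(f1, first_region), crop(f2, second_region))
            for f1, f2 in zip(first_frames, second_frames)]
```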
Optionally, before performing step 101, the method according to the embodiment of the present application may further include: firstly, receiving a first input of a user to a shooting preview interface; displaying at least two camera identifications in response to the first input, one camera identification indicating one camera; then, receiving second input of a first camera identification and a second camera identification in the at least two camera identifications from a user; and finally, responding to the second input, controlling a first camera to collect a first video image and a second camera to collect a second video image, wherein the first camera is the camera identified and indicated by the first camera, and the second camera is the camera identified and indicated by the second camera.
The input to an object in the embodiments of the present application may include: a click input by the user on the object, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements; this is not limited in the embodiments of the present application. Therefore, the specific implementations of the first input, the second input, the third input, the fourth input, the fifth input, the sixth input, and the like described in the embodiments of the present application may refer to the description here and are not repeated below.
The specific gesture in the embodiments of the application may be any one of a single-click gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture; the click input in the embodiments of the application may be a single-click input, a double-click input, or an input of any number of clicks, and may also be a long-press input or a short-press input.
For example, fig. 2A shows a shooting preview interface according to an embodiment of the present application. The user clicks the icon of the setting menu through gesture 1 to call up the Multi Camera option, where Multi Camera is one option in the setting menu. Then, by clicking the Multi Camera option through gesture 2, a list of abbreviations of the cameras supported by the mobile phone can be displayed; the list schematically shows Wide (the main, common wide-angle lens), Ultra (the ultra-wide-angle lens), and Tele (the telephoto lens). Of course, the camera identifier is not limited to the English abbreviations exemplified here, and may also be identification information such as a Chinese name, a camera ID, or a camera icon. Then, cameras in the list are clicked through gesture 3 to select the cameras to be used for this video shoot. The checkbox in front of a selected camera's abbreviation in the list is filled in to indicate that the camera is selected.
For example, combinations such as telephoto lens + wide-angle lens, telephoto lens + ultra-wide-angle lens, and macro lens + wide-angle lens may be used. In addition, the user can choose freely among dual-camera or multi-camera combinations. The purpose is to exploit the characteristics of each lens according to the differences in field of view between cameras, so that the images shot by the cameras can be spliced naturally, i.e., the fusion described in this application.
In the shooting preview interface illustrated in fig. 2A, the user selects the double shot mode, specifically, selects the double shot combination of Wide (i.e., the first camera) and Tele (i.e., the second camera).
In the present embodiment, as can be seen from fig. 2A, the shooting preview interface displays the preview content of one frame generated by the selected dual cameras in the conventional manner.
Alternatively, after selecting the camera combination in fig. 2A, the user may click any position of the shooting preview interface in fig. 2A to indicate that the camera combination has been chosen. The first camera can then be controlled to capture the first video image, for example the first video image 41 in the shooting preview interface shown in fig. 2B, and the second camera to capture the second video image, for example the second video image 42 in the same interface.
Optionally, three circular controls labeled 0.6, 1x, and 2 are shown in fig. 2A for adjusting the focal length of the image captured by the camera.
In the embodiment of the application, camera identifiers of at least two cameras can be displayed on the video preview interface; the two selected cameras are then determined based on the user's input on the camera identifiers, and the two cameras are controlled to collect their respective video images. The user can thus select the camera combination needed for this video shoot as required, realizing a user-defined combination of cameras for video shooting, which improves the flexibility of camera selection and enriches the content of the captured pictures.
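A minimal Python sketch of the camera-selection flow described above. The identifier-to-camera mapping and the capture_fn callback are hypothetical names introduced here; an actual device would go through its platform camera API.

```python
# Hypothetical registry mapping the displayed identifiers to camera IDs
CAMERAS = {"Wide": 0, "Ultra": 1, "Tele": 2}

def on_cameras_selected(selected_ids, capture_fn):
    """Respond to the second input: start one capture stream per selected
    camera identifier, e.g. selected_ids = ["Wide", "Tele"]."""
    return {name: capture_fn(CAMERAS[name]) for name in selected_ids}

# Usage with a stub capture function:
streams = on_cameras_selected(["Wide", "Tele"],
                              capture_fn=lambda cam_id: f"stream#{cam_id}")
print(streams)  # {'Wide': 'stream#0', 'Tele': 'stream#2'}
```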
Optionally, on the basis of any one of the above embodiments, before step 101, the method according to this embodiment of the present application may further include:
displaying the first video image acquired by a first camera in a first display area of a shooting preview interface; receiving a third input of a user to a first target area in the first video image; obtaining a first target image in response to the third input;
displaying the second video image acquired by a second camera in a second display area of the shooting preview interface; receiving a fourth input of a user to a second target area in the second video image; and responding to the fourth input to obtain a second target image.
Specifically, in order to let the user visually see the difference between the video images acquired by the different cameras and the shooting characteristics of each camera in dual-camera shooting, the shooting preview interface may be divided into at least two display areas, so that the video images acquired by the different cameras are displayed in different display areas.
For example, in fig. 2B, the shooting preview interface may be divided into a display area 11 corresponding to a first camera and a display area 12 corresponding to a second camera, the display area 11 displaying a first video image 41 captured by the first camera, and the display area 12 displaying a second video image 42 captured by the second camera.
In this embodiment, the user may also input on a target area of interest in each video image captured by the two cameras, so as to select a target object, i.e., the object corresponding to the target area.
In the example of fig. 2B, the user selects a first target area in the first video image 41, so as to obtain a first target image 411 of a first target object corresponding to the first target area; however, for the second video image 42, the user does not perform the matting, and therefore, in this example, the second video image 42 is also the second target image 421.
In addition, a target image can be matted out of the video image collected by each camera by operating the parameter adjustment control. Illustratively, in fig. 2B, the first target image 411 may be acquired by operating the "matting" option within the first parameter adjustment control 13 in the first display region 11. Specifically, after the "matting" option is clicked, the first target image is obtained through the third input.
Optionally, the shooting preview interface may be divided into a plurality of display areas, so that the video images captured by the cameras may be displayed in different display areas.
In the embodiment of the application, the video images collected by different cameras can be displayed in different display areas of the shooting preview interface, so that the preview image of each camera is presented independently, which is convenient for the user when editing the images.
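The third and fourth inputs above can be pictured as tracing a region on the preview. The sketch below is a deliberately simplified stand-in, assuming the input arrives as a list of touch points and using their bounding box as the target area; real matting would follow the object's outline or an AI segmentation.

```python
import numpy as np

def select_target(frame, touch_path):
    """Crop the axis-aligned bounding box of a user-drawn path; the crop
    serves as the target image for that display area."""
    xs = [x for x, _ in touch_path]
    ys = [y for _, y in touch_path]
    return frame[min(ys):max(ys) + 1, min(xs):max(xs) + 1]

frame = np.zeros((1080, 1920, 3), np.uint8)
target = select_target(frame, [(400, 300), (900, 320), (650, 800)])
print(target.shape)  # (501, 501, 3)
```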
Optionally, the method according to the embodiment of the present application may further include: displaying a first parameter adjustment control and a second parameter adjustment control (wherein this step may be performed before or after step 101); in the event that a fifth input to the first parameter adjustment control by the user is received, updating image parameters of the first target image in response to the fifth input; in an instance in which a sixth input to the second parameter adjustment control is received by the user, updating the image parameters of the second target image in response to the sixth input.
Alternatively, when input is performed on the parameter adjustment control, the parameter information updated for the target image is not limited to the image parameters described here; it may also include the fusion information of the first target image and the fusion information of the second target image used when the two are fused to generate the target video image.
Alternatively, the updated parameter information may also include the photographic subject information corresponding to the first target image and the photographic subject information corresponding to the second target image.
Optionally, the image parameters may include, but are not limited to, at least one of: transparency, color, image style (e.g., black and white style, punk style, etc.), and the like.
Optionally, the fusion information may include, but is not limited to, at least one of the following: layer information, image proportion, relative position and the like.
Alternatively, the photographic subject information may include image feature information of the photographic subject, and the target area where the photographic subject is located may be accurately identified in one image by using the image feature information, so as to determine an image of a first target area in the first video image, that is, a first target image, and determine an image of a second target area in the second video image, that is, a second target image.
Optionally, the shot object information may indicate objects of the same class in the first video image. For example, if the first video image contains multiple faces, the facial features of these faces may be automatically identified and the face regions matted out to obtain multiple first target images corresponding to the first video image; the object type is not limited to faces.
Optionally, the shot object information may also indicate one particular object in the first video image. For example, facial feature points of a certain user's face are recognized, and the region of that face is matted to obtain the first target image corresponding to the first video image.
The specific content of the subject information of the second target image is similar to the specific content of the subject information of the first target image, which is exemplified here, and is not described again.
Therefore, the user can perform parameter adjustment on at least one of the image parameter, the shooting object information and the fusion information of the first target image by operating the parameter adjustment control, so as to obtain updated parameter information of the first target image.
Illustratively, as shown in fig. 2B, the first display area 11 and the second display area 12 display a parameter adjustment control 13 and a parameter adjustment control 14 (also called a touchball), respectively, where the parameter adjustment control 13 is used to update the parameter information of the first target image 411 determined in the first display area 11, and the parameter adjustment control 14 is used to update the parameter information of the second target image 421 in the second display area 12.
In this example, the parameter information adjusted by the parameter adjustment control 13 and the parameter adjustment control 14 includes matting information, transparency, layers, weights, and Zoom, but the parameter information adjusted by the parameter adjustment control is not limited to the example here, and may also include the specific parameter information of the image parameter, the shooting object information, and the fusion information described above, and the user may select a required parameter to update according to the requirement.
The matting information is used to represent object information corresponding to the target image generated by matting, and the object information may be feature information of the target image in the video image.
And the transparency is used for representing the respective transparency information of the first target image and the second target image when the first target image and the second target image are fused.
And the image layers are used for representing the image layers where the first target image and the second target image are located when being fused, and the first target image and the second target image can be in the same image layer or different image layers.
And the weight is used for representing the display proportion of the first target image and the second target image in the fused target video image respectively when the first target image and the second target image are fused.
Zoom, which is used to represent the Zoom magnification of the camera.
For example, if Zoom of the parameter adjustment control 13 is operated, the Zoom magnification of the first camera may be updated, so that the first camera acquires the first video image at the updated Zoom magnification, and further the Zoom magnification of the first target image in the first video image is updated.
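The parameter set a touchball exposes can be modeled as a small per-target-image record, as in the sketch below. The field names and defaults are illustrative assumptions, not the patent's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetImageParams:
    matting_feature: Optional[bytes] = None  # feature info of the matted object
    transparency: float = 1.0                # 0.0 fully transparent .. 1.0 opaque
    layer: int = 1                           # 1 = foreground, 2 = background
    weight: float = 0.5                      # display proportion after fusion
    zoom: float = 1.0                        # camera zoom magnification

# One record per target image, updated by its parameter adjustment control
first_params = TargetImageParams(layer=1, weight=0.6, transparency=0.8)
second_params = TargetImageParams(layer=2, weight=0.4, zoom=2.0)
```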
The following describes each parameter related to the parameter adjustment control in fig. 2B in detail with reference to a specific example.
For the matting option: by selecting the matting option in the parameter adjustment control, the user can manually or automatically mat the previewed video image, and the feature information of the target object corresponding to the matted target image is identified and stored, so that the stored feature information can be used to track the area of the target object during video recording, facilitating real-time matting.
Exemplarily, as shown in fig. 2B, clicking the matting option in the parameter adjustment control 13 in the first display region 11 allows the first target region, corresponding to the first target image 411, to be matted out of the previewed first video image 41 in the first display region 11; the matting can be done manually with the user's finger. In addition, since manual matting may take too long, a desired object may also be matted automatically by an Artificial Intelligence (AI) method: for example, templates can be provided for the user to select a matting object, the system may integrate AI algorithms, and quick matting options can be offered for target objects such as foreground persons, pets, children, and vehicles. For example, the first target image 411 of the person objects is matted out here, and the matted first target image 411 is marked with an outline frame in the first video image displayed in the first display area 11.
Optionally, when objects are matted by an artificial intelligence method, taking person objects as an example: when one person object exists in the video image, the target area of that person is matted automatically to obtain the target image, and the feature information of the person is stored; when multiple person objects exist in the video image, they are matched against the person objects in the local photo album of the user's mobile phone, the person objects that appear in the local album are taken as the identified target objects, and their target areas are matted to obtain the target images.
In other examples, if the second video image 42 displayed in the second display area 12 needs to be subjected to the matting processing, the processing manner is similar to the above-mentioned matting processing method for the first video image 41, and details are not repeated here.
In addition, when matting is performed on one frame of video image shot by one camera, the target areas of one or more objects can be matted as needed to determine multiple target images in that video image; this is not limited to the example of fig. 2B.
For the parameter options in fig. 2B other than the matting option: if the video image has not been matted, these options update the parameters of the video image itself; if a target image has been matted out of the video image, they update the parameters of the target image.
For example, in fig. 2B, when transparency, layer, weight, or Zoom in the parameter adjustment control 13 is selected to update image parameters, the corresponding parameters of the matted first target image 411 are updated, without updating the parameters of the first video image 41 displayed in the first display area 11.
The second video image 42 displayed in the second display area 12 has not been matted, so when the user updates an image parameter by operating a parameter option (other than the matting option) of the parameter adjustment control 14 in the second display area 12, the processed image is the second video image 42, i.e., the second target image 421; the two are the same image.
The weights adjusted by the parameter adjustment controls in fig. 2B represent the respective display proportions of the first target image 411 and the second target image 421 in the fused target video image 16 shown in fig. 2C. For example, a higher weight may be set for the target image serving as the foreground and a lower weight for the target image serving as the background.
The transparency adjusted by the parameter adjustment controls in fig. 2B represents the respective transparency of the first target image 411 and the second target image 421 in the fused target video image 16 shown in fig. 2C. For example, if the transparency of the first target image 411 in fig. 2B is set to 20% through the parameter adjustment control 13, the first target image 411 is displayed with 20% transparency in the fused target video image 16 in fig. 2C.
The layers adjusted by the parameter adjustment controls in fig. 2B represent the layers on which the first target image 411 and the second target image 421 sit in the fused target video image 16 shown in fig. 2C. In the example of fig. 2A and 2B there are only two cameras, so at most two layers are involved: the first target image 411 may be set as the first layer and the second target image 421 as the second layer, so that the first target image 411 is displayed as the foreground and the second target image 421 as the background, presenting the display effect of the target video image 16 shown in fig. 2C.
The relative position information in the fusion information is used for representing the relative position relationship between the first target image and the second target image in the fused target video image.
For example, in the example of fig. 2A and 2B, the first target image 411 may be set at a middle-lower position of the second target image; the specific position information may of course be coordinate information, which is only described schematically here and is not intended to limit the present application. In this way, during video recording, when each frame of target video image is generated, the first target image in the first video image captured by the first camera (i.e., the person areas indicated by the two children in fig. 2A) is fused to the middle-lower position of the second target image in the second video image captured by the second camera (i.e., the image of the sun in fig. 2B).
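Taken together, layer order, transparency, and relative position describe an ordinary compositing step. The sketch below, with assumed frame sizes and an assumed top_left anchor standing in for the relative-position setting, blends the foreground layer onto the background layer:

```python
import numpy as np

def composite(background, foreground, top_left, alpha):
    """Paste the foreground target image onto the background target image.
    top_left encodes the relative-position setting; alpha the transparency."""
    out = background.astype(np.float32)
    y, x = top_left
    h, w = foreground.shape[:2]
    patch = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * foreground + (1 - alpha) * patch
    return out.astype(np.uint8)

# First target image (layer 1) over second target image (layer 2),
# anchored at a middle-lower position of the frame.
bg = np.zeros((1080, 1920, 3), np.uint8)
fg = np.full((300, 400, 3), 255, np.uint8)
frame = composite(bg, fg, top_left=(650, 760), alpha=0.8)
```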
In the embodiment of the application, the image parameters of the first target image and the second target image are updated by displaying the parameter adjusting control and inputting the parameter adjusting control, so that the respective image parameters of the first target image and the second target image can be independently updated, and the operation flow of video editing is simplified.
Optionally, after the parameter adjustment control is input to update the parameter information (including at least one of the image parameter, the fusion information, and the shooting object information) of the first target image and the second target image, the shooting preview interface shown in fig. 2B may be changed to the shooting preview interface shown in fig. 2C. Specifically, the first target image 411 after updating the parameter information and the second target image 421 after updating the parameter information in fig. 2B may be fused, and the fused target video image 16 may be displayed on the shooting preview interface shown in fig. 2C.
Alternatively, after the parameter adjustment control is input to update the parameter information (including at least one of the image parameter, the fusion information, and the subject information) of the first target image and the second target image, the first display area 11 and the second display area 12 of fig. 2B may be cancelled, the third display area 15 of fig. 2C may be displayed, and the target video image 16 may be displayed in the third display area 15.
For example, referring to fig. 2B and 2C, after updating the parameter information of the first target image 411 and the second target image 421 by performing input to the parameter adjustment control 13 and the parameter adjustment control 14 in fig. 2B, the user may browse the display effect of the target video image by, for example, double-clicking the center position of the entire display area (including the first display area 11 and the second display area 12) of fig. 2B, or an arbitrary position.
In this embodiment, if the user is not satisfied with the fusion effect of the images after browsing the target video image 16 in fig. 2C, the user may return to the shooting preview interface shown in fig. 2B by operating the shooting preview interface shown in fig. 2C, and continue to update the image parameters of the first target image and the second target image, where the specific update manner is referred to above and is not described herein again.
In the embodiment of the application, a user can determine whether the effect of the target video image meets the requirement by browsing the fused target video image, so as to determine whether the parameter information of the first target image and the second target image needs to be adjusted again, and the operation is convenient.
In the embodiment of the application, the first camera and the second camera can be selected on the shooting preview interface, and then video images collected by different cameras are displayed in different display areas. After adjusting the image parameters of the first target image and the second target image acquired by different cameras, the user can determine whether the image parameters of the first target image and the second target image need to be adjusted again by previewing the target video image.
For step 102 in the embodiment of fig. 1, following the operation flow performed in sequence on fig. 2A, fig. 2B, and fig. 2C: if the user browses the previewed target video image 16 of fig. 2C and determines that the parameter information of the first target image and the second target image does not need updating, the user can start recording by clicking the video recording control 17 in fig. 2C. During recording, the first camera and the second camera acquire the first video image and the second video image according to the updated Zoom; the first target image is then obtained from the first video image and the second target image from the second video image. After parameter adjustment, the first target image and the second target image are fused to obtain the target video image.
Illustratively, if the first target image and/or the second target image is a matted image, the method of the embodiment of the present application can perform dynamic matting in real time during video recording and fuse the matted images. Following the example of fig. 2A, 2B, and 2C: in the original footage the two children in the first target image 411 run through a forest at the foot of a mountain, while after the image processing of this embodiment, in the first target video the two children always run under the sun.
Optionally, after the parameter information of the first target image and the second target image is updated, the updated parameter information may be stored in a buffer; after video recording starts, each frame of the first video image and each frame of the second video image acquired during recording can be processed with the stored parameter information to obtain the desired first target video.
Optionally, since the parameter information of a camera itself can also produce special effects, the user may choose cameras with reference to their characteristics. The user can therefore shoot special videos through permutations and combinations of different image parameters together with the parameter information of the selected cameras. For example, depending on camera performance, videos resembling double exposure, videos combining color with black and white, and videos with cyberpunk-style fused pictures can be shot.
In this embodiment of the present application, before the first target video is generated, the respective parameter information of the target images acquired by different cameras may be updated, where the parameter information includes at least one of the following: image parameters, shot object information, and fusion information. Image processing and image fusion are carried out according to the updated parameter information, so an edited target video image can be obtained in real time during video recording. Moreover, owing to the different shooting effects of different cameras, videos with different special effects (such as different depths of field for the foreground and background images) can be shot, generating videos with specific effects.
In fig. 2B, the parameter adjustment control 13 of the first target image 411 and the parameter adjustment control 14 of the second target image 421 are respectively displayed in different display areas, but the parameter adjustment controls of different target images may also be a unified parameter adjustment control, and may also be displayed in the same area, which is not limited in this application.
How the parameter adjustment control is operated to update the parameter information of the target image is described below with reference to the example of fig. 2B.
Illustratively, with reference to fig. 2B: for example, if the user clicks the matting option of the parameter adjustment control 13 in the first display area 11 and mats out the first target image 411, the feature information of the first target image 411 is the shooting object information of the first target image;
for another example, when the user clicks the layer option of the parameter adjustment control 13 in the first display area 11, the layer information of the first target image 411 may be set, for example, to be a first layer or a second layer.
In this application, by operating the parameter options in the parameter adjustment control on the shooting preview interface, the parameters of the target image are updated, so that the user can obtain a satisfactory target video image before recording the video.
Optionally, when performing input on the parameter adjustment control to update the parameter information of the target image, taking the operation of the first parameter adjustment control as an example, the method may specifically be implemented in the following manner:
illustratively, the user clicks the layer option of the first parameter adjustment control 13 in fig. 2B, as shown in fig. 3A; clicking the layer option with a finger displays the scale wheel of the layer option, shown in fig. 3B, in which two selectable parameter values, 1 and 2, are displayed. Since there are two target images, the first target image and the second target image, there are at most two layers; that is, when the first target image and the second target image are fused, the layer values can only be 1 and 2. In other embodiments, if the number of cameras is larger, more layer values are possible.
In addition, as can be seen from fig. 3B, a pointer 21 is further disposed in the scale dial, and the pointer 21 may initially point to any one of the values, or may point to any one of the positions between the two values.
In addition, although the shape of the first parameter adjustment control 13 is a circle in this example, the shape of the parameter adjustment control is not limited in this application, and may be a strip shape, a triangle shape, or the like; similarly, the shape of the scale dial is not limited to a circle.
The first parameter adjustment control 13 is rotatable, or the pointer 21 is rotatable. In fig. 3B, the pointer can be manually rotated to a desired value.
The setting modes of other image parameters are the same, and are not repeated one by one.
With continued reference to fig. 3C, after the parameter setting is completed, the user can click the center of the scale dial, and the display effect of the first parameter adjustment control 13 shown in fig. 2B is restored.
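The scale wheel interaction amounts to mapping a pointer angle onto a discrete set of values. A minimal sketch, assuming the values are spread evenly over a full turn:

```python
def dial_value(angle_deg, values):
    """Snap a pointer rotation angle to the nearest selectable dial value.
    With two layer values (1, 2), 0-179 degrees selects 1, 180-359 selects 2."""
    step = 360 / len(values)
    return values[int((angle_deg % 360) // step)]

print(dial_value(200, (1, 2)))  # 2
```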
Optionally, after step 102, the method of the embodiment of the present application may further include: acquiring first video parameter information of the first target video, wherein the first video parameter information comprises first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image; and shooting videos based on the first video parameter information to obtain a second target video.
Wherein the first target parameter information comprises at least one of: the image processing method comprises the steps of obtaining first shooting object information, first image parameter information and first fusion information; the second target parameter information includes at least one of: second shooting object information, second image parameter information and second fusion information.
The above-mentioned subject information, image parameter information, and fusion information can refer to the description and explanation of the above-mentioned embodiments, and are not repeated here.
Optionally, after the first target video is recorded, the first target parameter information and the second target parameter information may be stored, and then the second target video is directly recorded according to the stored parameter information, and before the second target video is recorded, the user may directly apply the parameter information of the first target video without setting again.
In the embodiment of the present application, if the user is not satisfied with the shooting effect of the generated first target video, or needs to replace a shooting object in it (for example, in the embodiment of fig. 2A to 2C, to replace the running person objects, or to replace the background image they run against), part of the parameter information of the generated first target video can be reused, which simplifies user operations and improves video recording efficiency.
Optionally, after step 102, the method of the embodiment of the present application may further include: acquiring the first video parameter information of the first target video, wherein the first video parameter information comprises first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image; and generating a first shooting template according to the first video parameter information.
Wherein the first target parameter information comprises at least one of: the image processing method comprises the steps of obtaining first shooting object information, first image parameter information and first fusion information; the second target parameter information includes at least one of: second shooting object information, second image parameter information and second fusion information.
That is, after the first target video is generated, the video parameter information of the first target video may be saved as a shooting template, which is named as a first shooting template. When shooting the video with the special effect similar to that of the first target video again, the first shooting template can be directly selected, the first video parameter information is completely applied, and parameter setting is not needed.
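Saving the first video parameter information as a reusable shooting template could look like the sketch below; the JSON layout and file name are assumptions made for illustration.

```python
import json

def save_template(path, first_target_params, second_target_params):
    # Persist the first video parameter information as a shooting template.
    with open(path, "w") as f:
        json.dump({"first_target": first_target_params,
                   "second_target": second_target_params}, f)

def load_template(path):
    with open(path) as f:
        return json.load(f)

save_template("shooting_template_1.json",
              {"transparency": 0.8, "layer": 1, "weight": 0.6},
              {"transparency": 1.0, "layer": 2, "weight": 0.4})
print(load_template("shooting_template_1.json"))
```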
Optionally, before step 101, the method of the embodiment of the present application may further include: receiving a fifth input of the user to the shooting preview interface; in response to the fifth input, displaying at least one shooting identifier, each shooting identifier being used to indicate a shooting template; and receiving a sixth input of the target shooting identifier in the at least one shooting identifier from the user, and displaying the target video image on the shooting preview interface according to the shooting template indicated by the target shooting identifier in response to the sixth input when the step 101 is executed.
A shooting identifier in this application is a character, symbol, image, or the like used to indicate information; a control or another container may serve as a carrier for displaying the information, including but not limited to text identifiers, symbol identifiers, and image identifiers. After the user selects a shooting template, the shooting effect corresponding to that template is displayed; once the user decides to adopt the template, no parameter setting is needed, which makes operation convenient. If the user is dissatisfied with some parameters in the shooting template, the parameter adjustment control can be called up for adjustment.
In this embodiment, the parameter information of an existing video can also be applied directly. As shown in fig. 4A, the user opens the local album of the mobile phone by clicking the icon 31 in the video shooting preview interface and jumps to the album interface shown in fig. 4B, which displays thumbnails of videos and pictures. In fig. 4B, the user triggers playback of a target video b (for example, the first target video) by clicking it, jumping to the interface of fig. 4C. In fig. 4C, while the content of the target video b is playing, the user clicks the icon 32, and a menu 33 is displayed on the video preview interface; the menu 33 shows several options for applying the video parameters of the target video b, and the user can click to select the desired application mode. The shooting identifier is not limited to the example in the menu 33 and may be other identification information.
When the full application mode option is selected, video recording is performed entirely based on the video image parameters of the target video b. When the apply-target-area option is selected, the video is recorded according to the feature information of the target objects corresponding to the first target image and the second target image in the target video b.
When the apply-camera option is selected, the video is recorded with the cameras used in the target video b.
When other options are selected, other video parameter information of the target video b can be reused to record the video.
Referring to fig. 4D: after the apply-target-area option is selected in fig. 4C, the display interface of fig. 4D is obtained, and the region of the first target image 511 is matted without the user operating the matting option of the parameter adjustment control 13 in the first display region 11.
Optionally, if the user is satisfied with the special effect of the preview image, recording can start directly; if not, the parameter information of the first target image 511 (here, the applied parameter information of the target video b) can be fine-tuned by operating the parameter adjustment control 13 in fig. 4D, and the parameter information of the second target image 521 (likewise applied from the target video b) can be fine-tuned by operating the parameter adjustment control 14 in fig. 4D, before video recording is performed.
In the embodiment of the application, a video can be recorded by reusing the video parameters of a previously shot video or by using a shooting template; the operation is simple, and the content of the shot video can be enriched.
In fig. 2A to 2C, 3A to 3C, and 4A to 4D, the same reference numerals denote the same objects, and therefore, the same reference numerals in the drawings are not explained and described one by one, and the reference numerals already described may be referred to.
In the video generation method provided in the embodiment of the present application, the execution subject may be a video generation apparatus, or a control module in the video generation apparatus for executing the video generation method. The video generation apparatus provided in the embodiment of the present application will be described with reference to an example in which a video generation apparatus executes a video generation method.
Referring to fig. 5, a block diagram of a video generation apparatus of one embodiment of the present application is shown. The video generation apparatus includes:
the first display module 201 is configured to display a target video image on a shooting preview interface, where the target video image includes a first target image and a second target image, the first target image is an image of a first target area in a first video image acquired by a first camera, and the second target image is an image of a second target area in a second video image acquired by a second camera;
a first generating module 202, configured to generate a first target video according to the target video image.
In the embodiment of the application, an image of a first target area in a first video image captured by a first camera (i.e., a first target image) and an image of a second target area in a second video image captured by a second camera (i.e., a second target image) can be displayed on the shooting preview interface, and a first target video can then be generated from the displayed target video images. Each frame of target video image in the generated first target video contains images of partial or entire areas of the video images acquired by the two cameras, so the images acquired by the two cameras are edited during video shooting itself; the shot video does not need to be edited and clipped afterwards, which simplifies the operation steps of video clipping, saves operation cost, and shortens the time spent on clipping.
Optionally, the apparatus further comprises:
the first receiving module is used for receiving first input of a user to the shooting preview interface;
a second display module for displaying at least two camera identifications in response to the first input, one camera identification indicating one camera;
the second receiving module is used for receiving second input of a first camera identification and a second camera identification in the at least two camera identifications from a user;
and the control module is used for responding to the second input and controlling a first camera to acquire a first video image and a second camera to acquire a second video image, wherein the first camera is the camera indicated by the first camera identification, and the second camera is the camera indicated by the second camera identification.
In the embodiment of the application, the camera identifications of at least two cameras can be displayed on the video preview interface; the two selected cameras are then determined based on the user's input on the camera identifications, and the two cameras are controlled to acquire their respective video images. The user can thus select the camera combination required for the current video shooting, which realizes a user-defined combination of cameras for video shooting, improves the flexibility of camera selection during video shooting, and enriches the content of the shot pictures.
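As a sketch only, assuming the Android camera2 API, the camera identifications displayed for selection could be enumerated as follows; the helper name and the labels are illustrative, not part of the disclosure.

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Hypothetical sketch: enumerate the available cameras so that one
// identification can be displayed per camera on the preview interface.
fun listCameraIdentifications(context: Context): List<Pair<String, String>> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.map { id ->
        val facing = manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_FACING)
        val label = when (facing) {
            CameraCharacteristics.LENS_FACING_FRONT -> "Front camera"
            CameraCharacteristics.LENS_FACING_BACK -> "Rear camera"
            else -> "External camera"
        }
        id to label
    }
}
```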
Optionally, the apparatus further comprises:
the third display module is used for displaying the first video image acquired by the first camera in a first display area of the shooting preview interface;
the third receiving module is used for receiving a third input of a user to the first target area in the first video image;
the first acquisition module is used for responding to the third input to obtain a first target image;
the fourth display module is used for displaying a second video image acquired by the second camera in a second display area of the shooting preview interface;
the fourth receiving module is used for receiving fourth input of a user to a second target area in the second video image;
and the second acquisition module is used for responding to the fourth input to obtain a second target image.
In the embodiment of the application, the video images acquired by different cameras can be displayed in different display areas of the shooting preview interface, so that the preview image of each camera is displayed independently, which is convenient for the user to edit the images.
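A minimal sketch of how a target image might be obtained from the selected target area, assuming the frames are available as bitmaps; cropTargetArea is a hypothetical helper, and matting or other segmentation could equally be applied to the selected region.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Hypothetical sketch: obtain a target image by cropping the user-selected
// target area (the third/fourth inputs) out of a full camera frame.
fun cropTargetArea(frame: Bitmap, area: Rect): Bitmap {
    // Clamp the selection to the frame bounds before cropping.
    val left = area.left.coerceIn(0, frame.width - 1)
    val top = area.top.coerceIn(0, frame.height - 1)
    val width = area.width().coerceAtLeast(1).coerceAtMost(frame.width - left)
    val height = area.height().coerceAtLeast(1).coerceAtMost(frame.height - top)
    return Bitmap.createBitmap(frame, left, top, width, height)
}
```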
Optionally, the apparatus further comprises:
the fifth display module is used for displaying the first parameter adjustment control and the second parameter adjustment control;
a first updating module, configured to, in a case that a fifth input to the first parameter adjustment control by the user is received, update the image parameters of the first target image in response to the fifth input;
a second updating module, configured to, in a case that a sixth input to the second parameter adjustment control by the user is received, update the image parameter of the second target image in response to the sixth input.
In the embodiment of the application, the image parameters of the first target image and the second target image are updated by displaying the parameter adjustment controls and receiving inputs on them, so that the image parameters of the first target image and of the second target image can each be updated independently, which simplifies the operation flow of video editing.
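The independence of the two parameter adjustment controls can be pictured with the following sketch; the parameter set shown (brightness, contrast, matting) is assumed for illustration and is not a complete list of the image parameters contemplated.

```kotlin
// Hypothetical sketch: independent, adjustable image parameters for each
// target image, so the fifth/sixth inputs update only the image they address.
data class ImageParams(
    var brightness: Float = 0f,    // assumed range -1.0 .. 1.0
    var contrast: Float = 1f,      // multiplicative factor
    var mattingEnabled: Boolean = false
)

class DualPreviewState {
    val firstImageParams = ImageParams()
    val secondImageParams = ImageParams()

    // Fifth input: first parameter adjustment control -> first target image only.
    fun onFirstControlChanged(brightness: Float) {
        firstImageParams.brightness = brightness
    }

    // Sixth input: second parameter adjustment control -> second target image only.
    fun onSecondControlChanged(brightness: Float) {
        secondImageParams.brightness = brightness
    }
}
```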
Optionally, the apparatus further comprises:
a third obtaining module, configured to obtain first video parameter information of the first target video, where the first video parameter information includes first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image;
the fourth acquisition module is used for carrying out video shooting based on the first video parameter information to obtain a second target video;
wherein the first target parameter information comprises at least one of: first shooting object information, first image parameter information, and first fusion information; the second target parameter information comprises at least one of: second shooting object information, second image parameter information, and second fusion information.
In the embodiment of the present application, if the user is not satisfied with the shooting effect of the generated first target video, or needs to replace a shooting object in the first target video (for example, in the embodiment of fig. 2A to 2C, to replace the running character object, or to replace the background image against which the character object is running), part of the parameter information of the generated first target video may be reused, which simplifies the user operation and improves the video recording efficiency.
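One way to picture the reuse of partial parameter information is sketched below; the data classes and field names are hypothetical stand-ins for the first/second target parameter information described above.

```kotlin
// Hypothetical sketch of the first video parameter information; the classes
// and field names are illustrative, not taken from the disclosure.
data class TargetParameterInfo(
    val shootingObjectInfo: String?,            // which object was matted/tracked
    val imageParameterInfo: Map<String, Float>, // e.g., brightness, contrast
    val fusionInfo: String?                     // how the image is fused into the frame
)

data class VideoParameterInfo(
    val firstTargetParams: TargetParameterInfo,
    val secondTargetParams: TargetParameterInfo
)

// Reuse part of the saved parameters for a second recording: keep the second
// (background) settings but swap in a new shooting object, as in the example
// of replacing the running character of fig. 2A to 2C.
fun reuseForSecondVideo(saved: VideoParameterInfo, newObject: String): VideoParameterInfo =
    saved.copy(
        firstTargetParams = saved.firstTargetParams.copy(shootingObjectInfo = newObject)
    )
```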
Optionally, the apparatus further comprises:
the fifth receiving module is used for receiving fifth input of the user to the shooting preview interface;
a sixth display module, configured to display at least one shooting identifier in response to the fifth input, where each shooting identifier is used to indicate a shooting template;
the sixth receiving module is used for receiving sixth input of the target shooting identification in the at least one shooting identification by the user;
the first display module 201 includes:
and the display sub-module is used for responding to the sixth input and displaying the target video image on a shooting preview interface according to the shooting template indicated by the target shooting identification.
In the embodiment of the application, a video can be recorded either by reusing the video parameters of a previously shot video or by using a shooting template; the operation is simple, and the content of the shot video can be enriched.
Optionally, the apparatus further comprises:
a fifth obtaining module, configured to obtain first video parameter information of the first target video, where the first video parameter information includes first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image;
the second generation module is used for generating a first shooting template according to the first video parameter information;
wherein the first target parameter information comprises at least one of: first shooting object information, first image parameter information, and first fusion information; the second target parameter information comprises at least one of: second shooting object information, second image parameter information, and second fusion information.
In the embodiment of the present application, after the first target video is generated, the video parameter information of the first target video may be saved as a shooting template, referred to as the first shooting template. When a video with a special effect similar to that of the first target video is to be shot again, the first shooting template can be selected directly and the first video parameter information applied in full, without any parameter setting.
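As an illustrative assumption, a shooting template could be persisted as a small JSON file keyed by template name, along the lines of the following sketch; the file naming and schema are not specified by the disclosure.

```kotlin
import org.json.JSONObject
import java.io.File

// Hypothetical sketch: persist the first video parameter information as a
// named shooting template so it can be listed (via a shooting identifier)
// and fully re-applied later without manual parameter setting.
fun saveShootingTemplate(name: String, parameters: Map<String, Any>, dir: File) {
    val json = JSONObject()
        .put("templateName", name)
        .put("parameters", JSONObject(parameters))
    File(dir, "$name.template.json").writeText(json.toString())
}

fun loadShootingTemplate(name: String, dir: File): JSONObject =
    JSONObject(File(dir, "$name.template.json").readText())
```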
The video generation device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The video generation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The video generation device provided in the embodiment of the present application can implement each process implemented by the above method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 2000 is further provided in the embodiment of the present application, and includes a processor 2002, a memory 2001, and a program or an instruction stored in the memory 2001 and executable on the processor 2002, where the program or the instruction is executed by the processor 2002 to implement each process of the above-mentioned video generation method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described again here.
The display unit 1006 is configured to display a target video image on a shooting preview interface, where the target video image includes a first target image and a second target image, the first target image is an image of a first target area in a first video image captured by a first camera, and the second target image is an image of a second target area in a second video image captured by a second camera;
and a processor 1010, configured to generate a first target video according to the target video image.
In the embodiment of the application, an image of a first target area in a first video image captured by a first camera (i.e., a first target image) and an image of a second target area in a second video image captured by a second camera (i.e., a second target image) may be displayed in the shooting preview interface, and a first target video may then be generated according to the displayed target video image. Each frame of target video image in the generated first target video comprises images of partial or all areas of the video images respectively acquired by the two cameras, so that the images acquired by the two cameras can be edited during video shooting, without editing and clipping the shot video afterwards; this simplifies the operation steps of video editing, saves operation cost, and shortens the operation duration of video editing.
Optionally, the user input unit 1007 is configured to receive a first input from the user on the shooting preview interface, and to receive a second input on a first camera identification and a second camera identification among the at least two camera identifications;
a display unit 1006 for displaying at least two camera identifications in response to the first input, one camera identification indicating one camera;
the processor 1010 is configured to, in response to the second input, control a first camera to acquire a first video image and a second camera to acquire a second video image, where the first camera is the camera indicated by the first camera identification, and the second camera is the camera indicated by the second camera identification.
In the embodiment of the application, the camera identifications of at least two cameras can be displayed on the video preview interface; the two selected cameras are then determined based on the user's input on the camera identifications, and the two cameras are controlled to acquire their respective video images. The user can thus select the camera combination required for the current video shooting, which realizes a user-defined combination of cameras for video shooting, improves the flexibility of camera selection during video shooting, and enriches the content of the shot pictures.
Optionally, the display unit 1006 is configured to display a first video image captured by the first camera in a first display area of the shooting preview interface; displaying a second video image acquired by a second camera in a second display area of the shooting preview interface;
a user input unit 1007, configured to receive a third input by a user to a first target region in the first video image; receiving a fourth input of a user to a second target area in the second video image;
a processor 1010, configured to obtain a first target image in response to the third input; and responding to the fourth input to obtain a second target image.
In the embodiment of the application, the video images acquired by different cameras can be displayed in different display areas of the shooting preview interface, so that the preview image of each camera is displayed independently, which is convenient for the user to edit the images.
Optionally, the display unit 1006 is configured to display a first parameter adjustment control and a second parameter adjustment control; the processor 1010 is configured to, in a case where the user input unit 1007 receives a fifth input to the first parameter adjustment control from the user, update the image parameters of the first target image in response to the fifth input, and, in a case where the user input unit 1007 receives a sixth input to the second parameter adjustment control from the user, update the image parameters of the second target image in response to the sixth input.
In the embodiment of the application, the image parameters of the first target image and the second target image are updated by displaying the parameter adjustment controls and receiving inputs on them, so that the image parameters of the first target image and of the second target image can each be updated independently, which simplifies the operation flow of video editing.
Optionally, the processor 1010 is configured to obtain first video parameter information of the first target video, where the first video parameter information includes first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image; and shooting videos based on the first video parameter information to obtain a second target video.
Wherein the first target parameter information comprises at least one of: first shooting object information, first image parameter information, and first fusion information; the second target parameter information comprises at least one of: second shooting object information, second image parameter information, and second fusion information.
In the embodiment of the present application, if the user is not satisfied with the shooting effect of the generated first target video, or needs to replace a shooting object in the first target video (for example, in the embodiment of fig. 2A to 2C, to replace the running character object, or to replace the background image against which the character object is running), part of the parameter information of the generated first target video may be reused, which simplifies the user operation and improves the video recording efficiency.
Optionally, the user input unit 1007 is configured to receive a fifth input from the user on the shooting preview interface, and to receive a sixth input on a target shooting identification among the at least one shooting identification;
a display unit 1006, configured to display at least one shooting identifier in response to the fifth input, each shooting identifier being used to indicate one shooting template; and responding to the sixth input, and displaying a target video image on a shooting preview interface according to the shooting template indicated by the target shooting identification.
In the embodiment of the application, a video can be recorded either by reusing the video parameters of a previously shot video or by using a shooting template; the operation is simple, and the content of the shot video can be enriched.
Optionally, the processor 1010 is configured to obtain first video parameter information of the first target video, where the first video parameter information includes first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image; generating a first shooting template according to the first video parameter information;
wherein the first target parameter information comprises at least one of: first shooting object information, first image parameter information, and first fusion information; the second target parameter information comprises at least one of: second shooting object information, second image parameter information, and second fusion information.
In the embodiment of the present application, after the first target video is generated, the video parameter information of the first target video may be saved as a shooting template, referred to as the first shooting template. When a video with a special effect similar to that of the first target video is to be shot again, the first shooting template can be selected directly and the first video parameter information applied in full, without any parameter setting.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned video generation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video generation method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of video generation, the method comprising:
displaying a target video image on a shooting preview interface, wherein the target video image comprises a first target image and a second target image, the first target image is an image of a first target area in a first video image acquired by a first camera, and the second target image is an image of a second target area in a second video image acquired by a second camera;
and generating a first target video according to the target video image.
2. The method of claim 1, wherein prior to displaying the target video image on the shooting preview interface, the method further comprises:
receiving a first input of a user to a shooting preview interface;
displaying at least two camera identifications in response to the first input, one camera identification indicating one camera;
receiving second input of a first camera identification and a second camera identification in the at least two camera identifications from a user;
and responding to the second input, and controlling a first camera to collect a first video image and a second camera to collect a second video image, wherein the first camera is the camera indicated by the first camera identification, and the second camera is the camera indicated by the second camera identification.
3. The method of claim 1, wherein prior to displaying the target video image on the shooting preview interface, the method further comprises:
displaying a first video image acquired by a first camera in a first display area of a shooting preview interface;
receiving a third input of a user to a first target area in the first video image;
obtaining a first target image in response to the third input;
displaying a second video image acquired by a second camera in a second display area of the shooting preview interface;
receiving a fourth input of a user to a second target area in the second video image;
and responding to the fourth input to obtain a second target image.
4. The method of claim 1, further comprising:
displaying a first parameter adjustment control and a second parameter adjustment control;
in the event that a fifth input to the first parameter adjustment control by the user is received, updating image parameters of the first target image in response to the fifth input;
in the event that a sixth input to the second parameter adjustment control by the user is received, updating image parameters of the second target image in response to the sixth input.
5. The method of claim 1, wherein after the generating of the first target video, the method further comprises:
acquiring first video parameter information of the first target video, wherein the first video parameter information comprises first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image;
video shooting is carried out based on the first video parameter information to obtain a second target video;
wherein the first target parameter information comprises at least one of: first shooting object information, first image parameter information, and first fusion information; the second target parameter information comprises at least one of: second shooting object information, second image parameter information, and second fusion information.
6. The method of claim 1, wherein prior to displaying the target video image on the shooting preview interface, the method further comprises:
receiving a fifth input of the user to the shooting preview interface;
in response to the fifth input, displaying at least one shooting identifier, each shooting identifier being used to indicate a shooting template;
receiving a sixth input of the target shooting identification in the at least one shooting identification by the user;
the displaying of the target video image on the shooting preview interface includes:
and responding to the sixth input, and displaying a target video image on a shooting preview interface according to the shooting template indicated by the target shooting identification.
7. The method of claim 6, wherein after the generating of the first target video, the method further comprises:
acquiring first video parameter information of the first target video, wherein the first video parameter information comprises first target parameter information and second target parameter information, the first target parameter information is parameter information of the first target image, and the second target parameter information is parameter information of the second target image;
generating a first shooting template according to the first video parameter information;
wherein the first target parameter information comprises at least one of: first shooting object information, first image parameter information, and first fusion information; the second target parameter information comprises at least one of: second shooting object information, second image parameter information, and second fusion information.
8. A video generation apparatus, characterized in that the apparatus comprises:
the first display module is used for displaying a target video image on a shooting preview interface, wherein the target video image comprises a first target image and a second target image, the first target image is an image of a first target area in a first video image acquired by a first camera, and the second target image is an image of a second target area in a second video image acquired by a second camera;
and the first generation module is used for generating a first target video according to the target video image.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video generation method of any of claims 1 to 7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video generation method according to any one of claims 1 to 7.
CN202111141301.2A 2021-09-27 2021-09-27 Video generation method and device, electronic equipment and readable storage medium Pending CN113905175A (en)

Priority Applications (1)

Application Number: CN202111141301.2A; Publication: CN113905175A (en); Priority Date / Filing Date: 2021-09-27; Title: Video generation method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number: CN202111141301.2A; Publication: CN113905175A (en); Priority Date / Filing Date: 2021-09-27; Title: Video generation method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number: CN113905175A (en); Publication Date: 2022-01-07

Family

ID=79029760

Family Applications (1)

Application Number: CN202111141301.2A; Publication: CN113905175A (en); Title: Video generation method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113905175A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561234A (en) * 2013-10-24 2014-02-05 Tcl商用信息科技(惠州)股份有限公司 Display control method, device and system of electronic doorbell
CN105635557A (en) * 2015-04-30 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Image processing method and system based on two rear cameras, and terminal
CN109274926A (en) * 2017-07-18 2019-01-25 杭州海康威视系统技术有限公司 A kind of image processing method, equipment and system
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN112954219A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN111010605A (en) * 2019-11-26 2020-04-14 杭州东信北邮信息技术有限公司 Method for displaying video picture-in-picture window
CN111327856A (en) * 2020-03-18 2020-06-23 北京金和网络股份有限公司 Five-video acquisition and synthesis processing method and device and readable storage medium
CN112995500A (en) * 2020-12-30 2021-06-18 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and medium
CN112887617A (en) * 2021-01-26 2021-06-01 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN112911147A (en) * 2021-01-27 2021-06-04 维沃移动通信有限公司 Display control method, display control device and electronic equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114520875A (en) * 2022-01-28 2022-05-20 西安维沃软件技术有限公司 Video processing method and device and electronic equipment
CN114520875B (en) * 2022-01-28 2024-04-02 西安维沃软件技术有限公司 Video processing method and device and electronic equipment
CN114745505A (en) * 2022-04-28 2022-07-12 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN115174812A (en) * 2022-07-01 2022-10-11 维沃移动通信有限公司 Video generation method, video generation device and electronic equipment
CN115278082A (en) * 2022-07-29 2022-11-01 维沃移动通信有限公司 Video shooting method, video shooting device and electronic equipment
CN115278082B (en) * 2022-07-29 2024-06-04 维沃移动通信有限公司 Video shooting method, video shooting device and electronic equipment
CN115334242A (en) * 2022-08-19 2022-11-11 维沃移动通信有限公司 Video recording method, video recording device, electronic equipment and medium
CN118229963A (en) * 2024-05-24 2024-06-21 杭州天眼智联科技有限公司 Metal content identification method, device, equipment and medium based on alloy material
CN118229963B (en) * 2024-05-24 2024-07-30 杭州天眼智联科技有限公司 Metal content identification method, device, equipment and medium based on alloy material

Similar Documents

Publication Publication Date Title
CN113905175A (en) Video generation method and device, electronic equipment and readable storage medium
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
US8584043B2 (en) Mobile terminal including touch screen and method of controlling operation thereof
CN112492212B (en) Photographing method and device, electronic equipment and storage medium
CN111654635A (en) Shooting parameter adjusting method and device and electronic equipment
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
WO2022206582A1 (en) Video processing method and apparatus, electronic device and storage medium
US20170034451A1 (en) Mobile terminal and control method thereof for displaying image cluster differently in an image gallery mode
CN113794829B (en) Shooting method and device and electronic equipment
CN112449110B (en) Image processing method and device and electronic equipment
CN113918522A (en) File generation method and device and electronic equipment
CN111885298B (en) Image processing method and device
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113794831A (en) Video shooting method and device, electronic equipment and medium
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114143455B (en) Shooting method and device and electronic equipment
CN112422846B (en) Video recording method and electronic equipment
CN114390205A (en) Shooting method and device and electronic equipment
CN112261483A (en) Video output method and device
CN112788239A (en) Shooting method and device and electronic equipment
CN112672059B (en) Shooting method and shooting device
CN115442527B (en) Shooting method, device and equipment
CN113489901B (en) Shooting method and device thereof
CN112492206B (en) Image processing method and device and electronic equipment

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (Application publication date: 20220107)