CN114157810A - Shooting method, shooting device, electronic equipment and medium - Google Patents
Shooting method, shooting device, electronic equipment and medium
- Publication number
- CN114157810A (application number CN202111577115.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- focus
- controlling
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Studio Devices (AREA)
Abstract
The application discloses a shooting method, a shooting device, electronic equipment and a medium, and belongs to the technical field of shooting. The method comprises the following steps: controlling a first camera to focus on a first object to obtain a first image of a first field angle; controlling a second camera to focus on a second object to obtain a second image of a second field angle; and performing image synthesis processing on the first image and the second image, and outputting a third image, wherein the third image comprises the first object and the second object.
Description
Technical Field
The application belongs to the technical field of shooting, and particularly relates to a shooting method, a shooting device, electronic equipment and a medium.
Background
With the continuous development of science and technology and the continuous improvement of economic levels, users rely more and more on electronic devices (such as mobile phones and tablet computers), and their requirements for the shooting effects of these devices keep rising: users are no longer satisfied with simple everyday shooting and increasingly pursue pictures and videos that are interesting and creative.
Existing interesting shooting or video recording modes place high demands on the user: to achieve an interesting effect, the user has to carefully recreate the shooting scene and find a suitable shooting angle. Some application software can simplify the shooting process to a degree, but a series of post-processing steps is still needed, which is time-consuming and laborious; it is therefore difficult to quickly capture or record images or videos that meet actual requirements, which greatly increases the shooting difficulty for users and results in a low rate of usable shots.
Disclosure of Invention
The embodiments of the present application aim to provide a shooting method, a shooting device, an electronic device and a medium, so as to solve the problems in the prior art that shooting interesting images or recording interesting videos is time-consuming and laborious, and that images meeting actual requirements are difficult to capture or record quickly.
In a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
controlling a first camera to focus on a first object to obtain a first image of a first field angle;
controlling a second camera to focus on a second object to obtain a second image of a second field angle;
and performing image synthesis processing on the first image and the second image, and outputting a third image, wherein the third image comprises the first object and the second object.
In a second aspect, an embodiment of the present application provides a shooting device, including:
the first image acquisition module is used for controlling the first camera to focus on the first object to obtain a first image of a first field angle;
the second image acquisition module is used for controlling the second camera to focus on the second object to obtain a second image of a second field angle;
and a third image output module, configured to perform image synthesis processing on the first image and the second image, and output a third image, where the third image includes the first object and the second object.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the shooting method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the shooting method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, the first camera is controlled to focus on the first object to obtain the first image with the first field angle, the second camera is controlled to focus on the second object to obtain the second image with the second field angle, and the first image and the second image are subjected to image synthesis processing to output a third image that includes both the first object and the second object. By combining two cameras that focus on different objects and produce images with different field angles, and then synthesizing those images, an interesting image can be obtained. In this process the user does not need to carefully recreate the shooting scene or search for a suitable shooting angle, and no post-processing is required, so images or videos with an interesting effect can be captured or recorded quickly; this reduces the shooting difficulty for the user while increasing the rate of usable shots.
Drawings
Fig. 1 is a flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 2 is a schematic view of an interesting shooting mode provided in an embodiment of the present application;
FIG. 3 is a schematic view of an interesting image provided by an embodiment of the present application;
FIG. 4 is a schematic view of another interesting image provided in the embodiments of the present application;
FIG. 5 is a schematic view of another interesting image provided in the embodiments of the present application;
FIG. 6 is a schematic view of another interesting image provided in the embodiments of the present application;
fig. 7 is a schematic diagram of an interesting recording mode provided in an embodiment of the present application;
fig. 8 is a schematic diagram of a process of recording an interesting video according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a shooting device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like usually denote one class of objects and do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart of a shooting method provided in an embodiment of the present application is shown, and as shown in fig. 1, the shooting method may include the following steps:
step 101: and controlling the first camera to focus the first object to obtain a first image with a first field angle.
The embodiment of the application can be applied to the scenes that two cameras are combined to focus different objects to obtain images with different field angles, and then the two images are synthesized to obtain interesting images or videos.
The embodiment can be applied to an electronic device having two cameras (i.e. a first camera and a second camera), in practical application, the first camera can be used for focusing and shooting a shooting object in a shooting field of view, the second camera can be used for focusing and shooting a single shooting object in the shooting field of view, the first camera can shoot an image with a large field of view, and the second camera can shoot an image with a small field of view.
In the process of shooting an interesting image or recording an interesting video by using an electronic device, a user can start a camera APP (Application) of the electronic device and select an interesting shooting mode or an interesting recording mode, as shown in fig. 2, after the user starts the camera APP, a plurality of options are displayed on an APP interface, as shown in the left diagram in fig. 2, a plurality of options of "panorama", "beauty", "photo", "video" and "more" are displayed in the interface, after the user selects the "more" option, a button of "interesting shooting" can be displayed, as shown in the right diagram in fig. 2, and after the user clicks the button of "interesting shooting", the shooting mode is entered. Alternatively, as shown in fig. 7, after the user clicks the record button, an option of "interesting recording" may be displayed, and after the user clicks the option, a mode of interesting recording a video may be entered, and the like.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
The first image is a preview image obtained by using the first camera to focus on the first object within the shooting field of view.
When the electronic device is used to shoot an interesting image or record an interesting video, the first camera can be controlled to focus on the first object within the shooting field of view, so as to obtain the first image with the first field angle.
Step 102: controlling the second camera to focus on the second object to obtain a second image with a second field angle.
The second image is a preview image obtained by using the second camera on the electronic device to focus on the second object within the shooting field of view.
The second object refers to the object focused on by the second camera. In this example, the type of the second object may be the same as that of the first object; for example, the first object is a person and the second object is also a person. Of course, the type of the second object may also be different from the type of the first object; for example, the first object is a person and the second object is a building.
After the first camera is controlled to focus on the first object to obtain the first image with the first field angle, the second camera may be controlled to focus on the second object to obtain the second image with the second field angle, where the first field angle is different from the second field angle. In this example, the first field angle may be larger or smaller than the second field angle; the actual magnitude relationship between the two field angles may be determined according to usage requirements, and this embodiment does not limit it.
After the first image and the second image are obtained, step 103 is performed.
Step 103: performing image synthesis processing on the first image and the second image, and outputting a third image, wherein the third image comprises the first object and the second object.
After the first image and the second image are obtained, image synthesis processing may be performed on the first image and the second image to synthesize a third image, and the third image is output by the electronic device. The third image contains the first object and the second object captured at different field angles, so an interesting image is obtained. For example, as shown in fig. 3, the cylindrical object is an image with a small field angle captured by the secondary camera, and the two persons are an image with a large field angle captured by the main camera; the two images are stitched and fused to obtain the interesting target image shown in fig. 3. Alternatively, as shown in fig. 4, the larger portrait is captured by the secondary camera and the smaller portrait is captured by the main camera, and the two are stitched and fused to obtain the interesting target image shown in fig. 4. Similarly, as shown in fig. 5 and fig. 6, the main camera and the secondary camera can be combined to capture interesting images.
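The pasting part of this synthesis can be illustrated with a minimal sketch that is not taken from the disclosure: it assumes the two preview frames are available as NumPy arrays, that a binary mask of the second object has already been obtained, and that the paste position is known; all function and parameter names are illustrative only.

```python
import numpy as np

def compose_third_image(first_image: np.ndarray,
                        second_image: np.ndarray,
                        second_object_mask: np.ndarray,
                        paste_origin: tuple) -> np.ndarray:
    """Paste the masked second object onto the first image at paste_origin = (y, x)."""
    third = first_image.copy()
    h, w = second_image.shape[:2]
    y, x = paste_origin
    roi = third[y:y + h, x:x + w]            # view into the output image (boundary checks omitted)
    mask = second_object_mask.astype(bool)   # True where the second object is
    roi[mask] = second_image[mask]           # the first image is kept everywhere else
    return third
```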
In this embodiment, before the image synthesis of the first image and the second image is performed, matting processing may also be performed on the first image and the second image; this is described in detail in the following specific implementation.
In a specific implementation manner of the present application, before the step 103, the method may further include:
step A1: and carrying out cutout processing on the first image to generate a first background image and a first object image.
In the present embodiment, the first background image is an image obtained after the first object in the first image is extracted.
The first object image is an image formed by an image area including the first object in the first image.
After the first camera is controlled to focus on the first object to obtain the first image with the first field angle, matting processing may be performed on the first image to generate the first background image and the first object image. Specifically, the image area containing the first object may be cut out of the first image; the cut-out image containing the first object is the first object image, and the image of the remaining area is the first background image.
Step A2: performing matting processing on the second image to generate a second background image and a second object image.
The second background image is an image obtained after the second object in the second image is extracted.
The second object image is an image formed by an image area including the second object in the second image.
After the second camera is controlled to focus on the second object to obtain the second image with the second field angle, matting processing may be performed on the second image to generate the second background image and the second object image. Specifically, the image area containing the second object may be cut out of the second image; the cut-out image containing the second object is the second object image, and the image of the remaining area is the second background image.
In the embodiments of the present application, two background images and two object images are obtained by matting the first image and the second image, which have different field angles; the background images and the object images can then be combined for image synthesis processing, so that an interesting image is synthesized.
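As a hedged sketch of steps A1 and A2 (not the patented implementation): given a binary mask of the focused object, one preview frame can be split into an object image and a background image, with the hole left by the object filled by inpainting. How the mask is obtained (manual selection, portrait segmentation, etc.) is an assumption and is not specified above.

```python
import cv2
import numpy as np

def matte(image: np.ndarray, object_mask: np.ndarray):
    """Split one preview frame into (background_image, object_image).

    object_mask is an 8-bit mask: 255 where the focused object is, 0 elsewhere.
    """
    object_image = cv2.bitwise_and(image, image, mask=object_mask)
    # Fill the hole left by the extracted object so a clean background remains.
    background_image = cv2.inpaint(image, object_mask, 3, cv2.INPAINT_TELEA)
    return background_image, object_image

# first_background, first_object_image = matte(first_image, first_mask)
# second_background, second_object_image = matte(second_image, second_mask)
```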
In this embodiment, before the image synthesis of the first image and the second image is performed, the display positions of the first object image and the second object image may also be updated to adjust the positions of the photographed objects; this is described in detail in the following specific implementation.
In another specific implementation manner of the present application, before the step 103, the method may further include:
step B1: receiving a first input of the first background image, the first object image, the second background image and the second object image from a user.
In the present embodiment, the first input is an input for adjusting positions of the first object image and the second object image, and in the present example, the first input may be an input formed by a user dragging at least one of the first background image, the first object image, the second background image, and the second object image. Of course, in practical applications, the first input may also be other types of inputs, and specifically, may be determined according to a use requirement, which is not limited in this embodiment.
After the first image is matted to obtain the first background image and the first object image, and the second image is matted to obtain the second background image and the second object image, the first input of the user on the first background image, the first object image, the second background image and the second object image can be received.
After receiving the first input, step B2 is performed.
Step B2: in response to the first input, updating display positions of the first object image and the second object image.
After receiving a first input of the user to the first background image, the first object image, the second background image and the second object image, the display positions of the first object image and the second object image may be updated in response to the first input to implement position adjustment of the photographic object.
In the embodiments of the present application, the display position of the photographed object can be freely adjusted according to the user's input, which further improves the user's shooting experience.
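One possible way to represent this position adjustment, offered only as an illustrative sketch: each object image is treated as a draggable layer whose coordinates are updated by the drag gesture (the first input). The Layer class and callback below are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str   # e.g. "first_object" or "second_object"
    x: int      # top-left corner of the layer in the composed preview, in pixels
    y: int

def on_drag(layer: Layer, dx: int, dy: int) -> None:
    """Apply the displacement of one drag gesture (the first input) to a layer."""
    layer.x += dx
    layer.y += dy

# Example: drag the second object 40 px to the right and 10 px up before synthesis.
moon = Layer("second_object", x=600, y=120)
on_drag(moon, dx=40, dy=-10)
```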
In this embodiment, in the process of recording an interesting video with the electronic device, the recording magnifications of the main camera and the secondary camera can be dynamically adjusted according to the motion track of the photographed object, so that an interesting video is recorded; this is described in detail in the following specific implementation.
In another specific implementation manner of the present application, the step 101 may include:
substep C1: in the process of video shooting, the video recording magnification of the first camera is adjusted based on the motion track of the first object.
In this embodiment, in the process of recording an interesting video by using the electronic device, if the first object is in a moving state, the moving track of the first object may be obtained, and the recording magnification of the first camera may be adjusted according to the moving track of the first object.
After adjusting the recording magnification of the first camera based on the motion trajectory of the first object, sub-step C2 is performed.
Substep C2: controlling the first camera after the video recording magnification adjustment to focus on the first object to obtain the first image.
After the video recording magnification of the first camera is adjusted, the first camera with the adjusted video recording magnification can be controlled to focus on the first object to obtain the first image; that is, while the first object is moving, the video recording magnification of the first camera is adjusted automatically, so the size of the first object in the image is adjusted in real time.
The step 102 may include:
substep D1: and adjusting the video recording magnification of the second camera based on the motion track.
In the process that the first object is in the motion state, the video recording magnification of the second camera can be adjusted according to the motion track of the first object.
Substep D2: controlling the second camera after the video recording magnification adjustment to focus on the second object to obtain the second image.
After the video recording magnification of the second camera is adjusted, the second camera with the adjusted video recording magnification can be controlled to focus on the second object to obtain a second image.
The above process can be described in detail with reference to fig. 8 as follows.
As shown in fig. 8, take the scene "the moon can be picked by hand" as an example. When recording the interesting video, the main camera records the motion of a person jumping up to "pick the moon", while the periscope camera focuses on the moon. As the person jumps upward toward the moon, the periscope camera changes its magnification so that the moon gradually becomes larger while its position stays basically fixed, and the main camera reduces its magnification for the person, so the video preview shows the person and the moon moving toward each other. While the person "picks the moon" and falls back to the ground, the periscope camera continues to zoom in on the moon, the person is enlarged synchronously with the moon, and the moon moves along the motion track of the person, so that the animation effect of "picking the moon" is achieved.
It should be understood that the above examples are only for the purpose of better understanding the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation on the embodiments.
In the embodiments of the present application, the video recording magnifications of the first camera and the second camera are adjusted according to the motion track of the object, so that the size of the recorded object is adjusted accordingly and an interesting video can be recorded.
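A hedged sketch of sub-steps C1 and D1: the tracked trajectory of the first object is mapped to a zoom ratio for each camera on every preview frame. Mapping the jump height linearly to magnification is an assumption made for illustration; the text above only states that the magnification follows the motion track.

```python
def magnification_from_height(object_y: float, ground_y: float, peak_y: float,
                              min_zoom: float = 1.0, max_zoom: float = 3.0) -> float:
    """Map the tracked vertical position of the first object to a zoom ratio.

    Image coordinates grow downward, so a higher jump means a smaller object_y.
    """
    progress = (ground_y - object_y) / max(ground_y - peak_y, 1e-6)
    progress = min(max(progress, 0.0), 1.0)
    return min_zoom + progress * (max_zoom - min_zoom)

# Per preview frame (illustrative):
# second_zoom = magnification_from_height(person_y, ground_y, peak_y)  # telephoto zooms in
# first_zoom = max_zoom + min_zoom - second_zoom                       # main camera zooms out
```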
In this embodiment, an image synthesis mode may also be set in advance, and image synthesis processing may be performed on the first image and the second image according to the image synthesis mode; this is described in detail in the following specific implementation.
In another specific implementation manner of the present application, before the step 101, the method may further include:
step E1: a fourth input by the user is received.
In the present embodiment, the fourth input refers to an input for selecting an image synthesis mode.
In a specific implementation, a selection control for the image synthesis mode may be displayed on the display screen of the electronic device. After the user triggers the control, the image synthesis mode options may be displayed so that the user can select the desired image synthesis mode; in this process, the user's operation of tapping an image synthesis mode option can be regarded as the fourth input.
Of course, image synthesis modes corresponding to specific preset gestures may also be stored in the system in advance, and the user may enter a specific gesture to select an image synthesis mode.
After receiving the fourth input by the user, step E2 is performed.
Step E2: in response to the fourth input, an image composition mode is determined.
The image synthesis mode is a mode for synthesizing images with two different field angles.
In this example, the image synthesis mode may be, for example, a synthesis mode based on a reference image; the specific form of the image synthesis mode may be determined according to usage requirements, and this embodiment does not limit it. After the fourth input of the user is received, the image synthesis mode may be determined in response to the fourth input.
The step 103 may include:
sub-step F1: and according to the image synthesis mode, carrying out image synthesis processing on the first image and the second image, and outputting the third image.
After the first image and the second image with different field angles are obtained, the first image and the second image may be subjected to synthesis processing in accordance with the image synthesis mode to generate and output the third image.
In the embodiments of the present application, the image is automatically synthesized according to the image synthesis mode selected by the user, so no manual editing is needed and the captured interesting image can better meet the user's requirements.
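An illustrative sketch of steps E1, E2 and F1, with the mode names assumed for illustration only: the fourth input selects a named synthesis mode, and the synthesis routine is then chosen according to that mode.

```python
from enum import Enum

class SynthesisMode(Enum):
    OBJECT_OVERLAY = "object_overlay"       # paste the second object onto the first image
    SIDE_BY_SIDE = "side_by_side"           # place the two frames next to each other
    REFERENCE_LAYOUT = "reference_layout"   # arrange the objects as in a reference image

def on_fourth_input(selected_option: str) -> SynthesisMode:
    """Map the option tapped by the user (the fourth input) to a synthesis mode (step E2)."""
    return SynthesisMode(selected_option)

# mode = on_fourth_input("object_overlay")  # the chosen mode then drives the synthesis in sub-step F1
```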
In this embodiment, when the scene to be photographed contains a plurality of objects, the objects to be photographed at different field angles may be selected according to the user's input, and the cameras may be controlled to capture images of the selected objects. Specifically, before the step 101, the method may further include:
step G1: and receiving a second input of the first object from the user.
In this embodiment, the second input refers to an input for controlling the first camera to focus, which is input by the user to the first object.
In the process of shooting the interesting image by the electronic equipment, a camera on the electronic equipment can be firstly adopted to focus an object to be shot, a preview image is formed on a display screen of the electronic equipment, and the preview image comprises at least two objects to be shot.
After displaying the preview image, a second input to the first object by the user may be received, which may instruct the first camera to focus on the first object.
The step 101 may include:
substep H1: and responding to the second input, and controlling the first camera to focus the first object to obtain the first image.
After receiving a second input of the first object by the user, the first camera can be controlled to focus on the first object to obtain the first image in response to the second input.
Before the step 102, the method may further include:
step I1: and receiving a third input of the second object from the user.
The third input is an input for controlling the second camera to focus on the second object.
After the preview screen is displayed, a third input by the user to a second object within the preview screen may be received.
The step 102 may include:
substep J1: and responding to the third input, and controlling the second camera to focus the second object to obtain the second image.
After receiving a third input to the second object from the user, the second camera may be controlled to focus on the second object to obtain a second image in response to the third input.
In the embodiments of the present application, the two cameras focus on different shooting objects according to the user's inputs on those objects and obtain images with different field angles; the operation is simple, and the efficiency of capturing interesting images can be greatly improved.
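A minimal sketch of steps G1/H1 and I1/J1, assuming a tap-to-focus interaction; the Camera interface below is a stand-in for the platform camera API and is not an interface named in the text.

```python
class Camera:
    """Stand-in for the platform camera API; focus_at is an assumed interface."""
    def focus_at(self, x: int, y: int) -> None:
        print(f"focusing at ({x}, {y})")

def handle_tap(tap_x: int, tap_y: int, which_input: str,
               first_camera: Camera, second_camera: Camera) -> None:
    # The second input selects the first object for the first camera;
    # the third input selects the second object for the second camera.
    camera = first_camera if which_input == "second_input" else second_camera
    camera.focus_at(tap_x, tap_y)

handle_tap(320, 480, "second_input", Camera(), Camera())   # focuses the first camera
```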
In the shooting method provided by the embodiments of the present application, the first camera is controlled to focus on the first object to obtain the first image with the first field angle, the second camera is controlled to focus on the second object to obtain the second image with the second field angle, and the first image and the second image are subjected to image synthesis processing to output a third image that includes the first object and the second object. By combining two cameras that focus on different objects and produce images with different field angles, and then synthesizing those images, an interesting image can be obtained. In this process the user does not need to carefully recreate the shooting scene or search for a suitable shooting angle, and no post-processing is required, so images or videos with an interesting effect can be captured or recorded quickly; this reduces the shooting difficulty for the user while increasing the rate of usable shots.
In the shooting method provided by the embodiments of the present application, the execution subject may be a shooting device, or a control module in the shooting device for executing the shooting method. In the embodiments of the present application, the shooting device is described by taking the case where the shooting device executes the shooting method as an example.
Referring to fig. 9, a schematic structural diagram of a shooting device according to an embodiment of the present application is shown. As shown in fig. 9, the shooting device 900 may include the following modules:
a first image obtaining module 910, configured to control a first camera to focus on a first object, so as to obtain a first image with a first field angle;
a second image obtaining module 920, configured to control the second camera to focus on the second object, so as to obtain a second image of a second field angle;
a third image output module 930, configured to perform image synthesis processing on the first image and the second image, and output a third image, where the third image includes the first object and the second object.
Optionally, the apparatus further comprises:
the first object image generation module is used for performing matting processing on the first image to generate a first background image and a first object image;
and the second object image generation module is used for performing matting processing on the second image to generate a second background image and a second object image.
Optionally, the apparatus further comprises:
a first input receiving module, configured to receive a first input of the first background image, the first object image, the second background image, and the second object image from a user;
a display position update module for updating display positions of the first object image and the second object image in response to the first input.
Optionally, the first image obtaining module 910 includes:
the first video magnification adjusting unit is used for adjusting the video magnification of the first camera based on the motion track of a first object in the video shooting process;
the first image acquisition unit is used for controlling the first camera with the video recording magnification adjusted to focus on the first object to obtain the first image;
the second image obtaining module 920 includes:
the second video magnification adjusting unit is used for adjusting the video magnification of the second camera based on the motion track;
and the second image acquisition unit is used for controlling the second camera with the adjusted video recording magnification to focus on the second object to obtain the second image.
Optionally, the apparatus further comprises:
the fourth input receiving module is used for receiving a fourth input of the user;
an image synthesis mode determination module for determining an image synthesis mode in response to the fourth input;
the third image output module 930 includes:
and a third image output unit configured to perform image synthesis processing on the first image and the second image according to the image synthesis mode, and output the third image.
Optionally, the apparatus further comprises:
the second input receiving module is used for receiving a second input of the first object by the user;
the first image acquisition module 910 includes:
a first image acquisition unit, configured to control the first camera to focus on the first object in response to the second input, so as to obtain the first image;
the device further comprises:
the third input receiving module is used for receiving a third input of the user to the second object;
the second image obtaining module 920 includes:
and the second image acquisition unit is used for controlling the second camera to focus on the second object in response to the third input, so as to obtain the second image.
The shooting device provided by the embodiments of the present application controls the first camera to focus on the first object to obtain a first image with a first field angle, controls the second camera to focus on the second object to obtain a second image with a second field angle, and performs image synthesis processing on the first image and the second image to output a third image, where the third image includes the first object and the second object. In the embodiments of the present application, multiple cameras focus on different objects to obtain images with different field angles, and these images are synthesized, so that interesting images can be obtained. In this process the user does not need to carefully recreate the shooting scene or search for a suitable shooting angle, and no post-processing is required, so images or videos with an interesting effect can be captured or recorded quickly; this reduces the shooting difficulty for the user and increases the rate of usable shots.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The shooting device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 1000 is further provided in this embodiment of the present application, and includes a processor 1001, a memory 1002, and a program or an instruction stored in the memory 1002 and executable on the processor 1001, where the program or the instruction is executed by the processor 1001 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not repeated here. The processor 1110 is configured to control the first camera to focus on the first object to obtain a first image with a first field angle; control the second camera to focus on the second object to obtain a second image with a second field angle; and perform image synthesis processing on the first image and the second image and output a third image, where the third image includes the first object and the second object.
In the embodiments of the present application, two cameras focus on different objects to obtain images with different field angles, and the images are synthesized, so that an interesting image can be obtained. In this process the user does not need to carefully recreate the shooting scene or search for a suitable shooting angle, and no post-processing is required, so images or videos with an interesting effect can be captured or recorded quickly; this reduces the shooting difficulty for the user while increasing the rate of usable shots.
Optionally, the processor 1110 is further configured to perform matting processing on the first image to generate a first background image and a first object image, and to perform matting processing on the second image to generate a second background image and a second object image.
Optionally, a user input unit 1107, configured to receive a first input of the first background image, the first object image, the second background image, and the second object image by a user;
a display unit 1106 configured to update display positions of the first object image and the second object image in response to the first input.
Optionally, the processor 1110 is further configured to: during video shooting, adjust the video recording magnification of the first camera based on the motion track of the first object; control the first camera with the adjusted video recording magnification to focus on the first object to obtain the first image; adjust the video recording magnification of the second camera based on the motion track; and control the second camera with the adjusted video recording magnification to focus on the second object to obtain the second image.
Optionally, a user input unit 1107, further configured to receive a fourth input from the user;
a processor 1110, further configured to determine an image synthesis mode in response to the fourth input, and to perform image synthesis processing on the first image and the second image according to the image synthesis mode and output the third image.
Optionally, the user input unit 1107 is further configured to receive a second input to the first object from the user;
the processor 1110 is further configured to control the first camera to focus on the first object in response to the second input, so as to obtain the first image;
a user input unit 1107, further configured to receive a third input to the second object by the user;
the processor 1110 is further configured to control the second camera to focus on the second object in response to the third input, so as to obtain the second image.
In the embodiments of the present application, the interesting image can be generated automatically after the user manually adjusts the display position of the photographed object, or after the display position of the photographed object is adjusted according to a reference image; the operation is simple, saves time and effort, and greatly increases the fun of shooting for the user.
It should be understood that in the embodiment of the present application, the input unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042; the graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. The touch panel 11071 is also called a touch screen. The touch panel 11071 may include two parts: a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, an application program or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 1109 may include volatile memory or non-volatile memory, or the memory 1109 may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The memory 1109 may be used for storing software programs and various data including, but not limited to, application programs and an operating system. Processor 1110 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A photographing method, characterized by comprising:
controlling a first camera to focus on a first object to obtain a first image of a first field angle;
controlling a second camera to focus on a second object to obtain a second image of a second field angle;
and performing image synthesis processing on the first image and the second image, and outputting a third image, wherein the third image comprises the first object and the second object.
2. The method according to claim 1, wherein before the performing image synthesis processing on the first image and the second image and outputting a third image, the method further comprises:
performing matting processing on the first image to generate a first background image and a first object image;
and performing matting processing on the second image to generate a second background image and a second object image.
3. The method according to claim 2, wherein before the image synthesis processing of the first image and the second image and the outputting of the third image, the method further comprises:
receiving a first input of the first background image, the first object image, the second background image and the second object image by a user;
in response to the first input, updating display positions of the first object image and the second object image.
4. The method of claim 1, wherein the controlling the first camera to focus on the first object to obtain the first image of the first field angle comprises:
in the video shooting process, adjusting the video recording magnification of the first camera based on the motion track of a first object;
controlling the first camera with the adjusted video recording magnification to focus on the first object to obtain the first image;
the controlling the second camera to focus on the second object to obtain a second image of a second field angle includes:
adjusting the video recording magnification of the second camera based on the motion track;
and controlling the second camera with the adjusted video recording magnification to focus on the second object to obtain the second image.
5. The method of claim 1, further comprising, before the controlling the first camera to focus on the first object to obtain the first image of the first field angle:
receiving a fourth input from the user;
determining an image composition mode in response to the fourth input;
the image synthesis processing of the first image and the second image and the output of a third image includes:
and performing image synthesis processing on the first image and the second image according to the image synthesis mode, and outputting the third image.
6. The method of claim 1, wherein before the controlling the first camera to focus on the first object to obtain the first image of the first field angle, the method further comprises:
receiving a second input of the first object by the user;
the controlling the first camera to focus on the first object to obtain a first image with a first field angle includes:
in response to the second input, controlling the first camera to focus on the first object to obtain the first image;
before the controlling the second camera to focus on the second object to obtain the second image with the second field angle, the method further includes:
receiving a third input of the second object by the user;
the controlling the second camera to focus on the second object to obtain a second image of a second field angle includes:
and in response to the third input, controlling the second camera to focus on the second object to obtain the second image.
7. A camera, comprising:
the first image acquisition module is used for controlling the first camera to focus on the first object to obtain a first image of a first field angle;
the second image acquisition module is used for controlling the second camera to focus on the second object to obtain a second image of a second field angle;
and a third image output module, configured to perform image synthesis processing on the first image and the second image, and output a third image, where the third image includes the first object and the second object.
8. The apparatus of claim 7, further comprising:
the first object image generation module is used for performing matting processing on the first image to generate a first background image and a first object image;
and the second object image generation module is used for performing matting processing on the second image to generate a second background image and a second object image.
9. An electronic device, characterized in that it comprises a processor and a memory, said memory storing a program or instructions executable on said processor, said program or instructions, when executed by said processor, implementing the steps of the shooting method according to any one of claims 1-6.
10. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the photographing method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111577115.3A (granted as CN114157810B) | 2021-12-21 | 2021-12-21 | Shooting method, shooting device, electronic equipment and medium
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111577115.3A (granted as CN114157810B) | 2021-12-21 | 2021-12-21 | Shooting method, shooting device, electronic equipment and medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN114157810A | 2022-03-08 |
CN114157810B | 2023-08-18 |
Family
ID=80451653
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN202111577115.3A (granted as CN114157810B, Active) | 2021-12-21 | 2021-12-21 | Shooting method, shooting device, electronic equipment and medium
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114157810B (en) |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0404523A2 (en) * | 1989-06-19 | 1990-12-27 | Nikon Corporation | Automatic focusing device |
CN105721763A (en) * | 2014-12-05 | 2016-06-29 | 深圳富泰宏精密工业有限公司 | System and method for composition of photos |
CN106603820A (en) * | 2016-11-25 | 2017-04-26 | 努比亚技术有限公司 | Area amplifying method and area amplifying device |
CN107767430A (en) * | 2017-09-21 | 2018-03-06 | 努比亚技术有限公司 | One kind shooting processing method, terminal and computer-readable recording medium |
CN107633235A (en) * | 2017-09-27 | 2018-01-26 | 广东欧珀移动通信有限公司 | Solve lock control method and Related product |
WO2021129198A1 (en) * | 2019-12-25 | 2021-07-01 | 华为技术有限公司 | Method for photography in long-focal-length scenario, and terminal |
CN111586296A (en) * | 2020-04-27 | 2020-08-25 | 北京小米移动软件有限公司 | Image capturing method, image capturing apparatus, and storage medium |
CN112399078A (en) * | 2020-10-30 | 2021-02-23 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN112492209A (en) * | 2020-11-30 | 2021-03-12 | 维沃移动通信有限公司 | Shooting method, shooting device and electronic equipment |
CN112839166A (en) * | 2020-12-02 | 2021-05-25 | 维沃移动通信(杭州)有限公司 | Shooting method and device and electronic equipment |
CN112702497A (en) * | 2020-12-28 | 2021-04-23 | 维沃移动通信有限公司 | Shooting method and device |
CN112911059A (en) * | 2021-01-22 | 2021-06-04 | 维沃移动通信(杭州)有限公司 | Photographing method and device, electronic equipment and readable storage medium |
CN112887609A (en) * | 2021-01-27 | 2021-06-01 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and storage medium |
CN113014820A (en) * | 2021-03-15 | 2021-06-22 | 联想(北京)有限公司 | Processing method and device and electronic equipment |
Non-Patent Citations (3)
Title |
---|
GURUPRASAD SOMASUNDARAM: "Classification and Counting of Composite Objects in Traffic Scenes Using Global and Local Image Analysis", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS *
吴昊; 徐丹: "数字图像合成技术综述" (A survey of digital image compositing technology), no. 11 *
胡勤龙: "高清摄像机镜头中的新技术" (New technologies in high-definition camera lenses), 视听界(广播电视技术) (Audio-Visual World (Radio and Television Technology)), no. 1 *
Also Published As
Publication number | Publication date |
---|---|
CN114157810B (en) | 2023-08-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |