CN115988312A - Shooting method and device, electronic equipment and storage medium - Google Patents
Shooting method and device, electronic equipment and storage medium
- Publication number: CN115988312A
- Application number: CN202211028584.4A
- Authority: CN (China)
- Prior art keywords: image, input, shooting, preview interface, target
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a shooting method and apparatus, an electronic device and a storage medium, and belongs to the technical field of shooting. The method includes the following steps: in the case that a shooting preview interface is displayed, receiving a first input to the shooting preview interface, where the first input is used to determine a first image; in response to the first input, displaying the first image in the shooting preview interface in a target display style; receiving a shooting input from a user; and in response to the shooting input, shooting based on the first image displayed in the target display style to obtain a second image.
Description
Technical Field
The application belongs to the technical field of shooting, and particularly relates to a shooting method and apparatus, an electronic device and a storage medium.
Background
Currently, when a user shoots with an electronic device, the user may select a shooting template provided by the electronic device to shoot, so as to obtain an image corresponding to that shooting template.
However, a shooting template usually has a fixed display style, so the image effects obtained by shooting with the electronic device are monotonous and cannot meet the user's requirements; as a result, the flexibility of the electronic device in shooting images is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a shooting method and apparatus, an electronic device and a storage medium, which can solve the problem of low flexibility when an electronic device shoots an image.
In a first aspect, an embodiment of the present application provides a shooting method, including: in the case that a shooting preview interface is displayed, receiving a first input from a user to the shooting preview interface, where the first input is used to determine a first image; in response to the first input, displaying the first image in the shooting preview interface in a target display style; receiving a shooting input from the user; and in response to the shooting input, shooting based on the first image displayed in the target display style to obtain a second image.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including a receiving module, a display module and a shooting module. The receiving module is configured to receive, in the case that a shooting preview interface is displayed, a first input to the shooting preview interface, where the first input is used to determine a first image. The display module is configured to display, in response to the first input received by the receiving module, the first image in the shooting preview interface in a target display style. The receiving module is further configured to receive a shooting input from the user. The shooting module is configured to shoot, in response to the shooting input received by the receiving module, based on the first image displayed in the target display style to obtain a second image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which is stored in a storage medium and executed by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, in the case that the shooting preview interface is displayed, the electronic device may display the first image in the shooting preview interface in the target display style according to a first input of the user on the first image, so that the electronic device can shoot based on the first image displayed in the target display style to obtain the second image. In this scheme, the electronic device can set the first image selected by the user to the target display style and obtain the second image from the first image displayed in that style. This avoids the situation in which a shooting template with a fixed display style leads to a monotonous image effect obtained by the electronic device, and thereby improves the flexibility of the electronic device in shooting images.
Drawings
Fig. 1 is a flowchart of a shooting method provided in an embodiment of the present application;
Fig. 2 is a first schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 3 is a second schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 4 is a third schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 5 is a fourth schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 6 is a fifth schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 7 is a sixth schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 8 is a seventh schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 9 is an eighth schematic diagram of an example of an interface of a shooting method provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a shooting apparatus provided in an embodiment of the present application;
Fig. 11 is a first schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application;
Fig. 12 is a second schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein; moreover, objects distinguished by "first", "second" and the like are generally of one type, and the number of objects is not limited, for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
At present, with the development of communication technology, electronic devices provide more and more shooting functions. For example, a user may select a shooting template in the electronic device to shoot a shooting object, so as to obtain an image with the shooting effect corresponding to that template. Specifically, when the user shoots with the electronic device, elements such as the shooting background, the shooting foreground and the shooting frame of the captured image can only be selected from the elements provided by the shooting templates. However, the elements provided by the shooting templates are often not what the user needs, or different users all use the same templates, so the display effects of the images shot by different users are uniform. This cannot meet the needs of users who want personalization and customization, and therefore the flexibility of the electronic device in shooting images is low.
In the embodiment of the application, in the case that the shooting preview interface is displayed, the electronic device may display the first image in the shooting preview interface in the target display style according to a first input of the user on the first image, so that the electronic device can shoot based on the first image to obtain the second image. In this scheme, the electronic device can set the first image selected by the user to the target display style and obtain the second image from the first image displayed in that style. This avoids the situation in which a shooting template with a fixed display style leads to a monotonous image effect obtained by the electronic device, and thereby improves the flexibility of the electronic device in shooting images.
The embodiment of the application provides a shooting method, and fig. 1 shows a flowchart of the shooting method provided by the embodiment of the present application. As shown in fig. 1, the shooting method provided by the embodiment of the present application may include steps 201 to 204 described below: in step 201, in the case that a shooting preview interface is displayed, the electronic device receives a first input of a user to the shooting preview interface; in step 202, in response to the first input, the electronic device displays a first image in the shooting preview interface in a target display style; in step 203, the electronic device receives a shooting input of the user; and in step 204, in response to the shooting input, the electronic device shoots based on the first image displayed in the target display style to obtain a second image.
In an embodiment of the present application, the first input is used to determine the first image.
In the embodiment of the application, the user can perform the first input on the first image, so that the electronic device displays the first image in the shooting preview interface in the target display style; the first image displayed in the target display style and the shooting object collected by the electronic device are thereby presented to the user, and the user can then perform a shooting input, so as to obtain an image that meets the user's requirements.
Optionally, in this embodiment of the application, the first input may be a click input, a long-press input, a sliding input or a preset-track input performed by the user on the first image; or an input on a physical key combination (e.g., a power key and a volume key). This may be determined according to actual use requirements and is not limited in the embodiments of the application.
Optionally, in this embodiment of the present application, the shooting object may be at least one of: people, objects, landscapes, animals, etc.
Optionally, in this embodiment of the application, the first image may be an image selected by a user in a first application (e.g., an album application), or an image downloaded by the electronic device through a second application (e.g., a browser application).
Optionally, in this embodiment of the application, in the case that the electronic device displays a system desktop, the user may perform an input on a third application icon (that is, the icon of a camera application) displayed on the system desktop, so that the electronic device starts the application program corresponding to the third application icon and displays the shooting preview interface.
Optionally, the target display style may include at least one of: a shooting foreground style, a shooting background style, and a shooting frame style.
Specifically, step 202 can be implemented in the following manners.
The electronic device may set the first image as the shooting foreground of the shooting preview interface according to the first input of the user, and display the first image in the shooting preview interface in the shooting foreground style; or the electronic device may set the first image as the shooting background of the shooting preview interface according to the first input of the user, and display the first image in the shooting preview interface in the shooting background style; or the electronic device may set the first image as the shooting frame of the shooting preview interface according to the first input of the user, and display the first image in the shooting preview interface in the shooting frame style.
It can be understood that if the electronic device sets the first image to the shooting foreground style, the first image is overlaid on the shooting object; if the electronic device sets the first image to the shooting background style, the first image covers the original background of the shooting object without affecting the shooting object collected by the electronic device (that is, the shooting object is not blocked); and if the electronic device sets the first image to the shooting frame style, the first image is overlaid on the captured picture but does not block the shooting object.
Optionally, in this embodiment of the application, in the case that the electronic device sets the first image to the shooting frame style, the electronic device may display a hollow-out style selection control (used to select the hollow-out style of the shooting frame), so that the user can perform an input on the hollow-out style selection control and the electronic device overlays the first image according to the hollow-out style selected by the user.
Optionally, in this embodiment of the application, the hollow-out style may be pre-stored in the electronic device or user-defined.
Specifically, for a user-defined hollow-out style, the user may perform an input on a second control (for example, a custom control) in the shooting preview interface, and then perform a sliding input in the shooting preview interface to draw the hollow-out style.
Illustratively, the closed region formed by the track of the user's sliding input is the hollow-out style drawn by the user.
Optionally, the closed region may be formed by the track of the sliding input alone, or formed jointly by the track of the sliding input and an edge of the screen of the electronic device.
Illustratively, taking an electronic device that is a mobile phone as an example, as shown in (A) of fig. 2, in the case that the mobile phone displays the shooting preview interface 10, the user may click the frame key 11, so that the mobile phone displays at least one frame style; at this time, the mobile phone selects the first, square frame style by default and displays the default-selected square frame style in the shooting preview interface 10. As shown in (B) of fig. 2, the user may then click another one of the at least one frame style (for example, a circular frame style), so that the mobile phone changes the frame style displayed in the shooting preview interface 10 according to the user's needs.
Further illustratively, as shown in (A) of fig. 3, in the case that the mobile phone displays the shooting preview interface 10, the user may click the frame key 11, so that the mobile phone displays 4 frame styles; at this time, the mobile phone selects the first frame style by default (represented by a square frame style in fig. 3) and displays the default-selected frame style in the shooting preview interface 10. As shown in (B) of fig. 3, the user may then click the custom control 12. As shown in (C) of fig. 3, the mobile phone may determine a user-defined frame style (represented by a heart shape in fig. 3) according to the user's sliding track on the first image, and display the user-defined frame style in the shooting preview interface 10.
Optionally, in this embodiment of the application, after the user selects the hollow-out style, the user may perform an input on the pattern corresponding to the hollow-out style displayed in the shooting preview interface, so as to adjust the size or the display position of that pattern in the shooting preview interface.
Optionally, in this embodiment of the application, the first image may be one image or a plurality of images. Specifically, in the case that the first image is one image, the display style corresponding to the first image is a shooting foreground style, a shooting background style or a shooting frame style; in the case that the first image is a plurality of images, each of the plurality of images corresponds to one display style.
For example, in the case that the first image is three images, the first of the three images may correspond to the shooting foreground style, the second to the shooting background style, and the third to the shooting frame style.
In the embodiment of the application, the electronic device can shoot according to the shooting input of the user to obtain the second image.
Optionally, in this embodiment of the application, the shooting input may be an input by the user on a shooting control; or an input on a physical key (e.g., a volume key); or a combined input on physical keys (e.g., a volume key and a power key); or a voice input by the user, and the like. This may be determined according to actual use requirements and is not limited in the embodiments of the application.
Optionally, in this embodiment of the application, the shooting input may include any one of: a click input, a long-press input, a sliding input and a preset-track input; or an input on a physical key combination (e.g., a power key and a volume key). This may be determined according to actual use requirements and is not limited in the embodiments of the application.
Step 204, in response to the shooting input, the electronic device shoots based on the first image displayed in the target display style to obtain a second image.
It can be understood that the second image includes not only the picture captured by the electronic device but also the first image displayed in the target display style, so the second image obtained by shooting is an image with the shooting effect required by the user.
Optionally, in this embodiment of the present application, the step 204 may be specifically implemented by the step 204a described below.
Step 204a, the electronic device performs a fusion operation on the first image and the image collected by the shooting preview interface to obtain the second image.
In the embodiment of the application, the electronic device can synthesize the first image and the image collected by the shooting preview interface into one image according to the shooting input of the user, so as to obtain the second image.
Optionally, in this embodiment of the present application, the step 204a may be specifically implemented by the following step 204a1 or step 204a 2.
Step 204a1, in the case that the target display style is the background style, the electronic device overlays the first image on the background image area of the image collected by the shooting preview interface to obtain the second image.
In the embodiment of the application, in the case that the target display style is the background style, the electronic device can identify the shooting area other than the shooting object in the shooting scene (that is, the background image area), extract the background image area, fuse it with the first image to obtain the shooting background of the second image, and then place the shooting object at the corresponding position on the shooting background, so as to obtain the second image.
It can be understood that the background image area is determined according to the shooting object; for example, the area of the second image other than the area where the shooting object is located is the background image area.
Step 204a2, in the case that the target display style is the foreground style or the frame style, the electronic device overlays the first image on the image collected by the shooting preview interface to obtain the second image.
In the embodiment of the application, in the case that the target display style is the foreground style or the frame style, the electronic device can directly overlay the first image on the image collected by the shooting preview interface to obtain the second image.
It should be noted that, in the case that the target display style is the frame style, since the user has set a hollow-out area, the first image is directly overlaid on the image collected by the shooting preview interface without blocking the shooting object in that image.
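As a concrete illustration of the two synthesis branches in steps 204a1 and 204a2, a minimal sketch using Pillow is given below. It is only an approximation under stated assumptions: the `fuse` helper, the style names and the `subject_mask` (an "L"-mode mask that is 255 where the shooting object is, such as a mask produced by subject detection) are illustrative and not part of the embodiment.

```python
from PIL import Image

def fuse(preview: Image.Image, first: Image.Image, style: str,
         subject_mask: Image.Image) -> Image.Image:
    """Fuse the first image with the image collected by the shooting preview interface."""
    preview = preview.convert("RGBA")
    first = first.resize(preview.size).convert("RGBA")
    if style == "background":
        # Step 204a1: cover the background image area with the first image,
        # then place the shooting object back on top of the new background.
        subject_only = Image.new("RGBA", preview.size, (0, 0, 0, 0))
        subject_only.paste(preview, mask=subject_mask)  # keep only the shooting object
        return Image.alpha_composite(first, subject_only)
    # Step 204a2: foreground or frame style - overlay the first image directly;
    # for the frame style the hollow-out area of the first image is already transparent.
    return Image.alpha_composite(preview, first)
```

Composing in this order also reproduces the layer relationship described later: the shooting foreground lies above the collected picture, which in turn lies above the shooting background.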
In the embodiment of the application, the electronic device can obtain the second image in different synthesis manners according to the display style of the first image in the shooting preview interface, which improves the flexibility of the electronic device in obtaining the shot image.
The embodiment of the application provides a shooting method: in the case that the shooting preview interface is displayed, the electronic device may display the first image in the shooting preview interface in the target display style according to a first input of the user on the first image, so that the electronic device can shoot based on the first image to obtain the second image. In this scheme, the electronic device can set the first image selected by the user to the target display style and obtain the second image from the first image displayed in that style. This avoids the situation in which a shooting template with a fixed display style leads to a monotonous image effect obtained by the electronic device, and thereby improves the flexibility of the electronic device in shooting images.
Optionally, in this embodiment of the present application, before "before receiving the first input of the first image by the user" in step 201 described above, the method provided in this embodiment of the present application further includes step 301 and step 305 described below.
Step 301, the electronic device receives a second input of the user to the shooting preview interface.
In the embodiment of the application, the electronic device can select the target frame structure required by the user according to the second input of the user in the shooting preview interface, determine the splicing manner of M images according to the target frame structure, and obtain the first image according to that splicing manner.
Optionally, in this embodiment of the present application, the shooting preview interface includes a first control used to instruct the electronic device to display at least one frame structure. In the case that the electronic device displays the shooting preview interface, the user can perform an input on the first control so that the electronic device enters a custom shooting template mode; the custom shooting template mode includes at least one frame structure, so that the user can perform the second input on the at least one frame structure to determine the target frame structure required by the user.
Optionally, in this embodiment of the application, the electronic device may display the at least one frame structure in an upper bar (for example, the upper half of the shooting preview interface) or a lower bar (for example, the lower half of the shooting preview interface) of the shooting preview interface; or the electronic device may display a first window in the shooting preview interface, where the first window includes the at least one frame structure, so that the user can view the at least one frame structure and select the target frame structure required by the user.
For example, assuming that there are 6 frame structures, the user may click the shooting mode control so that the electronic device displays 3 of the frame structures in the lower bar of the shooting preview interface; then the user may perform a sliding input on the lower bar so that the electronic device displays the 3 hidden frame structures and hides the 3 frame structures that were initially displayed.
Optionally, in this embodiment of the present application, the user may input to the first window to update a display position of the first window in the shooting preview interface, or update a display size of the first window in the shooting preview interface.
Optionally, in this embodiment of the application, the electronic device may display at least one frame structure in the first window in the form of a thumbnail.
Optionally, in this embodiment of the application, a user may customize a frame structure, and add the customized frame structure to at least one frame structure, thereby facilitating subsequent use by the user and satisfying personalized requirements of the user.
Optionally, in this embodiment of the application, the electronic device may upload a frame structure customized by a user (hereinafter referred to as a first frame structure, customized by a first user) to a server, so that another user (hereinafter referred to as a second user) can use the first frame structure.
Optionally, in this embodiment of the application, the electronic device may upload the user information together with the first frame structure to the server, so that the second user can communicate with the first user when using the first frame structure, so as to achieve the effect of making friends through shooting.
Optionally, in this embodiment of the application, the user information may include at least one of the following: the user's name, gender and age, and the time at which the user customized the frame structure.
Step 302, in response to the second input, the electronic device displays the target frame structure in the shooting preview interface.
In this embodiment, the target frame structure includes N frame structure regions, where N is a positive integer.
In the embodiment of the application, the electronic device may determine a target frame structure in the shooting preview interface according to a second input of the user, and obtain the first image through the target frame structure.
Step 303, the electronic device receives a third input from the user.
In an embodiment of the present application, the third input is an input on the N frame structure regions and N images.
In the embodiment of the application, the user can perform the third input on M of the N frame structure regions, so that the electronic device adds M images to those M frame structure regions and displays the M images in the shooting preview interface.
Optionally, in this embodiment of the application, each of the N frame structure regions may correspond to a different display style; that is, the electronic device may preset the display styles corresponding to the N frame structure regions, so that after the user adds images to the frame structure regions, the electronic device can determine the display styles of those images in the shooting preview interface.
Optionally, in this embodiment of the application, in the case that each of the N frame structure regions corresponds to a different display style, after the user adds the M images to the frame structure regions, the electronic device may display the M images in the shooting preview interface in their corresponding display styles in real time.
Optionally, the M images may be selected by the user in the first application (e.g., an album application); or the M images may be selected by the user in the second application (e.g., a browser application); or part of the M images may be selected in the first application and the other part in the second application.
Illustratively, as shown in (A) of fig. 4, in the case that the mobile phone displays the shooting preview interface 10, the user can click the template control 13 so that, as shown in (B) of fig. 4, the mobile phone displays the frame structure selection bar 14. The frame structure selection bar 14 displays 4 frame structures, and the user can then click the second of the 4 frame structures. As shown in (C) of fig. 4, the mobile phone may divide the shooting preview interface 10 into 2 image adding areas (indicated by 15 and 16 in fig. 4) according to the frame structure selected by the user, and each image adding area includes an adding control 17. As shown in (D) of fig. 4, the user can click the adding control 17 in the image adding area 15 so that, as shown in (E) of fig. 4, the mobile phone jumps to the first interface 18 of the album application, in which 9 pictures (represented by pictures 1 to 9 in fig. 4) are displayed, and the user can click picture 1. As shown in (F) of fig. 4, the mobile phone then displays the picture 1 selected by the user in the image adding area 15.
Optionally, in this embodiment of the application, the user may perform an input on the shooting preview interface, so that the electronic device cancels the display of the frame structure selection bar in the shooting preview interface.
Optionally, in this embodiment of the application, after the electronic device adds the M images to the M frame structure regions, the electronic device may synthesize the M images into the first image; or, after the electronic device adds the M images to the M frame structure regions, the user may click a third control so that the electronic device synthesizes the M images into the first image.
Step 304, in response to the third input, the electronic device displays the N images in the N frame structure regions.
Optionally, in this embodiment of the application, the electronic device may display N images in the N frame structure regions, so that a user may adjust the N images, and then the electronic device may obtain the first image according to the adjusted N images.
Step 305, the electronic device obtains a first image according to the N images.
In the embodiment of the present application, the N frame structure regions correspond to the N images one to one.
In this embodiment of the application, after displaying N images in the N frame structure regions, the electronic device may perform synthesis processing on the N images to obtain a first image.
Specifically, the electronic device may perform stitching processing on the N images to obtain the first image.
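As an illustration of this stitching step, the sketch below pastes each image into its frame structure region on a shared canvas. It assumes Pillow; the rectangle list, canvas size and the `stitch` helper name are hypothetical and only illustrate the idea.

```python
from PIL import Image

def stitch(images: list[Image.Image],
           regions: list[tuple[int, int, int, int]],  # (left, top, right, bottom) per region
           canvas_size: tuple[int, int]) -> Image.Image:
    """Stitch the images into one first image according to the frame structure regions."""
    first_image = Image.new("RGBA", canvas_size, (0, 0, 0, 0))
    for img, (left, top, right, bottom) in zip(images, regions):
        patch = img.convert("RGBA").resize((right - left, bottom - top))
        first_image.paste(patch, (left, top))
    return first_image
```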
In the embodiment of the application, the electronic equipment can display the target frame structure selected by the user in the shooting preview interface, so that the electronic equipment synthesizes the N images according to the target frame structure selected by the user to obtain the first image, and thus, the display effect of shooting the images by the electronic equipment is improved.
Optionally, in this embodiment, after the step 304, the shooting method provided in this embodiment further includes a step 401 and a step 402 described below, and the step 305 may specifically be a step 305a described below.
Step 401, the electronic device receives a fourth input of the target image by the user.
In this embodiment, the target image is one of N images displayed in N frame structure regions.
Optionally, in this embodiment of the present application, after the electronic device displays the N images in the N frame structure regions, the electronic device may receive an input of the user on the N images, so as to adjust the display positions of the N images in the N frame structure regions, adjust the display sizes of the N images, or perform a cropping operation on the N images.
Specifically, taking one of the images as an example, the cropping operation on the N images means that the user can drag that image to the edge area of the target frame structure region, so that the electronic device performs the cropping operation on that image.
Step 402, in response to the fourth input, the electronic device performs the first editing operation on the target image to obtain an edited target image.
In an embodiment of the present application, the first editing operation includes any one of: adjusting the display parameters of the N images, adjusting the layer relationship between the N images and the video collected by the shooting preview interface, and performing a cropping operation on the N images.
Step 305a, the electronic device performs synthesis processing on the edited target image to obtain a first image.
In the embodiment of the application, the electronic device can splice the edited target image to obtain the first image.
In the embodiment of the application, the electronic device can edit the target image in the N images to obtain the first image, so that the diversity and flexibility of displaying the first image by the electronic device are improved.
Optionally, in this embodiment of the application, after the step 202, the shooting method provided in this embodiment of the application further includes a step 501 described below, and the operation of "shooting based on the first image displayed in the target display style to obtain the second image" may specifically be implemented by step 203b described below.
Step 501, the electronic device determines the target display style of the N images.
In the embodiment of the present application, the target display styles of the N images correspond to the N frame structure regions one to one.
In this embodiment of the application, each of the N frame structure regions corresponds to one target display style, so that after the electronic device adds N images to the N frame structure regions, the electronic device may determine the target display style of the N images according to the target display style corresponding to each frame structure region, and then obtain a second image through the N images displayed in the target display style.
Optionally, in this embodiment of the application, the target display style corresponding to each of the N frame structure areas may be preset by the electronic device or user-defined.
Step 203b, the electronic device obtains a second image according to the N images displayed in the target display style.
In the embodiment of the present application, the target display style corresponding to each of the N images is different.
It can be understood that the electronic device may obtain the second image according to the N images displayed in their different target display styles.
In the embodiment of the application, the electronic device can display, in the shooting preview interface, the N images and the target display styles corresponding to the N images, so that when the user shoots, the electronic device obtains a second image containing different target display styles. In this way, the diversity of the second images shot by the electronic device is improved.
Optionally, in this embodiment, after the step 202, the shooting method provided in this embodiment further includes the following steps 501 to 503, and the step 204 may be specifically implemented by the following step 204 b.
Step 501, the electronic device receives a fifth input of the first image by the user.
In this embodiment of the application, the electronic device may display a style selection interface according to a fifth input of the user on the first image, so that the electronic device may perform an editing operation on the first image in the style selection interface, thereby obtaining the first image after the editing operation (i.e., a third image described below).
Step 502, the electronic device responds to the fifth input, and performs a second editing operation on the first image to obtain a third image.
In an embodiment of the present application, the second editing operation includes at least one of: adjusting the display parameters of the first image, adjusting the layer relation between the first image and the image collected by the shooting preview interface, and cutting the first image.
Optionally, in an embodiment of the present application, the display parameter includes at least one of: transparency, brightness, contrast, grayscale, and resolution.
Optionally, in this embodiment, the user may input a fourth control (for example, an editing control) to trigger the electronic device to perform an editing operation on the first image.
For example, as shown in (A) of fig. 5, in the case that the mobile phone displays the shooting preview interface 10, the mobile phone may by default set the first image to the shooting background style, set the transparency of the first image to 0, and display the transparency adjustment control 20. As shown in (B) and (C) of fig. 5, the user may perform a drag input on the transparency adjustment control 20 to increase the transparency of the first image, and the transparency-adjusted first image is displayed in the shooting preview interface 10.
Further illustratively, as shown in (A) of fig. 6, in the case that the mobile phone displays the shooting preview interface 10, the mobile phone may by default set the first image to the shooting foreground style, set the transparency of the first image to 0.5, and display the transparency adjustment control 20. As shown in (B) and (C) of fig. 6, the user may perform a drag input on the transparency adjustment control 20 to adjust the transparency of the first image, and the transparency-adjusted first image is displayed in the shooting preview interface 10.
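A minimal sketch of adjusting one such display parameter (transparency) of the first image is given below, assuming Pillow; the 0-to-1 transparency convention and the `set_transparency` helper are assumptions made for illustration.

```python
from PIL import Image

def set_transparency(first: Image.Image, transparency: float) -> Image.Image:
    """transparency = 0 keeps the first image opaque, 1 makes it fully transparent."""
    img = first.convert("RGBA")
    # Scale the existing alpha channel by (1 - transparency).
    alpha = img.getchannel("A").point(lambda a: int(a * (1.0 - transparency)))
    img.putalpha(alpha)
    return img
```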
Step 503, the electronic device updates the first image displayed in the shooting preview interface to a third image displayed in a target display style.
Optionally, in this embodiment of the application, after obtaining the third image, the electronic device may display the third image in the shooting preview interface according to a default display style; alternatively, the user may input a fourth control to trigger the electronic device to set the third image to the target display style and display the third image in the shooting preview interface.
Optionally, in this embodiment of the application, the user may input the fourth control again, so that the electronic device may change a display style of the third image in the shooting preview interface.
Step 204b, the electronic device shoots based on the third image to obtain the second image.
In the embodiment of the application, the electronic device may combine the third image and the image acquired by the shooting preview interface into one image to obtain the second image.
It should be noted that, the third image and the image acquired by the shooting preview interface are combined into one image to obtain the second image, and a specific implementation process may refer to step 204a1 or step 204a2, which is not described herein again to avoid repetition.
In the embodiment of the application, the electronic device can edit the first image according to the user's input to obtain the third image, and then obtain the second image from the third image. This avoids the situation in which the image shot by the electronic device has a monotonous effect that cannot meet the user's requirements, and improves the flexibility of image shooting.
Optionally, in this embodiment of the present application, the step 502 may be specifically implemented by the step 502a described below.
Step 502a, in response to the fifth input, the electronic device performs a cropping operation on the first image according to the input track of the fifth input to obtain the third image.
In this embodiment of the application, in the case that the target display style is the foreground style or the background style, the third image is the image contained within the input track of the fifth input; in the case that the target display style is the frame style, the third image is the part of the first image other than the image contained within the input track of the fifth input.
Optionally, in this embodiment of the present application, the electronic device may perform the cropping operation on the complete closed region formed by the user's input track to obtain the third image; or the electronic device may perform the cropping operation on the complete closed region formed jointly by the user's input track and an edge of the screen of the electronic device to obtain the third image.
In the embodiment of the application, the electronic device can crop the first image according to the user's input track; that is, the user can customize the region that meets the user's own needs, which improves the flexibility of the electronic device in editing images.
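This cropping can be pictured as masking the first image with the closed region that the track encloses; a minimal sketch assuming Pillow follows, where the track is a hypothetical list of screen coordinates and the style names and helper name are illustrative assumptions.

```python
from PIL import Image, ImageDraw

def crop_by_track(first: Image.Image, track: list[tuple[int, int]],
                  style: str) -> Image.Image:
    """Keep the inside of the track for foreground/background styles, the outside for the frame style."""
    img = first.convert("RGBA")
    keep_inside = style in ("foreground", "background")
    mask = Image.new("L", img.size, 0 if keep_inside else 255)
    ImageDraw.Draw(mask).polygon(track, fill=255 if keep_inside else 0)
    img.putalpha(mask)  # regions with alpha 0 are discarded from the third image
    return img
```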
Optionally, in an embodiment of the present application, the target display style is a foreground style or a background style; the step 502 can be specifically realized by the following steps 502b to 502 d.
Step 502b, the electronic device performs a segmentation operation on the first image in response to the fifth input, so as to obtain at least two image elements.
In this embodiment, the electronic device may process the first image through image segmentation to obtain at least two image elements in the first image.
It can be understood that the image elements are the individual shooting objects that make up the first image.
Illustratively, assuming that the first image consists of blue sky, white clouds and grass, the image elements are the blue sky, the white clouds and the grass.
Optionally, in this embodiment of the application, the electronic device may perform image segmentation processing on the first image according to an input of the user to the fifth control, so as to obtain at least two image elements in the first image.
Optionally, in this embodiment of the application, after obtaining the at least two image elements in the first image, the electronic device may set the at least two image elements as a shooting foreground style by default, and set transparency of the at least two image elements as 0.
Optionally, in this embodiment of the application, after obtaining the at least two image elements in the first image, the electronic device may mark the at least two image elements with a first mark.
Optionally, in an embodiment of the present application, the first mark may include at least one of: a color mark, a line mark (e.g., a dashed line) and a label mark.
Illustratively, as shown in (A) of fig. 7, in the case that the shooting preview interface 10 is displayed, the user can perform an input on the image segmentation control 21. As shown in (B) of fig. 7, the mobile phone may perform image segmentation on the first image to obtain 5 image elements (denoted by 22 to 26 in fig. 7), then mark the boundary of each of the 5 image elements with a dotted line and distinguish the different independent elements with different colors.
Step 502c, the electronic device receives a sixth input of the user to a target image element of the at least two image elements.
In this embodiment, the electronic device may select the target image element among the at least two image elements according to the sixth input of the user, and then perform an editing operation on the target image element.
Step 502d, in response to the sixth input, the electronic device performs the third editing operation on the target image element to obtain a fourth image.
In an embodiment of the present application, the third editing operation includes at least one of: deleting the target image element, repositioning the target image element, adjusting the display parameters of the target image element, and adjusting the layer relationship between the target image element and the image collected by the shooting preview interface.
It should be noted that the layers distinguish the blocking relationship among all the elements of the shooting foreground or of the shooting background; that is, an upper-layer element can block a lower-layer element, and a foreground element layer is always above a background element layer. In the final layer relationship of the second image, the shooting foreground is above the picture collected by the shooting preview interface, and that picture is above the shooting background.
Optionally, in this embodiment of the application, deleting the target image element may specifically be that the user drags the target image element to an edge area of the screen so that the electronic device deletes it; or the user long-presses the target image element so that the electronic device displays a deletion prompt box, and the user then performs an input on the deletion prompt box so that the electronic device deletes the target image element.
Illustratively, as shown in (C) of fig. 7, the user can drag an individual image element (for example, the sun image element 22) to the image element deletion area 27, and can also move the individual image element 26 (i.e., the tree in fig. 7) to the right to change its display position in the shooting preview interface 10. As shown in (D) of fig. 7, the mobile phone then deletes the sun image element 22 and adjusts the display position of the tree image element 26.
Optionally, in this embodiment of the application, while the electronic device is deleting the target image element, the target image element may be controlled to be displayed on the uppermost layer, so as to prevent the user from making a wrong operation.
Optionally, in this embodiment of the application, the repositioning operation on the target image element may specifically be that the user may perform a drag input on the target image element, so that the electronic device may change a display position of the target image element in the shooting preview interface.
Optionally, in this embodiment of the application, the adjusting of the display parameter of the target image element may specifically be that the user may input the fourth control, so that the electronic device may adjust the display parameter of the target image element.
Optionally, in this embodiment of the application, the adjusting of the layer relationship between the target image element and the image acquired by the shooting preview interface may specifically be that the user may input the sixth control, so that the electronic device may adjust the layer relationship between the target image element and the image acquired by the shooting preview interface.
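A minimal sketch of the deletion and repositioning branches of the third editing operation is given below, assuming Pillow; `element_mask` (an "L"-mode mask of the target image element, which in practice would come from the segmentation in step 502b) and the other names are illustrative assumptions.

```python
from PIL import Image

def edit_element(first: Image.Image, element_mask: Image.Image,
                 action: str, offset: tuple[int, int] = (0, 0)) -> Image.Image:
    img = first.convert("RGBA")
    element = Image.new("RGBA", img.size, (0, 0, 0, 0))
    element.paste(img, mask=element_mask)             # extract the target image element
    remainder = img.copy()
    remainder.paste((0, 0, 0, 0), mask=element_mask)  # blank the element out of the original
    if action == "delete":
        return remainder                              # deletion: the element is dropped
    if action == "move":
        moved = Image.new("RGBA", img.size, (0, 0, 0, 0))
        moved.paste(element, offset, element)         # repositioning: paste at the new offset
        return Image.alpha_composite(remainder, moved)
    return img
```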
Illustratively, as shown in (A) of fig. 8, after the user selects the target image element 24 (for example, the castle in fig. 8), the user may perform an input on the layer control 28. As shown in (B) of fig. 8, the mobile phone then changes the layer relationship between the castle image element 24 and the image collected by the shooting preview interface.
Optionally, in this embodiment of the application, after the electronic device has edited the first image, the user may perform an input on the fifth control again, so that the electronic device exits the second editing operation to obtain the third image.
Illustratively, as shown in fig. 9 (a), the user may click on the segmentation control 21. As shown in fig. 9 (B), at this time, the shooting preview interface 10 will exhibit a preview shooting effect in real time.
In the embodiment of the application, the electronic device can acquire at least two image elements in the first image according to the input of the user, and then the electronic device can edit the target image elements according to the input of the user to the target image elements, so that the flexibility of the electronic device in editing the image is improved.
In the shooting method provided by the embodiment of the present application, the execution subject may be a shooting device. In the embodiment of the present application, a shooting method executed by a shooting device is taken as an example, and the shooting device provided in the embodiment of the present application is described.
Fig. 10 shows a schematic diagram of a possible structure of the shooting apparatus according to the embodiment of the present application. As shown in fig. 10, the shooting apparatus 70 may include: a receiving module 71, a display module 72 and a shooting module 73.
The receiving module 71 is configured to receive a first input from the user on the shooting preview interface in the case that the shooting preview interface is displayed, where the first input is used to determine a first image. The display module 72 is configured to display, in response to the first input received by the receiving module 71, the first image in the shooting preview interface in a target display style. The receiving module 71 is further configured to receive a shooting input from the user. The shooting module 73 is configured to shoot, in response to the shooting input received by the receiving module 71, based on the first image displayed in the target display style to obtain a second image.
In a possible implementation manner, the receiving module 71 is further configured to receive a second input from the user on the shooting preview interface before the first input of the user on the first image is received. The display module 72 is further configured to display, in response to the second input received by the receiving module 71, the target frame structure in the shooting preview interface, where the shooting preview interface includes N frame structure regions and N is a positive integer. The receiving module 71 is further configured to receive a third input from the user, where the third input is an input on the N frame structure regions and N images. The display module 72 is further configured to display, in response to the third input received by the receiving module 71, the N images in the N frame structure regions so as to obtain the first image, where the N frame structure regions correspond to the N images one to one.
In a possible implementation manner, the shooting apparatus provided in the embodiment of the present application further includes an editing module and an updating module. The receiving module is further configured to receive a fourth input from the user on the target image, where the target image is one of the N images displayed in the N frame structure regions. The editing module is configured to perform, in response to the fourth input, a first editing operation on the target image to obtain an edited target image. The updating module is configured to perform synthesis processing on the edited target image to obtain the first image, where the first editing operation includes any one of: adjusting the display parameters of the N images, adjusting the layer relationship between the N images and the video collected by the shooting preview interface, and performing a cropping operation on the N images.
In a possible implementation manner, the first image includes N images, and the shooting apparatus provided in the embodiment of the present application further includes a determining module. The determining module is configured to determine the target display styles of the N images after the first image is displayed in the shooting preview interface in the target display style, where the target display styles of the N images correspond to the N frame structure regions one to one. The updating module is further configured to obtain the second image according to the N images displayed in the target display styles.
In a possible implementation manner, the shooting apparatus provided in the embodiment of the present application further includes an editing module and an updating module. The receiving module 71 is further configured to receive a fifth input from the user on the first image after the display module 72 displays the first image in the shooting preview interface in the target display style. The editing module is configured to perform, in response to the fifth input received by the receiving module 71, a second editing operation on the first image to obtain a third image, where the second editing operation includes at least one of: adjusting the display parameters of the first image, adjusting the layer relationship between the first image and the image collected by the shooting preview interface, and performing a cropping operation on the first image. The updating module is configured to update the first image displayed in the shooting preview interface to the third image displayed in the target display style. The shooting module 73 is specifically configured to shoot based on the third image to obtain the second image.
In a possible implementation manner, the editing module is specifically configured to perform a cropping operation on the first image according to an input track of the fifth input to obtain the third image. When the target display style is a foreground style or a background style, the third image is the image contained in the input track of the fifth input; when the target display style is a frame style, the third image is the image, in the first image, other than the image contained in the input track of the fifth input.
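A minimal sketch of this track-based cropping rule, assuming the Pillow imaging library, a closed input track approximated as a polygon, and hypothetical track coordinates (all assumptions, not details from the application):

```python
from PIL import Image, ImageDraw

def crop_by_track(first_image, track_points, target_style):
    """Keep the area enclosed by the input track for the foreground/background styles,
    or the area outside it for the frame style."""
    img = first_image.convert("RGBA")
    mask = Image.new("L", img.size, 0)
    ImageDraw.Draw(mask).polygon(track_points, fill=255)   # area enclosed by the drawn track
    if target_style == "frame":
        mask = mask.point(lambda v: 255 - v)               # invert: keep everything outside the track
    third_image = Image.new("RGBA", img.size, (0, 0, 0, 0))
    third_image.paste(img, (0, 0), mask)
    return third_image

# Hypothetical closed track drawn by the user on the preview interface.
track = [(100, 100), (500, 120), (480, 600), (120, 580)]
# third_image = crop_by_track(first_image, track, "frame")
```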
In a possible implementation manner, the target display style is a foreground style or a background style. The editing module is specifically configured to perform a segmentation operation on the first image to obtain at least two image elements. The receiving module 71 is further configured to receive a sixth input of the user to a target image element of the at least two image elements. The editing module is further configured to perform a third editing operation on the target image element in response to the sixth input received by the receiving module 71, so as to obtain a third image, where the third editing operation includes at least one of the following: deleting the target image element, repositioning the target image element, adjusting a display parameter of the target image element, and adjusting a layer relation between the target image element and the image acquired by the shooting preview interface.
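A minimal sketch of the element-level editing described above, assuming the Pillow imaging library and assuming the segmentation step has already produced an "L"-mode mask for the target element (for example from any off-the-shelf image segmentation model); the operation names and the offset argument are illustrative only:

```python
from PIL import Image

def edit_target_element(first_image, element_mask, operation, offset=(0, 0)):
    """Apply an illustrative third editing operation to one segmented image element.

    element_mask is an "L"-mode mask, the same size as first_image, marking the element."""
    img = first_image.convert("RGBA")
    # Separate the target element from the rest of the first image.
    element = Image.new("RGBA", img.size, (0, 0, 0, 0))
    element.paste(img, (0, 0), element_mask)
    remainder = img.copy()
    remainder.paste((0, 0, 0, 0), (0, 0), element_mask)    # transparent hole where the element was

    if operation == "delete":
        return remainder
    if operation == "move":
        moved = remainder.copy()
        moved.paste(element, offset, element)              # reposition the element by an offset
        return moved
    return img                                             # other operations omitted in this sketch
```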
In a possible implementation manner, the shooting module 73 is specifically configured to perform a fusion operation on the first image and the image acquired by the shooting preview interface to obtain the second image.
In a possible implementation manner, the shooting module 73 is specifically configured to: when the target display style is the background style, cover the first image in a background image area of the image acquired by the shooting preview interface to obtain the second image; and when the target display style is the foreground style or the frame style, cover the first image on the image acquired by the shooting preview interface to obtain the second image.
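A minimal sketch of these two overlay rules, assuming the Pillow imaging library and assuming a background mask for the captured frame is available (for example from a portrait-segmentation step); the function and argument names are illustrative:

```python
from PIL import Image

def fuse(first_image, preview_frame, target_style, background_mask=None):
    """Produce the second image from the first image and the captured preview frame."""
    cam = preview_frame.convert("RGBA")
    overlay = first_image.convert("RGBA").resize(cam.size)

    if target_style == "background" and background_mask is not None:
        # Background style: cover only the background area of the captured frame.
        # background_mask is an "L"-mode mask of the background area, same size as the frame.
        second = cam.copy()
        second.paste(overlay, (0, 0), background_mask)
        return second

    # Foreground or frame style: cover the first image over the whole captured frame;
    # its transparent pixels let the camera content show through.
    return Image.alpha_composite(cam, overlay)
```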
The embodiment of the application provides a shooting device. Because the shooting device can display the first image selected by the user in the target display style and obtain the second image based on the first image displayed in that style, the shooting result is no longer limited to the relatively uniform effect produced by a shooting template with a fixed display style. In this way, the flexibility of the shooting device in shooting images is improved.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in an electronic apparatus. The device may be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA); the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The shooting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 9, and is not described here again to avoid repetition.
Optionally, as shown in fig. 11, an embodiment of the present application further provides an electronic device 90, which includes a processor 91 and a memory 92. The memory 92 stores a program or an instruction that can be executed on the processor 91. When the program or the instruction is executed by the processor 91, the steps of the foregoing shooting method embodiments are implemented and the same technical effects can be achieved; details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (for example, a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so that charging management, discharging management, and power consumption management are implemented through the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or have a different arrangement of components, and details are not repeated here.
The user input unit 107 is configured to receive a first input to the shooting preview interface by a user when the shooting preview interface is displayed, where the first input is used to determine a first image. The display unit 106 is configured to display the first image in the shooting preview interface in the target display style in response to the first input. The user input unit 107 is also used for receiving a shooting input from a user. The processor 110 is configured to perform a photographing based on the first image displayed in the target display style in response to a photographing input, and obtain a second image.
The embodiment of the application provides an electronic device. Because the electronic device can display the first image selected by the user in the target display style and obtain the second image based on the first image displayed in that style, the shooting result is no longer limited to the relatively uniform effect produced by a shooting template with a fixed display style. In this way, the flexibility of the electronic device in shooting images is improved.
Optionally, in this embodiment of the application, the user input unit 107 is further configured to receive a second input of the user to the shooting preview interface before receiving the first input of the user to the first image. The display unit 106 is further configured to display a target frame structure in the shooting preview interface in response to the second input, where the target frame structure includes N frame structure areas, and N is a positive integer. The user input unit 107 is further configured to receive a third input of the user, where the third input is an input to the N frame structure areas and to N images for the N frame structure areas. The display unit 106 is further configured to display the N images in the N frame structure areas in response to the third input, so as to obtain the first image, where the N frame structure areas correspond to the N images one to one.
Optionally, in this embodiment of the application, the user input unit 107 is further configured to receive a fourth input of the user to a target image after the N images are displayed in the N frame structure areas in response to the third input, where the target image is one of the N images displayed in the N frame structure areas. The processor 110 is further configured to perform a first editing operation on the target image in response to the fourth input, so as to obtain an edited target image. The processor 110 is specifically configured to synthesize the edited target image to obtain the first image, where the first editing operation includes any one of the following: adjusting display parameters of the N images, adjusting a layer relation between the N images and the video collected by the shooting preview interface, and performing a cropping operation on the N images.
Optionally, in this embodiment of the application, the first image includes the N images. The processor 110 is further configured to determine target display styles of the N images after the first image is displayed in the shooting preview interface in the target display style, where the target display styles of the N images correspond to the N frame structure areas one to one. The processor 110 is specifically configured to obtain the second image according to the N images displayed in the target display styles.
Optionally, in this embodiment of the application, the user input unit 107 is further configured to receive a fifth input of the first image by the user after the first image is displayed in the shooting preview interface in the target display style.
The processor 110 is further configured to perform a second editing operation on the first image in response to the fifth input, so as to obtain a third image, where the second editing operation includes at least one of the following: adjusting display parameters of the first image, adjusting a layer relation between the first image and the image collected by the shooting preview interface, and performing a cropping operation on the first image; and update the first image displayed in the shooting preview interface to the third image displayed in the target display style. The processor 110 is specifically configured to shoot based on the third image to obtain the second image.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to perform a cropping operation on the first image according to an input track of the fifth input to obtain the third image. When the target display style is a foreground style or a background style, the third image is the image contained in the input track of the fifth input; when the target display style is a frame style, the third image is the image, in the first image, other than the image contained in the input track of the fifth input.
Optionally, in this embodiment of the application, the target display style is a foreground style or a background style;
the processor 110 is specifically configured to perform a segmentation operation on the first image to obtain at least two image elements. The user input unit 107 is further configured to receive a sixth input of the user to a target image element of the at least two image elements. The processor 110 is further configured to perform a third editing operation on the target image element in response to the sixth input, so as to obtain a third image, where the third editing operation includes at least one of the following: deleting the target image element, repositioning the target image element, adjusting a display parameter of the target image element, and adjusting a layer relation between the target image element and the image collected by the shooting preview interface.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to perform a fusion operation on the first image and an image acquired by the shooting preview interface to obtain a second image.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to, when the target display style is a background style, cover the first image in a background image area in an image acquired by a shooting preview interface to obtain a second image;
and under the condition that the target display style is a foreground style or a frame style, covering the first image on the image acquired by the shooting preview interface to obtain a second image.
The electronic device provided by the embodiment of the application can realize each process realized by the method embodiment, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be configured to store software programs and various data. The memory 109 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, where the first storage area may store an operating system, and an application program or an instruction required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 109 may include a volatile memory or a non-volatile memory, or the memory 109 may include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the foregoing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing shooting method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. A photographing method, characterized in that the method comprises:
receiving a first input to a shooting preview interface under the condition that the shooting preview interface is displayed, wherein the first input is used for determining a first image;
in response to the first input, displaying the first image in the capture preview interface in a target display style;
receiving a photographing input of a user;
and responding to the shooting input, and shooting based on the first image displayed in the target display style to obtain a second image.
2. The method of claim 1, wherein prior to receiving the first input of the first image by the user, the method further comprises:
receiving a second input to the shooting preview interface;
responding to the second input, and displaying a target picture structure in the shooting preview interface, wherein the target picture structure comprises N picture structure areas, and N is a positive integer;
receiving a third input of a user, wherein the third input is input to N picture structure areas and N images in the N picture structure areas;
displaying the N images in the N picture structure areas in response to the third input;
and obtaining the first image according to the N images, wherein the N picture structure areas correspond to the N images one by one.
3. The method of claim 2, wherein after displaying the N images in the N picture structure areas in response to the third input, the method further comprises:
receiving a fourth input of a user to a target image, wherein the target image is one of the N images displayed in the N picture structure areas;
responding to the fourth input, and performing first editing operation on the target image to obtain an edited target image;
obtaining the first image according to the N images, including:
synthesizing the edited target image to obtain the first image;
wherein the first editing operation comprises any one of: adjusting display parameters of the N images, adjusting layer relation between the N images and videos collected by the shooting preview interface, and performing cutting operation on the N images.
4. The method of claim 1 or 3, wherein the first image comprises N images; after the displaying the first image in the capture preview interface in the target display style, the method further comprises:
determining target display styles of the N images, wherein the target display styles of the N images correspond to the N picture structure areas one by one;
the shooting based on the first image displayed in the target display style to obtain a second image comprises:
and obtaining the second image according to the N images displayed in the target display style.
5. The method of claim 1, wherein after the displaying the first image in the capture preview interface in the target display style, the method further comprises:
receiving a fifth input of the first image by the user;
in response to the fifth input, performing a second editing operation on the first image to obtain a third image, wherein the second editing operation includes at least one of: adjusting display parameters of the first image, adjusting a layer relation between the first image and an image collected by the shooting preview interface, and performing a first cropping operation on the first image;
updating the first image displayed in the shooting preview interface into the third image displayed in the edited target display style;
the photographing based on the first image displayed in the target display style to obtain a second image includes:
and shooting based on the third image to obtain the second image.
6. The method of claim 5, wherein the performing a second editing operation on the first image to obtain a third image comprises:
performing a cropping operation on the first image according to an input track of the fifth input to obtain the third image;
wherein, when the target display style is a foreground style or a background style, the third image is an image included in the input track of the fifth input; and when the target display style is a frame style, the third image is an image of the first image except for an image included in the input track of the fifth input.
7. The method according to claim 5, wherein the target display style is a foreground style or a background style;
the performing a first editing operation on the first image to obtain a third image includes:
performing segmentation operation on the first image to obtain at least two image elements;
receiving a sixth input of a user to a target image element of the at least two image elements;
in response to the sixth input, performing a third editing operation on the target image element to obtain the third image, the third editing operation including at least one of: deleting the target image element, repositioning the target image element, adjusting the display parameter of the target image element, and adjusting the layer relation between the target image element and the image acquired by the shooting preview interface.
8. The method of claim 1, wherein said capturing based on the first image resulting in a second image comprises:
and carrying out fusion operation on the first image and the image acquired by the shooting preview interface to obtain the second image.
9. The method according to claim 8, wherein the fusing the first image and the image captured by the shooting preview interface to obtain the second image comprises:
under the condition that the target display style is a background style, covering the first image in a background image area in an image acquired by the shooting preview interface to obtain a second image;
and under the condition that the target display style is a foreground style or a frame style, covering the first image on the image acquired by the shooting preview interface to obtain the second image.
10. A photographing apparatus, characterized by comprising: the device comprises a receiving module, a display module and a shooting module;
the receiving module is used for receiving a first input of the shooting preview interface under the condition that the shooting preview interface is displayed, wherein the first input is used for determining a first image;
the display module is used for responding to the first input received by the receiving module, and displaying the first image in the shooting preview interface in a target display mode;
the receiving module is also used for receiving shooting input of a user;
the shooting module is used for responding to the shooting input received by the receiving module, shooting based on the first image displayed in the target display mode, and obtaining a second image.
11. An electronic device, characterized in that it comprises a processor, a memory and a program or instructions stored on said memory and executable on said processor, said program or instructions, when executed by said processor, implementing the steps of the shooting method according to any one of claims 1 to 9.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the shooting method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211028584.4A CN115988312A (en) | 2022-08-25 | 2022-08-25 | Shooting method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211028584.4A CN115988312A (en) | 2022-08-25 | 2022-08-25 | Shooting method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115988312A true CN115988312A (en) | 2023-04-18 |
Family
ID=85972630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211028584.4A Pending CN115988312A (en) | 2022-08-25 | 2022-08-25 | Shooting method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115988312A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015032099A1 (en) * | 2013-09-09 | 2015-03-12 | 宇龙计算机通信科技(深圳)有限公司 | Image processing method and terminal |
CN106327551A (en) * | 2016-08-30 | 2017-01-11 | 华侨大学 | Painting automatic enframing method based on edge detection and image splicing |
CN108174109A (en) * | 2018-03-15 | 2018-06-15 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN112308780A (en) * | 2020-10-30 | 2021-02-02 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN112995500A (en) * | 2020-12-30 | 2021-06-18 | 维沃移动通信(杭州)有限公司 | Shooting method, shooting device, electronic equipment and medium |
CN114143461A (en) * | 2021-11-30 | 2022-03-04 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN114598823A (en) * | 2022-03-11 | 2022-06-07 | 北京字跳网络技术有限公司 | Special effect video generation method and device, electronic equipment and storage medium |
- 2022-08-25 CN CN202211028584.4A patent/CN115988312A/en active Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |