CN114466140B - Image shooting method and device - Google Patents
- Publication number
- CN114466140B (application CN202210121041.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- horizon
- target object
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Studio Devices (AREA)
Abstract
The application discloses an image shooting method and device. The method includes: when a shooting preview interface displays a first image containing a target object, receiving a first input applied by a user to the first image, the first input being used to determine the horizon direction of the first image; in response to the first input, synthesizing a first background image and a first subject rotation image to obtain a first target image, where the first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction; and displaying the first target image. The application belongs to the technical field of image processing.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image shooting method and device.
Background
With the continual improvement of smartphone imaging technology and the booming popularity of imaging applications, more and more people use mobile phones to take pictures or record videos to share their lives. Some interesting images can only be obtained by a professional performing difficult actions or through complex post-processing, for example extreme-sports images or 'walking in the air' images; for users without such motor skills or image-processing skills, these interesting images are hard to obtain.
At present, it is therefore difficult for users to obtain interesting images.
Disclosure of Invention
The embodiments of the application aim to provide an image shooting method and device that address the problem that it is difficult for users to obtain interesting images.
In a first aspect, an embodiment of the present application provides an image capturing method, where the method is applied to an electronic device, and the method includes:
receiving a first input applied by a user to a first image when the shooting preview interface displays the first image containing a target object, where the first input is used to determine the horizon direction of the first image;
in response to the first input, synthesizing a first background image and a first subject rotation image to obtain a first target image, where the first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction; and
displaying the first target image.
In a second aspect, an embodiment of the present application provides an image capturing apparatus, where the apparatus is applied to an electronic device, and the apparatus includes:
a receiving module, configured to receive a first input applied by a user to a first image when the first image containing a target object is displayed on the shooting preview interface, where the first input is used to determine the horizon direction of the first image;
a synthesizing module, configured to synthesize, in response to the first input, a first background image and a first subject rotation image to obtain a first target image, where the first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the first gravity direction relative to the horizon direction; and
a display module, configured to display the first target image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In this embodiment of the application, when a first image containing a target object is displayed on the shooting preview interface, a first input applied by the user to the first image is received, the first input being used to determine the horizon direction of the first image; in response to the first input, a first background image and a first subject rotation image are synthesized to obtain a first target image, which is then displayed. The first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction. The user does not need advanced motor skills or image-processing skills: an interesting image can be obtained simply by enabling the corresponding shooting mode and selecting the horizon direction, which greatly reduces the difficulty of obtaining interesting images.
Drawings
Fig. 1 is a schematic flow chart of an image capturing method according to an embodiment of the present application;
fig. 2 is an operation schematic diagram of a control for opening a shooting mode according to an embodiment of the present application;
FIG. 3 is a schematic view of a first image according to an embodiment of the present application;
FIG. 4 is a schematic view of a first target image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of alternative direction identifiers in a first image provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image capturing device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar objects and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object may be one object or multiple objects. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
Many existing extreme-sports videos and other interesting videos are shot by professional athletes working with professional camera operators. Performing such actions is risky even for professionals; ordinary users cannot perform them, and most users also lack the shooting skills. Post-production techniques have a high entry barrier and require professionals as well as time and money, so it is difficult for ordinary users to shoot interesting videos (such as upside-down 'walking in the air' or mid-air flips), even though such videos are often very appealing. At present, it is therefore difficult for users to obtain interesting images.
The image capturing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image capturing method according to an embodiment of the present application. As shown in fig. 1, the method may include the steps of:
s110, in a case where the photographing preview interface displays a first image including the target object, a first input of the first image by the user is received.
The first input is used for determining the horizon direction of the first image, the horizon direction is used for determining the rotation angle of the image of the target object when the first target image is synthesized, and the horizon direction is determined by the first input of the user to the first image.
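As a rough illustration of how such a rotation angle could be computed (the patent does not prescribe a specific formula, so the sign convention and units below are assumptions), the gravity reading and the chosen horizon direction can be treated as 2D vectors in the image plane and the signed angle between them taken:

```python
import math

def rotation_angle_deg(gravity_xy, horizon_xy):
    """Signed angle (in degrees) of the gravity direction relative to the
    chosen horizon direction, both given as (x, y) vectors in the image plane.
    Illustrative sketch only; the exact convention used on the device is not
    disclosed by the patent."""
    g = math.atan2(gravity_xy[1], gravity_xy[0])
    h = math.atan2(horizon_xy[1], horizon_xy[0])
    diff = math.degrees(g - h)
    return (diff + 180.0) % 360.0 - 180.0  # normalised into [-180, 180)
```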
In one embodiment, the image of the target object and the first background image each contain alternative direction identifiers, where an alternative direction identifier indicates the orientation of a feature edge in the image of the target object or in the first background image. In that case S110, receiving a first input applied by the user to the first image, may include:
receiving a first input applied by the user to a target direction identifier among the alternative direction identifiers, where the direction indicated by the target direction identifier is taken as the horizon direction.
In one embodiment, before S110, receiving the first input applied by the user to the first image when the shooting preview interface displays the first image containing the target object, the method may further include:
receiving a third input from the user when the first image containing the target object is displayed on the shooting preview interface, and triggering an interesting shooting mode in response to the third input.
The third input may be an input on a shooting mode control displayed on the shooting preview interface. The shooting mode control may display an identifier of the shooting mode so that the user can tell what the control does; as shown in FIG. 2, the control may display the words "interesting recording", and by triggering the control through the third input the user makes the electronic device enter the interesting shooting mode. The third input used to trigger the shooting mode control may take a variety of forms, e.g., a click, a slide, a long press, and so on.
S120, in response to the first input, synthesizing the first background image and the first subject rotation image to obtain a first target image.
The first background image is the part of the first image other than the image of the target object, and the first subject rotation image is the image of the target object rotated by a first rotation angle, the first rotation angle being the angle of the first gravity direction relative to the horizon direction. The first background image and the first subject rotation image can be obtained in advance by image processing of the first image. The first gravity direction can be acquired by a gravity sensor in the electronic device, and the first rotation angle can be calculated from the first gravity direction and the horizon direction. When the first target image is synthesized, the rotated image of the target object may cover part of the first background image, and a blank region may appear at the original position of the image of the target object; this blank can be repaired with reference to the first background image by an image inpainting algorithm, for example the inpaint algorithm, and the first target image is finally obtained, which may be as shown in FIG. 4.
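A minimal sketch of this synthesis step, using OpenCV primitives as stand-ins for the rotation, compositing and hole-filling described above, might look as follows; it assumes the binary mask of the target object is already available (the recognition step that produces it is described next), and the inpainting radius and border handling are assumptions rather than details disclosed by the patent:

```python
import cv2
import numpy as np

def compose_target_image(frame, subject_mask, angle_deg):
    """Rotate the masked subject by angle_deg about its centroid, repair the
    hole it leaves by inpainting against the background, and paste the rotated
    subject back on top. `frame` is a BGR image and `subject_mask` an 8-bit
    single-channel mask that is nonzero where the target object is.
    Illustrative sketch only."""
    h, w = frame.shape[:2]
    ys, xs = np.nonzero(subject_mask)
    center = (float(xs.mean()), float(ys.mean()))
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)

    subject = cv2.bitwise_and(frame, frame, mask=subject_mask)
    rotated_subject = cv2.warpAffine(subject, rot, (w, h))
    rotated_mask = cv2.warpAffine(subject_mask, rot, (w, h))

    # Fill the subject's original position with reference to the background
    # (the 3-pixel inpaint radius is an arbitrary choice for this sketch).
    background = cv2.inpaint(frame, subject_mask, 3, cv2.INPAINT_TELEA)

    # Overlay the rotated subject onto the repaired background.
    out = background.copy()
    out[rotated_mask > 0] = rotated_subject[rotated_mask > 0]
    return out
```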
In one embodiment, before S120 (synthesizing the first background image and the first subject rotation image in response to the first input to obtain the first target image), the method further includes:
identifying the image of the target object in the first image and the first background image.
When identifying the image of the target object and the first background image in the first image, an image recognition algorithm can be applied to recognise the features of the first image. The recognition algorithm may, for example, be the yolo_v3 algorithm, which can be trained on original images and the corresponding label images, where the original image corresponds to the first image and the label image corresponds to the image of the target object; the recognition algorithm thereby identifies and separates the image of the target object from the first background image. The image of the target object may be a person, an animal, and so on, and the first background image is the background excluding the person or animal. For example, as shown in FIG. 3, the 'girl' is the image of the target object, and the background other than the 'girl' is the first background image.
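For illustration, once a detector of this kind has produced a binary mask of the target object, splitting the first image into the subject image and the first background image is straightforward; the sketch below assumes such a mask is available and does not implement the detector itself:

```python
import numpy as np

def split_subject_and_background(frame, subject_mask):
    """Split a frame into the image of the target object and the first
    background image, given a binary mask that is nonzero where the subject
    is. The mask itself would come from a detector such as the yolo_v3-style
    model mentioned above; producing it is outside this sketch."""
    keep = (subject_mask > 0)[..., None]   # broadcast the mask over colour channels
    subject = np.where(keep, frame, 0)     # subject pixels, background zeroed
    background = np.where(keep, 0, frame)  # background pixels, subject zeroed
    return subject, background
```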
S130, displaying the first target image.
After the first target image is obtained, it can be displayed so that the user can directly observe the shooting effect, which improves the user experience. In the embodiment of the application, first target images of the target object can be captured continuously and displayed in time sequence to form a video: the background image in each frame does not rotate, while the subject image rotates by a uniformly changing angle, producing an interesting video effect.
In this embodiment of the application, when a first image containing a target object is displayed on the shooting preview interface, a first input applied by the user to the first image is received, the first input being used to determine the horizon direction of the first image; in response to the first input, a first background image and a first subject rotation image are synthesized to obtain a first target image, which is then displayed. The first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction. The user does not need advanced motor skills or image-processing skills: an interesting image can be obtained simply by enabling the corresponding shooting mode and selecting the horizon direction, which greatly reduces the difficulty of obtaining interesting images.
In one embodiment, the first input is a sliding input and the horizon direction is the sliding direction of the first input.
With the horizon direction determined by the first input in this way, the user can set the desired horizon direction through a sliding input, which improves the user experience.
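Purely as an illustration of this option (the patent only states that the sliding direction is taken as the horizon direction), the direction could be derived from the start and end touch points of the slide as follows:

```python
import math

def horizon_from_slide(start_xy, end_xy):
    """Return the horizon direction implied by a sliding input, as a unit
    vector and an angle in degrees. Illustrative sketch only; touch-event
    handling and the screen coordinate convention are assumptions."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("the slide has no direction")
    return (dx / length, dy / length), math.degrees(math.atan2(dy, dx))
```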
In one embodiment, as shown in FIG. 5, the image of the target object and the first background image each contain alternative direction identifiers, where an alternative direction identifier indicates the orientation of a feature edge in the image of the target object or in the first background image. In that case S110, receiving a first input applied by the user to the first image, may include:
receiving a first input applied by the user to a target direction identifier among the alternative direction identifiers, where the direction indicated by the target direction identifier is taken as the horizon direction.
With the horizon direction determined by the first input in this way, the user can quickly set the desired horizon direction by selecting an alternative direction identifier, which improves the user experience.
In one embodiment, the first gravity direction is acquired at a first moment, the first moment being the moment at which the first image of the target object is captured. After S130, displaying the first target image, the method may further include:
acquiring a second image of the target object and acquiring a second gravity direction;
calculating a second rotation angle of the second gravity direction relative to a first direction, where the first direction is the horizon direction after it has been changed according to a preset horizon change rule;
identifying the image of the target object in the second image and a second background image, where the second background image is the part of the second image other than the image of the target object;
(the principle of identifying the second subject image and the second background image of the second image is similar to that of identifying the image of the target object and the first background image in the first image, and is not repeated here;)
synthesizing the second background image and a second rotation image to obtain a second target image, where the second rotation image is the image of the target object in the second image rotated by the second rotation angle; and
displaying the second target image.
The second gravity direction is acquired by the electronic device at a second moment, the second moment being the moment at which the second image of the target object is captured. The first direction is the horizon direction after it has been changed according to the preset horizon change rule; for example, the preset horizon change rule may be that the horizon rotates at a constant speed, specifically by 10° for every acquired frame.
The second rotation image is the second subject image rotated by the second rotation angle; the principle of synthesizing the second target image is similar to that of synthesizing the first target image and is not repeated here.
In the embodiment of the application, after the second target image is obtained, it is displayed so that the user can directly observe the shooting effect, which improves the user experience. Second target images of the target object can be captured continuously, and the first target image and the second target images are displayed in time sequence to form a video: the background image in each frame does not rotate, while the subject image rotates by a uniformly changing angle, producing an interesting video effect.
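Put together, the per-frame behaviour under a uniform horizon change rule can be sketched as below; it reuses the compose_target_image sketch from earlier, the subject masks are assumed to come from the recognition step, and the 10° step is only the example value mentioned above:

```python
def record_interesting_video(frames, gravity_angles_deg, subject_masks, step_deg=10.0):
    """Per-frame sketch of the uniform horizon-rotation rule: the horizon turns
    by step_deg on every captured frame, the subject is rotated by the angle of
    gravity relative to that moving horizon, and the background stays still.
    The three input sequences are assumed to be aligned frame by frame;
    compose_target_image is the sketch shown earlier."""
    horizon_deg = 0.0
    for frame, gravity_deg, mask in zip(frames, gravity_angles_deg, subject_masks):
        horizon_deg += step_deg            # preset horizon change rule
        angle = gravity_deg - horizon_deg  # rotation applied to the subject image
        yield compose_target_image(frame, mask, angle)
```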
In one embodiment, before acquiring the second image of the target object, the method may further include:
displaying alternative horizon change rule options;
receiving a second input applied by the user to a target horizon change rule among the alternative horizon change rule options; and
in response to the second input, determining the target horizon change rule as the preset horizon change rule.
The alternative horizon change rule options can offer several horizon change modes for the user to choose from; after the user selects the target horizon change rule through the second input, it is determined as the preset horizon change rule, so that the finally presented horizon change matches the user's expectation, which improves the user experience.
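The patent does not say what the offered rules are; purely for illustration, they could be modelled as named functions from frame index to horizon angle, with the user's second input selecting one of them:

```python
import math

# Hypothetical alternative horizon change rules, each mapping a frame index k
# to a horizon angle in degrees. These three options are illustrative guesses,
# not rules disclosed by the patent.
ALTERNATIVE_HORIZON_RULES = {
    "uniform rotation":  lambda k: 10.0 * k,           # constant 10 degrees per frame
    "accelerating spin": lambda k: 0.5 * k * k,        # speeds up over time
    "swing":             lambda k: 30.0 * math.sin(0.2 * k),
}

def select_preset_rule(name):
    """In response to the second input, the rule the user picked becomes the
    preset horizon change rule."""
    return ALTERNATIVE_HORIZON_RULES[name]
```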
In this embodiment of the application, when a first image containing a target object is displayed on the shooting preview interface, a first input applied by the user to the first image is received, the first input being used to determine the horizon direction of the first image; in response to the first input, a first background image and a first subject rotation image are synthesized to obtain a first target image, which is then displayed. The first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction. The user does not need advanced motor skills or image-processing skills: an interesting image can be obtained simply by enabling the corresponding shooting mode and selecting the horizon direction, which greatly reduces the difficulty of obtaining interesting images.
It should be noted that, for the image capturing method provided in the embodiments of the present application, the execution subject may be an image capturing device, or a control module in the image capturing device for executing the image capturing method. In the following, an image capturing device executing the image capturing method is taken as an example to describe the image capturing device provided by the embodiments of the application.
Fig. 6 is a schematic structural diagram of an image capturing device according to an embodiment of the present application, where each module in the device shown in fig. 6 has a function of implementing each step in fig. 1, and achieves the corresponding technical effects. As shown in fig. 6, the apparatus may include:
the receiving module 610 is configured to receive a first input from a user to a first image, where the first image including the target object is displayed on the capturing preview interface, and the first input is used to determine a horizon direction of the first image.
The synthesizing module 620 is configured to synthesize the first background image and the first subject rotation image in response to the first input, to obtain a first target image. The first background image is an image of the first image except for an image of the target object, the first subject rotation image is an image obtained by rotating the image of the target object by a first rotation angle, and the first rotation angle is an angle of the first gravity direction relative to the horizon direction.
And a display module 630, configured to display the first target image.
In this embodiment of the application, when a first image containing a target object is displayed on the shooting preview interface, a first input applied by the user to the first image is received, the first input being used to determine the horizon direction of the first image; in response to the first input, a first background image and a first subject rotation image are synthesized to obtain a first target image, which is then displayed. The first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction. The user does not need advanced motor skills or image-processing skills: an interesting image can be obtained simply by enabling the corresponding shooting mode and selecting the horizon direction, which greatly reduces the difficulty of obtaining interesting images.
In one embodiment, the apparatus further comprises an identification module.
The identification module is configured to identify the image of the target object in the first image and the first background image before the first background image and the first subject rotation image are synthesized in response to the first input to obtain the first target image.
In one embodiment, the first input is a sliding input and the horizon direction is the sliding direction of the first input.
In one embodiment, the image of the target object and the first background image each contain alternative direction identifiers. An alternative direction identifier indicates the orientation of a feature edge in the image of the target object or in the first background image.
The receiving module is specifically configured to receive a first input applied by the user to a target direction identifier among the alternative direction identifiers, with the direction indicated by the target direction identifier taken as the horizon direction.
In one embodiment, the first gravity direction is acquired at a first moment, the first moment being the moment at which the first image of the target object is captured.
The device also comprises an acquisition module, a calculation module and an identification module.
The acquisition module is configured to acquire a second image of the target object and a second gravity direction after the first target image is displayed.
The calculation module is configured to calculate a second rotation angle of the second gravity direction relative to a first direction, where the first direction is the horizon direction after it has been changed according to a preset horizon change rule.
The identification module is configured to identify the image of the target object in the second image and a second background image, where the second background image is the part of the second image other than the image of the target object.
The synthesizing module is further configured to synthesize the second background image and a second rotation image to obtain a second target image, where the second rotation image is the image of the target object in the second image rotated by the second rotation angle.
The display module is further configured to display the second target image.
In one embodiment, the apparatus further comprises a determination module.
The display module is further configured to display alternative horizon change rule options before the second image of the target object is acquired.
The receiving module is further configured to receive a second input applied by the user to a target horizon change rule among the alternative horizon change rule options.
The determining module is configured to determine, in response to the second input, the target horizon change rule as the preset horizon change rule.
In this embodiment of the application, when a first image containing a target object is displayed on the shooting preview interface, a first input applied by the user to the first image is received, the first input being used to determine the horizon direction of the first image; in response to the first input, a first background image and a first subject rotation image are synthesized to obtain a first target image, which is then displayed. The first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction. The user does not need advanced motor skills or image-processing skills: an interesting image can be obtained simply by enabling the corresponding shooting mode and selecting the horizon direction, which greatly reduces the difficulty of obtaining interesting images.
The image capturing device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The image capturing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image capturing device provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a detailed description is omitted herein.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and capable of running on the processor 701, where the program or the instruction implements each process of the embodiment of the image capturing method when executed by the processor 701, and the process can achieve the same technical effects, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (e.g., a battery) for powering the various components; the power source may be logically connected to the processor 810 through a power management system so that charging, discharging, power consumption and other functions are managed by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device: the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The input unit 804 may be used to perform the following steps:
s110, when the first image including the target object is displayed on the photographing preview interface, receiving a first input of the first image from the user, and when the first image including the target object is displayed on the photographing preview interface, receiving a first input of the photographing mode control displayed on the photographing preview interface from the user.
And S120, responding to the first input, and combining the first background image and the first main body rotation image to obtain a first target image.
The display unit 806 may be configured to perform the following steps:
s130, displaying the first target image.
Optionally, the input unit 804 may be further configured to perform the following steps:
and receiving a first input of a user for a target direction identifier in the alternative direction identifiers, wherein the direction indicated by the target direction identifier is taken as the horizon direction.
Wherein, based on the mode of the first input determining the horizon direction, the user can quickly set the expected horizon direction by selecting the alternative direction identification, optimizing the user experience.
Optionally, the input unit 804 may be further configured to perform the following steps:
a second image of the target object is acquired and a second gravitational direction is acquired.
Calculating a second rotation angle of the second gravity direction relative to the first direction; the first direction is the direction of the horizon after the horizon is changed according to a preset horizon change rule.
Identifying an image of a target object of the second image and a second background image; the second background image is an image other than the image of the target object in the second image.
The principle of identifying the second main image and the second background image of the second image is similar to that of identifying the target object of the first image and the first background image, and will not be described herein.
And synthesizing the second background image and the second rotation image to obtain a second target image, wherein the second rotation image is an image obtained by rotating an image of a target object of the second image by a second rotation angle.
The second electronic equipment is acquired in a second gravity direction at a second moment; the second moment is the moment of collecting a second image of the target object; the first direction is a direction in which the horizon direction is changed according to a preset horizon change rule, for example, the preset horizon change rule may be that the horizon rotates at a constant speed, and specifically, the second image rotates by 10 ° every acquired frame.
The second rotation image is an image obtained by rotating the second main body image by a second rotation angle; the principle of synthesizing the second target image is similar to that of synthesizing the first target image, and will not be described here again.
According to the embodiment of the application, second target images of the target object can be captured continuously, and the first target image and the second target images are displayed in time sequence to form a video: the background image in each frame does not rotate, while the subject image rotates by a uniformly changing angle, producing an interesting video effect.
Optionally, the display unit 806 may also be used to perform the following steps:
and displaying the second target image.
In the embodiment of the application, after the second target image is obtained, the second target image is displayed, so that a user can intuitively observe the shooting effect of the second target image, and the user experience is optimized.
Optionally, the display unit 806 may also be used to perform the following steps:
displaying alternative horizon change rule options;
the input unit 804 may be further configured to perform the following steps:
receiving a second input applied by the user to a target horizon change rule among the alternative horizon change rule options;
in response to the second input, determining the target horizon change rule as the preset horizon change rule.
The alternative horizon change rule options can offer several horizon change modes for the user to choose from; after the user selects the target horizon change rule through the second input, it is determined as the preset horizon change rule, so that the finally presented horizon change matches the user's expectation, which improves the user experience.
In this embodiment of the application, when a first image containing a target object is displayed on the shooting preview interface, a first input applied by the user to the first image is received, the first input being used to determine the horizon direction of the first image; in response to the first input, a first background image and a first subject rotation image are synthesized to obtain a first target image, which is then displayed. The first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of the electronic device's first gravity direction relative to the horizon direction. The user does not need advanced motor skills or image-processing skills: an interesting image can be obtained simply by enabling the corresponding shooting mode and selecting the horizon direction, which greatly reduces the difficulty of obtaining interesting images.
It should be appreciated that, in embodiments of the present application, the input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042; the graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8081 and other input devices 8072; the touch panel 8081, also referred to as a touch screen, may include two parts, a touch detection device and a touch controller. The other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse and a joystick, which are not described in detail here. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interface, applications and so on, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 810.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image capturing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
An embodiment of the application further provides a chip. The chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run programs or instructions to implement the processes of the above image shooting method embodiments and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, a system-on-chip, or the like.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (12)
1. An image capturing method, wherein the method is applied to an electronic device, the method comprising:
receiving a first input applied by a user to a first image containing a target object when the first image is displayed on a shooting preview interface, wherein the first input is used for determining the horizon direction of the first image;
synthesizing a first background image and a first subject rotation image in response to the first input to obtain a first target image, wherein the first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of a first gravity direction relative to the horizon direction;
and displaying the first target image.
2. The image capturing method according to claim 1, wherein before synthesizing the first background image and the first subject rotation image in response to the first input, the method further comprises:
an image of the target object in the first image and the first background image are identified.
3. The image capturing method according to claim 1, wherein the first input is a sliding input, and the horizon direction is a sliding direction of the first input.
4. The image capturing method of claim 1, wherein the image of the target object and the first background image each include alternative direction identifiers, and an alternative direction identifier indicates the orientation of a feature edge in the image of the target object or the first background image;
the receiving a first input from a user to the first image comprises:
receiving a first input applied by the user to a target direction identifier among the alternative direction identifiers, wherein the direction indicated by the target direction identifier is the horizon direction.
5. The image capturing method according to any one of claims 1 to 4, wherein the first gravity direction is acquired at a first moment, the first moment being the moment at which the first image of the target object is acquired;
after displaying the first target image, the method further comprises:
acquiring a second image of the target object and acquiring a second gravity direction;
calculating a second rotation angle of the second gravity direction relative to a first direction, wherein the first direction is the horizon direction after it has been changed according to a preset horizon change rule;
identifying the image of the target object in the second image and a second background image, wherein the second background image is the part of the second image other than the image of the target object;
synthesizing the second background image and a second rotation image to obtain a second target image, wherein the second rotation image is the image of the target object in the second image rotated by the second rotation angle;
and displaying the second target image.
6. The image capturing method of claim 5, wherein prior to acquiring the second image of the target object, the method further comprises:
displaying alternative horizon change rule options;
receiving a second input applied by a user to a target horizon change rule among the alternative horizon change rule options; and
in response to the second input, determining the target horizon change rule as the preset horizon change rule.
7. An image capturing apparatus, the apparatus being applied to an electronic device, the apparatus comprising:
a receiving module, configured to receive a first input applied by a user to a first image containing a target object when the first image is displayed on a shooting preview interface, wherein the first input is used for determining the horizon direction of the first image;
a synthesizing module, configured to synthesize, in response to the first input, a first background image and a first subject rotation image to obtain a first target image, wherein the first background image is the part of the first image other than the image of the target object, the first subject rotation image is the image of the target object rotated by a first rotation angle, and the first rotation angle is the angle of a first gravity direction relative to the horizon direction; and
a display module, configured to display the first target image.
8. The image capture device of claim 7, wherein the device further comprises an identification module;
the identification module is configured to identify the image of the target object and the first background image in the first image before the first background image and the first subject rotation image are synthesized in response to the first input to obtain the first target image.
9. The image capturing apparatus according to claim 7, wherein the first input is a sliding input, and the horizon direction is a sliding direction of the first input.
10. The image capturing apparatus of claim 7, wherein the image of the target object and the first background image each include alternative direction identifiers, and an alternative direction identifier indicates the orientation of a feature edge in the image of the target object or the first background image;
the receiving module is specifically configured to receive a first input applied by the user to a target direction identifier among the alternative direction identifiers, and take the direction indicated by the target direction identifier as the horizon direction.
11. The image capturing apparatus according to any one of claims 7 to 10, wherein the first gravity direction is acquired at a first moment, the first moment being the moment at which the first image of the target object is acquired;
the device also comprises an acquisition module, a calculation module and an identification module;
the acquisition module is configured to acquire a second image of the target object and acquire a second gravity direction after the first target image is displayed;
the calculation module is configured to calculate a second rotation angle of the second gravity direction relative to a first direction, wherein the first direction is the horizon direction after it has been changed according to a preset horizon change rule;
the identification module is configured to identify the image of the target object in the second image and a second background image, wherein the second background image is the part of the second image other than the image of the target object;
the synthesizing module is further configured to synthesize the second background image and a second rotation image to obtain a second target image, wherein the second rotation image is the image of the target object in the second image rotated by the second rotation angle; and
the display module is further configured to display the second target image.
12. The image capture device of claim 11, wherein the device further comprises a determination module;
the display module is further configured to display alternative horizon change rule options before the second image of the target object is acquired;
the receiving module is further configured to receive a second input applied by a user to a target horizon change rule among the alternative horizon change rule options; and
the determining module is configured to determine, in response to the second input, the target horizon change rule as the preset horizon change rule.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210121041.0A CN114466140B (en) | 2022-02-09 | 2022-02-09 | Image shooting method and device |
PCT/CN2023/074523 WO2023151527A1 (en) | 2022-02-09 | 2023-02-06 | Image photographing method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210121041.0A CN114466140B (en) | 2022-02-09 | 2022-02-09 | Image shooting method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114466140A (en) | 2022-05-10 |
CN114466140B (en) | 2023-10-24 |
Family
ID=81414209
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210121041.0A Active CN114466140B (en) | 2022-02-09 | 2022-02-09 | Image shooting method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114466140B (en) |
WO (1) | WO2023151527A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114466140B (en) * | 2022-02-09 | 2023-10-24 | 维沃移动通信有限公司 | Image shooting method and device |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103164387A (en) * | 2013-04-16 | 2013-06-19 | 北京大学 | Method of confirming direction angles between surface features |
CN103941975A (en) * | 2014-03-18 | 2014-07-23 | 联想(北京)有限公司 | Information processing method and electronic equipment |
KR20150012887A (en) * | 2013-07-26 | 2015-02-04 | 엘지전자 주식회사 | Mobile terminal and capture object realignment method thereof |
CN105657260A (en) * | 2015-12-31 | 2016-06-08 | 宇龙计算机通信科技(深圳)有限公司 | Shooting method and terminal |
CN111756995A (en) * | 2020-06-17 | 2020-10-09 | 维沃移动通信有限公司 | Image processing method and device |
CN111857910A (en) * | 2020-06-28 | 2020-10-30 | 维沃移动通信有限公司 | Information display method and device and electronic equipment |
CN112492209A (en) * | 2020-11-30 | 2021-03-12 | 维沃移动通信有限公司 | Shooting method, shooting device and electronic equipment |
CN112839164A (en) * | 2019-11-25 | 2021-05-25 | 北京安云世纪科技有限公司 | Photographing method and device |
CN113170050A (en) * | 2020-06-22 | 2021-07-23 | 深圳市大疆创新科技有限公司 | Image acquisition method, electronic equipment and mobile equipment |
CN113794829A (en) * | 2021-08-02 | 2021-12-14 | 维沃移动通信(杭州)有限公司 | Shooting method and device and electronic equipment |
CN113810588A (en) * | 2020-06-11 | 2021-12-17 | 青岛海信移动通信技术股份有限公司 | Image synthesis method, terminal and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060078215A1 (en) * | 2004-10-12 | 2006-04-13 | Eastman Kodak Company | Image processing based on direction of gravity |
JP2015149600A (en) * | 2014-02-06 | 2015-08-20 | ソニー株式会社 | image processing apparatus, image processing method, and program |
CN104135612B (en) * | 2014-07-11 | 2018-05-01 | 努比亚技术有限公司 | A kind of image pickup method and filming apparatus of adjustable shooting body position |
CN108055463A (en) * | 2017-12-26 | 2018-05-18 | 努比亚技术有限公司 | Image processing method, terminal and storage medium |
KR102439502B1 (en) * | 2018-01-11 | 2022-09-05 | 삼성전자 주식회사 | Electronic device and method for processing image of the same |
CN109544496A (en) * | 2018-11-19 | 2019-03-29 | 南京旷云科技有限公司 | Generation method, the training method and device of object detection model of training data |
CN110312070B (en) * | 2019-04-23 | 2021-08-24 | 维沃移动通信有限公司 | Image processing method and terminal |
CN114466140B (en) * | 2022-02-09 | 2023-10-24 | 维沃移动通信有限公司 | Image shooting method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2023151527A1 (en) | 2023-08-17 |
CN114466140A (en) | 2022-05-10 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |