CN112492211A - Shooting method, electronic equipment and storage medium - Google Patents
- Publication number: CN112492211A (application CN202011398975.6A)
- Authority: CN (China)
- Prior art keywords: face, shooting, area, main body, subject
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N23/62—Control of parameters via user interfaces
- H04N23/632—Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
- H04N23/80—Camera processing pipelines; components thereof
Abstract
An embodiment of the invention relates to a shooting method, an electronic device, and a storage medium. A shooting subject and its gesture are extracted from the camera image, so invalid background persons and the like can be filtered out before shooting. Through split-screen display, the shooting subject is shown in real time in a first screen area while beauty templates are shown in a second screen area. Recognizing the subject's gestures yields distinct control actions: a template selection action and a shooting action. The template selection action triggers selection of a beauty template, letting the user switch templates remotely, and the subject, processed with the selected template, is displayed to the user in the first screen area in real time. Once the user has confirmed a template, the shooting action starts and ends the recording of the short video. The method thus enables beauty-template selection during remote or one-handed operation and directly presents the beautified image the user wants.
Description
Technical Field
The embodiment of the application relates to the technical field of video processing, in particular to a shooting method, electronic equipment and a storage medium.
Background
With the development of communication technology, people increasingly use terminal devices for entertainment and interaction. In particular, short-video production is growing rapidly, and large numbers of short videos are uploaded and published every day. People publish the short videos they shoot through the corresponding apps on their terminal devices and interact with others who share their hobbies and interests.
Short videos have gradually won users' favor because they are short, fast to consume, and spread widely. During shooting and publishing, people like to apply various personalized edits, such as beautification, subtitles, icons, and doodles; these edits greatly enrich the video content and meet users' personalization needs. Existing short-video beauty-shooting schemes mainly present beauty templates on a device such as a mobile phone; the user taps to select a template, the captured image is beautified, and after imaging the foreground is extracted and the image is cropped to produce the final result.
With this prior-art approach the user must hold the phone while tapping to select a template, which is inconvenient, and template selection is impossible when a device such as a selfie stick is used.
Disclosure of Invention
Embodiments of the invention provide a shooting method, an electronic device, and a storage medium that solve the prior-art problems of having to hold the phone while tapping to select a template, the inconvenience of that operation, and the inability to select a template when a device such as a selfie stick is used.
To solve the above technical problem, in a first aspect, an embodiment of the present invention provides a shooting method, including:
segmenting an image acquired by a camera to obtain a shooting subject and a corresponding subject gesture;
displaying the shooting subject in a first screen area of a display screen, and displaying at least one beauty template in a second screen area of the display screen;
recognizing the subject gesture in real time to obtain a gesture recognition result, wherein the gesture recognition result comprises a template selection action and a shooting action;
selecting any beauty template from the second screen area based on the template selection action, and performing beauty treatment on the shooting subject displayed in the first screen area in real time based on the selected beauty template;
and shooting based on the shooting action.
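As an illustrative sketch only (the patent specifies no code), the claimed steps can be wired together as a small event loop. All function names, action strings, and the `select:<index>` convention below are hypothetical stand-ins:

```python
def shooting_pipeline(frames, segment, recognize, templates):
    """Sketch of steps S10-S50: segment each frame into subject and
    gesture, then route recognized actions to template selection or
    shooting control.  `segment` and `recognize` are stand-in callables."""
    selected = None      # currently selected beauty template
    recording = False
    events = []
    for frame in frames:
        subject, gesture = segment(frame)       # S10: subject vs. gesture
        action = recognize(gesture)             # S30: classify the gesture
        if action.startswith("select:"):        # S40: template selection
            selected = templates[int(action.split(":")[1])]
            events.append(("beautify", subject, selected))
        elif action == "shoot":                 # S50: toggle start/stop
            recording = not recording
            events.append(("record_on" if recording else "record_off",
                           subject, selected))
    return events
```

Note that, as in the claims, template selection and shooting are independent actions: a "shoot" without a prior "select" simply records without beautification.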
In a second aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the photographing method according to the embodiment of the first aspect of the present invention.
In a third aspect, the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method according to the embodiment of the first aspect of the present invention.
Compared with the prior art, embodiments of the invention segment the image acquired by the camera to extract the shooting subject and the subject gesture, so invalid background persons and the like can be filtered out before shooting. Through split-screen display, the shooting subject alone is shown in real time in the first screen area, so a clean picture containing only what is wanted is finally displayed for shooting. The beauty templates are shown separately in the second screen area. Recognizing the subject's gestures yields distinct control actions: a template selection action and a shooting action. The template selection action triggers beauty-template selection, so the user can switch templates remotely, and the subject processed with the selected template is shown to the user in the first screen area in real time. After confirming a template, the user changes gesture; once the shooting action is recognized, recording of the short video starts and ends. Beauty-template selection during remote or one-handed operation is thus achieved, the beautified image the user wants is presented directly, and post-processing after imaging is avoided.
In addition, after the body gesture is recognized in real time and a gesture recognition result is obtained, the method further comprises the following steps:
and displaying the gesture recognition result in a third screen area.
In addition, the image obtained by the camera is segmented to obtain a shooting subject and a subject gesture, and the method specifically includes:
performing foreground segmentation on an image acquired by a camera, and extracting the positions of all human faces and the positions of all gestures in the image;
taking the face with the largest area as the subject face; treating any face whose distance to the subject face is smaller than a preset threshold, or whose region overlaps that of the subject face, as a co-shot object and merging it into the subject face to obtain the shooting subject;
and scaling the distance between the actual face and the hand in an equal proportion based on the image proportion to determine the region where the main body gesture corresponding to the main body face is located, so as to obtain the main body gesture.
In addition, after extracting the positions of all human faces in the image, the method further comprises the following steps:
connecting the key points extracted from each human face one by one to obtain a face curve;
respectively fitting each face curve, and carrying out smoothing treatment on different parts according to corresponding preset curvatures to obtain a face area corresponding to each face;
and counting the pixel points of each face region to obtain the area of each face.
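As a hedged sketch of the pixel-statistics step above (the patent does not specify an implementation), each face's area can be approximated by counting its mask pixels, with the largest area taken as the subject face:

```python
import numpy as np

def face_areas(face_masks):
    """Approximate each face's area by counting its mask pixels, i.e. the
    integral-like pixel statistics described above.  `face_masks` is a
    list of boolean arrays, one per detected face (illustrative only)."""
    return [int(m.sum()) for m in face_masks]

def subject_face_index(face_masks):
    """The face with the largest area is taken as the subject face."""
    areas = face_areas(face_masks)
    return int(np.argmax(areas)), areas
```

In practice the masks would come from the smoothed face curves; here they are plain boolean arrays for illustration.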
In addition, after obtaining the face region corresponding to each face, the method further includes:
acquiring the width and the height of a region where a main face is located to determine a main rectangular region where the main face is located;
and adaptively and proportionally adjusting the main rectangular area based on the size of the display area of the first screen area.
In addition, after obtaining the face region corresponding to each face, the method further includes:
extracting the position of the hair corresponding to the subject face and, if the region where the hair is located does not overlap any gesture region, merging the hair into the subject face;
and updating the maximum value and the minimum value of the width and the height of the pixel point of the region of the main body face.
In addition, after the hair is recorded into the face of the subject, the method further comprises the following steps:
if the display size of the subject rectangular area does not match the display size of the first screen area, evaluating the overlap between the hair region and the subject face region, and adaptively cropping the part of the hair below the subject face region so as to match the display size of the first screen area.
In addition, after obtaining the shooting subject and the subject gesture, the method further comprises the following steps:
determining a final display area according to a target area selected by a user;
and dynamically adjusting the size and the shape of the shooting subject in the first screen area based on the final display area.
Drawings
One or more embodiments are illustrated by the figures in the accompanying drawings, in which like reference numerals refer to similar elements; unless otherwise specified, the figures are not to scale.
FIG. 1 is a flow chart of a photographing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a video shooting process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image segmentation flow according to an embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to a second embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth to aid understanding of the application, but the claimed technical solution can be implemented without these details, and with various changes and modifications based on the following embodiments. The division into embodiments is for convenience of description only and does not limit the specific implementation of the invention; the embodiments may be combined and cross-referenced where not contradictory.
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a system, product or apparatus that comprises a list of elements or components is not limited to only those elements or components but may alternatively include other elements or components not expressly listed or inherent to such product or apparatus. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
With the prior-art method, the phone must be held in the hand while the template is tapped and selected, which is inconvenient, and template selection is impossible with a device such as a selfie stick. Moreover, video imaging relies on later video cropping; when cropping proves difficult, the shot is wasted and no usable short video can be produced.
Therefore, embodiments of the present invention provide a shooting method, an electronic device, and a storage medium that extract the shooting subject and the subject gesture and filter out invalid background persons before shooting. Recognizing the subject's gestures yields distinct control actions: a template selection action and a shooting action. The template selection action triggers selection of a beauty template, so the user can switch templates remotely, while the shooting action starts and ends recording of the short video. Beauty-template selection during remote or one-handed operation is thus achieved, the beautified image the user wants is presented directly, and post-processing after imaging is avoided. The method is described below through the following embodiments.
A first embodiment of the present invention relates to a photographing method. The specific flow is shown in fig. 1. The method comprises the following steps:
s10, segmenting the image acquired by the camera to obtain a shooting subject and a corresponding subject gesture;
Specifically, the camera is started to acquire a complete image, and the main person, without the interfering background, serves as the shooting subject. By segmenting out irrelevant, non-subject content, the image is finally partitioned into a head region, a hand function region, and a cluttered background region, which avoids having to crop the image after shooting. The head region serves as the shooting subject, and the hand function region serves as the subject gesture.
Step S20, displaying the shooting subject in a first screen area of a display screen, and displaying at least one beauty template in a second screen area of the display screen;
Specifically, the display screen can be split automatically at the front-end interface. According to the split result, the shooting subject is displayed in the first screen area and beauty templates are offered to the user in the second screen area. The second screen area can display one beauty template at a time or several simultaneously, so the user can choose the template display mode to suit different shooting distances.
Step S30, recognizing the subject gesture in real time to obtain a gesture recognition result, wherein the gesture recognition result comprises a template selection action and a shooting action;
Specifically, the identified shooting subject is cropped, scaled, and otherwise processed, and the result is displayed in the first screen area. Gesture recognition is performed on the non-subject part of the image (the hands against the cluttered background), and the recognition result is used to select a beauty template and to control shooting.
Step S40, selecting a beauty template from the second screen area based on the template selection action, and performing beauty processing in real time on the shooting subject displayed in the first screen area based on the selected template;
Specifically, the recognized template selection actions map to triggers such as moving the template selection up or down, turning the page, and applying a template, after which the shooting subject is beautified. Once a template is selected, the subject displayed in the first screen area is beautified in real time according to it. No video is being recorded at this point, so the user can swap templates to compare their effects and finally settle on a satisfactory one.
And step S50, confirming the starting node or the ending node of the shooting based on the shooting action.
Specifically, the recognized shooting actions map to triggers such as starting, pausing, and ending recording, thereby controlling the start and stop of shooting; both video and still capture are possible. There is no fixed order between starting or stopping the short video and selecting the beauty template; the order follows the gesture recognition results. If a shooting action is recognized without a template selection action, shooting proceeds without beautification; if a template selection action is recognized mid-recording, beauty processing is applied in real time according to the selected template from that point of the video onward.
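The start/pause/end triggers described above can be sketched as a tiny state machine. The state and action names are hypothetical, not taken from the patent:

```python
class CaptureController:
    """Minimal state machine for the shooting-action triggers described
    above (start / pause / end); an illustrative mapping only."""

    def __init__(self):
        self.state = "idle"

    def on_action(self, action):
        # Unknown or invalid (state, action) pairs leave the state unchanged.
        transitions = {
            ("idle", "start"):      "recording",
            ("recording", "pause"): "paused",
            ("paused", "start"):    "recording",
            ("recording", "end"):   "idle",
            ("paused", "end"):      "idle",
        }
        self.state = transitions.get((self.state, action), self.state)
        return self.state
```

A real implementation would drive `on_action` from the gesture classifier's output each frame; ignoring invalid transitions keeps spurious recognitions harmless.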
On the basis of the above embodiment, as a preferred implementation, the method further includes:
and displaying the gesture recognition result in a third screen area of the display screen.
Specifically, the third screen area may be shown or hidden according to the user's choice. As shown in fig. 2, the front-end interface offers the user two options in one-handed or remote shooting mode:
1. The front end displays a split-screen interface. The display is divided into three parts, a first, second, and third screen area, whose sizes can be adjusted dynamically. The first screen area is the video area that is finally displayed and shows the shooting subject in real time; the second screen area displays the beauty templates; the third screen area displays the cluttered background or the gesture recognition result.
The screen is first divided into three parts in the front-end region according to a preset template and size ratio. After the camera starts and a complete image is acquired, the user is prompted to adjust their head position so the face matches the corresponding region. The shooting subject is then extracted and the image segmented into a head region and a hand function region (gesture region): the head region serves as the shooting subject and is displayed in the first screen area, the second screen area carries the beauty templates, and the hand function region serves as the subject gesture, whose recognition result is displayed in the third screen area so the user can adjust hand movements according to the prompts.
2. The front end does not display a split-screen interface. The front-end interface provides only a first and a second screen area, showing the actual final imaging area and the beauty templates respectively. The image acquired by the camera is subject-extracted and segmented by an underlying algorithm, the cluttered background is removed, and the shooting subject, free of background clutter, is finally imaged and displayed in the first screen area for the user to beautify and shoot.
On the basis of the foregoing embodiment, as a preferred implementation manner, the segmenting an image acquired by a camera to obtain a shooting subject and a subject gesture specifically includes:
performing foreground segmentation on an image acquired by a camera, and extracting the positions of all human faces and the positions of all gestures in the image;
taking the face with the largest area as the subject face; treating any face whose distance to the subject face is smaller than a preset threshold, or whose region overlaps that of the subject face, as a co-shot object and merging it into the subject face to obtain the shooting subject;
and scaling the distance between the actual face and the hand in an equal proportion based on the image proportion to determine the region where the main body gesture corresponding to the main body face is located, so as to obtain the main body gesture.
Specifically, in step S10, as shown in fig. 3, the specific steps of segmenting the image acquired by the camera are:
and S101, performing foreground segmentation on the whole image acquired by the camera, and extracting Face _ mask positions of all faces, hand _ mask positions of gestures, hair _ mask positions of hair and the like.
Face_mask,hand_mask,other_mask=CNN(s)。
In this step the segmentation result can be obtained directly from a pre-trained neural network; this embodiment uses a pre-trained convolutional neural network (CNN) to segment the image.
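One plausible way to obtain the three masks is from a per-pixel class map, as a pre-trained segmentation CNN might output. The class ids below are invented for illustration; the patent does not specify the network's classes:

```python
import numpy as np

# Hypothetical class ids; the actual CNN and its label set are not
# specified by the patent.
FACE, HAND, HAIR = 1, 2, 3

def split_masks(label_map):
    """Split a per-pixel class map into Face_mask, hand_mask, and
    hair_mask boolean arrays."""
    return label_map == FACE, label_map == HAND, label_map == HAIR
```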
Step S102, connecting the key points extracted from each human face one by one to obtain a face curve;
respectively fitting each face curve, and carrying out smoothing treatment on different parts according to corresponding preset curvatures to obtain a face area corresponding to each face;
and counting the pixel points of each face area to obtain the area of each face.
Specifically, in this embodiment, for the positions of all extracted faces, region size statistics is performed, and priority ranking is performed.
Face_mask_new = f(Face_mask)
Face_mask_major = max(s(Face_mask_new))
The f function is a face-curve fitting function: the key points extracted from each face are connected one by one and smoothed with different curvatures for different parts. The s function computes each face's area with an ellipse-based approximation, completed in an integral-like manner, so the pixel statistics can be gathered directly over the region. Curve-fitting restoration of each face region prevents the mutual occlusion that occurs when several faces appear in a group shot from disturbing the determination of the subject face region.
Step S103, if another face overlaps the subject face or is close to it, judging it to be a co-shot object and expanding the subject face range: the face with the largest region area is determined to be the subject face; the correlation distance between every other face and the subject face is then evaluated, and when that distance is below a preset threshold, or the face regions overlap, the face is judged to be a co-shot object and merged into the subject face, Face_mask_major is updated, and the display region of the subject face is finally determined.
Here n is the number of persons allowed in the image at the same time, and len(Face_mask_major) gives the current number of face masks.
Length (x, y) is used for solving the distance between two face masks, and specifically comprises the following steps:
Length(x, y) = min(max(xw) - min(yw), max(xh) - min(yh)).
where x and y denote the two faces; w is the pixel axis in the width direction and h the pixel axis in the height direction; xw are the w-axis pixel coordinates of mask x, and xh its h-axis pixel coordinates.
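The Length formula can be transcribed directly; the sketch below operates on boolean masks and assumes `numpy.nonzero`'s row/column order maps to the h and w axes:

```python
import numpy as np

def mask_length(x, y):
    """Length(x, y) = min(max(xw) - min(yw), max(xh) - min(yh)) for two
    boolean face masks, following the formula above (illustrative)."""
    xh, xw = np.nonzero(x)   # h-axis (row) and w-axis (column) coordinates
    yh, yw = np.nonzero(y)
    return int(min(xw.max() - yw.min(), xh.max() - yh.min()))
```

Comparing this value against the preset threshold decides whether a neighboring face counts as a co-shot object.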
S104, scaling the distance between the actual face and the hand in an equal proportion based on the image proportion to determine the region of the main body gesture corresponding to the main body face to obtain the main body gesture;
Specifically, based on the hand_mask position and the subject face position Face_mask_major, proportional scaling according to the image scale, the actual head size, and the hand distance finally determines the subject gesture area. An image recognition algorithm then performs gesture recognition on that area; if the hand_mask area contains several subject gestures, recognition is performed over the whole hand_mask area. The gesture recognition algorithm can use a CNN to recognize and classify the gesture-area images. With this algorithm, the hand's range of movement can be large, provided the subject face area is not occluded.
On the basis of the above embodiments, as a preferred implementation, after the shooting subject is obtained, the method further includes:
Step S201, obtaining the maximum and minimum pixel coordinates, in width and height, of the region where the subject face is located, to determine the subject rectangular area containing it;
and adaptively and proportionally adjusting the subject rectangular area based on the display size of the first screen area.
Step S202, extracting the position of the hair corresponding to the subject face and, if the hair region does not overlap any gesture region, merging the hair into the subject face; and updating the maximum and minimum pixel coordinates, in width and height, of the subject face region.
Specifically, the maximum circumscribed rectangle is selected from the subject face range frame, the scaling of the subject face image is determined from the actual display size of the first screen area, and the face image is displayed in the corresponding area. The subject face region Face_mask_major_last is used to filter out the gesture area and irrelevant background, and the maximum and minimum pixel coordinates of the subject face region in width and height are obtained. When a hair region does not overlap the gesture area, it can be added into Face_mask_major_last and the region's maximum and minimum width and height coordinates updated. A subject rectangular area is delimited by this frame, and the image size is adjusted to the first screen area by scaling, rotation, expansion, and so on. If the face area within the subject is too small, enlarging it too much harms the aesthetics; the subject area is therefore imaged with region expansion beyond the background gestures, ensuring the head points up and each face is moderately sized. In other words, the front-end split-screen interface can be adjusted dynamically within a set range.
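The adaptive proportional adjustment above amounts to fitting the subject's circumscribed rectangle into the first screen area while preserving its aspect ratio. A minimal sketch (dimensions in pixels; names are illustrative):

```python
def fit_subject_rect(rect_w, rect_h, area_w, area_h):
    """Scale the subject's circumscribed rectangle to fit the first
    screen area while keeping its aspect ratio, per the adaptive
    proportional adjustment described above."""
    # The limiting dimension determines the common scale factor.
    scale = min(area_w / rect_w, area_h / rect_h)
    return round(rect_w * scale), round(rect_h * scale)
```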
Step S203, after the hair is merged into the subject face, the method further includes:
if the display size of the subject rectangular area does not match the display size of the first screen area, evaluating the overlap between the hair region and the subject face region, and adaptively cropping the part of the hair below the subject face region so as to match the display size of the first screen area.
Specifically, after the hair is merged into the subject face, the subject face area may no longer match the display aspect ratio. Since adding the hair mainly grows the subject face in the height direction, and cropping from the top would hurt the aesthetics, cropping is performed only from below.
In addition to the above embodiments, as a preferred embodiment, after obtaining the subject and the subject gesture, the method further includes:
determining a final display area according to a target area selected by a user;
and dynamically adjusting the size and the shape of the shooting subject in the first screen area based on the final display area.
Specifically, the first screen area in each of the above embodiments shows a shooting subject automatically extracted from the image segmentation result, but some users may wish to select the area to be shot themselves. In an actual shooting process, if the front-end split-screen interface is selected, the shooting subject is divided across three screens for display; at this point the user can be prompted, the final display area is determined directly from the area selected by the user, and the actual portrait subject area is extracted. At the same time, the size, shape, and the like of the actual portrait area in the first screen area can be dynamically adjusted based on parameters such as the size and shape of the subject's portrait area and the proportion corresponding to the best-looking state of a human face, so as to perform beauty shooting. For example: if the portrait subject area extracted from the actual image accounts for 50% of the whole image, while the best-looking proportion for a conventional face is 80%, the actual portrait area in the first screen area is enlarged and the extraction area expanded until the ratio reaches 80%.
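The ratio-driven resizing could be sketched as follows. The patent does not specify a formula, so `target_display_scale` is a hypothetical helper; the square root is our assumption for converting an area ratio into a per-side scale factor, and the numeric ratios are merely illustrative.

```python
import math

def target_display_scale(subject_ratio, target_ratio):
    """Per-side scale factor that brings the subject's share of the first
    screen area from its current area ratio toward a target 'best-looking'
    ratio (both given as fractions of the display area)."""
    return math.sqrt(target_ratio / subject_ratio)
```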
In this embodiment, the image acquired by the camera is segmented and the shooting subject and subject gesture are extracted, so that before shooting, invalid background figures and the like are filtered out and the subject alone is displayed in real time in the first screen area of the split screen, finally presenting a clean picture for shooting. The beauty templates are displayed independently in the second screen area. Different judgment actions, including a template selection action and a shooting action, are obtained by recognizing the subject's gestures: the template selection action triggers selection of a beauty template, so the user can remotely switch templates, and the shooting subject processed with the selected template is displayed to the user in the first screen area in real time; after the user confirms the template, the gesture can be changed, and once the shooting action is recognized, shooting of the short video is started and ended. Beauty-template selection is thus realized during remote or one-handed operation, the beautified image the user needs is presented directly, and post-processing after imaging is avoided.
This differs from the prior art, in which the mobile phone must be held while being tapped during close-range shooting, operation is inconvenient during long-range shooting, template selection is impossible when equipment such as a selfie stick is used, and eye control changes the eyes and spoils the aesthetics of the picture. By contrast, the present embodiment realizes beauty-template selection and shooting entirely through gesture recognition, directly presenting the beautified image the user needs and avoiding post-processing after imaging.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into a single step, or a step may be split into several steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to an algorithm or flow, or introducing insignificant design changes, without altering the core of the algorithm or flow likewise falls within the protection scope of this patent.
A second embodiment of the present invention relates to a server, as shown in Fig. 4, including a processor 810, a communications interface 820, a memory 830, and a communication bus 840, where the processor 810, the communications interface 820, and the memory 830 communicate with one another via the communication bus 840. The processor 810 may call logic instructions in the memory 830 to perform the steps of the shooting method described in the above embodiments, for example:
segmenting an image acquired by a camera to obtain a shooting subject and a corresponding subject gesture;
displaying the shooting subject in a first screen area of a display screen, and displaying at least one beauty template in a second screen area of the display screen;
identifying the subject gesture in real time to obtain a gesture recognition result, wherein the gesture recognition result comprises a template selection action and a shooting action;
selecting any beauty template from the second screen area based on the template selection action, and performing beauty treatment on the shooting subject displayed in the first screen area in real time based on the selected beauty template;
and shooting based on the shooting action.
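The five listed steps can be sketched as a single loop. All component functions here (`segment`, `recognize`, `apply_beauty`, `display`) are hypothetical stand-ins for the real segmentation, gesture-recognition, beauty, and rendering modules, so this is only a structural sketch of the claimed flow.

```python
def shooting_loop(frames, templates, segment, recognize, apply_beauty, display):
    """Per frame: segment the subject and gesture, preview the beautified
    subject in the first screen area, show templates in the second, react to
    the recognized gesture, and capture frames while recording is active."""
    selected = 0                 # index of the currently chosen beauty template
    recording = False
    captured = []
    for frame in frames:
        subject, gesture = segment(frame)               # step 1: segmentation
        display("first", apply_beauty(subject, templates[selected]))  # steps 2 and 4
        display("second", templates)                    # step 2: template strip
        action, arg = recognize(gesture)                # step 3: gesture result
        if action == "select_template":                 # step 4: remote template choice
            selected = arg
        elif action == "shoot":                         # step 5: start/stop recording
            recording = not recording
        if recording:
            captured.append(apply_beauty(subject, templates[selected]))
    return captured
```

A toggle on the shooting action models starting and ending the short video with the same gesture, which is one plausible reading of the claim, not the only one.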
Here, the memory and the processor are connected by a communication bus, which may include any number of interconnected buses and bridges connecting the various circuits of the memory and of the one or more processors. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the communication bus and a transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor is transmitted over a wireless medium via an antenna, which also receives data and forwards it to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
A third embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the steps of the shooting method described in the above embodiments, for example:
segmenting an image acquired by a camera to obtain a shooting subject and a corresponding subject gesture;
displaying the shooting subject in a first screen area of a display screen, and displaying at least one beauty template in a second screen area of the display screen;
identifying the subject gesture in real time to obtain a gesture recognition result, wherein the gesture recognition result comprises a template selection action and a shooting action;
selecting any beauty template from the second screen area based on the template selection action, and performing beauty treatment on the shooting subject displayed in the first screen area in real time based on the selected beauty template;
and shooting based on the shooting action.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that in practice various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Claims (10)
1. A shooting method, characterized by comprising:
segmenting an image acquired by a camera to obtain a shooting subject and a corresponding subject gesture;
displaying the shooting subject in a first screen area of a display screen, and displaying at least one beauty template in a second screen area of the display screen;
identifying the subject gesture in real time to obtain a gesture recognition result, wherein the gesture recognition result comprises a template selection action and a shooting action;
selecting any beauty template from the second screen area based on the template selection action, and performing beauty treatment on the shooting subject displayed in the first screen area in real time based on the selected beauty template;
and shooting based on the shooting action.
2. The shooting method according to claim 1, wherein after recognizing the subject gesture in real time to obtain a gesture recognition result, the method further comprises:
and displaying the gesture recognition result in a third screen area of the display screen.
3. The shooting method according to claim 1, wherein the segmenting the image acquired by the camera to obtain the shooting subject and the subject gesture specifically comprises:
performing foreground segmentation on an image acquired by a camera, and extracting the positions of all human faces and the positions of all gestures in the image;
taking the face with the largest area as the main body face, and taking a face whose distance from the main body face is smaller than a preset threshold value, or a face overlapping the region where the main body face is located, as a co-shooting object and counting it into the main body face, so as to obtain the shooting subject;
and scaling the distance between the actual face and the hand in an equal proportion based on the image proportion, and determining the region where the main body gesture corresponding to the main body face is located to obtain the main body gesture.
4. The shooting method according to claim 3, wherein after extracting the positions of all human faces in the image, the method further comprises:
extracting all key points of the human face and connecting the key points one by one to obtain a face curve;
respectively fitting each face curve, and carrying out smoothing treatment on different parts according to corresponding preset curvatures to obtain a face area corresponding to each face;
and counting the pixel points of each face region to obtain the area of each face.
5. The shooting method according to claim 3, wherein after obtaining the face region corresponding to each face, the method further comprises:
acquiring the width and the height of a region where a main face is located to determine a main rectangular region where the main face is located;
and adaptively and proportionally adjusting the main rectangular area based on the size of the display area of the first screen area.
6. The shooting method according to claim 5, wherein after obtaining the face region corresponding to each face, the method further comprises:
extracting the position of hair corresponding to the face of the main body, and counting the hair into the face of the main body if judging that the region where the hair is located is not coincident with the regions where all gestures are located;
and updating the width and the height of the area of the main face.
7. The shooting method according to claim 6, wherein after the hair is counted into the face of the main body, the method further comprises:
if the display area size of the main rectangular area is judged to be not matched with the display area size of the first screen area, judging the coincidence of the area where the hair is located and the area where the main face is located, and adaptively cutting part of the hair which is lower than the area where the main face is located so as to match the display area size of the first screen area.
8. The shooting method according to claim 1, wherein after obtaining the shooting subject and the subject gesture, the method further comprises:
determining a final display area according to a target area selected by a user;
and dynamically adjusting the size and the shape of the shooting subject in the first screen area based on the final display area.
9. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the shooting method according to any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the shooting method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011398975.6A CN112492211A (en) | 2020-12-01 | 2020-12-01 | Shooting method, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112492211A true CN112492211A (en) | 2021-03-12 |
Family
ID=74939604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011398975.6A Pending CN112492211A (en) | 2020-12-01 | 2020-12-01 | Shooting method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112492211A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI817885B (en) * | 2023-01-04 | 2023-10-01 | 友達光電股份有限公司 | Beauty display device and beauty display method |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103365044A (en) * | 2012-03-30 | 2013-10-23 | 百度在线网络技术(北京)有限公司 | Method for taking pictures by using camera and camera used device for taking pictures |
CN103500335A (en) * | 2013-09-09 | 2014-01-08 | 华南理工大学 | Photo shooting and browsing method and photo shooting and browsing device based on gesture recognition |
CN104333794A (en) * | 2014-11-18 | 2015-02-04 | 电子科技大学 | Channel selection method based on depth gestures |
CN106959761A (en) * | 2017-04-18 | 2017-07-18 | 广东欧珀移动通信有限公司 | A kind of terminal photographic method, device and terminal |
CN107231529A (en) * | 2017-06-30 | 2017-10-03 | 努比亚技术有限公司 | Image processing method, mobile terminal and storage medium |
CN107493428A (en) * | 2017-08-09 | 2017-12-19 | 广东欧珀移动通信有限公司 | Filming control method and device |
CN107705356A (en) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN108108024A (en) * | 2018-01-02 | 2018-06-01 | 京东方科技集团股份有限公司 | Dynamic gesture acquisition methods and device, display device |
CN108596079A (en) * | 2018-04-20 | 2018-09-28 | 歌尔科技有限公司 | Gesture identification method, device and electronic equipment |
CN108600647A (en) * | 2018-07-24 | 2018-09-28 | 努比亚技术有限公司 | Shooting preview method, mobile terminal and storage medium |
CN108924417A (en) * | 2018-07-02 | 2018-11-30 | Oppo(重庆)智能科技有限公司 | Filming control method and Related product |
CN109377446A (en) * | 2018-10-25 | 2019-02-22 | 北京市商汤科技开发有限公司 | Processing method and processing device, electronic equipment and the storage medium of facial image |
CN110971924A (en) * | 2018-09-30 | 2020-04-07 | 武汉斗鱼网络科技有限公司 | Method, device, storage medium and system for beautifying in live broadcast process |
US20200275029A1 (en) * | 2011-11-17 | 2020-08-27 | Samsung Electronics Co., Ltd. | Method and apparatus for self camera shooting |
CN111866372A (en) * | 2020-06-16 | 2020-10-30 | 深圳酷派技术有限公司 | Self-photographing method, device, storage medium and terminal |
CN111860206A (en) * | 2020-06-29 | 2020-10-30 | 深圳市优必选科技股份有限公司 | Image acquisition method and device, storage medium and intelligent equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210312 |