
WO2020019663A1 - Face-based special effect generating method and device, and electronic device - Google Patents

Face-based special effect generating method and device, and electronic device

Info

Publication number
WO2020019663A1
WO2020019663A1 (PCT/CN2018/123639)
Authority
WO
WIPO (PCT)
Prior art keywords
special effect
face
generating
face image
reference point
Prior art date
Application number
PCT/CN2018/123639
Other languages
English (en)
French (fr)
Inventor
林鑫
王晶
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Priority to JP2020571798A (patent JP7286684B2)
Priority to US16/997,551 (patent US11354825B2)
Priority to GB2100224.1A (patent GB2590208B)
Publication of WO2020019663A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Definitions

  • the present disclosure relates to the field of image technology, and in particular, to a method, a device, a hardware device, and a computer-readable storage medium for generating a special effect based on a human face.
  • with the development of computer technology, the application range of smart terminals has expanded widely; for example, smart terminals can be used to listen to music, play games, chat online, and take photos.
  • smart-terminal cameras now exceed ten million pixels, offering high definition and photographic results comparable to professional cameras.
  • current special effect production APPs ship with pre-made effects that cannot be flexibly edited, and the effects can only be fixed at a fixed position in the image.
  • the present disclosure provides a face-based special effect generating method including: displaying a standard face image; selecting a reference point on the standard face image in response to a received reference point selection command; producing a special effect on the standard face image in response to a received special effect production operation; generating parameters of the special effect; acquiring a first face image recognized from an image sensor; and generating the special effect on the first face image according to the reference point and the parameters of the special effect.
  • the standard human face includes a plurality of regions; the reference point is located in the plurality of regions; and the special effect is located in a region where the reference point is located.
  • there may be a plurality of special effects, each corresponding to a different reference point and located in a different region.
  • the method further includes: setting a trigger condition of the special effect in response to the received trigger condition setting command.
  • the method further includes: in response to receiving a play setting command, setting a play order and/or a play time of the special effect.
  • the play order is set based on a message; the message is used to control the start or stop of the special effect.
  • the parameters of the special effect include: a position of the special effect and a size of the special effect.
  • the position of the special effect and the size of the special effect are determined by the position of the reference point and the distance between the reference points.
  • producing a special effect on the standard face image in response to a received special effect production operation includes: selecting a resource package in response to a received resource package selection command; parsing the resource package and displaying a configuration interface; configuring resources in the resource package in response to a received configuration command; and forming the special effect from the configured resources and displaying it on the standard face image.
  • the configuring resources in the resource package includes: configuring a size, a position, and a rotation center of the resources.
  • the present disclosure provides a face-based special effect generating device including: a display module for displaying a standard face image; a reference point selection module for selecting a reference point on the standard face image in response to receiving a reference point selection command; a special effect production module for producing a special effect on the standard face image in response to receiving a special effect production operation; a special effect parameter generation module for generating the parameters of the special effect;
  • a face image acquisition module configured to acquire a first face image recognized from an image sensor;
  • a feature generation module configured to generate the special effect on the first face image according to the reference point and the parameters of the special effect.
  • the standard human face includes a plurality of regions; the reference point is located in the plurality of regions; and the special effect is located in a region where the reference point is located.
  • each special effect corresponds to a different reference point and is located in a different area.
  • the face-based special effect generating device further includes a trigger condition setting module for setting a trigger condition of the special effect in response to a received trigger condition setting command before generating the parameters of the special effect.
  • the face-based special effect generating device further includes a play setting module, configured to set a play order and/or a play time of the special effect in response to receiving a play setting command before generating the parameters of the special effect.
  • the play order is set based on a message; the message is used to control the start or stop of the special effect.
  • the parameters of the special effect include: a position of the special effect and a size of the special effect.
  • the position of the special effect and the size of the special effect are determined by the position of the reference point and the distance between the reference points.
  • the special effect production module includes: a selection module for selecting a resource package in response to a received resource package selection command; a parsing and display module for parsing the resource package and displaying a configuration interface; a resource configuration module configured to configure resources in the resource package in response to a received configuration command; and a first display module configured to form the special effect from the configured resources and display it on the standard face image.
  • the configuring resources in the resource package includes: configuring a size, a position, and a rotation center of the resources.
  • the present disclosure provides an electronic device including: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when they are executed by the processor, the steps described in any of the above methods are implemented.
  • the present disclosure provides a computer-readable storage medium for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the steps described in any of the above methods.
  • Embodiments of the present disclosure provide a face-based special effect generating method, device, electronic device, and computer-readable storage medium.
  • the face-based special effect generating method includes: displaying a standard face image; selecting a reference point on the standard face image in response to receiving a reference point selection command; producing a special effect on the standard face image in response to receiving a special effect production operation; generating the parameters of the special effect; acquiring a first face image recognized from an image sensor; and generating the special effect on the first face image according to the reference point and the parameters of the special effect.
  • FIG. 1 is a schematic flowchart of a face-based special effect generating method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of an implementation manner of generating a parameter of a special effect according to the embodiment shown in FIG. 1;
  • FIG. 3 is a schematic flowchart of a face-based special effect generating method according to another embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a face-based special effect generating method according to another embodiment of the present disclosure.
  • FIG. 5a is a schematic structural diagram of a face-based special effect generating device according to an embodiment of the present disclosure
  • FIG. 5b is a schematic structural diagram of a special effect module in a face-based special effect generating device according to the embodiment in FIG. 5a of the present disclosure
  • FIG. 6 is a schematic structural diagram of a face-based special effect generating device according to another embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a face-based special effect generating device according to another embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of a face-based special effect generating terminal according to an embodiment of the present disclosure.
  • as shown in FIG. 1, the face-based special effect generating method mainly includes the following steps S1 to S6:
  • Step S1: Display a standard face image.
  • a standard face image is displayed on the display device.
  • the standard face image is a preset face image.
  • the standard face image is a frontal face image that carries preset feature points; the number of feature points is configurable, and the user can freely set the required number of feature points.
  • the feature points of an image are points in the image that have distinctive characteristics and can effectively reflect the essential characteristics of the image and can identify the target object in the image. If the target object is a human face, then the key points of the face need to be obtained. If the target image is a house, then the key points of the house need to be obtained. Take the human face as an example to illustrate how to obtain key points.
  • the face contour mainly includes five parts: eyebrows, eyes, nose, mouth, and cheeks, sometimes also including the pupils and nostrils.
  • a reasonably complete description of the face contour generally requires about 60 key points. If only the basic structure is described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils, nostrils, or more detailed facial features must be described, the number of key points can be increased.
  • Face keypoint extraction on the image is equivalent to finding the corresponding position coordinates of each face contour keypoint in the face image, that is, keypoint positioning. This process needs to be performed based on the features corresponding to the keypoints.
  • after image features that clearly identify a key point have been obtained, a search and comparison is performed in the image based on those features to accurately locate the key point's position.
  • the feature points occupy only a very small area in the image (usually only a few to tens of pixels in size)
  • the area occupied by the features corresponding to the feature points on the image is also usually very limited and local.
  • two kinds of feature extraction are currently used: (1) one-dimensional range image feature extraction along the direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the feature point.
  • both can be implemented in many ways, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on.
  • these implementations differ in the number of key points used, accuracy, and speed, and suit different application scenarios.
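  • as a concrete illustration of key point extraction, the sketch below uses the open-source dlib detector with its public 68-point landmark model; the model file, the 68-point layout, and the input file names are assumptions for illustration only, since the disclosure leaves the landmarking method and point count open.

```python
# Key-point extraction sketch using dlib's public 68-point landmark model.
# The disclosure's own landmarking (e.g. a 106-point model) is not public,
# so the model file and point count below are stand-ins for illustration.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local path

image = cv2.imread("standard_face.png")                 # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 1):                          # upsample once for small faces
    shape = predictor(gray, face)
    # Number every key point so the user can later reference it by index.
    keypoints = {i: (shape.part(i).x, shape.part(i).y)
                 for i in range(shape.num_parts)}
    print(f"found {len(keypoints)} key points; point 36 (eye corner) = {keypoints[36]}")
```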
  • Step S2: In response to the received reference point selection command, select a reference point on the standard face image.
  • the reference point is a facial feature point
  • the selected reference point may be one or more.
  • a user may send a selection command through an input device such as a mouse or keyboard, for example clicking a feature point displayed on the display device with the mouse, or typing a feature point's number on the keyboard to select the corresponding feature point.
  • the selected feature points serve as reference points, which record the relative position and size ratio of the special effect on the face.
  • Step S3: In response to the received special effect production operation, produce a special effect on the standard face image.
  • the special effect production operation may be a special effect sticker operation.
  • the special effect sticker may be a 2D sticker that is overlaid on the face to show a specific effect.
  • the special effect may be a static special effect, such as a picture, or a dynamic special effect, such as a multi-frame animation.
  • a resource package selection command sent by the user is received.
  • the resource package includes the materials required for producing effects, such as pictures, sounds, and videos; the corresponding resource package is selected according to the selection command. If the package contains no material, material or a whole resource package can be imported; after a resource package is selected, it is parsed and a configuration interface is displayed.
  • the resource package includes a picture, and the picture is a pair of glasses.
  • the glasses are displayed at a default position, with the configuration interface shown around them.
  • the configuration interface includes the attribute parameters of the glasses.
  • the attribute parameters include position, rotation center, size, etc.
  • the user can configure these attribute parameters to generate a glasses effect that overlays the standard face image; the position, size, and rotation center of the resource can be controlled through corresponding position, size, and rotation-center controls.
  • further, in a typical application, the user can drag the 2D sticker's zoom box to adjust its position, drag a corner of the zoom box to adjust its size, and also zoom the standard face image with a canvas zoom command to adjust the sticker's size indirectly.
  • the rotation center can be set to any feature point, either by clicking with the mouse or by entering the feature point's number in the configuration interface; once a rotation center is selected, the 2D sticker rotates around it according to rotation commands.
  • the resource pack may include sequence frames, and the user may choose to configure each frame separately.
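  • the attribute parameters above can be pictured as a small configuration object; the sketch below is a hypothetical container for a sticker's sequence frames, position, size, and rotation center, with all field names and default values invented for illustration.

```python
# Hypothetical container for the sticker attribute parameters exposed by the
# configuration interface; all field names and defaults are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StickerConfig:
    frames: List[str]                               # sequence-frame image paths
    position: Tuple[float, float] = (0.5, 0.35)     # normalized (x, y) on the standard face
    size: Tuple[float, float] = (0.4, 0.15)         # normalized (width, height)
    rotation_center: int = 27                       # feature-point number (placeholder)
    rotation_deg: float = 0.0

# Dragging the zoom box or typing values in the interface would update fields:
glasses = StickerConfig(frames=["glasses.png"])
glasses.rotation_deg = 5.0
```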
  • Step S4: Generate the parameters of the special effect.
  • the parameters of the special effect include at least the position of the special effect and the size of the special effect; in this embodiment, these are the relative position and relative size of the special effect within the standard face image, represented by the reference points.
  • as shown in FIG. 2, feature points A and B are selected as reference points; the special effect is an ellipse whose center is point C, with major-axis length a and minor-axis length b, and point D is the foot of the perpendicular from C to segment AB. Four linear interpolation coefficients can then be computed: λ1 and λ2 record the position of C relative to A and B, while λ3 and λ4 record the relative lengths of the ellipse's major and minor axes.
  • in practice an effect's shape may be irregular, in which case its size can be represented by its outer frame: lines are drawn through the four outermost points of the effect, the right-angled rectangle formed by these four lines is the smallest containing rectangle, and its center serves as the effect's center point, so that the same four linear interpolation coefficients describe the relative position and relative size regardless of the effect's shape.
  • the effect parameters may further include a rotation center; a rotation center at a feature point is represented directly by the feature point, so only its number needs to be recorded, while a rotation center elsewhere can be recorded using the same method as the center point. In particular, the rotation center and the center point may coincide.
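  • the four coefficients can be computed directly from the reference points and the effect geometry; the sketch below follows the description above (D as the foot of the perpendicular from C to AB), with the signed λ1/λ2 conventions being added assumptions so that the side of AB on which C lies survives a round trip.

```python
# Sketch of the four linear interpolation coefficients for step S4: A and B
# are reference points, C the effect's center, a and b the axis (or bounding
# box) lengths, D the foot of the perpendicular from C to AB.
import math

def effect_coefficients(A, B, C, a, b):
    ABx, ABy = B[0] - A[0], B[1] - A[1]
    AB = math.hypot(ABx, ABy)
    # Signed projection of AC onto AB: equals AD/AB when D lies between A and B.
    lam1 = ((C[0] - A[0]) * ABx + (C[1] - A[1]) * ABy) / (AB * AB)
    D = (A[0] + lam1 * ABx, A[1] + lam1 * ABy)      # foot of the perpendicular
    CD = math.hypot(C[0] - D[0], C[1] - D[1])
    cross = ABx * (C[1] - A[1]) - ABy * (C[0] - A[0])
    lam2 = math.copysign(CD / AB, cross)            # signed offset of C from line AB
    lam3, lam4 = a / AB, b / AB                     # relative axis lengths
    return lam1, lam2, lam3, lam4
```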
  • Step S5: Acquire a first face image recognized from the image sensor.
  • a face image recognized from a camera is obtained.
  • the face image may be a face recognized from a real person, or a face recognized from a picture or video containing a face taken with the camera; the present disclosure does not limit this. In short, the first face image is distinct from the standard face image.
  • face detection is the process of searching any given image or image sequence with a certain strategy to determine the positions and regions of all faces present.
  • face detection methods can usually be divided into four categories: (1) knowledge-based methods, which encode typical faces into a rule base and locate faces through the relationships between facial features; (2) feature-invariant methods, which find features that are stable under changing pose, viewing angle, or lighting conditions, and then use these features to determine the face; (3) template matching methods, which store several standard face patterns describing the whole face and the facial features separately, then compute the correlation between the input image and the stored patterns and use it for detection; (4) appearance-based methods, which, in contrast to template matching, learn models from a training image set and use them for detection.
  • here, an implementation of method (4) is used to explain the face detection process: first, features must be extracted to build a model; this embodiment uses Haar features as the key features for judging faces.
  • Haar features are simple rectangular features that are fast to extract; the feature template used to compute a typical Haar feature is a simple combination of two or more congruent rectangles, containing both black and white rectangles.
  • the AdaBoost algorithm is then used to find, among the large number of Haar features, the subset that plays a key role, and these features are used to build an effective classifier.
  • the constructed classifier can detect faces in the image; in this embodiment there may be one or more faces in the image.
  • since each face detection algorithm has its own advantages and adaptation range, multiple detection algorithms can be configured and switched automatically for different environments: in images with a relatively simple background, a faster algorithm with a lower detection rate can be used; in images with a more complex background, a slower algorithm with a higher detection rate can be used; and the same image can be detected multiple times with several algorithms to improve the detection rate.
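  • a minimal Haar-cascade detection sketch using OpenCV's bundled pre-trained frontal-face classifier follows; the input file name is a placeholder, and OpenCV's cascade stands in for the classifier training described above.

```python
# Haar-cascade + AdaBoost face detection using OpenCV's pre-trained
# frontal-face classifier (cv2.data.haarcascades ships with opencv-python).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("camera_frame.png")          # placeholder file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:                      # zero, one, or several faces
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```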
  • Step S6: Generate the special effect on the first face image according to the reference point and the parameters of the special effect.
  • in this step, according to the reference point numbers and the special effect parameters generated in step S4, the same special effect as on the standard face image is generated on the face image recognized from the camera.
  • because the effect on the standard face image must be mapped to the first face image collected by the image sensor, and different mapping methods are possible, effects can be divided into fixed effects and tracking effects.
  • a fixed effect is relatively simple: only the absolute position of the effect's whole range within the image sensor needs to be set. One implementation maps the pixels of the display device one-to-one to the pixels of the image sensor's acquisition window, determines the effect's position on the display device, and then applies the corresponding effect processing at the corresponding position of the image collected through the acquisition window.
  • the advantage of this processing method is that it is simple and easy to operate; the parameters used in this implementation are all relative to the acquisition window.
  • for a tracking effect, when generating the effect image, the feature points of the standard face image from step S1 are first obtained and used to determine the effect's position in the standard face image; the first face image corresponding to the standard face image is then recognized among the images collected by the sensor; the position determined in the standard face image is mapped into the first face image; and effect processing is applied to the first face image to generate the effect image.
  • in this way the effect's relative position in the first face image is determined: no matter how the first face image moves or changes, the effect always stays at that relative position, achieving effect tracking.
  • in a typical application, the standard face image is triangulated and has 106 feature points.
  • the relative positions of the special effect and the feature points determine the relative position of the effect's region of action in the face image.
  • the face image captured by the camera is subjected to the same triangulation.
  • when the face in the camera then moves or turns, the special effect stays fixed at its relative position on the face, achieving the tracking-effect result.
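  • the inverse mapping can be sketched as follows: given the same reference-point numbers located on the first face image, the stored coefficients reproduce the effect's center and size there. This assumes the signed λ1/λ2 conventions of the effect_coefficients() sketch above.

```python
# Inverse mapping sketch for step S6: A2 and B2 are the same reference-point
# numbers located on the first face image; the coefficients rescale and
# reposition the effect there.
import math

def place_effect(A2, B2, lam1, lam2, lam3, lam4):
    ABx, ABy = B2[0] - A2[0], B2[1] - A2[1]
    AB = math.hypot(ABx, ABy)
    D2 = (A2[0] + lam1 * ABx, A2[1] + lam1 * ABy)   # D' along A'B'
    nx, ny = -ABy / AB, ABx / AB                    # unit normal to A'B'
    C2 = (D2[0] + lam2 * AB * nx, D2[1] + lam2 * AB * ny)
    a2, b2 = lam3 * AB, lam4 * AB                   # rescaled axis lengths
    return C2, a2, b2
```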
  • when multiple face images are recognized in an image, the user can select one face image for effect production, or select several face images for the same or different processing.
  • when producing effects, the standard faces can be numbered, such as ID1 and ID2, with effects set separately on the ID1 and ID2 standard face images.
  • the effects can be the same or different.
  • when multiple face images are recognized from the camera, effects are added to them in the order they are identified. For example, if face No. 1 is identified first, the effect on the ID1 standard face image is added to face No. 1; when face No. 2 is then identified, the effect on the ID2 standard face image is added to face No. 2. If only the ID1 standard face image effect was made, the ID1 effect can be added to both face No. 1 and face No. 2, or only to face No. 1.
  • the standard face image is divided into multiple regions, such as an eye region, a nose region, a mouth region, a cheek region, an eyebrow region, a forehead region, and the like, and each region includes optimized feature points.
  • the optimized feature points are more representative feature points selected after data analysis; each represents the region it belongs to. For example, selecting a feature point of the eye region as a reference point indicates that the eye region is the target region for effect production. Multiple sub-effects can be made for the different regions, each sub-effect tracking its own region, and together the sub-effects form one special effect. The advantages are twofold: the number of feature points is reduced, since all feature points shown to the user are optimized and selecting a region selects its feature points, so the user does not need to pick reference points from many raw feature points; and a large effect can be decomposed into multiple sub-effects, reducing production difficulty.
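  • the region-to-feature-point association might look like the sketch below, where the region names and point numbers are placeholders rather than the disclosure's actual 106-point numbering.

```python
# Illustrative region -> optimized-feature-point table; region names and
# point numbers are placeholders, not the disclosure's actual numbering.
FACE_REGIONS = {
    "left_eye":  [52, 55, 72],
    "right_eye": [58, 61, 73],
    "mouth":     [84, 90, 96],
    "forehead":  [10, 14, 18],
}

def reference_points_for(region: str):
    # Selecting a region implicitly selects its optimized feature points,
    # so the user never has to pick raw points by hand.
    return FACE_REGIONS[region]

# Each sub-effect tracks its own region; together they form one effect.
sub_effects = {region: reference_points_for(region)
               for region in ("left_eye", "right_eye")}
```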
  • in the embodiments of the present disclosure, a face effect is edited on a standard face image, and the effect is then mapped into the image collected by the image sensor based on the relative relationship between the selected reference points and the effect's attributes.
  • in the prior art, effects must be produced with third-party tools, are not flexible to use, and cannot be configured in real time.
  • moreover, prior-art effects can only be fixed at a fixed position in the image window: when the face moves or rotates, the effect cannot move or rotate with it, which degrades the user experience.
  • in contrast, with the effect production operation described here the user can conveniently configure and edit effects; and because face feature points are chosen as reference points and the relative relationship between the effect and the reference points is recorded, however the first face image collected by the image sensor moves or rotates, the effect changes relative to the reference points. Compared with the prior art this greatly reduces editing difficulty and editing time, and the effect continuously tracks face changes and changes accordingly, improving the user experience.
  • the method may further include:
  • Step S31: In response to a received trigger condition setting command, set a trigger condition for the special effect.
  • in this optional embodiment, the effect is displayed only when a certain condition is met; the trigger condition may be a user action, expression, or sound, or a parameter of the terminal.
  • the action may be a facial action such as blinking, opening the mouth wide, shaking the head, nodding, or raising the eyebrows. For a glasses 2D-sticker effect, the trigger condition can be set to two quick blinks: when the user is detected blinking twice quickly, the glasses sticker is displayed over the user's eyes. The expression may be happy, frustrated, angry, and so on; for a tears 2D-sticker effect, the trigger can be a frustrated expression, so that when the user's expression is detected as frustrated, a tears sticker is displayed under the eyes. When the trigger condition is a sound, the user's voice or the ambient sound can be detected, and a predetermined sound triggers the corresponding effect. When the trigger condition is a terminal parameter, the parameters of components in the terminal, such as its posture or shaking, can be monitored,
  • and the corresponding effects are triggered by that posture or shaking; the cases are not listed exhaustively here. Understandably, the trigger condition can be any condition applicable to the present disclosure.
  • there may be one or more trigger conditions in the technical solution, without limitation here.
  • a trigger may be trigger-on-appear or trigger-on-disappear: with trigger-on-appear, the corresponding effect appears when the trigger condition occurs; with trigger-on-disappear, the corresponding effect disappears when the condition occurs. The trigger condition may further include a post-trigger delay, that is, how long after the condition appears the effect appears or disappears.
  • the parameters of the special effect further include a trigger condition of the special effect.
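  • one plausible realization of the "blink twice quickly" trigger is the eye-aspect-ratio heuristic sketched below; the threshold, time window, and six-point eye layout are illustrative assumptions, not values from the disclosure.

```python
# Sketch of a "blink twice quickly" trigger via the eye-aspect-ratio (EAR)
# heuristic over six eye landmarks.
import math
import time

def eye_aspect_ratio(eye):
    # eye: six (x, y) points ordered corner, top, top, corner, bottom, bottom
    v1 = math.dist(eye[1], eye[5])
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)

class BlinkTrigger:
    def __init__(self, threshold=0.2, window_s=1.0):
        self.threshold, self.window_s = threshold, window_s
        self.closed = False
        self.blink_times = []

    def update(self, eye):
        """Feed one frame's eye landmarks; True means the effect should fire."""
        now = time.monotonic()
        is_closed = eye_aspect_ratio(eye) < self.threshold
        if is_closed and not self.closed:   # open -> closed edge counts as a blink
            self.blink_times.append(now)
        self.closed = is_closed
        self.blink_times = [t for t in self.blink_times if now - t < self.window_s]
        return len(self.blink_times) >= 2   # two quick blinks within the window
```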
  • the method may further include:
  • Step S32: In response to receiving a play setting command, set the play order and/or play time of the special effects.
  • in this optional embodiment, the play order and play time of multiple effects can be set.
  • in one example there are three effects: effect 1, effect 2, and effect 3.
  • if the play order is set to effect 1, effect 3, effect 2, the effects are played in that order.
  • when setting the play order, the order numbers can be assigned directly, for example effect 1 has order 1, effect 2 has order 3, and effect 3 has order 2; alternatively, a timeline can be displayed and the effect IDs set directly on the time axis, for example marking effect 1, effect 3, and effect 2 in order along the positive direction of the axis.
  • by default the effects play in sequence independently of play time: only after all frames of one effect have played does the next effect play.
  • the play time of an effect can also be set; it may be a duration or a number of plays, such as effect 1 playing for 10 seconds or effect 1 playing 10 times, where the number of plays means the number of complete loops of the effect's frame sequence. Play order and play count can be used alone or together: with play order alone, all effects play once in sequence; with play time alone, all effects play together but may end at different times according to their play times; with both, by default each effect plays for its play time in order before the next effect starts.
  • in a more flexible setting, the play order can be configured by messages: for example, effect 1 plays first, and when effect 1 reaches its n-th frame it sends a message to effect 3 so that effect 3 starts playing; when effect 3 reaches its m-th frame, effect 1 is stopped and effect 2 starts. With this message-based play order, start/stop relationships between effects can be set more flexibly, making combinations and transitions richer and more varied; a scheduler sketch follows below.
  • when message-based play order is used, the play time may or may not also be set, and priorities between messages and play time can be set. For example, if effect 1's play time is set to 10 seconds but effect 3 sends a message to stop effect 1 before the 10 seconds have elapsed, the configured priority decides whether playback stops: if the play time's priority is higher than the message's, effect 1 keeps playing until the 10 seconds end; if the message's priority is higher, effect 1 stops immediately.
  • the above settings are examples for ease of understanding; in actual use, play order and play time can be combined in any way, and priorities can be set arbitrarily.
  • play order and play time can be global parameters: in a scene with multiple face images, where each face carries several different effects, the play order and play time across these effects are set uniformly.
  • for example, when two face images are detected, with effect 1 and effect 2 on face 1 and effect 3 and effect 4 on face 2, the settings may be: play effect 1 first for 10 seconds, then effect 3 for 2 seconds, then effect 2 and effect 4 together for 5 seconds. This achieves a carousel among multiple effects on multiple faces, producing interactive results.
  • in this embodiment, the parameters of the special effects further include the play order and play time of the effects.
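  • the message-based play order described above can be sketched as a tiny frame-driven scheduler; effect names, frame counts, and cue frames below are illustrative.

```python
# Tiny frame-driven scheduler for the message-based play order: effect 1
# starts effect 3 at its 40th frame; effect 3 stops effect 1 and starts
# effect 2 at its 20th frame.
class Effect:
    def __init__(self, n_frames, cues=None):
        self.n_frames = n_frames
        self.cues = cues or {}               # frame index -> [(verb, target)]
        self.frame, self.playing = 0, False

    def tick(self, bus):
        if not self.playing:
            return
        bus.extend(self.cues.get(self.frame, []))
        self.frame += 1
        if self.frame >= self.n_frames:      # whole frame sequence played
            self.playing = False

effects = {
    "effect1": Effect(100, cues={40: [("start", "effect3")]}),
    "effect3": Effect(60,  cues={20: [("stop", "effect1"), ("start", "effect2")]}),
    "effect2": Effect(80),
}
effects["effect1"].playing = True

for _ in range(200):                         # render loop
    bus = []
    for effect in effects.values():
        effect.tick(bus)
    for verb, target in bus:                 # deliver messages after each frame
        effects[target].playing = (verb == "start")
```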
  • the setting of the trigger condition and the setting of the playback sequence and playback time in the above two embodiments can be used in combination.
  • the specific sequence of the settings is not limited in this disclosure, and can be arbitrarily changed as required.
  • the operations and settings performed before the parameters of the special effects are generated ultimately form the parameters of the corresponding special effects, which are used to generate the effects on the faces recognized from the image sensor; a sketch of such an assembled parameter set follows below.
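```python
# Hypothetical serialized form of a finished effect's parameters; every field
# name and value is an invented illustration of what the set could contain.
effect_params = {
    "reference_points": [19, 20],                 # feature-point numbers
    "coefficients": {"lam1": 0.48, "lam2": 0.31,  # relative position
                     "lam3": 0.62, "lam4": 0.25}, # relative size
    "rotation_center": 27,                        # feature-point number
    "trigger": {"condition": "blink_twice", "mode": "appear", "delay_s": 0.0},
    "playback": {"order": ["effect1", "effect3", "effect2"],
                 "time_s": 10, "loops": None},
}
```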
  • the following is a device embodiment of the present disclosure.
  • the device embodiments of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure; for convenience of explanation, only the parts related to the embodiments of the present disclosure are shown, and for specific technical details that are not disclosed, reference is made to the method embodiments of the present disclosure.
  • An embodiment of the present disclosure provides a face-based special effect generating device.
  • the device can perform the steps described in the embodiment of the face-based special effect generating method.
  • the device mainly includes a display module 51, a reference point selection module 52, a special effect production module 53, a special effect parameter generation module 54, a face image acquisition module 55, and a feature generation module 56.
  • a display module 51, configured to display a standard face image;
  • a reference point selection module 52, configured to select a reference point on the standard face image in response to receiving a reference point selection command;
  • a special effect production module 53, configured to produce a special effect on the standard face image in response to receiving a special effect production operation;
  • a special effect parameter generation module 54, configured to generate the parameters of the special effect;
  • a face image acquisition module 55, configured to acquire the first face image recognized from the image sensor;
  • a special effect generation module 56, configured to generate the special effect on the first face image according to the reference point and the parameters of the special effect.
  • the special effect production module 53 further includes:
  • a selection module 531 configured to select a resource package in response to a received resource package selection command
  • the analysis and display module 532 configured to parse the resource package and display a configuration interface
  • a resource configuration module 533 configured to configure resources in the resource package in response to a received configuration command
  • a first display module 534 is configured to form the special effect according to the configured resources, and display the special effect on a standard face image.
  • the above face-based special effect generating device corresponds to the face-based special effect generating method in the embodiment shown in FIG. 1 above; for details not described here, refer to the description of that method embodiment, which is not repeated.
  • the face-based special effect generating device further includes a trigger condition setting module 61, configured to set a trigger condition of the special effect in response to a received trigger condition setting command.
  • the above face-based special effect generating device corresponds to the face-based special effect generating method in the embodiment shown in FIG. 3 above; for details not described here, refer to the description of that method embodiment, which is not repeated.
  • the face-based special effect generating device further includes a play setting module 71, configured to set the play order and/or play time of the special effects in response to a received play setting command.
  • the above face-based special effect generating device corresponds to the face-based special effect generating method in the embodiment shown in FIG. 4 above; for details not described here, refer to the description of that method embodiment, which is not repeated.
  • FIG. 8 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 8, the electronic device 80 according to an embodiment of the present disclosure includes a memory 81 and a processor 82.
  • the memory 81 is configured to store non-transitory computer-readable instructions.
  • the memory 81 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 82 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 80 to perform desired functions.
  • the processor 82 is configured to run the computer-readable instructions stored in the memory 81, so that the electronic device 80 executes all or part of the steps of the face-based special effect generating method of the foregoing embodiments of the present disclosure.
  • this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also be included within the protection scope of the present disclosure.
  • FIG. 9 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 90 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 91 stored thereon.
  • when the non-transitory computer-readable instructions 91 are executed by a processor, all or part of the steps of the face-based special effect generating method of the foregoing embodiments of the present disclosure are performed.
  • the computer-readable storage medium 90 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disks), rewritable non-volatile storage media (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
  • FIG. 10 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure.
  • the face-based special effect generating terminal 100 includes the foregoing embodiment of the face-based special effect generating device.
  • the terminal device may be implemented in various forms; terminal devices in the present disclosure may include, but are not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
  • the terminal may further include other components.
  • the image special effect processing terminal 100 may include a power supply unit 101, a wireless communication unit 102, an A/V (audio/video) input unit 103, a user input unit 104, a sensing unit 105, an interface unit 106, a controller 107, an output unit 108, a storage unit 109, and the like.
  • FIG. 10 illustrates a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 102 allows radio communication between the terminal 100 and a wireless communication system or network.
  • the A / V input unit 103 is used to receive audio or video signals.
  • the user input unit 104 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • the sensing unit 105 detects the current state of the terminal 100, the position of the terminal 100, the presence or absence of a user's touch input to the terminal 100, the orientation of the terminal 100, and the acceleration or deceleration and direction of the terminal 100, and generates commands or signals for controlling the operation of the terminal 100.
  • the interface unit 106 serves as an interface through which at least one external device can connect with the terminal 100.
  • the output unit 108 is configured to provide an output signal in a visual, audio, and/or tactile manner.
  • the storage unit 109 may store software programs and the like for processing and control operations performed by the controller 107, or may temporarily store data that has been output or is to be output.
  • the storage unit 109 may include at least one type of storage medium.
  • the terminal 100 can cooperate with a network storage device that performs a storage function of the storage unit 109 through a network connection.
  • the controller 107 generally controls the overall operation of the terminal device.
  • the controller 107 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 107 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images.
  • the power supply unit 101 receives external power or internal power under the control of the controller 107 and provides appropriate power required to operate each element and component.
  • Various embodiments of the face-based special effect generation method proposed by the present disclosure may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof.
  • for hardware implementation, various embodiments of the image special effect processing method proposed by the present disclosure can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 107.
  • various embodiments of the face-based special effect generation method proposed by the present disclosure can be implemented with a separate software module that allows execution of at least one function or operation.
  • the software codes may be implemented by a software application program (or program) written in any suitable programming language, and the software codes may be stored in the storage unit 109 and executed by the controller 107.
  • an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B, or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A face-based special effect generating method and device, an electronic device, and a computer-readable storage medium. The face-based special effect generating method includes: displaying a standard face image (S1); selecting a reference point on the standard face image in response to a received reference point selection command (S2); producing a special effect on the standard face image in response to a received special effect production operation (S3); generating parameters of the special effect (S4); acquiring a first face image recognized from an image sensor (S5); and generating the special effect on the first face image according to the reference point and the parameters of the special effect (S6). Through the above special effect production operation, the user can conveniently configure and edit special effects; and because the relative relationship between the special effect and the reference points is recorded, the special effect continuously tracks changes of the face and changes accordingly, improving the user experience.

Description

Face-based special effect generating method and device, and electronic device

Cross-reference

The present disclosure claims priority to the Chinese patent application No. 201810838432.8, titled "Face-based special effect generating method and device, and electronic device" and filed on July 27, 2018, which is incorporated herein by reference in its entirety.
Technical field

The present disclosure relates to the field of image technology, and in particular to a face-based special effect generating method, device, hardware device, and computer-readable storage medium.
Background

With the development of computer technology, the application range of smart terminals has expanded widely; for example, smart terminals can be used to listen to music, play games, chat online, and take photos. Smart-terminal cameras now exceed ten million pixels, offering high definition and photographic results comparable to professional cameras.

When a smart terminal is used to take photos or videos, not only can the built-in photo software implement traditional photo and video effects, but applications (APPs) downloaded from the network can also provide photo or video effects with additional functions.

Current special effect production APPs ship with pre-made effects that cannot be flexibly edited, and the effects can only be fixed at a fixed position in the image.
Summary

According to one aspect of the present disclosure, a face-based special effect generating method is provided, including: displaying a standard face image; selecting a reference point on the standard face image in response to receiving a reference point selection command; producing a special effect on the standard face image in response to receiving a special effect production operation; generating parameters of the special effect; acquiring a first face image recognized from an image sensor; and generating the special effect on the first face image according to the reference point and the parameters of the special effect.

Further, the standard face includes a plurality of regions; the reference point is located in the plurality of regions; and the special effect is located in the region where the reference point is located.

Further, there are a plurality of special effects, each corresponding to a different reference point and located in a different region.

Further, before generating the parameters of the special effect, the method further includes: setting a trigger condition of the special effect in response to a received trigger condition setting command.

Further, before generating the parameters of the special effect, the method further includes: setting a play order and/or play time of the special effects in response to receiving a play setting command.

Further, the play order is set based on messages; the messages are used to control the start or stop of the special effects.

Further, the parameters of the special effect specifically include: the position of the special effect and the size of the special effect.

Further, the position of the special effect and the size of the special effect are determined by the positions of the reference points and the distances between the reference points.

Further, producing a special effect on the standard face image in response to receiving a special effect production operation includes: selecting a resource package in response to a received resource package selection command; parsing the resource package and displaying a configuration interface; configuring resources in the resource package in response to a received configuration command; and forming the special effect from the configured resources and displaying the special effect on the standard face image.

Further, configuring the resources in the resource package includes: configuring the size, position, and rotation center of the resources.
According to another aspect of the present disclosure, a face-based special effect generating device is provided, including: a display module, configured to display a standard face image; a reference point selection module, configured to select a reference point on the standard face image in response to receiving a reference point selection command; a special effect production module, configured to produce a special effect on the standard face image in response to receiving a special effect production operation; a special effect parameter generation module, configured to generate the parameters of the special effect; a face image acquisition module, configured to acquire a first face image recognized from an image sensor; and a feature generation module, configured to generate the special effect on the first face image according to the reference point and the parameters of the special effect.

Further, the standard face includes a plurality of regions; the reference point is located in the plurality of regions; and the special effect is located in the region where the reference point is located.

Further, there are a plurality of special effects, each corresponding to a different reference point and located in a different region.

Further, the face-based special effect generating device further includes a trigger condition setting module, configured to set a trigger condition of the special effect in response to a received trigger condition setting command before the parameters of the special effect are generated.

Further, the face-based special effect generating device further includes a play setting module, configured to set the play order and/or play time of the special effects in response to receiving a play setting command before the parameters of the special effect are generated.

Further, the play order is set based on messages; the messages are used to control the start or stop of the special effects.

Further, the parameters of the special effect specifically include: the position of the special effect and the size of the special effect.

Further, the position of the special effect and the size of the special effect are determined by the positions of the reference points and the distances between the reference points.

Further, the special effect production module includes: a selection module, configured to select a resource package in response to a received resource package selection command; a parsing and display module, configured to parse the resource package and display a configuration interface; a resource configuration module, configured to configure resources in the resource package in response to a received configuration command; and a first display module, configured to form the special effect from the configured resources and display the special effect on the standard face image.

Further, configuring the resources in the resource package includes: configuring the size, position, and rotation center of the resources.
According to yet another aspect of the present disclosure, an electronic device is provided, including: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when they are executed by the processor, the steps described in any of the above methods are implemented.

According to yet another aspect of the present disclosure, a computer-readable storage medium is provided for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the steps described in any of the above methods.

Embodiments of the present disclosure provide a face-based special effect generating method and device, an electronic device, and a computer-readable storage medium. The face-based special effect generating method includes: displaying a standard face image; selecting a reference point on the standard face image in response to receiving a reference point selection command; producing a special effect on the standard face image in response to receiving a special effect production operation; generating parameters of the special effect; acquiring a first face image recognized from an image sensor; and generating the special effect on the first face image according to the reference point and the parameters of the special effect. Through the special effect production operation of the embodiments of the present disclosure, the user can conveniently configure and edit special effects; and because face feature points are selected as reference points and the relative relationship between the special effect and the reference points is recorded, the editing difficulty and editing time of special effects are greatly reduced, and the special effect continuously tracks face changes and changes accordingly, improving the user experience.

The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent and comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings

FIG. 1 is a schematic flowchart of a face-based special effect generating method according to an embodiment of the present disclosure;

FIG. 2 is a schematic diagram of an implementation of generating the parameters of a special effect in the embodiment shown in FIG. 1;

FIG. 3 is a schematic flowchart of a face-based special effect generating method according to another embodiment of the present disclosure;

FIG. 4 is a schematic flowchart of a face-based special effect generating method according to another embodiment of the present disclosure;

FIG. 5a is a schematic structural diagram of a face-based special effect generating device according to an embodiment of the present disclosure;

FIG. 5b is a schematic structural diagram of the special effect module in the face-based special effect generating device of the embodiment shown in FIG. 5a;

FIG. 6 is a schematic structural diagram of a face-based special effect generating device according to another embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of a face-based special effect generating device according to another embodiment of the present disclosure;

FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;

FIG. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure;

FIG. 10 is a schematic structural diagram of a face-based special effect generating terminal according to an embodiment of the present disclosure.
Detailed description

The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The present disclosure may also be implemented or applied through other different specific embodiments, and various details in this specification may be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, unless they conflict, the following embodiments and the features in the embodiments may be combined with each other. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.

It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, a device may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method practiced using structures and/or functionality other than one or more of the aspects set forth herein.

It should also be noted that the illustrations provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic manner; the drawings show only the components related to the present disclosure rather than the number, shape, and size of components in actual implementation. In actual implementation, the form, quantity, and proportion of each component may change arbitrarily, and the component layout may also be more complex.

In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects may be practiced without these specific details.
To solve the technical problem of how to improve the user experience, an embodiment of the present disclosure provides a face-based special effect generating method. As shown in FIG. 1, the method mainly includes the following steps S1 to S6:

Step S1: Display a standard face image.

A standard face image is displayed on a display device. The standard face image is a preset face image; typically it is a frontal face image carrying preset feature points, where the number of feature points is configurable and the user can freely set the required number. The feature points of an image are points that have distinctive characteristics, effectively reflect the essential characteristics of the image, and can identify the target object in the image. If the target object is a face, face key points must be obtained; if the target image is a house, the key points of the house must be obtained. Taking the face as an example to explain how key points are obtained: the face contour mainly includes five parts, eyebrows, eyes, nose, mouth, and cheeks, sometimes also including the pupils and nostrils. A reasonably complete description of the face contour generally requires about 60 key points. If only the basic structure is described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils, nostrils, or more detailed facial features must be described, the number can be increased. Extracting face key points on an image amounts to finding the position coordinates of each face contour key point in the face image, that is, key point positioning. This process is based on the features corresponding to the key points: after image features that clearly identify a key point have been obtained, a search and comparison is performed in the image based on those features to accurately locate the key point's position. Because feature points occupy only a very small area in an image (usually a few to a few dozen pixels), the regions occupied by their corresponding features are also very limited and local. Two kinds of feature extraction are currently used: (1) one-dimensional range image feature extraction along the direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the feature point. Both can be implemented in many ways, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on. These implementations differ in the number of key points used, accuracy, and speed, and suit different application scenarios.
Step S2: In response to the received reference point selection command, select a reference point on the standard face image.

In one embodiment, the reference points are face feature points, and one or more reference points may be selected. In a specific implementation, the user can send a selection command through an input device such as a mouse or keyboard, for example clicking a feature point displayed on the display device with the mouse, or typing a feature point's number on the keyboard to select the corresponding feature point; the selected feature points serve as reference points, which record the relative position and size ratio of the special effect on the face.
Step S3: In response to the received special effect production operation, produce a special effect on the standard face image.

In one embodiment, the special effect production operation may be a special effect sticker operation; the sticker may be a 2D sticker that is overlaid on the face to show a specific effect. For a face, the effect may be static, such as a single image, or dynamic, such as a multi-frame animation. When producing an effect, a resource package selection command sent by the user is received. The resource package contains the material needed to produce the effect, such as images, sounds, and videos; the corresponding resource package is selected according to the selection command, and if the package contains no material, material or a whole resource package can be imported. After a resource package is selected, it is parsed and a configuration interface is displayed. For example, if the resource package contains an image of a pair of glasses, the glasses are displayed at a default position and a configuration interface is shown around them; the interface contains the glasses' attribute parameters, including position, rotation center, size, and so on. The user can configure these attribute parameters to generate a glasses effect overlaid on the standard face image. The position, size, and rotation center can be controlled through corresponding position, size, and rotation-center controls. Further, in a typical application, the user can drag the 2D sticker's zoom box to adjust its position, drag a corner of the zoom box to adjust its size, and also zoom the standard face image with a canvas zoom command to adjust the sticker's size indirectly. The rotation center can be set to any feature point, by clicking with the mouse or entering the feature number in the configuration interface; once selected, the 2D sticker rotates around the rotation center according to rotation commands. The resource package may contain sequence frames, and the user may choose to configure each frame separately. The above configuration methods and attribute parameters are only examples and do not limit the present disclosure; in fact, any attribute parameter that needs to be or can be configured may be used in the technical solution of the present disclosure.
Step S4: Generate the parameters of the special effect.
In one embodiment, the parameters of the special effect include at least the position of the special effect and the size of the special effect. In this embodiment, the position and size of the effect are its relative position and relative size within the standard face image, represented by the reference points. As shown in FIG. 2, feature points A and B are selected as reference points; the effect is an ellipse whose center is point C, whose major axis has length a, and whose minor axis has length b; point D is the foot of the perpendicular from C to segment AB. With A = (X_A, Y_A), B = (X_B, Y_B), C = (X_C, Y_C), and D = (X_D, Y_D), four linear interpolation coefficients are obtained:

$$\lambda_1 = \frac{AD}{AB},\qquad \lambda_2 = \frac{CD}{AB},\qquad \lambda_3 = \frac{a}{AB},\qquad \lambda_4 = \frac{b}{AB}$$

where AB, AD, and CD denote the lengths of the corresponding segments:

$$AB = \sqrt{(X_A-X_B)^2+(Y_A-Y_B)^2},\quad AD = \sqrt{(X_A-X_D)^2+(Y_A-Y_D)^2},\quad CD = \sqrt{(X_C-X_D)^2+(Y_C-Y_D)^2}$$

In the standard face image, the coordinates of A, B, C, and D and the values a and b are all known, so the four linear interpolation coefficients can record the position of center point C relative to reference points A and B and the lengths of the ellipse's axes relative to A and B: λ1 and λ2 record the relative position of C, λ3 records the relative length of the major axis, and λ4 records the relative length of the minor axis. The generated effect parameters include at least these four coefficients. In the above example the effect is elliptical, so two coefficients record the relative lengths of the two axes; in practice, however, an effect's shape may be irregular, in which case the effect's outer frame can represent its size. The outer frame may be the smallest rectangle containing the effect: specifically, lines are drawn through the four outermost points of the effect, and the right-angled rectangle formed by these four lines is the smallest rectangle, whose center serves as the effect's center point. In this way, whatever the effect's shape, the four linear interpolation coefficients above can describe its relative position and relative size. The effect parameters may also include a rotation center. A rotation center at a feature point is represented directly by the feature point, so only its number needs to be recorded; a rotation center elsewhere can be recorded using the same method as the center point above. In particular, the rotation center and the center point may coincide.
Step S5: acquiring a first face image recognized from an image sensor.
In this step, a face image recognized from a camera is acquired. The face image may be a face recognized from a real person, or a face recognized from a picture or video containing a face captured by the camera; the present disclosure imposes no limitation here. In any case, this face image is distinct from the standard face image.
Recognizing a face image mainly means detecting a face in the image. Face detection is the process of searching any given image or image sequence according to a certain strategy to determine the positions and regions of all faces: determining whether faces are present in various images or image sequences, and determining their number and spatial distribution. Face detection methods can generally be divided into four categories: (1) prior-knowledge-based methods, which encode typical faces into a rule base and locate faces through the relationships between facial features; (2) feature-invariant methods, which find features that remain stable under changes in pose, viewpoint or illumination and then use these features to determine faces; (3) template matching methods, which store several standard face patterns describing the whole face and the facial features separately, and then compute the correlation between an input image and the stored patterns for detection; (4) appearance-based methods, which, in contrast to template matching, learn models from a set of training images and use these models for detection. One implementation of method (4) is used here to explain the face detection process. First, features must be extracted to build a model. This embodiment uses Haar features as the key features for judging a face. A Haar feature is a simple rectangular feature that can be extracted quickly; the feature template used to compute Haar features is generally a simple combination of two or more congruent rectangles, containing black and white rectangles. Then the AdaBoost algorithm finds, among a large number of Haar features, the subset that plays a key role, and uses these features to produce an effective classifier, which can detect faces in an image. The image in this embodiment may contain one or more faces.
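As a concrete illustration of this Haar-plus-AdaBoost pipeline, the sketch below uses OpenCV's pretrained frontal-face cascade as a stand-in for the trained classifier; the disclosure does not mandate OpenCV or this particular model.

```python
# Sketch of Haar-cascade face detection using OpenCV's bundled model.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Returns zero or more (x, y, w, h) rectangles; the image may contain
    # one face, several faces, or none.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```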
It should be understood that, since each face detection algorithm has its own advantages and applicable range, multiple different detection algorithms may be configured and switched automatically for different environments. For example, in an image with a simple background, an algorithm with a lower detection rate but higher speed may be used; in an image with a complex background, an algorithm with a higher detection rate but lower speed may be used. For the same image, multiple algorithms may also be applied several times to improve the detection rate.
Step S6: generating the special effect on the first face image according to the reference points and the parameters of the special effect.
In this step, according to the numbers of the reference points and the effect parameters generated in step S4, the same special effect as on the standard face image is generated on the face image recognized from the camera.
Since the effect on the standard face image must be mapped to the first face image captured by the image sensor, effects can be divided into fixed effects and tracking effects according to the mapping method. In one embodiment, a fixed effect is used. This kind of effect is relatively simple: only the absolute position of the whole effect range within the image sensor needs to be set. It can be implemented by establishing a one-to-one pixel correspondence between the display device and the image sensor's capture window, determining the effect's position on the display device, and then applying the corresponding effect processing to the corresponding position of the image captured by the capture window. The advantage of this approach is that it is simple and easy to operate; all parameters used in this implementation are relative to the position of the capture window. In another embodiment, to generate the effect image, the feature points of the standard face image in step S1 are obtained first, and the position of the effect in the standard face image is determined through these feature points; the first face image corresponding to the standard face image is recognized from the image captured by the image sensor; the position determined in the standard face image is mapped into the first face image; and effect processing is applied to the first face image to generate the effect image. In this approach, the relative position of the effect in the first face image is determined, so that no matter how the first face image moves or changes, the effect always stays at that relative position, achieving effect tracking. In a typical application, the standard face image is triangulated and has 106 feature points; the relative position of the effect's action range in the face image is determined from the relative positions of the effect and the feature points, and the same triangulation is applied to the face image captured by the camera. Then, when the face in the camera moves or turns, the effect remains fixed at the same relative position on the face, achieving the tracking effect.
For example, if feature points No. 19 and No. 20 are used as reference points, the same feature points are used in the first face image: the corresponding points No. 19 and No. 20, A' and B', are found in the first face image and their coordinates in the first face image are obtained; λ1 is used to compute D', the foot of the perpendicular from the effect's center point C' onto the line connecting A' and B'; then the position of C' is computed from λ2, the effect's size is computed from λ3 and λ4, and the effect on the first face image is scaled accordingly. This completes the step of mapping the effect onto the first face image using the reference points and the effect parameters.
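This mapping step can be sketched as follows. Which side of A'B' the center lies on is a sign convention that must match the convention used when the parameters were recorded in step S4; the sketch simply fixes one side.

```python
# Sketch of step S6 for the example above: the stored coefficients are
# applied in reverse to recover the effect's center and size.
import math

def place_effect(A1, B1, lam1, lam2, lam3, lam4):
    """A1, B1: reference points No. 19 and No. 20 detected in the first face image."""
    ABx, ABy = B1[0] - A1[0], B1[1] - A1[1]
    AB = math.hypot(ABx, ABy)
    # D' lies on segment A'B' at fraction lam1.
    D1 = (A1[0] + lam1 * ABx, A1[1] + lam1 * ABy)
    # C' is offset from D' by lam2 * |A'B'| along the normal to A'B'
    # (the chosen side of the normal is a convention, see above).
    nx, ny = -ABy / AB, ABx / AB
    C1 = (D1[0] + lam2 * AB * nx, D1[1] + lam2 * AB * ny)
    a1, b1 = lam3 * AB, lam4 * AB   # scaled axis lengths of the effect
    return C1, (a1, b1)
```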
It should be understood that, when multiple face images are recognized in the image, the user may select one face image on which to create the effect, or select several face images for the same or different processing. For example, when creating effects, the standard faces may be numbered, e.g., ID1 and ID2, and effects may be set separately for the ID1 and ID2 standard face images; the effects may be the same or different. When multiple face images are recognized from the camera, effects are added to them in the order of recognition: for instance, if face No. 1 is recognized first, the effect of the ID1 standard face image is added to face No. 1, and when face No. 2 is subsequently recognized, the effect of the ID2 standard face image is added to face No. 2. If only the ID1 standard face image effect was created, the ID1 effect may be added to both face No. 1 and face No. 2, or only to face No. 1.
In an embodiment, the standard face image is divided into multiple regions, such as an eye region, a nose region, a mouth region, a cheek region, an eyebrow region, a forehead region and so on, and each region contains optimized feature points. Optimized feature points are more representative feature points filtered out through data analysis; they represent the region they belong to. For example, selecting a feature point in the eye region as a reference point means selecting the eye region as the target region for effect creation. Multiple sub-effects can be created separately for each region, each sub-effect tracking its own region, and together the sub-effects form one effect. The benefits of this are: the number of feature points is reduced, and there is no need to pick reference points from among many feature points, since all feature points shown to the user are optimized and selecting a region selects its feature points; and a large effect can be split into multiple sub-effects, lowering the difficulty of creation.
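A minimal sketch of this region mechanism follows; the region names and feature point indices are hypothetical, not values fixed by the disclosure.

```python
# Hypothetical region-to-keypoint table and sub-effect application loop.
REGION_KEYPOINTS = {
    "eyes":   [19, 20],
    "nose":   [46],
    "mouth":  [84, 90],
    "cheeks": [3, 29],
}

def apply_composite_effect(sub_effects, detected_points):
    """sub_effects: list of (region_name, render_fn); together they form one effect.

    detected_points: keypoints detected in the first face image, indexed as above.
    """
    for region_name, render_fn in sub_effects:
        anchors = [detected_points[i] for i in REGION_KEYPOINTS[region_name]]
        render_fn(anchors)   # each sub-effect tracks its own region independently
```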
In the embodiments of the present disclosure, a face effect is edited on a standard face image, and then, based on the relative relationship between the selected reference points and the effect's attributes, the face effect is mapped into the image captured by the image sensor. In the prior art, effects had to be created with third-party tools, which lacked flexibility in use and could not be configured in real time; moreover, an effect could only be fixed at a fixed position in the image window, and when the face moved or rotated, the effect could not move or rotate with it, degrading the user experience. In this embodiment, by contrast, the user can conveniently configure and edit the effect through the effect creation operation, and because face feature points are selected as reference points and the relative relationship between the effect and the reference points is recorded, the effect changes relative to the reference points no matter how the first face image captured by the image sensor moves or rotates. Compared with the prior art, this greatly reduces the difficulty and time of effect editing, and the effect keeps tracking the changes of the face, thereby improving the user experience.
In an optional embodiment, as shown in FIG. 3, before step S4, i.e., the step of generating the parameters of the special effect, the method may further include:
Step S31: setting a trigger condition of the special effect in response to a received trigger condition setting command.
In this optional embodiment, the effect is triggered and displayed only when certain conditions are met. The trigger condition may be a user's action, expression or voice, or a parameter of the terminal, among others. The action may be a facial action such as blinking, opening the mouth wide, shaking the head, nodding, or raising the eyebrows. For example, if the effect is a 2D glasses sticker, the trigger condition may be set to blinking quickly twice: when it is detected that the user blinks quickly twice, the glasses sticker is displayed over the user's eyes. The expression may be happiness, dejection, anger and so on; for example, if the effect is a 2D tear sticker, the trigger condition may be set to a dejected expression, and when the user's expression is detected as dejected, the tear sticker is displayed below the user's eyes. When the trigger condition is sound, the user's voice or ambient sound may be monitored, and the corresponding effect is triggered when a predetermined sound is detected. When the trigger condition is a terminal parameter, the parameters of the terminal's components may be monitored, such as the terminal's attitude or shaking, and the corresponding effect is triggered by the attitude or shaking; these are not enumerated one by one here. It should be understood that the trigger condition may be any condition applicable to the technical solution of the present disclosure, and there may be one or more trigger conditions, without limitation here. The trigger may start or dismiss an effect: with a start trigger, the corresponding effect appears when the trigger condition occurs; with a dismiss trigger, the corresponding effect disappears when the trigger condition occurs. The trigger condition may also include a post-trigger delay, i.e., how long after the trigger condition occurs the effect appears or disappears.
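The disclosure does not fix how a facial action such as a double blink is detected; the sketch below uses the common eye-aspect-ratio (EAR) heuristic over eye landmarks as one plausible realization, with threshold and window values that are assumptions.

```python
# One possible realization of the "blink twice quickly" trigger.
import math
import time

def ear(eye):
    """eye: six landmark points around one eye, in the usual EAR ordering."""
    v1 = math.dist(eye[1], eye[5])
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])
    return (v1 + v2) / (2.0 * h)

class BlinkTrigger:
    def __init__(self, threshold=0.2, window=1.0):
        self.threshold, self.window = threshold, window
        self.blinks = []       # timestamps of completed blinks
        self.closed = False

    def update(self, eye_points):
        """Returns True once two blinks occur within `window` seconds."""
        is_closed = ear(eye_points) < self.threshold
        if self.closed and not is_closed:      # eye just reopened -> one blink
            now = time.time()
            self.blinks = [t for t in self.blinks if now - t <= self.window]
            self.blinks.append(now)
        self.closed = is_closed
        return len(self.blinks) >= 2
```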
In this embodiment, the parameters of the special effect further include the trigger condition of the special effect.
In an optional embodiment, as shown in FIG. 4, before step S4, i.e., the step of generating the parameters of the special effect, the method may further include:
Step S32: setting a play order and/or play time of the special effect in response to a received play setting command.
In this optional embodiment, the play order and play time of multiple effects can be set. In one embodiment there are three effects, effect 1, effect 2 and effect 3; if the play order is set to effect 1, effect 3, effect 2, the effects play in that order. When setting the play order, the effects' order numbers can be set directly, e.g., effect 1 has play order 1, effect 2 has play order 3, and effect 3 has play order 2; alternatively, a visual method can be used by displaying a timeline and setting the effects' IDs directly on it, for instance marking effect 1, effect 3 and effect 2 in sequence along the positive direction of the timeline. By default, effects play one after another, independently of play time: only after all frames of one effect have played does the next effect play. In this embodiment, the play time of an effect can also be set; the play time may be a duration or a number of plays, e.g., effect 1 plays for 10 seconds or effect 1 plays 10 times, where the number of plays is the number of complete loops of the effect's frame sequence. Play order and play time can be used separately or in combination: with play order alone, all effects play once in sequence; with play time alone, all effects play together, but their end times may differ according to their play times; when both are used, by default each effect plays for its play time in the set order before the next effect plays. In a more flexible setting, the play order can be configured by messages. For example, effect 1 is set to play first; when effect 1 reaches frame n, it sends a message to effect 3 so that effect 3 starts playing; when effect 3 reaches frame m, effect 1 stops playing and effect 2 starts. With this message-based play order configuration, the start/stop relationships between effects can be set more flexibly, making their combinations and transitions richer and more varied. When using message-based play order configuration, play time may or may not also be set, and priorities between messages and play time can be set as well. For example, if effect 1's play time is set to 10 seconds but effect 3 sends a message to stop effect 1 before it has played 10 seconds, whether playback stops can be decided by the configured priority: if play time has higher priority than the message, effect 1 continues to play until 10 seconds have elapsed and then stops; if the message has higher priority, effect 1 stops immediately. The above settings are merely examples for ease of understanding; in actual use, play order and play time can be combined in any manner, and priorities can be set arbitrarily. Play order and play time may also be global parameters: in a scenario with multiple face images, where each face carries several different effects, the play order and play time of all these effects are set in a unified way. In one example, two face images are detected, face 1 carrying effects 1 and 2, and face 2 carrying effects 3 and 4; it can be set that effect 1 plays first for 10 seconds, then effect 3 plays for 2 seconds, and then effects 2 and 4 play for 5 seconds, thus rotating among the multiple effects on multiple faces to produce an interactive result.
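One way to picture the message-based play order with priorities is the scheduler sketch below. All class and field names are hypothetical, and play-time expiry is omitted for brevity.

```python
# Sketch of message-based play-order configuration: effects post messages at
# given frames to start or stop other effects, and a priority flag decides
# whether a stop message overrides a configured play time.
class Effect:
    def __init__(self, name, total_frames, play_seconds=None):
        self.name = name
        self.total_frames = total_frames
        self.play_seconds = play_seconds     # optional play time of this effect
        self.playing = False
        self.frame = 0
        self.rules = []                      # (frame_index, target_name, "start" | "stop")

    def at_frame(self, frame_index, target_name, action):
        """Register a message to send when this effect reaches frame_index."""
        self.rules.append((frame_index, target_name, action))

class Scheduler:
    def __init__(self, effects, message_overrides_play_time=True):
        self.effects = {e.name: e for e in effects}
        self.message_overrides_play_time = message_overrides_play_time

    def step(self):
        """Advance every playing effect by one frame and deliver its messages."""
        for effect in self.effects.values():
            if not effect.playing:
                continue
            for frame_index, target_name, action in effect.rules:
                if effect.frame == frame_index:
                    target = self.effects[target_name]
                    if action == "start":
                        target.playing = True
                    elif action == "stop":
                        # A stop message wins only if messages outrank play time.
                        if self.message_overrides_play_time or target.play_seconds is None:
                            target.playing = False
            effect.frame += 1

# Example wiring: effect 1 starts effect 3 at frame n; effect 3 stops effect 1
# and starts effect 2 at frame m (n and m are placeholders).
```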
In this embodiment, the parameters of the special effect further include the play order and play time of the special effect.
It should be understood that the setting of trigger conditions and the setting of play order and play time in the two embodiments above can be used together; the order in which they are set is not limited in the present disclosure and can be exchanged as needed. The operations and settings performed before the effect parameters are generated can ultimately all be formed into corresponding effect parameters, which are used to generate the effect on the face recognized from the image sensor.
Although the steps in the above method embodiments are described in the order given above, those skilled in the art should understand that the steps in the embodiments of the present disclosure need not be executed in that order; they may also be executed in reverse, in parallel, interleaved or in other orders. Moreover, on the basis of the above steps, those skilled in the art may add other steps; these obvious variants or equivalent substitutions also fall within the scope of protection of the present disclosure and are not repeated here.
The following are apparatus embodiments of the present disclosure, which can be used to execute the steps implemented by the method embodiments of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown; for specific technical details not disclosed, please refer to the method embodiments of the present disclosure.
An embodiment of the present disclosure provides a face-based special effect generation apparatus. The apparatus can execute the steps described in the above embodiments of the face-based special effect generation method. As shown in FIG. 5a, the apparatus mainly includes: a display module 51, a reference point selection module 52, a special effect creation module 53, an effect parameter generation module 54, a face image acquisition module 55 and a special effect generation module 56. The display module 51 is configured to display a standard face image; the reference point selection module 52 is configured to select reference points on the standard face image in response to a received reference point selection command; the special effect creation module 53 is configured to create a special effect on the standard face image in response to a received special effect creation operation; the effect parameter generation module 54 is configured to generate parameters of the special effect; the face image acquisition module 55 is configured to acquire a first face image recognized from an image sensor; and the special effect generation module 56 is configured to generate the special effect on the first face image according to the reference points and the parameters of the special effect.
As shown in FIG. 5b, in an optional embodiment, the special effect creation module 53 further includes:
a selection module 531, configured to select a resource pack in response to a received resource pack selection command;
a parsing and display module 532, configured to parse the resource pack and display a configuration interface;
a resource configuration module 533, configured to configure the resources in the resource pack in response to a received configuration command;
a first display module 534, configured to form the special effect from the configured resources and display the special effect on the standard face image.
The above face-based special effect generation apparatus corresponds to the face-based special effect generation method in the embodiment shown in FIG. 1; for details, refer to the above description of the face-based special effect generation method, which is not repeated here.
In the embodiments of the present disclosure, a face effect is edited on a standard face image, and then, based on the relative relationship between the selected reference points and the effect's attributes, the face effect is mapped into the image captured by the image sensor. In the prior art, effects had to be created with third-party tools, which lacked flexibility in use and could not be configured in real time; moreover, an effect could only be fixed at a fixed position in the image window, and when the face moved or rotated, the effect could not move or rotate with it, degrading the user experience. In this embodiment, by contrast, the user can conveniently configure and edit the effect through the effect creation operation, and because face feature points are selected as reference points and the relative relationship between the effect and the reference points is recorded, the effect changes relative to the reference points no matter how the first face image captured by the image sensor moves or rotates. Compared with the prior art, this greatly reduces the difficulty and time of effect editing, and the effect keeps tracking the changes of the face, thereby improving the user experience.
As shown in FIG. 6, in an optional embodiment, the face-based special effect generation apparatus further includes a trigger condition setting module 61, configured to set the trigger condition of the special effect in response to a received trigger condition setting command.
The above face-based special effect generation apparatus corresponds to the face-based special effect generation method in the embodiment shown in FIG. 3; for details, refer to the above description of the face-based special effect generation method, which is not repeated here.
As shown in FIG. 7, in an optional embodiment, the face-based special effect generation apparatus further includes a play setting module 71, configured to set the play order and/or play time of the special effect in response to a received play setting command.
The above face-based special effect generation apparatus corresponds to the face-based special effect generation method in the embodiment shown in FIG. 4; for details, refer to the above description of the face-based special effect generation method, which is not repeated here.
For detailed descriptions of the working principles, achieved technical effects and the like of the face-based special effect generation embodiments, refer to the relevant descriptions in the foregoing face-based special effect generation method embodiments, which are not repeated here.
FIG. 8 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 8, the electronic device 80 according to an embodiment of the present disclosure includes a memory 81 and a processor 82.
The memory 81 is configured to store non-transitory computer-readable instructions. Specifically, the memory 81 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory and the like.
The processor 82 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 80 to perform desired functions. In an embodiment of the present disclosure, the processor 82 is configured to run the computer-readable instructions stored in the memory 81, so that the electronic device 80 executes all or some of the steps of the face-based special effect generation methods of the foregoing embodiments of the present disclosure.
Those skilled in the art should understand that, to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also be included within the scope of protection of the present disclosure.
For a detailed description of this embodiment, refer to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 9 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 9, the computer-readable storage medium 90 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 91 stored thereon. When the non-transitory computer-readable instructions 91 are run by a processor, all or some of the steps of the face-based special effect generation methods of the foregoing embodiments of the present disclosure are executed.
The computer-readable storage medium 90 includes, but is not limited to: optical storage media (e.g., CD-ROM and DVD), magneto-optical storage media (e.g., MO), magnetic storage media (e.g., magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g., memory cards) and media with built-in ROM (e.g., ROM cartridges).
For a detailed description of this embodiment, refer to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
FIG. 10 is a schematic diagram illustrating the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 10, the face-based special effect generation terminal 100 includes the above embodiment of the face-based special effect generation apparatus.
The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smartphones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, in-vehicle terminal devices, in-vehicle display terminals and in-vehicle electronic rearview mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative implementation, the terminal may also include other components. As shown in FIG. 10, the face-based special effect generation terminal 100 may include a power supply unit 101, a wireless communication unit 102, an A/V (audio/video) input unit 103, a user input unit 104, a sensing unit 105, an interface unit 106, a controller 107, an output unit 108, a storage unit 109 and so on. FIG. 10 shows a terminal with various components, but it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead.
The wireless communication unit 102 allows radio communication between the terminal 100 and a wireless communication system or network. The A/V input unit 103 is configured to receive audio or video signals. The user input unit 104 may generate key input data according to commands input by the user to control various operations of the terminal device. The sensing unit 105 detects the current state of the terminal 100, the position of the terminal 100, the presence or absence of the user's touch input to the terminal 100, the orientation of the terminal 100, the acceleration or deceleration movement and direction of the terminal 100 and so on, and generates commands or signals for controlling the operation of the terminal 100. The interface unit 106 serves as an interface through which at least one external device can connect to the terminal 100. The output unit 108 is configured to provide output signals in a visual, audio and/or tactile manner. The storage unit 109 may store software programs for the processing and control operations executed by the controller 107, or temporarily store data that has been output or is to be output; the storage unit 109 may include at least one type of storage medium. Moreover, the terminal 100 may cooperate with a network storage device that performs the storage function of the storage unit 109 over a network connection. The controller 107 generally controls the overall operation of the terminal device. In addition, the controller 107 may include a multimedia module for reproducing or playing back multimedia data. The controller 107 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on a touch screen as characters or images. The power supply unit 101 receives external or internal power under the control of the controller 107 and provides the appropriate power required to operate the elements and components.
Various implementations of the face-based special effect generation method proposed in the present disclosure may be implemented using a computer-readable medium, for example using computer software, hardware, or any combination thereof. For hardware implementation, the various implementations of the face-based special effect generation method proposed in the present disclosure may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, the various implementations of the face-based special effect generation method proposed in the present disclosure may be realized in the controller 107. For software implementation, the various implementations of the face-based special effect generation method proposed in the present disclosure may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the storage unit 109 and executed by the controller 107.
For a detailed description of this embodiment, refer to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
The basic principles of the present disclosure have been described above in connection with specific embodiments. However, it should be pointed out that the merits, advantages, effects and the like mentioned in the present disclosure are merely examples rather than limitations, and these merits, advantages and effects should not be considered mandatory for each embodiment of the present disclosure. In addition, the specific details disclosed above serve only as examples and aids to understanding rather than limitations; the above details do not restrict the present disclosure to being implemented with those specific details.
The block diagrams of devices, apparatuses, equipment and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "include", "comprise" and "have" are open-ended terms meaning "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein refer to "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The phrase "such as" as used herein refers to "such as but not limited to" and may be used interchangeably therewith.
In addition, as used herein, an "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that an enumeration such as "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be pointed out that, in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. These decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods or acts currently existing or to be developed later that perform substantially the same functions or achieve substantially the same results as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods or acts.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.

Claims (13)

  1. A face-based special effect generation method, comprising:
    displaying a standard face image;
    selecting a reference point on the standard face image in response to a received reference point selection command;
    creating a special effect on the standard face image in response to a received special effect creation operation;
    generating parameters of the special effect;
    acquiring a first face image recognized from an image sensor; and
    generating the special effect on the first face image according to the reference point and the parameters of the special effect.
  2. The face-based special effect generation method according to claim 1, wherein:
    the standard face comprises multiple regions;
    the reference point is located in the multiple regions; and
    the special effect is located in the region where the reference point is located.
  3. The face-based special effect generation method according to claim 2, wherein:
    there are multiple special effects, each corresponding to a different reference point and located in a different region.
  4. The face-based special effect generation method according to claim 1, further comprising, before generating the parameters of the special effect:
    setting a trigger condition of the special effect in response to a received trigger condition setting command.
  5. The face-based special effect generation method according to claim 1, further comprising, before generating the parameters of the special effect:
    setting a play order and/or play time of the special effect in response to a received play setting command.
  6. The face-based special effect generation method according to claim 5, wherein:
    the play order is set based on messages; and
    the messages are used to control the starting or stopping of the special effect.
  7. The face-based special effect generation method according to claim 1, wherein the parameters of the special effect specifically include:
    the position of the special effect and the size of the special effect.
  8. The face-based special effect generation method according to claim 7, wherein:
    the position of the special effect and the size of the special effect are determined by the positions of the reference points and the distances between the reference points.
  9. The face-based special effect generation method according to claim 1, wherein creating a special effect on the standard face image in response to a received special effect creation operation comprises:
    selecting a resource pack in response to a received resource pack selection command;
    parsing the resource pack and displaying a configuration interface;
    configuring the resources in the resource pack in response to a received configuration command; and
    forming the special effect from the configured resources and displaying the special effect on the standard face image.
  10. The face-based special effect generation method according to claim 9, wherein configuring the resources in the resource pack comprises:
    configuring the size, position and rotation center of the resources.
  11. A face-based special effect generation apparatus, comprising:
    a display module, configured to display a standard face image;
    a reference point selection module, configured to select a reference point on the standard face image in response to a received reference point selection command;
    a special effect creation module, configured to create a special effect on the standard face image in response to a received special effect creation operation;
    an effect parameter generation module, configured to generate parameters of the special effect;
    a face image acquisition module, configured to acquire a first face image recognized from an image sensor; and
    a special effect generation module, configured to generate the special effect on the first face image according to the reference point and the parameters of the special effect.
  12. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the face-based special effect generation method according to any one of claims 1-10.
  13. A non-transitory computer-readable storage medium, storing computer instructions for causing a computer to execute the face-based special effect generation method according to any one of claims 1-10.
PCT/CN2018/123639 2018-07-27 2018-12-25 Method, apparatus for generating special effect based on face, and electronic device WO2020019663A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020571798A JP7286684B2 (ja) 2018-07-27 2018-12-25 Face-based special effect generation method, apparatus and electronic device
US16/997,551 US11354825B2 (en) 2018-07-27 2018-12-25 Method, apparatus for generating special effect based on face, and electronic device
GB2100224.1A GB2590208B (en) 2018-07-27 2018-12-25 Method, apparatus for generating special effect based on face, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810838432.8 2018-07-27
CN201810838432.8A CN108958610A (zh) 2018-07-27 2018-07-27 基于人脸的特效生成方法、装置和电子设备

Publications (1)

Publication Number Publication Date
WO2020019663A1 (zh)

Family

ID=64464019

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123639 WO2020019663A1 (zh) Method, apparatus for generating special effect based on face, and electronic device

Country Status (5)

Country Link
US (1) US11354825B2 (zh)
JP (1) JP7286684B2 (zh)
CN (1) CN108958610A (zh)
GB (1) GB2590208B (zh)
WO (1) WO2020019663A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024167A (zh) * 2012-12-07 2013-04-03 广东欧珀移动通信有限公司 一种移动终端拍照方法及系统
CN105303523A (zh) * 2014-12-01 2016-02-03 维沃移动通信有限公司 一种图像处理方法及移动终端
CN106875332A (zh) * 2017-01-23 2017-06-20 深圳市金立通信设备有限公司 一种图像处理方法及终端
CN108010037A (zh) * 2017-11-29 2018-05-08 腾讯科技(深圳)有限公司 图像处理方法、装置及存储介质
CN108022279A (zh) * 2017-11-30 2018-05-11 广州市百果园信息技术有限公司 视频特效添加方法、装置及智能移动终端
CN108958610A (zh) * 2018-07-27 2018-12-07 北京微播视界科技有限公司 基于人脸的特效生成方法、装置和电子设备

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005242566A (ja) 2004-02-25 2005-09-08 Canon Inc 画像合成装置及び方法
JP2006260198A (ja) 2005-03-17 2006-09-28 Toshiba Corp 仮想化粧装置、仮想化粧方法および仮想化粧プログラム
JP2009053981A (ja) 2007-08-28 2009-03-12 Kao Corp 化粧シミュレーション装置
JP5525923B2 (ja) * 2010-06-09 2014-06-18 任天堂株式会社 画像処理プログラム、画像処理装置、画像処理システム、および画像処理方法
JP2012181688A (ja) * 2011-03-01 2012-09-20 Sony Corp 情報処理装置、情報処理方法、情報処理システムおよびプログラム
US9619037B2 (en) * 2012-07-25 2017-04-11 Facebook, Inc. Custom gestures
JP6264665B2 (ja) * 2013-04-17 2018-01-24 パナソニックIpマネジメント株式会社 画像処理方法および画像処理装置
JP6115774B2 (ja) 2013-07-11 2017-04-19 フリュー株式会社 画像編集装置、画像編集方法、およびプログラム
CN104240274B (zh) * 2014-09-29 2017-08-25 小米科技有限责任公司 人脸图像处理方法及装置
US9665930B1 (en) * 2015-11-10 2017-05-30 Adobe Systems Incorporated Selective editing of images using editing tools with persistent tool settings
CN105791692B (zh) * 2016-03-14 2020-04-07 腾讯科技(深圳)有限公司 一种信息处理方法、终端及存储介质
CN106341720B (zh) * 2016-08-18 2019-07-26 北京奇虎科技有限公司 一种在视频直播中添加脸部特效的方法及装置
CN106845400B (zh) * 2017-01-19 2020-04-10 南京开为网络科技有限公司 一种基于人脸关键点跟踪实现特效而产生的品牌展示方法
CN107452034B (zh) * 2017-07-31 2020-06-05 Oppo广东移动通信有限公司 图像处理方法及其装置
CN107679497B (zh) * 2017-10-11 2023-06-27 山东新睿信息科技有限公司 视频面部贴图特效处理方法及生成系统
CN107888845B (zh) 2017-11-14 2022-10-21 腾讯数码(天津)有限公司 一种视频图像处理方法、装置及终端
CN108259496B (zh) * 2018-01-19 2021-06-04 北京市商汤科技开发有限公司 特效程序文件包的生成及特效生成方法与装置、电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024167A (zh) * 2012-12-07 2013-04-03 广东欧珀移动通信有限公司 一种移动终端拍照方法及系统
CN105303523A (zh) * 2014-12-01 2016-02-03 维沃移动通信有限公司 一种图像处理方法及移动终端
CN106875332A (zh) * 2017-01-23 2017-06-20 深圳市金立通信设备有限公司 一种图像处理方法及终端
CN108010037A (zh) * 2017-11-29 2018-05-08 腾讯科技(深圳)有限公司 图像处理方法、装置及存储介质
CN108022279A (zh) * 2017-11-30 2018-05-11 广州市百果园信息技术有限公司 视频特效添加方法、装置及智能移动终端
CN108958610A (zh) * 2018-07-27 2018-12-07 北京微播视界科技有限公司 基于人脸的特效生成方法、装置和电子设备

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11189067B2 (en) * 2019-02-28 2021-11-30 Samsung Electronics Co., Ltd. Electronic device and content generation method
CN112307925A (zh) * 2020-10-23 2021-02-02 腾讯科技(深圳)有限公司 图像检测方法、图像显示方法、相关设备及存储介质
CN112307925B (zh) * 2020-10-23 2023-11-28 腾讯科技(深圳)有限公司 图像检测方法、图像显示方法、相关设备及存储介质

Also Published As

Publication number Publication date
GB202100224D0 (en) 2021-02-24
GB2590208A (en) 2021-06-23
US20210366163A1 (en) 2021-11-25
CN108958610A (zh) 2018-12-07
US11354825B2 (en) 2022-06-07
JP7286684B2 (ja) 2023-06-05
JP2021530031A (ja) 2021-11-04
GB2590208B (en) 2023-04-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18927821

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020571798

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 202100224

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20181225

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18927821

Country of ref document: EP

Kind code of ref document: A1