CN114067407A - Expression driving method and device, electronic equipment and storage medium
- Publication number: CN114067407A
- Application number: CN202111376028.1A
- Authority: CN (China)
- Legal status: Pending
Abstract
Embodiments of the invention provide an expression driving method and apparatus, an electronic device, and a storage medium. The expression driving method includes: acquiring features of a target facial image of a target object; calculating expression amplitude parameters corresponding to the target facial image based on those features and the pre-extracted image features of each preset facial image of the target object; and calculating an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and pre-acquired control coefficients corresponding to each preset facial image, so that an expression drive controller controls a driven object to make the same expression as the target object in the target facial image based on the expression control coefficient. With this method, a small number of preset facial images suffice to drive the driven object to make the same fine-grained expression as the target object, which improves the expression driving effect, allows expression driving to be applied to scenes requiring fine expressions, reduces the workload of expression driving, and improves its efficiency.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an expression driving method and apparatus, an electronic device, and a storage medium.
Background
At present, expression driving technology is widely applied not only in games and CG (computer graphics) production, but also in the emerging virtual-character industry of virtual idols, virtual singers, and the like. Virtual idols, virtual singers, and similar virtual characters are created in forms such as painting, music, animation, and CG, and perform in Internet virtual scenes or real scenes, but are character images that do not exist in physical form. The essence of expression driving is to make the driven object (such as a virtual puppet) synchronously make the same expression as the corresponding living being.
Existing expression driving methods capture the facial expression of the living being corresponding to the driven object through a mobile phone camera and drive the driven object with the captured expression. For example, FaceRig (an open authoring platform for creating characters) and Apple ARKit (Apple's AR development platform) allow ordinary users to control a driven object (such as a virtual puppet) through a mobile phone or RGB camera so that it synchronously makes the same expression as the user.
However, these expression driving methods use general-purpose expression driving algorithms that can only drive conservative expressions, making them difficult to apply to scenes that require fine expressions. Moreover, the expression of the driven object produced by such methods is not close enough to the expression of the corresponding living being, so the expression driving effect is poor.
Disclosure of Invention
The embodiment of the invention aims to provide an expression driving method, an expression driving device, electronic equipment and a storage medium, so as to improve an expression driving effect.
In a first aspect of the present invention, there is provided an expression driving method, including:
acquiring the characteristics of a target face image of a target object;
calculating expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object; each preset facial image is a facial image of the target object in different expression states;
calculating an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and a pre-acquired control coefficient corresponding to each preset facial image, so that an expression drive controller controls a driven object to make the same expression as the target object in the target facial image based on the expression control coefficient; wherein the control coefficient corresponding to each preset facial image is: the parameters of the expression drive controller when it controls the driven object to make the same expression as the target object in that preset facial image.
Optionally, the calculating, based on the features and the pre-extracted image features of each preset facial image of the target object, of expression amplitude parameters corresponding to the target facial image includes:
calculating the expression amplitude parameters corresponding to the target facial image using the following formula:

$$\min_{w}\ \|Aw - x\|,\quad \text{s.t.}\ w_i \ge 0$$

where $w = (w_1, \dots, w_n)^T$ and $w_i$ ($1 \le i \le n$) is the $i$-th expression amplitude parameter corresponding to the target facial image; $A = [\,a_1 - a_0,\ \dots,\ a_n - a_0\,]$ is the matrix of expression bases, $a_0, a_1, \dots, a_n$ being the image features of the $n+1$ preset facial images, with $a_0$ the image feature of the preset facial image in which the target object is in the neutral expression state; and $x = a - a_0$, where $a$ is the feature of the target facial image.
Optionally, the calculating of an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image includes:
calculating the expression control coefficient corresponding to the target facial image using the following formula:

$$B = B_0 + \sum_{i=1}^{n} w_i\,(B_i - B_0)$$

where $B$ is the expression control coefficient corresponding to the target facial image; $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state; $B_i$ is the control coefficient corresponding to the $i$-th non-neutral preset facial image; and $n$ is the number of preset facial images excluding the neutral-expression one.
Optionally, the acquiring the feature of the target face image of the target object includes:
acquiring coordinate information of facial key points in the target facial image of the target object as the features of the target facial image.
In another aspect of the present invention, there is also provided an expression driving apparatus, including:
a feature acquisition module, configured to acquire features of a target facial image of a target object;
an amplitude parameter calculation module, configured to calculate expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object, each preset facial image being a facial image of the target object in a different expression state;
a control coefficient calculation module, configured to calculate an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and a pre-acquired control coefficient corresponding to each preset facial image, so that the expression drive controller controls a driven object to make the same expression as the target object in the target facial image based on the expression control coefficient; wherein the control coefficient corresponding to each preset facial image is: the parameters of the expression drive controller when it controls the driven object to make the same expression as the target object in that preset facial image.
Optionally, the amplitude parameter calculation module is specifically configured to calculate, based on the features and the pre-extracted image features of each preset facial image of the target object, the expression amplitude parameters corresponding to the target facial image using the following formula:

$$\min_{w}\ \|Aw - x\|,\quad \text{s.t.}\ w_i \ge 0$$

where $w = (w_1, \dots, w_n)^T$ and $w_i$ ($1 \le i \le n$) is the $i$-th expression amplitude parameter corresponding to the target facial image; $A = [\,a_1 - a_0,\ \dots,\ a_n - a_0\,]$ is the matrix of expression bases, $a_0, a_1, \dots, a_n$ being the image features of the $n+1$ preset facial images, with $a_0$ the image feature of the preset facial image in which the target object is in the neutral expression state; and $x = a - a_0$, where $a$ is the feature of the target facial image.
Optionally, the control coefficient calculation module is specifically configured to calculate, based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image, the expression control coefficient corresponding to the target facial image using the following formula:

$$B = B_0 + \sum_{i=1}^{n} w_i\,(B_i - B_0)$$

where $B$ is the expression control coefficient corresponding to the target facial image; $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state; $B_i$ is the control coefficient corresponding to the $i$-th non-neutral preset facial image; and $n$ is the number of preset facial images excluding the neutral-expression one.
Optionally, the feature obtaining module is specifically configured to obtain coordinate information of a key point of a face in a target face image of the target object, as a feature of the target face image.
In another aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
and a processor, configured to implement the steps of any of the above expression driving methods when executing the program stored in the memory.
In yet another aspect of the present invention, there is further provided a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any of the expression driving method steps described above.
In yet another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the expression driving method steps described above.
With the method provided by embodiments of the invention, the features of a target facial image of a target object are acquired; expression amplitude parameters corresponding to the target facial image are calculated based on those features and the pre-extracted image features of each preset facial image of the target object; and an expression control coefficient corresponding to the target facial image is calculated based on the expression amplitude parameters and the pre-acquired control coefficients corresponding to each preset facial image, so that the expression drive controller controls the driven object, based on the expression control coefficient, to make the same expression as the target object in the target facial image. An expression of small amplitude can be regarded as a large-amplitude expression attenuated and superimposed with the neutral expression (the facial state of the target object when it makes no expression at all); for example, a slightly open mouth can be regarded as a wide-open mouth whose amplitude has been reduced. Both the expression amplitude change and the expression control coefficient follow a linear interpolation relationship: changing the expression amplitude of the target object changes its expression, and changing the expression control coefficient changes the expression of the driven object driven by that coefficient, so the driven object can be made to produce the same expression as the target object. The method therefore calculates the expression amplitude parameters of the target facial image from the image features of several preset facial images, then uses the interpolation relationship between the target object's expression amplitude changes and the driven object's expression control coefficients to compute, from the amplitude parameters and the control coefficient of each preset facial image, the expression control coefficient of the target facial image. The expression drive controller can then use this coefficient to control the driven object to make the same expression as the target object in the target facial image; the resulting expression is very close to that of the target object, improving the expression driving effect. With only a small number of preset facial images, the driven object can be driven to make the same fine-grained expression as the target object, so that expression driving can be applied to scenes requiring fine expressions, its workload is reduced, and its efficiency is improved.
Moreover, calculating the expression amplitude parameters and the expression control coefficient corresponding to the target facial image only requires solving a linear problem, which can be done at millisecond speed, so the expression driving method provided by embodiments of the invention can also be applied to real-time expression driving scenarios; that is, the application scenarios of expression driving are expanded.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of an expression driving method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the superposition of expressions of different amplitudes;
FIG. 3 is a schematic diagram of an optimal projection coefficient of the expression of the target object in the target facial image on the expression of the target object in each preset facial image;
FIG. 4 is a schematic diagram of an expression driver system according to an embodiment of the present invention;
fig. 5 is a structural diagram of an expression driving apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Because existing expression driving methods use general-purpose expression driving algorithms and can only drive conservative expressions, they are difficult to apply to scenes that require fine expressions. To improve the expression driving effect, embodiments of the present invention provide an expression driving method and apparatus, an electronic device, and a storage medium. The expression driving method provided by embodiments of the invention can be applied to any electronic device capable of processing images, such as computers, mobile phones, and iPads, and is not specifically limited here.
The expression driving method provided by the embodiment of the invention is described in detail below.
Fig. 1 is a flowchart of an expression driving method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
Step 101, acquiring features of a target facial image of a target object.
Step 102, calculating expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object.
Each preset facial image is a facial image of the target object in a different expression state.
Step 103, calculating an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image, so that the expression drive controller controls the driven object to make the same expression as the target object in the target facial image based on the expression control coefficient.
The control coefficient corresponding to each preset facial image is: the parameters of the expression drive controller when it controls the driven object to make the same expression as the target object in that preset facial image.
With the method provided by embodiments of the invention, the features of a target facial image of a target object are acquired; expression amplitude parameters corresponding to the target facial image are calculated based on those features and the pre-extracted image features of each preset facial image of the target object; and an expression control coefficient corresponding to the target facial image is calculated based on the expression amplitude parameters and the pre-acquired control coefficients corresponding to each preset facial image, so that the expression drive controller controls the driven object, based on the expression control coefficient, to make the same expression as the target object in the target facial image. An expression of small amplitude can be regarded as a large-amplitude expression attenuated and superimposed with the neutral expression (the facial state of the target object when it makes no expression at all); for example, a slightly open mouth can be regarded as a wide-open mouth whose amplitude has been reduced. Both the expression amplitude change and the expression control coefficient follow a linear interpolation relationship: changing the expression amplitude of the target object changes its expression, and changing the expression control coefficient changes the expression of the driven object driven by that coefficient, so the driven object can be made to produce the same expression as the target object. The method therefore calculates the expression amplitude parameters of the target facial image from the image features of several preset facial images, then uses the interpolation relationship between the target object's expression amplitude changes and the driven object's expression control coefficients to compute, from the amplitude parameters and the control coefficient of each preset facial image, the expression control coefficient of the target facial image. The expression drive controller can then use this coefficient to control the driven object to make the same expression as the target object in the target facial image; the resulting expression is very close to that of the target object, improving the expression driving effect. With only a small number of preset facial images, the driven object can be driven to make the same fine-grained expression as the target object, so that expression driving can be applied to scenes requiring fine expressions, its workload is reduced, and its efficiency is improved.
Moreover, calculating the expression amplitude parameters and the expression control coefficient corresponding to the target facial image only requires solving a linear problem, which can be done at millisecond speed, so the expression driving method provided by embodiments of the invention can also be applied to real-time expression driving scenarios; that is, the application scenarios of expression driving are expanded.
In embodiments of the invention, the target object may be a person, an animal, or a puppet capable of producing expression changes; the driven object may be a virtual puppet, i.e., a character image created in forms such as painting, music, animation, and CG that performs in Internet virtual scenes or real scenes but does not exist in physical form, including but not limited to virtual idols, virtual singers, and the like. The essence of expression driving is to make the driven object synchronously produce the same expression as the target object. Besides variations in kind (closing the eyes, opening the mouth, and so on), the target object's expressions also vary in amplitude; for example, slightly raised mouth corners and a wide-open mouth differ in amplitude. A small-amplitude expression can be regarded as a large-amplitude expression superimposed with the neutral expression; for instance, slightly raised mouth corners can be regarded as a wide-open mouth superimposed with the neutral expression, where the neutral expression is the facial state of the target object when it makes no expression at all.
The expression amplitude change and the expression control coefficient follow a linear interpolation relationship: changing the expression amplitude of the target object changes its expression accordingly, and changing the expression control coefficient changes the expression of the driven object driven by that coefficient, so that the driven object can produce the same expression as the target object. Fig. 2 is a schematic diagram of the superposition of expressions of different amplitudes. As shown in fig. 2, (1) is the facial state of target object 201 when it makes no expression; (2) is the facial state when target object 201 closes its eyes and purses its mouth; (3) is the facial state when target object 201 opens its eyes and mouth wide. Both (2) and (3) are large-amplitude expressions of target object 201, and they are exactly opposite in trend.
Since a small-amplitude expression can be regarded as a large-amplitude expression superimposed with the neutral expression, as shown in fig. 2, the neutral expression in (1) can be obtained by superimposing the large-amplitude expressions of (2) and (3), with the expression amplitude parameters of both (2) and (3) equal to 0.5.
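In feature terms, this superposition can be written compactly (an illustrative reading, where $a_{(1)}$, $a_{(2)}$, $a_{(3)}$ denote the image features, in the sense defined below, of (1), (2), and (3) in fig. 2):

$$a_{(1)} \approx 0.5\,a_{(2)} + 0.5\,a_{(3)},\qquad \text{equivalently}\qquad a_{(2)} - a_{(1)} \approx -\bigl(a_{(3)} - a_{(1)}\bigr).$$

That is, the two large-amplitude expressions are opposite directions in feature space, which is why superimposing them with equal amplitude parameters of 0.5 recovers the neutral expression.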
In embodiments of the invention, n+1 facial images of the target object in different expression states may be collected in advance with an RGB camera as the preset facial images. Specifically, n facial images of the target object in exaggerated, extreme expression states and one facial image of the target object in the neutral expression state may be selected as the preset facial images. For example, facial images of the target object with a neutral expression, the left eye closed, the right eye closed, the mouth open, and the mouth closed may be acquired as preset facial images. Here n is a natural number greater than or equal to 1, and its specific value may be set according to the actual application scenario. Other expressions of the target object can be seen as combinations, at different amplitudes, of these extreme, exaggerated expressions.
For example, for the target object 201 shown in fig. 2, n may be set to 2: two facial images of target object 201 in exaggerated, extreme expression states and one facial image of it in the neutral expression state may be selected, e.g., (1), (2), and (3) in fig. 2, as the preset facial images of target object 201.
In embodiments of the invention, after the n+1 preset facial images of the target object are collected, for each preset facial image the expression drive controller may be used to control the driven object to make the same expression as the target object in that image, and the parameters of the expression drive controller at that moment are taken as the control coefficient corresponding to that preset facial image. The set of control coefficients corresponding to the n+1 preset facial images of the target object is denoted $\{B_0, B_1, B_2, \dots, B_n\}$, where $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state.
In embodiments of the invention, the image features of each preset facial image of the target object may be extracted in advance. Specifically, since the collected preset facial images contain much information unrelated to the target object's expression, such as background and clothing, to reduce the interference of such information, for each preset facial image the coordinate information of facial key points or the 3D Morphable Face Model (3DMM) coefficients may be extracted as the image feature of that image. The facial key points may be points on the target object's facial contour, nose, eyes, mouth, eyebrows, and so on. Alternatively, for each preset facial image, the contour information of the target object's facial features in that image may be extracted as its image feature. The set of image features of the preset facial images of the target object is denoted $\{a_0, a_1, a_2, \dots, a_n\}$, where $a_0$ is the image feature of the preset facial image in which the target object is in the neutral expression state. These image features may be stored in matrix form: each column of the matrix is obtained by subtracting the image feature of the neutral-expression preset facial image from the image feature of a non-neutral preset facial image, forming a set of expression bases; that is, $A = [\,a_1 - a_0,\ \dots,\ a_n - a_0\,]$ denotes the set of expression bases of the target object.
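As one concrete possibility, facial key-point coordinates can serve as the image feature. The following is a minimal sketch, not the extractor mandated by the embodiment; the use of dlib's 68-point landmark model and the model file path are assumptions for illustration:

```python
# Sketch: extract facial key-point coordinates as an image feature vector.
# Assumption: dlib's 68-point landmark model is used; the .dat path is illustrative.
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def keypoint_feature(image: np.ndarray) -> np.ndarray:
    """Return a flat (136,) vector of 68 (x, y) landmark coordinates."""
    faces = detector(image, 1)             # detect the face region
    shape = predictor(image, faces[0])     # locate the 68 key points
    pts = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)
    return pts.ravel()
```

A 3DMM-coefficient extractor could be substituted for keypoint_feature without changing anything downstream, since the method only requires that preset and target features be of the same kind.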
The pre-extracted image features of the preset facial images must be of the same kind as the acquired features of the target facial image: if the coordinate information of facial key points was extracted as the image feature of each preset facial image, the acquired feature of the target facial image should likewise be the coordinate information of facial key points in the target facial image; if facial contour information of the target object was extracted as the image feature of each preset facial image, the acquired feature should be the facial contour information of the target object in the target facial image.
In embodiments of the invention, before the step of acquiring the features of the target facial image of the target object is performed, the step of extracting the image features of each preset facial image of the target object and the step of acquiring the control coefficient corresponding to each preset facial image may be performed.
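A sketch of this offline preparation step under the same assumptions (the file names, the choice of n = 4 preset expressions, and the keypoint_feature helper from the sketch above are illustrative):

```python
# Sketch: precompute the image features {a_0, ..., a_n} and the expression-basis
# matrix A from the n+1 preset facial images (file names are hypothetical).
import numpy as np
import cv2

preset_paths = ["neutral.png", "left_eye_closed.png", "right_eye_closed.png",
                "mouth_open.png", "mouth_closed.png"]          # n = 4 here
features = np.stack([keypoint_feature(cv2.imread(p)) for p in preset_paths])
a0 = features[0]                       # feature of the neutral-expression image
A = (features[1:] - a0).T              # columns are the expression bases a_i - a_0
np.savez("expression_bases.npz", features=features, A=A)
```

The control coefficients {B_0, ..., B_n} would be recorded separately, by reading the expression drive controller's parameters after posing the driven object for each preset expression.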
In a possible implementation, the calculating, based on the features and the pre-extracted image features of each preset facial image of the target object, of the expression amplitude parameters corresponding to the target facial image includes: calculating the expression amplitude parameters corresponding to the target facial image using the following formula:

$$\min_{w}\ \|Aw - x\|,\quad \text{s.t.}\ w_i \ge 0$$

where $w = (w_1, \dots, w_n)^T$ and $w_i$ ($1 \le i \le n$) is the $i$-th expression amplitude parameter corresponding to the target facial image; $A = [\,a_1 - a_0,\ \dots,\ a_n - a_0\,]$ is the matrix of expression bases, $a_0, a_1, \dots, a_n$ being the image features of the $n+1$ preset facial images, with $a_0$ the image feature of the preset facial image in which the target object is in the neutral expression state; and $x = a - a_0$, where $a$ is the feature of the target facial image.
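This is a non-negative least-squares problem. A minimal sketch of solving it with SciPy's nnls routine, assuming the features are stacked as NumPy arrays as in the preparation sketch above:

```python
# Sketch: solve min ||A w - x|| subject to w_i >= 0 for the amplitude parameters.
import numpy as np
from scipy.optimize import nnls

def solve_amplitudes(preset_features: np.ndarray, target_feature: np.ndarray) -> np.ndarray:
    """preset_features: (n+1, d) array whose row 0 is the neutral feature a_0;
    target_feature: (d,) feature a of the target facial image.
    Returns the n expression amplitude parameters w_1, ..., w_n."""
    a0 = preset_features[0]
    A = (preset_features[1:] - a0).T   # (d, n) matrix of expression bases
    x = target_feature - a0            # (d,) deviation from the neutral expression
    w, _residual = nnls(A, x)          # non-negative least squares
    return w
```

Because this is a small linear solve, the step runs at millisecond speed, consistent with the real-time claim made below.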
Fig. 3 is a schematic diagram of the optimal projection coefficients of the expression of the target object in the target facial image onto the expressions of the target object in the preset facial images. As shown in fig. 3, the expression of the target object in the target facial image can be approximated by multiplying the expression in each preset facial image by its projection coefficient and summing; subtracting the neutral expression of the target object and taking the norm of the residual, the projection coefficients $w_1, \dots, w_n$ that minimize this norm are the expression amplitude parameters corresponding to the target facial image.
In a possible implementation, the calculating of the expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image includes: calculating the expression control coefficient corresponding to the target facial image using the following formula:

$$B = B_0 + \sum_{i=1}^{n} w_i\,(B_i - B_0)$$

where $B$ is the expression control coefficient corresponding to the target facial image; $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state; $B_i$ is the control coefficient corresponding to the $i$-th non-neutral preset facial image; and $n$ is the number of preset facial images excluding the neutral-expression one.
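A minimal sketch of this linear interpolation, assuming each control coefficient is stored as a vector of controller parameters (the array layout is an assumption):

```python
# Sketch: B = B_0 + sum_i w_i * (B_i - B_0), the blend of control coefficients.
import numpy as np

def expression_control_coefficient(w: np.ndarray, B: np.ndarray) -> np.ndarray:
    """w: (n,) amplitude parameters; B: (n+1, k) control coefficients,
    row 0 being B_0 for the neutral expression. Returns the (k,) coefficient."""
    B0 = B[0]
    return B0 + (B[1:] - B0).T @ w     # linear interpolation over the presets
```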
In the embodiment of the present invention, the target face image may be any face image of the target object acquired by using an RGB camera or other image acquisition devices.
In the embodiment of the invention, after the expression control coefficient corresponding to the target facial image is obtained, the parameter of the expression driving controller can be adjusted to the expression control coefficient, and the driven object is controlled to make the same expression as the target object in the target facial image. The expression driving controller may be any electronic device that can control the driven object to make the same expression as the target object, and is not particularly limited herein.
For example, for the target object 201 shown in fig. 2, suppose we want to drive the virtual puppet with the expression in a target facial image of target object 201 (target object 201 closing its eyes and opening its mouth wide). This may be done as follows:
a plurality of face images of the target object 201 may be captured by the RGB camera in advance. Then, n may be set to be 2, and fig. 2(1), (2), and (3) of the plurality of face images of the acquired target object 201 may be selected as the preset face image of the target object 201. Fig. 2(2) and (3) show the target object 201 in an exaggerated and extreme expression state, and fig. 2(1) shows the target object 201 in a neutral expression state. Then, the 3DMM coefficient of each preset face image of the target object 201 may be extracted as an image feature of the preset face image and recorded as a set { a }(201)0,a(201)1,a(201)2Wherein a(201)0Is an image feature of a preset facial image (i.e., (1) in fig. 2) when the target object 201 is in a neutral expression state, a(201)1Is an image feature of a preset facial image when the target object 201 is in an expression state shown in (2) in fig. 2, a(201)2Is a preset face map when the target object 201 is in an expression state shown in (3) in fig. 2Image characteristics of the image.
The image features of the 3 preset facial images of target object 201 may then be stored in matrix form: $A_{(201)} = [\,a_{(201)1} - a_{(201)0},\ a_{(201)2} - a_{(201)0}\,]$, where $A_{(201)}$ denotes the set of expression bases of target object 201. The expression amplitude parameters $w_{(201)i}$ corresponding to the target facial image of target object 201 can then be calculated with the formula:

$$\min_{w_{(201)}}\ \|A_{(201)}\,w_{(201)} - x_{(201)}\|,\quad \text{s.t.}\ w_{(201)i} \ge 0$$

where $w_{(201)i}$ is the $i$-th expression amplitude parameter, $x_{(201)} = a_{(201)} - a_{(201)0}$, and $a_{(201)}$ is the feature of the target facial image of target object 201. This formula yields the expression amplitude parameters $w_{(201)1}$ and $w_{(201)2}$.
The expression control coefficient $B_{(201)}$ corresponding to the target facial image can then be calculated with the formula:

$$B_{(201)} = B_{(201)0} + \sum_{i=1}^{2} w_{(201)i}\,\bigl(B_{(201)i} - B_{(201)0}\bigr)$$

where $B_{(201)0}$ is the control coefficient corresponding to the neutral-expression preset facial image of target object 201 (i.e., the control coefficient corresponding to (1) in fig. 2), $B_{(201)1}$ is the control coefficient corresponding to (2) in fig. 2, and $B_{(201)2}$ is that corresponding to (3) in fig. 2.
The parameters of the expression drive controller may then be adjusted to the expression control coefficient $B_{(201)}$, controlling the virtual puppet to make the same expression as in the target facial image of target object 201, that is, to make the closed-eye, wide-open-mouth expression.
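A toy numeric check of this n = 2 workflow (the three-dimensional feature vectors below are fabricated for illustration, not real 3DMM coefficients): a target feature synthesized from known amplitudes should be recovered by the solve.

```python
# Sketch: sanity-check that NNLS recovers the amplitudes used to build a target.
import numpy as np
from scipy.optimize import nnls

a0 = np.array([0.0, 0.0, 0.0])             # neutral feature a_(201)0
a1 = np.array([1.0, 0.0, 0.5])             # feature of preset (2): eyes closed, mouth pursed
a2 = np.array([-1.0, 0.8, -0.5])           # feature of preset (3): eyes and mouth wide open
A = np.stack([a1 - a0, a2 - a0], axis=1)   # expression bases as columns
a = a0 + 0.9 * (a1 - a0) + 0.6 * (a2 - a0) # synthetic target facial image feature
w, _ = nnls(A, a - a0)
print(w)                                   # ~[0.9, 0.6]: the amplitudes are recovered
```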
With the method provided by embodiments of the invention, the expression amplitude parameters corresponding to the target facial image can be calculated from the image features of several preset facial images; the expression control coefficient corresponding to the target facial image is then calculated from the amplitude parameters and the control coefficient of each preset facial image, using the interpolation relationship between the target object's expression amplitude changes and the driven object's expression control coefficients; and the expression drive controller then controls the driven object to make the same expression as the target object in the target facial image. The method can drive the driven object to make the same fine-grained expression as the target object with only a small number of preset facial images, improving the expression driving effect and allowing expression driving to be applied to scenes that require fine expressions. Compared with traditional expression driving methods, which drive the driven object by capturing a large number of facial images of the living being, the method of embodiments of the invention does not require annotating a large number of facial images: the target object only needs to provide a small number of preset facial images with extreme expressions. Based on these, the amplitude-change parameters between the target facial image and the preset facial images are calculated, the expression control coefficient is determined from these parameters, and the driven object is driven directly with this coefficient to make the same expression as the target object. The amount of annotation data, and thus the workload of expression driving, is reduced, and the efficiency of expression driving is improved.
Moreover, calculating the expression amplitude parameters and the expression control coefficient of the target facial image only requires solving a linear problem, which can be done at millisecond speed, so the expression driving method provided by embodiments of the invention can be applied to real-time expression driving scenarios, expanding the application scenarios of expression driving. In addition, the control coefficients and the facial-image features can be adapted to different application scenarios, so the method is not tied to a particular expression drive controller or facial-image feature standard and is highly flexible.
In addition, the method provided by embodiments of the invention can achieve high-precision expression driving for the target object with only a small number of preset facial images and control coefficients of that object. Compared with a plug-and-play general-purpose scheme, collecting data for a specific target object makes the expression of the driven object finer and closer to the target object's expression; on the other hand, since the control coefficients corresponding to the preset facial images of the target object can be provided freely, the method can be customized by the target object, does not need to meet particular specifications, and offers a high degree of freedom.
Fig. 4 is a schematic diagram of an expression driving system according to an embodiment of the present invention. As shown in fig. 4, the expression driving system may include an RGB camera 401, a parameter determination module 402, and an expression driving module 403. The RGB camera captures several key expressions of the target object, namely its exaggerated expressions, extreme expressions, and neutral expression. The parameter determination module acquires these key expressions from the RGB camera, drives the virtual puppet to make the same expression as each key expression, and records the parameters of the expression drive controller when the virtual puppet makes each key expression as the control coefficient corresponding to that key expression. The parameter determination module then sends the control coefficients to the expression driving module, which can acquire any expression of the target object from the RGB camera in real time, determine the control coefficient corresponding to that expression using the control coefficients of the key expressions, adjust the parameters of the expression drive controller to that coefficient, and control the virtual puppet to make the same expression, thereby realizing expression driving of the virtual puppet.
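Putting the pieces together, a hedged sketch of the fig. 4 real-time loop, reusing the helpers from the sketches above; the precomputed control-coefficient file and the controller object with a set_params method are hypothetical stand-ins for the unspecified expression drive controller interface:

```python
# Sketch: real-time expression driving loop (camera 401 -> modules 402/403).
import numpy as np
import cv2

data = np.load("expression_bases.npz")
features = data["features"]                  # (n+1, d) preset image features
B = np.load("control_coefficients.npy")      # (n+1, k) control coefficients (assumed precomputed)
controller = ExpressionDriveController()     # hypothetical wrapper, assumed to exist

cap = cv2.VideoCapture(0)                    # RGB camera 401
while True:
    ok, frame = cap.read()
    if not ok:
        break
    a = keypoint_feature(frame)                    # feature of the live target facial image
    w = solve_amplitudes(features, a)              # step 102: amplitude parameters
    b = expression_control_coefficient(w, B)       # step 103: expression control coefficient
    controller.set_params(b)                       # hypothetical API: drive the virtual puppet
cap.release()
```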
Based on the same inventive concept, according to the expression driving method provided in the above embodiment of the present invention, correspondingly, another embodiment of the present invention further provides an expression driving apparatus, a schematic structural diagram of which is shown in fig. 5, and the expression driving apparatus specifically includes:
a feature obtaining module 501, configured to obtain features of a target face image of a target object;
an amplitude parameter calculation module 502, configured to calculate expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object, each preset facial image being a facial image of the target object in a different expression state;
a control coefficient calculation module 503, configured to calculate an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image, so that the expression drive controller controls the driven object to make the same expression as the target object in the target facial image based on the expression control coefficient; wherein the control coefficient corresponding to each preset facial image is: the parameters of the expression drive controller when it controls the driven object to make the same expression as the target object in that preset facial image.
With the apparatus provided by embodiments of the invention, the features of a target facial image of a target object are acquired; expression amplitude parameters corresponding to the target facial image are calculated based on those features and the pre-extracted image features of each preset facial image of the target object; and an expression control coefficient corresponding to the target facial image is calculated based on the expression amplitude parameters and the pre-acquired control coefficients corresponding to each preset facial image, so that the expression drive controller controls the driven object, based on the expression control coefficient, to make the same expression as the target object in the target facial image. As explained above, a small-amplitude expression can be regarded as a large-amplitude expression attenuated and superimposed with the neutral expression, and the expression amplitude change and the expression control coefficient follow a linear interpolation relationship. The apparatus therefore calculates the expression amplitude parameters of the target facial image from the image features of several preset facial images, computes the expression control coefficient from these parameters and the control coefficient of each preset facial image, and lets the expression drive controller use this coefficient to control the driven object, whose expression then closely matches that of the target object in the target facial image. With only a small number of preset facial images, the driven object can be driven to make the same fine-grained expression as the target object, improving the expression driving effect, allowing expression driving to be applied to scenes requiring fine expressions, reducing the workload of expression driving, and improving its efficiency. Moreover, calculating the expression amplitude parameters and the expression control coefficient only requires solving a linear problem, which can be done at millisecond speed, so the expression driving apparatus provided by embodiments of the invention can be applied to real-time expression driving scenarios, expanding the application scenarios of expression driving.
Optionally, the amplitude parameter calculating module 502 is specifically configured to calculate, based on the features and the image features of each preset face image of the target object extracted in advance, expression amplitude parameters corresponding to the target face image by using the following formula:
$$\min_{w}\ \|Aw - x\|,\quad \text{s.t.}\ w_i \ge 0$$

where $w = (w_1, \dots, w_n)^T$ and $w_i$ ($1 \le i \le n$) is the $i$-th expression amplitude parameter corresponding to the target facial image; $A = [\,a_1 - a_0,\ \dots,\ a_n - a_0\,]$ is the matrix of expression bases, $a_0, a_1, \dots, a_n$ being the image features of the $n+1$ preset facial images, with $a_0$ the image feature of the preset facial image in which the target object is in the neutral expression state; and $x = a - a_0$, where $a$ is the feature of the target facial image.
Optionally, the control coefficient calculation module 503 is specifically configured to calculate, based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image, the expression control coefficient corresponding to the target facial image using the following formula:

$$B = B_0 + \sum_{i=1}^{n} w_i\,(B_i - B_0)$$

where $B$ is the expression control coefficient corresponding to the target facial image; $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state; $B_i$ is the control coefficient corresponding to the $i$-th non-neutral preset facial image; and $n$ is the number of preset facial images excluding the neutral-expression one.
Optionally, the feature obtaining module 501 is specifically configured to obtain coordinate information of a key point of a face in a target face image of a target object, as a feature of the target face image.
Thus, with the apparatus provided by embodiments of the invention, the driven object can be driven to make the same fine-grained expression as the target object with only a small number of preset facial images, improving the expression driving effect, allowing expression driving to be applied to scenes requiring fine expressions, reducing the workload of expression driving, and improving its efficiency. In addition, the expression driving apparatus provided by embodiments of the invention is not tied to a particular expression drive controller or facial-image feature standard and is highly flexible. High-precision expression driving for the target object can be achieved with only a small number of preset facial images and control coefficients of that object. Compared with a plug-and-play general-purpose scheme, collecting data for a specific target object makes the expression of the driven object finer and closer to the target object's expression; on the other hand, since the control coefficients corresponding to the preset facial images of the target object can be provided freely, the apparatus can be customized by the target object, does not need to meet particular specifications, and offers a high degree of freedom.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another via the communication bus 604.
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
acquiring the characteristics of a target face image of a target object;
calculating expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object; each preset facial image is a facial image of the target object in different expression states;
calculating an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and the pre-acquired control coefficient corresponding to each preset facial image, so that an expression drive controller controls a driven object to make the same expression as the target object in the target facial image based on the expression control coefficient; wherein the control coefficient corresponding to each preset facial image is: the parameters of the expression drive controller when it controls the driven object to make the same expression as the target object in that preset facial image.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In still another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the expression driving method according to any of the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the expression driving method as described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the apparatus, electronic device, storage medium, and computer program product embodiments are substantially similar to the method embodiments, their descriptions are relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. An expression driving method, comprising:
acquiring features of a target facial image of a target object;
calculating expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object, wherein each preset facial image is a facial image of the target object in a different expression state;
calculating an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and a control coefficient, obtained in advance, corresponding to each preset facial image, so that an expression driving controller controls a driven object to make the same expression as the target object in the target facial image based on the expression control coefficient; wherein the control coefficient corresponding to each preset facial image is the parameter of the expression driving controller when the expression driving controller controls the driven object to make the same expression as the target object in that preset facial image.
2. The method according to claim 1, wherein the calculating of the expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object comprises:
calculating the expression amplitude parameters corresponding to the target facial image, based on the features and the pre-extracted image features of each preset facial image of the target object, by using the following formula:
$$\min_{w}\ \lVert Aw - x \rVert, \quad \text{s.t.}\ w_i \ge 0$$

wherein $w = (w_1, w_2, \ldots, w_n)^{T}$, and $w_i$ is the $i$-th expression amplitude parameter corresponding to the target facial image, $1 \le i \le n$; $A = (a_1^{T} - a_0^{T},\ a_2^{T} - a_0^{T},\ \ldots,\ a_n^{T} - a_0^{T})$, where $a_0, a_1, \ldots, a_n$ are the image features corresponding to the $n+1$ preset facial images, $a_0$ is the image feature of the preset facial image in which the target object is in the neutral expression state, and $a_0^{T}, a_1^{T}, \ldots, a_n^{T}$ are the transposes of $a_0, a_1, \ldots, a_n$; $x = a - a_0$, where $a$ is the feature of the target facial image.
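As an illustration only (the claim specifies no particular solver), the non-negative least-squares problem above maps directly onto SciPy's `nnls` routine. Below is a minimal sketch, assuming each image feature is a flattened numeric vector; the function and variable names are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

def expression_amplitudes(a, a_presets):
    """Solve min ||A w - x||, s.t. w_i >= 0, for the amplitude parameters.

    a         : feature vector of the target facial image, shape (d,)
    a_presets : sequence of n+1 preset-image feature vectors;
                a_presets[0] is the neutral-expression feature a_0
    """
    a0 = np.asarray(a_presets[0], dtype=float)
    # Each column of A is the offset of one non-neutral preset from neutral.
    A = np.stack([np.asarray(ai, dtype=float) - a0 for ai in a_presets[1:]],
                 axis=1)
    x = np.asarray(a, dtype=float) - a0      # x = a - a_0
    w, _residual = nnls(A, x)                # w_i >= 0 by construction
    return w                                 # shape (n,)
```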
3. The method according to claim 1, wherein the calculating of the expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and the control coefficient, obtained in advance, corresponding to each preset facial image comprises:
calculating the expression control coefficient corresponding to the target facial image, based on the expression amplitude parameters and the control coefficient, obtained in advance, corresponding to each preset facial image, by using the following formula:
$$B = B_0 + \sum_{i=1}^{n} w_i \left( B_i - B_0 \right)$$

wherein $B$ is the expression control coefficient corresponding to the target facial image; $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state; $B_i$ is the control coefficient corresponding to the $i$-th preset facial image other than the neutral-expression preset facial image; $w_i$ is the $i$-th expression amplitude parameter; and $n$ is the number of preset facial images other than the neutral-expression preset facial image.
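A minimal sketch of this blend, under the amplitude-weighted offset form given above; `B_presets` is a hypothetical array holding one control coefficient vector per preset facial image:

```python
import numpy as np

def expression_control_coefficient(w, B_presets):
    """Blend preset control coefficients by the expression amplitudes.

    w         : amplitude parameters from the NNLS step, shape (n,)
    B_presets : n+1 control coefficient vectors, shape (n+1, m);
                B_presets[0] is the neutral-expression coefficient B_0
    """
    B_presets = np.asarray(B_presets, dtype=float)
    B0 = B_presets[0]
    offsets = B_presets[1:] - B0                 # B_i - B_0, shape (n, m)
    # B = B_0 + sum_i w_i * (B_i - B_0)
    return B0 + np.asarray(w, dtype=float) @ offsets
```

Chaining the two sketches reproduces the claimed pipeline: `B = expression_control_coefficient(expression_amplitudes(a, a_presets), B_presets)`.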
4. The method according to any one of claims 1-3, wherein the acquiring of the features of the target facial image of the target object comprises:
acquiring coordinate information of facial key points in the target facial image of the target object as the features of the target facial image.
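One plausible way to assemble such a feature vector from detected key points; the centering and scale normalization below are illustrative assumptions, since the claim requires only the coordinate information itself:

```python
import numpy as np

def landmark_feature(landmarks):
    """Flatten 2-D facial key-point coordinates into a feature vector.

    landmarks : (x, y) key-point coordinates, shape (k, 2).
    Centering and scale normalization keep features comparable
    across frames and camera distances.
    """
    pts = np.asarray(landmarks, dtype=float)
    pts -= pts.mean(axis=0)                      # remove translation
    scale = np.linalg.norm(pts, axis=1).mean()
    pts /= max(scale, 1e-8)                      # remove overall scale
    return pts.ravel()                           # shape (2k,)
```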
5. An expression driving apparatus, comprising:
the feature acquisition module is used for acquiring features of a target facial image of a target object;
the amplitude parameter calculation module is used for calculating expression amplitude parameters corresponding to the target facial image based on the features and the pre-extracted image features of each preset facial image of the target object, wherein each preset facial image is a facial image of the target object in a different expression state;
the control coefficient calculation module is used for calculating an expression control coefficient corresponding to the target facial image based on the expression amplitude parameters and a control coefficient, obtained in advance, corresponding to each preset facial image, so that an expression driving controller controls a driven object to make the same expression as the target object in the target facial image based on the expression control coefficient; wherein the control coefficient corresponding to each preset facial image is the parameter of the expression driving controller when the expression driving controller controls the driven object to make the same expression as the target object in that preset facial image.
6. The apparatus according to claim 5, wherein the amplitude parameter calculation module is specifically configured to calculate, based on the features and the pre-extracted image features of each preset facial image of the target object, the expression amplitude parameters corresponding to the target facial image by using the following formula:
$$\min_{w}\ \lVert Aw - x \rVert, \quad \text{s.t.}\ w_i \ge 0$$

wherein $w = (w_1, w_2, \ldots, w_n)^{T}$, and $w_i$ is the $i$-th expression amplitude parameter corresponding to the target facial image, $1 \le i \le n$; $A = (a_1^{T} - a_0^{T},\ a_2^{T} - a_0^{T},\ \ldots,\ a_n^{T} - a_0^{T})$, where $a_0, a_1, \ldots, a_n$ are the image features corresponding to the $n+1$ preset facial images, $a_0$ is the image feature of the preset facial image in which the target object is in the neutral expression state, and $a_0^{T}, a_1^{T}, \ldots, a_n^{T}$ are the transposes of $a_0, a_1, \ldots, a_n$; $x = a - a_0$, where $a$ is the feature of the target facial image.
7. The apparatus according to claim 5, wherein the control coefficient calculation module is specifically configured to calculate, based on the expression amplitude parameters and the control coefficient, obtained in advance, corresponding to each preset facial image, the expression control coefficient corresponding to the target facial image by using the following formula:
$$B = B_0 + \sum_{i=1}^{n} w_i \left( B_i - B_0 \right)$$

wherein $B$ is the expression control coefficient corresponding to the target facial image; $B_0$ is the control coefficient corresponding to the preset facial image in which the target object is in the neutral expression state; $B_i$ is the control coefficient corresponding to the $i$-th preset facial image other than the neutral-expression preset facial image; $w_i$ is the $i$-th expression amplitude parameter; and $n$ is the number of preset facial images other than the neutral-expression preset facial image.
8. The apparatus according to any one of claims 5 to 7, wherein the feature acquisition module is specifically configured to obtain coordinate information of facial key points in the target facial image of the target object as the features of the target facial image.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 4 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111376028.1A CN114067407A (en) | 2021-11-19 | 2021-11-19 | Expression driving method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114067407A (en) | 2022-02-18 |
Family
ID=80278774
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111376028.1A (Pending) | 2021-11-19 | 2021-11-19 | Expression driving method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114067407A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114627218A (en) * | 2022-05-16 | 2022-06-14 | 成都市谛视无限科技有限公司 | Human face fine expression capturing method and device based on virtual engine |
CN114693845A (en) * | 2022-04-08 | 2022-07-01 | Oppo广东移动通信有限公司 | Method and apparatus for driving stylized object, medium, and electronic device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019024751A1 (en) * | 2017-07-31 | 2019-02-07 | 腾讯科技(深圳)有限公司 | Facial expression synthesis method and apparatus, electronic device, and storage medium |
CN111968203A (en) * | 2020-06-30 | 2020-11-20 | 北京百度网讯科技有限公司 | Animation driving method, animation driving device, electronic device, and storage medium |
CN112307942A (en) * | 2020-10-29 | 2021-02-02 | 广东富利盛仿生机器人股份有限公司 | Facial expression quantitative representation method, system and medium |
CN112329663A (en) * | 2020-11-10 | 2021-02-05 | 西南大学 | Micro-expression time detection method and device based on face image sequence |
CN112614213A (en) * | 2020-12-14 | 2021-04-06 | 杭州网易云音乐科技有限公司 | Facial expression determination method, expression parameter determination model, medium and device |
CN113066156A (en) * | 2021-04-16 | 2021-07-02 | 广州虎牙科技有限公司 | Expression redirection method, device, equipment and medium |
CN113095134A (en) * | 2021-03-08 | 2021-07-09 | 北京达佳互联信息技术有限公司 | Facial expression extraction model generation method and device, and facial image generation method and device |
WO2021227916A1 (en) * | 2020-05-09 | 2021-11-18 | 维沃移动通信有限公司 | Facial image generation method and apparatus, electronic device, and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021169839A1 (en) | Action restoration method and device based on skeleton key points | |
CN108335345B (en) | Control method and device of facial animation model and computing equipment | |
JP7268071B2 (en) | Virtual avatar generation method and generation device | |
Le et al. | Live speech driven head-and-eye motion generators | |
WO2020024484A1 (en) | Method and device for outputting data | |
WO2016110199A1 (en) | Expression migration method, electronic device and system | |
KR102462139B1 (en) | Device, method and program that implements an educational metaverse using 3D asset placement | |
KR20220028127A (en) | Animation making method and apparatus, computing device and storage medium | |
CN107657651A (en) | Expression animation generation method and device, storage medium and electronic installation | |
CN111583399B (en) | Image processing method, device, equipment, medium and electronic equipment | |
CN107180446A (en) | The expression animation generation method and device of character face's model | |
CN114067407A (en) | Expression driving method and device, electronic equipment and storage medium | |
CN113362263B (en) | Method, apparatus, medium and program product for transforming an image of a virtual idol | |
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium | |
WO2022033206A1 (en) | Expression generation method and apparatus for animation object, storage medium, and electronic device | |
CN109427105A (en) | The generation method and device of virtual video | |
KR102250163B1 (en) | Method and apparatus of converting 3d video image from video image using deep learning | |
CN113223125B (en) | Face driving method, device, equipment and medium for virtual image | |
CN112634413B (en) | Method, apparatus, device and storage medium for generating model and generating 3D animation | |
CN115601484A (en) | Virtual character face driving method and device, terminal equipment and readable storage medium | |
WO2023284634A1 (en) | Data processing method and related device | |
CN111563490A (en) | Face key point tracking method and device and electronic equipment | |
CN110188630A (en) | A kind of face identification method and camera | |
CN112714337A (en) | Video processing method and device, electronic equipment and storage medium | |
CN115170706A (en) | Artificial intelligence neural network learning model construction system and construction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |