
CN113902768B - Three-dimensional face model edge optimization method and system based on micro-rendering - Google Patents

Three-dimensional face model edge optimization method and system based on micro-rendering

Info

Publication number
CN113902768B
CN113902768B
Authority
CN
China
Prior art keywords
face model
dimensional face
loss function
information
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111180132.3A
Other languages
Chinese (zh)
Other versions
CN113902768A (en)
Inventor
俞庭
李炼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Versatile Media Co ltd
Original Assignee
Zhejiang Versatile Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Versatile Media Co ltd filed Critical Zhejiang Versatile Media Co ltd
Priority to CN202111180132.3A priority Critical patent/CN113902768B/en
Publication of CN113902768A publication Critical patent/CN113902768A/en
Application granted granted Critical
Publication of CN113902768B publication Critical patent/CN113902768B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a three-dimensional face model edge optimization method and system based on differentiable rendering, relating to the field of computer technology. Edge feature information is extracted from the depth information through a differentiable step, and barycentric coordinate interpolation is performed on the edge feature information to obtain edge feature points of the three-dimensional face model. A total loss function Lall, composed of a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg, is then constructed and iterated to obtain the optimal expression weights, from which the optimal three-dimensional face model is finally generated. The optimal three-dimensional face model matches the target face image closely and looks realistic.

Description

Three-dimensional face model edge optimization method and system based on differentiable rendering
Technical Field
The invention relates to the field of computer technology, in particular to a three-dimensional face model edge optimization method and system based on differentiable rendering.
Background
In the prior art, the usual way to drive an expression from a picture is to extract face key points from the picture, find the model indices corresponding to the back-projection of those key points onto a three-dimensional face model, and solve for the weights of the model's feature vectors with a least-squares algorithm, so that the face key points and the corresponding coordinate points of the three-dimensional face model are as close as possible.
However, a three-dimensional face model built in this way performs poorly in certain edge regions and has low similarity to the captured target face image, because those edge regions have no specific model index. If a fixed model index is chosen instead, the expression of the three-dimensional face model becomes inconsistent with the expression of the target face image, which is especially noticeable when driving a hyper-realistic human model.
Disclosure of Invention
The invention aims to solve the problems described in the background section and provides a three-dimensional face model edge optimization method and system based on differentiable rendering.
In order to achieve the above purpose, the present invention first provides a three-dimensional face model edge optimization method based on differentiable rendering, which comprises the following steps:
Acquiring a target face image;
Initializing expression weights of the face model and generating an initial three-dimensional face model;
Generating a standardized three-dimensional face model, wherein the standardized three-dimensional face model is aligned with the target face image;
Rasterizing the standardized three-dimensional face model to obtain two-dimensional image information, wherein the two-dimensional image information comprises depth information, face information and barycentric coordinate information, and the depth information, the face information and the barycentric coordinate information correspond to pixel coordinates;
extracting edge feature information from the depth information through a differentiable step;
Obtaining edge feature points by acquiring barycentric coordinate information corresponding to the edge feature information and interpolating;
establishing a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg according to the key points in the target face image and the edge feature points, wherein the distribution loss function Ldis = |Var(S1) - Var(S2)|, the chamfer distance loss function Lchamfer is the chamfer distance between S1 and S2, the regularization loss function Lreg is the mean square sum of the expression weights of the face model, S1 is the set of key points, S2 is the set of edge feature points, and Var is the variance operator;
Performing a weighted combination of the distribution loss function Ldis, the chamfer distance loss function Lchamfer and the regularization loss function Lreg to obtain a total loss function Lall = w1 × Ldis + w2 × Lchamfer + w3 × Lreg, wherein w1, w2 and w3 are hyper-parameters of the total loss function;
performing back-propagation iterations on the total loss function Lall until Lall reaches its minimum value, so as to obtain the optimal expression weights;
and generating an optimal three-dimensional face model according to the optimal expression weight.
Optionally, the extracting of edge feature information from the depth information through the differentiable step includes the following steps: extracting the depth information located on the standardized three-dimensional face model to generate a first depth map; convolving the first depth map to generate a second depth map, wherein the second depth map displays the edge contour line of the standardized three-dimensional face model; and optimizing the second depth map so as to complete the edge feature extraction of the standardized three-dimensional face model.
Optionally, the depth information located on the standardized three-dimensional face model is extracted by the formula norm(z) = (z+1)/(z+1.001), where z is the depth information corresponding to all pixel coordinates.
Optionally, the edge feature information is edge feature information of the inner lip, and the edge feature points are edge feature points of the inner lip.
Optionally, the obtaining of edge feature points by acquiring barycentric coordinate information corresponding to the edge feature information and interpolating includes the following steps: acquiring the specific pixel coordinates with pixel values greater than zero in the edge feature information; acquiring the face information and barycentric coordinate information corresponding to the specific pixel coordinates; and obtaining the edge feature points through barycentric coordinate interpolation.
Optionally, the generating of the standardized three-dimensional face model includes the following steps: acquiring camera parameters corresponding to the target face image; and generating the standardized three-dimensional face model by applying the transpose R^T of the camera rotation matrix R and the camera displacement vector t to the initial three-dimensional face model M, followed by the perspective transformation P and the pixel correction value p_bias.
Optionally, an initial three-dimensional face model is generated by the formula M = b + Σi Wi × Fi, wherein M is the initial three-dimensional face model, b is the base three-dimensional face model, Fi is the i-th feature vector of the three-dimensional face model, and Wi is the i-th expression weight of the three-dimensional face model.
Optionally, the total loss function Lall is iteratively updated by the Adam algorithm until the total loss function Lall reaches its minimum value, so as to obtain the optimal expression weights.
The embodiment of the invention also provides a three-dimensional face model edge optimization system based on differentiable rendering, which comprises the following modules:
a face image acquisition module configured to acquire a target face image;
an initial face model generation module configured to initialize the face model expression weights and generate an initial three-dimensional face model;
a standardized face model generation module configured to generate a standardized three-dimensional face model that is aligned with the target face image;
a rasterization processing module configured to perform rasterization processing on the standardized three-dimensional face model to obtain two-dimensional image information, wherein the two-dimensional image information comprises depth information, face information and barycentric coordinate information, and the depth information, the face information and the barycentric coordinate information correspond to pixel coordinates;
an edge feature information extraction module configured to extract edge feature information from the depth information through a differentiable step;
the barycentric coordinate interpolation module is configured to obtain edge feature points by acquiring barycentric coordinate information corresponding to the edge feature information and interpolating;
a loss function generation module configured to establish a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg according to the key points in the target face image and the edge feature points, wherein the distribution loss function Ldis = |Var(S1) - Var(S2)|, the chamfer distance loss function Lchamfer is the chamfer distance between S1 and S2, the regularization loss function Lreg is the mean square sum of the expression weights of the face model, S1 is the set of key points, S2 is the set of edge feature points, and Var is the variance operator;
a total loss function generation module configured to perform a weighted combination of the distribution loss function Ldis, the chamfer distance loss function Lchamfer and the regularization loss function Lreg to obtain a total loss function Lall = w1 × Ldis + w2 × Lchamfer + w3 × Lreg, wherein w1, w2 and w3 are hyper-parameters of the total loss function;
an optimal expression weight acquisition module configured to perform back-propagation iterations on the total loss function Lall until Lall reaches its minimum value, so as to obtain the optimal expression weights;
And the optimal three-dimensional face model generation module is configured to generate an optimal three-dimensional face model according to the optimal expression weight.
The invention has the beneficial effects that:
According to the three-dimensional face model edge optimization method and system based on differentiable rendering, edge feature information is extracted from the depth information through a differentiable step, and barycentric coordinate interpolation is performed on the edge feature information to obtain the edge feature points of the three-dimensional face model. A total loss function Lall composed of a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg is then constructed and iterated to obtain the optimal expression weights, and finally the optimal three-dimensional face model is generated according to the optimal expression weights.
In addition, in the differentiable-rendering-based three-dimensional face model edge optimization method and system of the embodiment of the invention, constructing the chamfer distance loss function Lchamfer allows each edge feature point to automatically find its corresponding key point on the target face image; constructing the distribution loss function Ldis keeps the distribution of the edge feature points of the three-dimensional face model consistent with that of the key points on the target face image, which prevents the iteration from falling into a local optimum; and constructing the regularization loss function Lreg keeps the expression weights as small as possible while the first two loss functions are satisfied, so that the motion of the final three-dimensional face model is smooth.
In addition, by optimizing the edges of the three-dimensional face model, in particular the inner lips, the method solves the problem that the expression of the three-dimensional face model is inconsistent with the expression of the captured target face image when driving a hyper-realistic human model, which arises because there is no specific model index at the inner lips and a fixed model index would otherwise have to be chosen.
The features and advantages of the present invention will be described in detail by way of example with reference to the accompanying drawings.
Drawings
FIG. 1 is the first flow diagram of the three-dimensional face model edge optimization method based on differentiable rendering according to an embodiment of the present invention;
FIG. 2 is the second flow diagram of the three-dimensional face model edge optimization method based on differentiable rendering according to an embodiment of the present invention;
FIG. 3 is the third flow diagram of the three-dimensional face model edge optimization method based on differentiable rendering according to an embodiment of the present invention;
FIG. 4 is the fourth flow diagram of the three-dimensional face model edge optimization method based on differentiable rendering according to an embodiment of the present invention;
fig. 5 is a block diagram of the three-dimensional face model edge optimization system based on differentiable rendering according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, to facilitate understanding by those skilled in the art.
Referring to fig. 1, an embodiment of the present invention provides a three-dimensional face model edge optimization method based on differentiable rendering, which includes the following steps:
Step S10, acquiring a target face image. In this embodiment, while the target face image is captured by the camera, the method further includes extracting the key points in the target face image and/or obtaining the camera pose parameters corresponding to the target face image; specifically, the key points can be extracted and the camera pose parameters obtained with existing semi-automatic software plus manual fine tuning. In other embodiments, only the step of acquiring the target face image may be included.
Step S20, initializing the expression weights of the face model and generating an initial three-dimensional face model. Specifically, the initial three-dimensional face model can be generated through the formula M = b + Σi Wi × Fi, wherein M is the initial three-dimensional face model to be generated, b is a preset base three-dimensional face model, Fi is the i-th feature vector of the three-dimensional face model, and Wi is the i-th expression weight of the three-dimensional face model. The feature vectors of the three-dimensional face model can be preset, and three-dimensional face models of different shapes are obtained simply by changing the expression weight corresponding to each feature vector; when the expression weights are initialized to certain initial values, the initial three-dimensional face model is obtained, and at this point the initial model has no correlation with the captured target face image.
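As a minimal sketch of this blendshape combination, assuming the base model and the feature vectors are stored as dense vertex tensors (all names, shapes and values below are illustrative, not taken from the patent):

    import torch

    def blendshape_model(b, feats, w):
        """Combine a base mesh with weighted expression feature vectors.

        b     : (V, 3) tensor, base three-dimensional face model
        feats : (K, V, 3) tensor, K expression feature vectors (blendshape offsets)
        w     : (K,) tensor, expression weights (the variables optimized later)
        Returns the (V, 3) vertex positions of M = b + sum_i Wi * Fi.
        """
        return b + torch.einsum("k,kvc->vc", w, feats)

    # Toy usage: V vertices, K expression components, weights initialized to zero
    V, K = 5000, 52
    b = torch.randn(V, 3)
    feats = torch.randn(K, V, 3) * 0.01
    w = torch.zeros(K, requires_grad=True)      # initial expression weights
    M = blendshape_model(b, feats, w)           # initial three-dimensional face model

Because the weights carry requires_grad=True, the model vertices remain differentiable with respect to them, which is what the later back-propagation step relies on.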
Step S30, generating a standardized three-dimensional face model, wherein the standardized three-dimensional face model is aligned with the target face image. Specifically, the camera pose parameters corresponding to the target face image are converted back into a motion matrix, and the motion matrix is applied to the initial three-dimensional face model to generate the standardized three-dimensional face model. Being aligned with the target face image means that the position of the target face image relative to the camera is consistent with the position of the standardized three-dimensional face model relative to the camera. This step is the key to constructing correct loss functions later: if the standardized three-dimensional face model is not generated, or an incorrect one is generated, the subsequent loss functions are affected, and so is the accuracy of the finally optimized three-dimensional face model, in particular the accuracy of its edges.
Referring to fig. 4, a flowchart of generating a standardized three-dimensional face model according to an embodiment of the present invention includes the following steps:
step S310, obtaining camera parameters corresponding to the target face image.
Step S320, generating the standardized three-dimensional face model from the camera parameters: the transpose R^T of the camera rotation matrix R and the camera displacement vector t are applied to the initial three-dimensional face model M, followed by the perspective transformation P and the pixel correction value p_bias, which yields an accurate standardized three-dimensional face model. In addition, the camera displacement vector t can be set to zero, i.e. the camera is kept fixed at the origin, which greatly simplifies the computation of the standardized three-dimensional face model.
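Since the patent's exact formula did not survive into this text, the following sketch only assumes a conventional pipeline: a rigid transform with R^T and t, a pinhole perspective division with an assumed focal length standing in for the perspective transformation P, and the pixel correction p_bias added last; all function and parameter names are hypothetical.

    import torch

    def standardize_model(M, R, t, focal, p_bias):
        """Hypothetical alignment of the initial model M with the camera.

        M      : (V, 3) vertices of the initial three-dimensional face model
        R      : (3, 3) camera rotation matrix; its transpose acts on the vertices
        t      : (3,) camera displacement vector (can be fixed at zero, as noted above)
        focal  : assumed focal length standing in for the perspective transformation P
        p_bias : (2,) pixel correction value added after projection
        Returns (V, 3): image-aligned x, y coordinates plus the camera-space depth.
        """
        cam = (M - t) @ R                    # row-vector form of R^T (M - t)
        z = cam[:, 2:3].clamp(min=1e-6)      # guard against division by zero
        proj = focal * cam[:, :2] / z        # perspective division (P)
        return torch.cat([proj + p_bias, z], dim=1)

    # Toy usage with the camera fixed at the origin (t = 0)
    M = torch.randn(5000, 3) + torch.tensor([0.0, 0.0, 3.0])
    pts = standardize_model(M, torch.eye(3), torch.zeros(3),
                            focal=1000.0, p_bias=torch.tensor([256.0, 256.0]))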
Step S40, carrying out rasterization processing on the standardized three-dimensional face model to obtain two-dimensional image information, wherein the two-dimensional image information comprises depth information, face information and barycentric coordinate information, and the depth information, the face information and the barycentric coordinate information correspond to pixel coordinates. Specifically, the depth information corresponding to the pixel coordinates is stored as zbuffer data, the face information corresponding to the pixel coordinates is stored as pix2face data, and the barycentric coordinate information corresponding to the pixel coordinates is stored as bary data.
Step S50, extracting edge feature information from the depth information through a differentiable step; in a preferred embodiment, the edge feature information is the edge feature information of the lip portion in the three-dimensional face model.
Specifically, the edge feature information is extracted by the formula Zp = relu(conv(norm(z))), where norm is a custom operator, Zp is the edge feature map, z is the zbuffer data, and conv is a convolution operation. All steps in the formula Zp = relu(conv(norm(z))) are differentiable, so that the optimal expression weights can be obtained through the iterative updates in step S90.
Referring to fig. 2, a flowchart of extracting edge feature information from the depth information through a differentiable step according to an embodiment of the present invention includes the following steps:
Step S510, extracting the depth information located on the standardized three-dimensional face model to generate a first depth map, wherein the first depth map is a depth picture with well-defined edges.
In this embodiment, the depth information on the standardized three-dimensional face model is extracted by the formula norm(z) = (z+1)/(z+1.001), where z is the depth information corresponding to all pixel coordinates. Through this formula, pixels on the standardized three-dimensional face model in the depth image become infinitely close to 1, and pixels not on the model become infinitely close to 0, which realizes the extraction of the depth information on the standardized three-dimensional face model.
Step S520, convolving the first depth map to generate a second depth map, where the second depth map displays the edge contour line of the standardized three-dimensional face model. In this embodiment a fixed edge-detection convolution kernel is adopted; in other embodiments, different convolution kernels or other edge-extraction operators can be selected according to the actual situation, as long as they remain differentiable.
And step S530, optimizing the second depth map so as to finish the edge feature extraction of the standardized three-dimensional face model.
In this embodiment, the edge extraction of the standardized three-dimensional face model is completed by applying the relu function to the second depth map, which contains only the edge contour lines. In neural networks the relu activation function is normally used to turn a linear mapping into a nonlinear one; here, by contrast, the truncation property of relu is exploited in the edge extraction step, so using the relu function for edge extraction is itself one of the points of the invention.
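A compact PyTorch sketch of the differentiable extraction Zp = relu(conv(norm(z))) follows. The patent's own convolution kernel is not reproduced in this text, so a 3x3 Laplacian-style kernel is assumed here, as is a background depth of -1 for pixels not covered by the model:

    import torch
    import torch.nn.functional as F

    def extract_edge_features(zbuffer):
        """Differentiable edge extraction Zp = relu(conv(norm(z))).

        zbuffer : (H, W) tensor of rasterized depth values; pixels not covered
                  by the face model are assumed to carry a background depth of -1.
        Returns an (H, W) edge feature map that is non-zero only along the
        model's edge contour.
        """
        # norm(z) = (z + 1) / (z + 1.001): background pixels -> ~0, model pixels -> ~1
        nz = (zbuffer + 1.0) / (zbuffer + 1.001)

        # Assumed 3x3 Laplacian-style kernel; any differentiable edge operator would do
        kernel = torch.tensor([[0., -1., 0.],
                               [-1., 4., -1.],
                               [0., -1., 0.]]).view(1, 1, 3, 3)
        edges = F.conv2d(nz.view(1, 1, *nz.shape), kernel, padding=1)

        # relu keeps the response on the model side of the depth discontinuity,
        # so the surviving edge pixels still lie on a triangle of the mesh
        return torch.relu(edges).view(*nz.shape)

Every operation above (the arithmetic, conv2d and relu) is differentiable, so gradients can flow from the edge feature map back to the expression weights.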
Step S60, obtaining edge feature points by obtaining barycentric coordinate information corresponding to the edge feature information and interpolating; in a preferred embodiment, the edge feature points are edge feature points of lip portions in a three-dimensional face model.
Referring to fig. 3, the edge feature points are obtained by obtaining barycentric coordinate information corresponding to the edge feature information and interpolating, and specifically include the following steps:
in step S610, specific pixel coordinates with pixel values greater than zero in the edge feature information are obtained.
Step S620, acquiring the face information and barycentric coordinate information corresponding to the specific pixel coordinates.
In step S630, edge feature points are obtained by interpolation of barycentric coordinates.
In this way, accurate edge feature points of the three-dimensional model are obtained, with errors down to the pixel level.
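One possible realization of steps S610 to S630, assuming the rasterizer exposes the pix2face indices, the barycentric weights and the mesh connectivity as tensors (the argument layout is an assumption of this sketch):

    import torch

    def edge_feature_points(edge_map, pix2face, bary, verts, faces):
        """Lift edge pixels back to 3D points by barycentric interpolation.

        edge_map : (H, W) edge feature map from the differentiable extraction step
        pix2face : (H, W) long tensor, index of the triangle covering each pixel
        bary     : (H, W, 3) barycentric coordinates of each pixel in its triangle
        verts    : (V, 3) vertices of the standardized three-dimensional face model
        faces    : (F, 3) long tensor of triangle vertex indices
        Returns (N, 3) edge feature points, one per edge pixel.
        """
        ys, xs = torch.nonzero(edge_map > 0, as_tuple=True)   # S610: pixels with value > 0
        tri = faces[pix2face[ys, xs]]                         # S620: covering triangles
        w = bary[ys, xs]                                      # S620: barycentric weights
        corners = verts[tri]                                  # (N, 3, 3) triangle corners
        return (w.unsqueeze(-1) * corners).sum(dim=1)         # S630: w0*v0 + w1*v1 + w2*v2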
Step S70, establishing a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg according to the key points in the target face image and the edge feature points, wherein the distribution loss function Ldis = |Var(S1) - Var(S2)|, the chamfer distance loss function Lchamfer is the chamfer distance between S1 and S2, the regularization loss function Lreg is the mean square sum of the expression weights of the face model, S1 is the set of key points, S2 is the set of edge feature points, and Var is the variance operator.
In this embodiment, constructing the chamfer distance loss function Lchamfer allows each edge feature point to automatically find its corresponding key point on the target face image; constructing the distribution loss function Ldis keeps the distribution of the edge feature points of the three-dimensional face model consistent with that of the key points on the target face image, which prevents the iteration from falling into a local optimum; and constructing the regularization loss function Lreg keeps the expression weights as small as possible while the first two loss functions are satisfied, so that the motion of the final three-dimensional face model is smooth.
Step S80, performing a weighted combination of the distribution loss function Ldis, the chamfer distance loss function Lchamfer and the regularization loss function Lreg to obtain a total loss function Lall = w1 × Ldis + w2 × Lchamfer + w3 × Lreg, wherein w1, w2 and w3 are hyper-parameters of the total loss function; their values need to be fixed before the iteration and adjust the relative influence of the three loss functions on the result.
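The three losses and their weighted combination could look roughly as follows; the per-coordinate treatment of the variance operator, the bidirectional form of the chamfer term, the use of means rather than sums, and the hyper-parameter values are all assumptions of this sketch, and S1 and S2 are assumed to lie in the same coordinate system:

    import torch

    def chamfer_distance(S1, S2):
        """Bidirectional chamfer distance between point sets of shape (N, D) and (M, D)."""
        d = torch.cdist(S1, S2)                      # (N, M) pairwise distances
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    def distribution_loss(S1, S2):
        """|Var(S1) - Var(S2)| with the variance taken per coordinate and summed."""
        return (S1.var(dim=0) - S2.var(dim=0)).abs().sum()

    def regularization_loss(w):
        """Mean of the squared expression weights."""
        return (w ** 2).mean()

    def total_loss(S1, S2, w, w1=1.0, w2=1.0, w3=0.1):
        """L_all = w1*L_dis + w2*L_chamfer + w3*L_reg with illustrative hyper-parameters."""
        return (w1 * distribution_loss(S1, S2)
                + w2 * chamfer_distance(S1, S2)
                + w3 * regularization_loss(w))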
Step S90, performing back-propagation iterations on the total loss function Lall until Lall reaches its minimum value, so as to obtain the optimal expression weights. In a preferred embodiment, the total loss function Lall can be iteratively updated with the Adam algorithm until it reaches its minimum value, yielding the optimal expression weights. In other embodiments, other existing optimization algorithms may be used to iteratively update the total loss function Lall, and they all fall within the protection scope of the present invention.
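A minimal Adam loop for this step, reusing total_loss from the sketch above and assuming a differentiable render_edge_points callable that maps the expression weights to the edge feature point set S2 (both names are placeholders for the pipeline of steps S20 to S60):

    import torch

    def optimize_weights(w, S1, render_edge_points, steps=200, lr=1e-2):
        """Back-propagation iterations that drive L_all toward its minimum."""
        optimizer = torch.optim.Adam([w], lr=lr)     # w must have requires_grad=True
        for _ in range(steps):
            optimizer.zero_grad()
            S2 = render_edge_points(w)               # differentiable: S2 depends on w
            loss = total_loss(S1, S2, w)             # L_all from the previous sketch
            loss.backward()                          # gradients flow through the renderer
            optimizer.step()
        return w.detach()                            # optimal expression weights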
Step S100, generating an optimal three-dimensional face model according to the optimal expression weights: after the optimal expression weights are obtained, the optimal three-dimensional face model is generated according to the formula M = b + Σi Wi × Fi, which is not repeated here.
In summary, the three-dimensional face model edge optimization method based on differentiable rendering of the embodiment of the invention first extracts edge feature information from the depth information through a differentiable step, then performs barycentric coordinate interpolation on the edge feature information to obtain the edge feature points of the three-dimensional face model, then constructs a total loss function Lall composed of a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg and iterates on it to obtain the optimal expression weights, and finally generates the optimal three-dimensional face model according to the optimal expression weights.
In addition, by optimizing the edges of the three-dimensional face model, in particular the inner lips, the method solves the problem that the expression of the three-dimensional face model is inconsistent with the expression of the captured target face image when driving a hyper-realistic human model, which arises because there is no specific model index at the inner lips and a fixed model index would otherwise have to be chosen.
According to the above three-dimensional face model edge optimization method based on differentiable rendering, the embodiment of the invention also provides a three-dimensional face model edge optimization system based on differentiable rendering. Referring to fig. 5, the system comprises the following modules:
A face image acquisition module 11 configured to acquire a target face image.
An initial face model generation module 21 configured to initialize face model expression weights and generate an initial three-dimensional face model.
A normalized face model generation module 31 configured to generate a normalized three-dimensional face model that is aligned with the target face image.
And a rasterizing processing module 41, configured to perform rasterizing processing on the standardized three-dimensional face model to obtain two-dimensional image information, where the two-dimensional image information includes depth information, face information, and barycentric coordinate information, and the depth information, the face information, and the barycentric coordinate information correspond to pixel coordinates.
An edge feature information extraction module 51 configured to extract edge feature information from the depth information through a differentiable step.
And a barycentric coordinate interpolation module 61 configured to obtain an edge feature point by acquiring barycentric coordinate information corresponding to the edge feature information and performing interpolation.
A loss function generation module 71 configured to establish a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg according to the key points in the target face image and the edge feature points, wherein the distribution loss function Ldis = |Var(S1) - Var(S2)|, the chamfer distance loss function Lchamfer is the chamfer distance between S1 and S2, the regularization loss function Lreg is the mean square sum of the expression weights of the face model, S1 is the set of key points, S2 is the set of edge feature points, and Var is the variance operator.
A total loss function generation module 81 configured to perform a weighted combination of the distribution loss function Ldis, the chamfer distance loss function Lchamfer and the regularization loss function Lreg to obtain a total loss function Lall = w1 × Ldis + w2 × Lchamfer + w3 × Lreg, where w1, w2 and w3 are hyper-parameters of the total loss function.
An optimal expression weight acquisition module 91 configured to perform back-propagation iterations on the total loss function Lall until Lall reaches its minimum value, so as to obtain the optimal expression weights.
An optimal three-dimensional face model generation module 110 configured to generate an optimal three-dimensional face model from the optimal expression weights.
In summary, the three-dimensional face model edge optimization system based on differentiable rendering according to the embodiments of the present application may be implemented as a program and run on a computer device. The memory of the computer device may store the program modules constituting the system, such as the face image acquisition module 11, the initial face model generation module 21, the standardized face model generation module 31, the rasterization processing module 41, the edge feature information extraction module 51, the barycentric coordinate interpolation module 61, the loss function generation module 71, the total loss function generation module 81, the optimal expression weight acquisition module 91 and the optimal three-dimensional face model generation module 110 shown in fig. 5. The program of each module causes the processor to execute the steps of the three-dimensional face model edge optimization method based on differentiable rendering described in the embodiments of the present application.
The technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this description.
The above embodiments are illustrative of the present invention and not limiting; any simple modification of the present invention falls within its protection scope. The above description is only a preferred embodiment, and the protection scope of the present invention is not limited to these examples: all technical solutions belonging to the concept of the present invention belong to its protection scope. Modifications and adaptations made by those skilled in the art without departing from the principles of the present invention are likewise intended to fall within its protection scope.

Claims (9)

1. A three-dimensional face model edge optimization method based on differentiable rendering, characterized by comprising the following steps:
Acquiring a target face image;
Initializing expression weights of the face model and generating an initial three-dimensional face model;
Generating a standardized three-dimensional face model, wherein the standardized three-dimensional face model is aligned with the target face image;
Rasterizing the standardized three-dimensional face model to obtain two-dimensional image information, wherein the two-dimensional image information comprises depth information, face information and barycentric coordinate information, and the depth information, the face information and the barycentric coordinate information correspond to pixel coordinates;
extracting edge feature information from the depth information through a differentiable step, which comprises the steps of:
extracting depth information on a standardized three-dimensional face model to generate a first depth map;
Convolving the first depth map to generate a second depth map, wherein the second depth map displays an edge contour line of the standardized three-dimensional face model;
optimizing the second depth map so as to finish edge feature extraction of the standardized three-dimensional face model;
Obtaining edge feature points by acquiring barycentric coordinate information corresponding to the edge feature information and interpolating;
Establishing a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg according to the key points in the target face image and the edge feature points, wherein the distribution loss function Ldis = |Var(S1) - Var(S2)|, the chamfer distance loss function Lchamfer is the chamfer distance between S1 and S2, the regularization loss function Lreg is the mean square sum of the expression weights of the face model, S1 is the set of key points, S2 is the set of edge feature points, and Var is the variance operator;
Performing a weighted combination of the distribution loss function Ldis, the chamfer distance loss function Lchamfer and the regularization loss function Lreg to obtain a total loss function Lall = w1 × Ldis + w2 × Lchamfer + w3 × Lreg, wherein w1, w2 and w3 are hyper-parameters of the total loss function;
performing back-propagation iterations on the total loss function Lall until Lall reaches its minimum value, so as to obtain the optimal expression weights;
and generating an optimal three-dimensional face model according to the optimal expression weight.
2. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein the depth information located on the standardized three-dimensional face model is extracted by the formula norm(z) = (z+1)/(z+1.001), where z is the depth information corresponding to all pixel coordinates.
3. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein the second depth map is optimized through the relu function to complete the edge feature extraction of the standardized three-dimensional face model.
4. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein the edge feature information is edge feature information of the inner lip, and the edge feature points are edge feature points of the inner lip.
5. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein the obtaining of edge feature points by acquiring barycentric coordinate information corresponding to the edge feature information and interpolating comprises the following steps:
acquiring specific pixel coordinates with pixel values larger than zero in the edge characteristic information;
Acquiring the face information and barycentric coordinate information corresponding to the specific pixel coordinates;
and obtaining edge characteristic points through barycentric coordinate interpolation.
6. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein generating the standardized three-dimensional face model comprises the following steps:
acquiring camera parameters corresponding to the target face image;
generating the standardized three-dimensional face model by applying the transpose R^T of the camera rotation matrix R and the camera displacement vector t to the initial three-dimensional face model M, followed by the perspective transformation P and the pixel correction value p_bias.
7. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein an initial three-dimensional face model is generated by the formula M = b + Σi Wi × Fi, wherein M is the initial three-dimensional face model, b is the base three-dimensional face model, Fi is the i-th feature vector of the three-dimensional face model, and Wi is the i-th expression weight of the three-dimensional face model.
8. The three-dimensional face model edge optimization method based on differentiable rendering according to claim 1, wherein the total loss function Lall is iteratively updated by the Adam algorithm until the total loss function Lall reaches its minimum value, so as to obtain the optimal expression weights.
9. A three-dimensional face model edge optimization system based on differentiable rendering, characterized by comprising:
a face image acquisition module configured to acquire a target face image;
an initial face model generation module configured to initialize the face model expression weights and generate an initial three-dimensional face model;
a standardized face model generation module configured to generate a standardized three-dimensional face model that is aligned with the target face image;
a rasterization processing module configured to perform rasterization processing on the standardized three-dimensional face model to obtain two-dimensional image information, wherein the two-dimensional image information comprises depth information, face information and barycentric coordinate information, and the depth information, the face information and the barycentric coordinate information correspond to pixel coordinates;
an edge feature information extraction module configured to extract edge feature information from the depth information through a differentiable step, by extracting the depth information located on the standardized three-dimensional face model to generate a first depth map, convolving the first depth map to generate a second depth map, wherein the second depth map displays the edge contour line of the standardized three-dimensional face model, and optimizing the second depth map so as to complete the edge feature extraction of the standardized three-dimensional face model;
the barycentric coordinate interpolation module is configured to obtain edge feature points by acquiring barycentric coordinate information corresponding to the edge feature information and interpolating;
a loss function generation module configured to establish a distribution loss function Ldis, a chamfer distance loss function Lchamfer and a regularization loss function Lreg according to the key points in the target face image and the edge feature points, wherein the distribution loss function Ldis = |Var(S1) - Var(S2)|, the chamfer distance loss function Lchamfer is the chamfer distance between S1 and S2, the regularization loss function Lreg is the mean square sum of the expression weights of the face model, S1 is the set of key points, S2 is the set of edge feature points, and Var is the variance operator;
a total loss function generation module configured to perform a weighted combination of the distribution loss function Ldis, the chamfer distance loss function Lchamfer and the regularization loss function Lreg to obtain a total loss function Lall = w1 × Ldis + w2 × Lchamfer + w3 × Lreg, wherein w1, w2 and w3 are hyper-parameters of the total loss function;
an optimal expression weight acquisition module configured to perform back-propagation iterations on the total loss function Lall until Lall reaches its minimum value, so as to obtain the optimal expression weights;
And the optimal three-dimensional face model generation module is configured to generate an optimal three-dimensional face model according to the optimal expression weight.
CN202111180132.3A 2021-10-11 2021-10-11 Three-dimensional face model edge optimization method and system based on micro-rendering Active CN113902768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111180132.3A CN113902768B (en) 2021-10-11 2021-10-11 Three-dimensional face model edge optimization method and system based on micro-rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111180132.3A CN113902768B (en) 2021-10-11 2021-10-11 Three-dimensional face model edge optimization method and system based on micro-rendering

Publications (2)

Publication Number Publication Date
CN113902768A CN113902768A (en) 2022-01-07
CN113902768B true CN113902768B (en) 2024-08-13

Family

ID=79191186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111180132.3A Active CN113902768B (en) 2021-10-11 2021-10-11 Three-dimensional face model edge optimization method and system based on micro-rendering

Country Status (1)

Country Link
CN (1) CN113902768B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012517B (en) * 2023-02-02 2023-08-08 北京数原数字化城市研究中心 Regularized image rendering method and regularized image rendering device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191507A (en) * 2018-08-24 2019-01-11 北京字节跳动网络技术有限公司 Three-dimensional face images method for reconstructing, device and computer readable storage medium
CN111951381A (en) * 2020-08-13 2020-11-17 科大乾延科技有限公司 Three-dimensional face reconstruction system based on single face picture

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144284B (en) * 2019-12-25 2021-03-30 支付宝(杭州)信息技术有限公司 Method and device for generating depth face image, electronic equipment and medium
CN111428579A (en) * 2020-03-03 2020-07-17 平安科技(深圳)有限公司 Face image acquisition method and system
CN111951384B (en) * 2020-08-13 2024-05-28 科大乾延科技有限公司 Three-dimensional face reconstruction method and system based on single face picture

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191507A (en) * 2018-08-24 2019-01-11 北京字节跳动网络技术有限公司 Three-dimensional face images method for reconstructing, device and computer readable storage medium
CN111951381A (en) * 2020-08-13 2020-11-17 科大乾延科技有限公司 Three-dimensional face reconstruction system based on single face picture

Also Published As

Publication number Publication date
CN113902768A (en) 2022-01-07

Similar Documents

Publication Publication Date Title
US11954870B2 (en) Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium
EP3698275B1 (en) Data processing method, apparatus, system and storage media
WO2021174939A1 (en) Facial image acquisition method and system
US9830701B2 (en) Static object reconstruction method and system
US11481973B2 (en) Method, device, and storage medium for segmenting three-dimensional object
CN109697688A (en) A kind of method and apparatus for image procossing
US10430922B2 (en) Methods and software for generating a derived 3D object model from a single 2D image
CN111445582A (en) Single-image human face three-dimensional reconstruction method based on illumination prior
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN113936090A (en) Three-dimensional human body reconstruction method and device, electronic equipment and storage medium
CN111723707A (en) Method and device for estimating fixation point based on visual saliency
CN113593001A (en) Target object three-dimensional reconstruction method and device, computer equipment and storage medium
CN117372604B (en) 3D face model generation method, device, equipment and readable storage medium
CN116051722A (en) Three-dimensional head model reconstruction method, device and terminal
CN106910173A (en) The method that flake video wicket real time roaming is realized based on correcting fisheye image
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
CN114170290A (en) Image processing method and related equipment
CN113902768B (en) Three-dimensional face model edge optimization method and system based on micro-rendering
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
Kosaka et al. Vision-based motion tracking of rigid objects using prediction of uncertainties
CN112767478A (en) Appearance guidance-based six-degree-of-freedom pose estimation method
CN111664845B (en) Traffic sign positioning and visual map making method and device and positioning system
CN113223137B (en) Generation method and device of perspective projection human face point cloud image and electronic equipment
CN113886510A (en) Terminal interaction method, device, equipment and storage medium
Sakaue et al. Optimization approaches in computer vision and image processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant