CN115147578B - Stylized three-dimensional face generation method and device, electronic equipment and storage medium
- Publication number
- CN115147578B CN115147578B CN202210766709.7A CN202210766709A CN115147578B CN 115147578 B CN115147578 B CN 115147578B CN 202210766709 A CN202210766709 A CN 202210766709A CN 115147578 B CN115147578 B CN 115147578B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The disclosure provides a stylized three-dimensional face generation method, a stylized three-dimensional face generation device, electronic equipment and a storage medium, relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, deep learning, augmented/virtual reality and the like, and can be applied to scenes such as the metaverse and virtual digital human generation. The specific implementation scheme is as follows: acquiring a two-dimensional face image acquired by a shooting device; converting the two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device; generating stylized texture features based on the three-dimensional face image; and generating a stylized three-dimensional face image based on the stylized texture features. The present disclosure thereby obtains stylized three-dimensional face images. The whole processing flow is completed automatically without the participation of multiple persons, so the stylized three-dimensional face generation method is low in cost; moreover, the overall flow of the embodiments of the disclosure is concise and places a low demand on computing capacity.
Description
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, deep learning, augmented/virtual reality and the like, and can be applied to scenes such as the metaverse and virtual digital human generation.
Background
The construction of the metaverse rests on subjective and objective requirements such as high fineness, high immersion and low delay. Virtual digital humans are the key to creating the metaverse virtual world. According to different requirements, virtual digital humans can be classified as 2D, 3D, cartoon, hyper-realistic, and so on.
Creating the stylized three-dimensional face of a virtual digital human generally requires the participation of multiple persons and is time-consuming and costly. How to create a stylized three-dimensional face efficiently and at low cost remains to be solved.
Disclosure of Invention
The disclosure provides a stylized three-dimensional face generation method, a stylized three-dimensional face generation device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a stylized three-dimensional face generation method, including:
acquiring a two-dimensional face image acquired by a shooting device;
converting the two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device;
generating stylized texture features based on the three-dimensional face image;
and generating a stylized three-dimensional face image based on the stylized texture features.
According to a second aspect of the present disclosure, there is provided a stylized three-dimensional face generating apparatus, including:
the acquisition module is used for acquiring the two-dimensional face image acquired by the shooting device;
the conversion module is used for converting the two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device;
the texture generation module is used for generating stylized texture features based on the three-dimensional face image;
and the face generation module is used for generating a stylized three-dimensional face image based on the stylized texture features.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method in the first aspect.
In the embodiment of the disclosure, the two-dimensional face image is converted into a three-dimensional face image based on the internal parameters of the shooting device, realizing the conversion from two dimensions to three dimensions, and stylized texture features are then generated based on the three-dimensional face. In this way, the stylized texture features required by the three-dimensional face image are obtained, the stylization of the face is realized, and a stylized three-dimensional face image is obtained. The whole processing flow is completed automatically without the participation of multiple persons, so the cost of the stylized three-dimensional face generation method is low. Moreover, the overall flow of the embodiments of the disclosure is concise and places a low demand on computing power. Therefore, the embodiments of the disclosure realize a low-cost, high-efficiency three-dimensional face stylization scheme.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of a stylized three-dimensional face generation method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of cropping a two-dimensional image according to another embodiment of the present disclosure;
FIG. 3 is another flow diagram of a stylized three-dimensional face generation method according to another embodiment of the present disclosure;
FIG. 4 is another flow diagram of a stylized three-dimensional face generation method according to another embodiment of the present disclosure;
FIG. 5 is another flow diagram of a stylized three-dimensional face generation method according to another embodiment of the present disclosure;
FIG. 6 is a schematic diagram of extracting face keypoints according to another embodiment of the disclosure;
FIG. 7 is a schematic diagram of a cycle generation network according to another embodiment of the present disclosure;
fig. 8 is a schematic structural view of a stylized three-dimensional face generating device according to another embodiment of the present disclosure;
fig. 9 is another structural schematic diagram of a stylized three-dimensional face generating device according to another embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device used to implement a stylized three-dimensional face generation method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terms "first," "second," and the like in this disclosure are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such as a series of steps or elements. The method, system, article, or apparatus is not necessarily limited to those explicitly listed but may include other steps or elements not explicitly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the first aspect of the present disclosure, a method for generating a stylized three-dimensional face is provided, as shown in fig. 1, which is a flow chart of the method, including:
s101, acquiring a two-dimensional face image acquired by a shooting device.
S102, converting the two-dimensional face image into a three-dimensional face image based on internal parameters of the shooting device.
In a possible implementation, the two-dimensional face image may be input into a depth prediction model based on two-dimensional images. The model performs inference on the two-dimensional face image, detects the 2D coordinates (u, v) of each pixel in the two-dimensional face image, and predicts the depth Z corresponding to those 2D coordinates. Then, based on the pinhole imaging principle, the 2D face is mapped to the 3D face in the manner shown by formula (1):

(X, Y, Z)^T = Z · K^{-1} · (u, v, 1)^T      (1)

In formula (1), (u, v) are the 2D coordinates of a pixel, Z is the depth of the pixel, and K is the camera intrinsic matrix, which can be obtained through calibration of the shooting device (such as camera calibration). Here

K = [ fx  0  cx ;  0  fy  cy ;  0  0  1 ],

where fx, fy, cx and cy are the sub-parameters of the intrinsics, K^{-1} is the inverse matrix of K, and (X, Y, Z) are the 3D coordinates of the pixel in the camera coordinate system.
S103, generating stylized texture features based on the three-dimensional face image.
And S104, generating a stylized three-dimensional face image based on the stylized texture features.
In summary, in the embodiment of the present disclosure, a two-dimensional face image is converted into a three-dimensional face image based on the internal parameters of the shooting device, realizing the conversion from two dimensions to three dimensions; stylized texture features are then generated based on the three-dimensional face, yielding the key elements of the stylized three-dimensional face, and the stylization and three-dimensionalization of the real face are realized based on the stylized texture features. The whole processing flow is completed automatically without the participation of multiple persons, so the cost of the stylized three-dimensional face generation method is low. Moreover, the overall flow of the embodiments of the disclosure is concise and places a low demand on computing power. Therefore, the embodiments of the disclosure realize a low-cost, high-efficiency three-dimensional face stylization scheme.
In some embodiments, after the two-dimensional face image is acquired, it may be preprocessed to facilitate the subsequent generation of the stylized three-dimensional face image. On this basis, converting the two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device may be implemented as: performing face detection on the two-dimensional face image to obtain a face area; cropping the two-dimensional face image so that the face area is centered in the image, obtaining a cropped two-dimensional face image; and converting the cropped two-dimensional face image into the three-dimensional face image based on the internal parameters of the shooting device.
For example, the left diagram of fig. 2 is a schematic view of an acquired two-dimensional face image in which a large amount of space is left above the face. It can also be seen that the face is biased to the left in the two-dimensional face image; the face can be moved to the middle of the image by cropping. The right diagram in fig. 2 is a schematic diagram of the two-dimensional face image after cropping. After cropping, the face is positioned in the middle of the image, so that the stylized texture features can be extracted well in subsequent processing, which facilitates constructing the stylized three-dimensional face.
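As a sketch, the centering crop described above could look like the following; the detector interface, crop size and clamping policy are assumptions for illustration, not details fixed by the disclosure:

```python
import numpy as np

def center_crop_face(image, box, crop_size=512):
    """Crop a square region centered on the detected face box (x0, y0, x1, y1)."""
    h, w = image.shape[:2]
    cx = (box[0] + box[2]) // 2  # horizontal center of the face area
    cy = (box[1] + box[3]) // 2  # vertical center of the face area
    half = crop_size // 2
    # Clamp so the crop window stays inside the image bounds.
    x0 = int(np.clip(cx - half, 0, max(w - crop_size, 0)))
    y0 = int(np.clip(cy - half, 0, max(h - crop_size, 0)))
    return image[y0:y0 + crop_size, x0:x0 + crop_size]
```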
In some embodiments, generating a stylized three-dimensional face image based on the stylized texture features in S104 may be implemented as: generating a stylized three-dimensional face image based on the stylized texture features, the external parameters of the photographing device, the stylized expression substrate and the three-dimensional face shape substrate.
The external parameters of the shooting device are used to keep the orientation angle of the generated stylized three-dimensional face similar to that of the two-dimensional face; the stylized expression substrate refers to expressions with a certain style; and both the stylized expression substrate and the three-dimensional face shape substrate can be obtained through training. In implementation, differentiable rendering may be performed based on the stylized texture features, the external parameters of the shooting device, the stylized expression substrate and the three-dimensional face shape substrate, thereby generating the stylized three-dimensional face image.
In implementation, deformation transfer can be performed on a known real shape-expression substrate based on the triangle-mesh deformation transfer method ("Deformation transfer for triangle meshes") to obtain the stylized expression substrate. This triangle-mesh deformation transfer approach reduces the design cost of the stylized expression substrate, and any stylized expression substrate can be obtained in this way, so the generation of stylized expression substrates is extensible.
In addition, the design can be modified on the basis of the generated stylized expression substrate; compared with constructing a stylized expression substrate from scratch, obtaining it by modification is cheaper and more convenient.
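A sketch of how the stylized expression substrate could be assembled with such a transfer; `deformation_transfer` is a hypothetical stand-in for an implementation of the cited triangle-mesh method and is not an API defined by this disclosure:

```python
def build_stylized_expression_bases(real_neutral, real_expression_bases,
                                    stylized_neutral, deformation_transfer):
    """Carry each real neutral->expression deformation over to the stylized mesh."""
    stylized_bases = []
    for real_expr in real_expression_bases:
        # The transfer maps the deformation (real_neutral -> real_expr)
        # onto stylized_neutral, producing the stylized expression mesh.
        stylized_bases.append(
            deformation_transfer(source_ref=real_neutral,
                                 source_deformed=real_expr,
                                 target_ref=stylized_neutral))
    return stylized_bases
```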
In some embodiments, to make the generated stylized three-dimensional face more realistic, real face textures are employed in embodiments of the present disclosure to create the stylized three-dimensional face. As shown in fig. 3, generating a stylized three-dimensional face image based on the stylized texture features may be implemented as the following steps:
s301, acquiring a real face texture extracted from a two-dimensional face image.
S302, fusing the real face texture and the stylized texture features to obtain fusion features.
In one possible implementation, the real face texture and the stylized texture features may be weighted and summed to obtain the fusion feature. The weighted summation is simple and convenient to perform and has low algorithmic complexity, and it guarantees that the generated fusion feature carries both the stylized texture and the texture of the real face: the real face texture makes the stylized three-dimensional face more lifelike and more similar to the two-dimensional face, while the stylized texture gives the three-dimensional face a certain style.
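A minimal sketch of this fusion, assuming both feature tensors share a shape; the weight value is an illustrative assumption, not one specified by the disclosure:

```python
import torch

def fuse_textures(real_tex, stylized_tex, alpha=0.5):
    """Weighted sum of real-face and stylized texture features.

    Larger alpha keeps more of the real face texture (realism);
    smaller alpha keeps more of the stylized texture (style strength).
    """
    return alpha * real_tex + (1.0 - alpha) * stylized_tex
```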
S303, generating a stylized three-dimensional face image based on the fusion characteristics.
In the embodiment of the disclosure, the real face texture is extracted from the two-dimensional face image, and the real face texture and the stylized texture feature fusion are adopted, so that the finally generated stylized three-dimensional face image has a certain style and simultaneously has the characteristics of the real face texture, the generated three-dimensional face texture is close to the real face texture, the fidelity of the generated three-dimensional face is improved, and the similarity of the generated three-dimensional face and the real face is improved.
In some embodiments, to facilitate generating the stylized three-dimensional face, in the embodiments of the present disclosure the three-dimensional face image is input into a first generator, and the stylized texture features output by the first generator are obtained.
In the field of image processing, a generator can express the characteristics of an image well and faithfully; on this basis, the stylized texture features extracted by the first generator can meet the style requirements while maintaining high fidelity.
The first generator is obtained by training a neural network model. To achieve fast training, a cycle generation network is adopted in the embodiment of the disclosure to obtain the first generator. The cycle generation network includes a pair of generators, hereinafter referred to as the second generator and the third generator, where the second generator is trained to obtain the first generator. The second generator is used to map from the A image space to the B image space, so it may also be referred to as the A2B generator. Correspondingly, the third generator is used to map from the B image space to the A image space, so it may also be referred to as the B2A generator. FIG. 4 is a schematic flow chart of training to obtain the first generator, comprising the following steps:
s401, converting the two-dimensional face sample into a three-dimensional face sample based on internal parameters of the shooting device.
The implementation of converting from a two-dimensional image to a three-dimensional image has been described in the foregoing, and will not be described in detail here.
And S402, extracting stylized texture features of the three-dimensional face sample based on the second generator.
S403, based on the stylized texture features of the three-dimensional face sample, a stylized three-dimensional face image of the two-dimensional face sample is obtained.
S404, processing the stylized three-dimensional face image of the two-dimensional face sample based on the third generator to generate an image to be compared.
S405, adjusting the model parameters of the second generator based on a perceptual loss between the two-dimensional face sample and the image to be compared, to obtain the first generator.
In the embodiment of the disclosure, the second generator is trained using the two generators in the cycle generation network to obtain the first generator; compared with training with only one generator, the training converges faster and is more efficient.
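The disclosure does not name a backbone for its perceptual loss; a common realization compares deep features of the sample and the cycled image using a frozen pretrained network, sketched here under that assumption:

```python
import torch
import torchvision.models as models

class PerceptualLoss(torch.nn.Module):
    """MSE between deep features of the sample and the image to be compared."""

    def __init__(self, num_layers=16):
        super().__init__()
        # Frozen VGG16 feature extractor -- an assumption, not mandated here.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = vgg.features[:num_layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, sample, cycled):
        return torch.nn.functional.mse_loss(self.features(sample),
                                            self.features(cycled))
```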
In addition, in the embodiment of the disclosure, in order to increase the similarity between the generated image and the real face, face feature points and five-sense organ sizes are combined during training to obtain the first generator. FIG. 5 is a flow chart of the training method with this added optimization, which further comprises the following steps on the basis of fig. 4:
s501, extracting two-dimensional face feature points and a first five-sense organ size from a two-dimensional face sample.
For example, a plurality of face key points are extracted and used as two-dimensional face feature points. Fig. 6 is a schematic diagram of a face key point. Wherein facial organ characteristics such as eyes, mouth and nose are expressed through key points.
Regarding the five-sense organ sizes, absolute distances among the eyes, nose and mouth can be extracted with emphasis, such as the distance between the upper and lower eyelids, the distance between the left and right corners of an eye, the distance between the left and right wings of the nose, the distance between the left and right corners of the mouth, the distance between the upper and lower lips, and so on. Based on the extracted five-sense organ sizes, the three-dimensional stylized face can be made to fit the shape of the real user's facial features, improving the fidelity of the stylized three-dimensional face and its similarity to the real face.
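A sketch of such size extraction from detected keypoints; the indices follow the common 68-point landmark layout, which is an assumption since the disclosure does not fix a keypoint scheme:

```python
import numpy as np

# Keypoint index pairs in the (assumed) 68-point landmark layout.
FEATURE_PAIRS = {
    "eye_width":   (36, 39),  # outer/inner corner of the left eye
    "nose_width":  (31, 35),  # left/right wing of the nose
    "mouth_width": (48, 54),  # left/right corner of the mouth
    "lip_gap":     (62, 66),  # inner upper/lower lip
}

def five_sense_organ_sizes(landmarks):
    """Euclidean distances between selected keypoint pairs (landmarks: N x 2)."""
    return {name: float(np.linalg.norm(landmarks[i] - landmarks[j]))
            for name, (i, j) in FEATURE_PAIRS.items()}
```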
S502, processing the two-dimensional face feature points and the first five-sense organ size by adopting a fourth generator to obtain stylized two-dimensional face feature points and the second five-sense organ size.
S503, projecting three-dimensional feature points in the stylized three-dimensional face image of the two-dimensional face sample to a two-dimensional space to obtain two-dimensional face feature points to be compared, and determining the third five-sense organ size in the stylized three-dimensional face image of the two-dimensional face sample based on the two-dimensional face feature points to be compared.
It should be noted that the execution timing of step S503 is not limited: it may be executed before step S501 or simultaneously with it, as long as the losses are calculated over the same batch of samples; any such ordering is applicable to the embodiments of the present disclosure.
S504, determining a first loss between the stylized two-dimensional face feature points and the two-dimensional face feature points to be compared and a second loss between the second five-sense organ size and the third five-sense organ size.
Thus, the aforementioned step S405 may be implemented as: adjusting the model parameters of the second generator based on the perceptual loss between the two-dimensional face sample and the image to be compared, the first loss, and the second loss, to obtain the first generator.
Therefore, in the embodiment of the disclosure, the two-dimensional face feature points of the real face can be adopted to train and obtain the first generator, and the similarity of the obtained stylized three-dimensional face and the real face in the two-dimensional image is further improved by combining the five-sense organ size, so that the realistic stylized three-dimensional face which is highly similar to the real face is obtained.
For ease of understanding, the manner in which the first generator is trained is further described below in connection with the architecture of the cycle generation network. The cycle generation network provided in the embodiment of the present disclosure, as shown in fig. 7, mainly includes:
a second generator (A2B) 701 for mapping the a image space to the B image space, thereby extracting stylized texture features in the input image;
a third generator (B2A) 702, which forms a cycle (loop) with the second generator 701 and maps the B image space back to the A image space; in the embodiment of the present disclosure, the third generator 702 takes the stylized three-dimensional face image generated via the second generator 701 as input and generates a face image based on that input;
a fourth generator 703 for extracting the stylized two-dimensional feature points and the second five-sense organ size from the two-dimensional feature points and the first five-sense organ size.
The training of the first generator in connection with fig. 7 mainly includes: mapping the two-dimensional face sample into three-dimensional space using the internal parameters of the shooting device to obtain a three-dimensional face sample. The three-dimensional face sample is input into the second generator (A2B) 701 to obtain stylized texture features; the stylized texture features, the external parameters of the shooting device (including the rotation R and translation T), the stylized expression substrate (Exp) and the three-dimensional face shape substrate (Shape) are then input into the differentiable renderer 704 for rendering, generating a stylized three-dimensional face image; and the stylized three-dimensional face image is input into the third generator (B2A) 702 to obtain the image to be compared, so that the perceptual loss is obtained based on the image to be compared and the two-dimensional face sample.
In addition, in order to improve the similarity between the generated stylized three-dimensional face and the real user's face, in the embodiment of the present disclosure the two-dimensional face feature points and the first five-sense organ size are extracted from the two-dimensional face sample and input into the fourth generator 703 to obtain the stylized two-dimensional face feature points and the second five-sense organ size.
In addition, the stylized three-dimensional face image of the two-dimensional face sample needs to be projected into two-dimensional space to obtain the two-dimensional face feature points to be compared, and the third five-sense organ size is obtained based on these feature points.
So far, the first loss between the stylized two-dimensional face feature points and the two-dimensional face feature points to be compared and the second loss between the second five-sense organ size and the third five-sense organ size can be calculated.
The model parameters of the second generator 701 are jointly adjusted based on the perceptual loss, the first loss and the second loss, so that the second generator 701 takes both the texture and the similarity to the real face into account when extracting stylized texture features.
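A sketch of the joint objective, assuming L1 distances for the feature-point and size terms and unit weights; the disclosure specifies the three losses but not their distance forms or weighting:

```python
import torch
import torch.nn.functional as F

def joint_loss(perceptual_loss, kpts_stylized, kpts_projected,
               sizes_stylized, sizes_projected, w_kpt=1.0, w_size=1.0):
    """Combine the perceptual loss with the first (keypoint) and second (size) losses."""
    first_loss = F.l1_loss(kpts_stylized, kpts_projected)
    second_loss = F.l1_loss(sizes_stylized, sizes_projected)
    return perceptual_loss + w_kpt * first_loss + w_size * second_loss
```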
In addition, in the embodiment of the present disclosure, the second generator 701 and the third generator 702 mainly include a convolution layer module, and the fourth generator 703 mainly adopts a full connection layer.
In summary, in the embodiment of the disclosure, constraining three aspects, namely the texture of the real face, the face feature points and the five-sense organ sizes, guarantees that the generated three-dimensional face is highly similar to the real two-dimensional face. In addition, in the embodiment of the disclosure, the first generator is obtained using a cycle generation network, so the network model trains quickly and efficiently. The whole flow of the embodiment of the disclosure has low algorithmic complexity and low computation cost, and can be deployed on a server or on a terminal device to construct stylized three-dimensional faces.
In addition, in the embodiment of the disclosure, the stylized expression substrate can be obtained by applying triangle-mesh deformation transfer to a known real shape-expression substrate, which saves design cost. Moreover, the stylized three-dimensional face generation method provided by the embodiment of the disclosure has a simple overall flow and imposes no fixed requirement on a specific style; three-dimensional faces of any style can be obtained based on the method, so migration between different styles can be realized, making the method highly extensible.
In summary, the embodiments of the present disclosure focus on efficiently generating, at low cost, avatars that map to real-world persons with high similarity, high aesthetic quality, high controllability, and the like.
Based on the same technical concept, an embodiment of a second aspect of the present disclosure provides a stylized three-dimensional face generating device, as shown in fig. 8, which is a schematic structural diagram of the device, including:
an acquisition module 801, configured to acquire a two-dimensional face image acquired by a capturing device;
a conversion module 802, configured to convert the two-dimensional face image into a three-dimensional face image based on the internal parameters of the photographing device;
a texture generation module 803 for generating stylized texture features based on the three-dimensional face image;
the face generation module 804 is configured to generate a stylized three-dimensional face image based on the stylized texture feature.
In some embodiments, based on fig. 8, as shown in fig. 9, the face generating module 804 includes:
an acquiring unit 901, configured to acquire a real face texture extracted from a two-dimensional face image;
the fusion unit 902 is configured to fuse the real face texture and the stylized texture feature to obtain a fusion feature;
the face generation unit 903 is configured to generate the stylized three-dimensional face image based on the fusion feature.
In some embodiments, the fusion unit 902 is configured to perform weighted summation on the real face texture and the stylized texture feature, to obtain a fusion feature.
In some embodiments, the texture generation module 803 is configured to input the three-dimensional face image into the first generator, and obtain the stylized texture feature output by the first generator.
In some embodiments, the apparatus further comprises a training module 904 for training to obtain the first generator by a method comprising:
converting the two-dimensional face sample into a three-dimensional face sample based on the internal parameters of the shooting device;
extracting stylized texture features of the three-dimensional face sample based on the second generator;
based on the stylized texture features of the three-dimensional face sample, a stylized three-dimensional face image of the two-dimensional face sample is obtained;
processing the stylized three-dimensional face image of the two-dimensional face sample based on a third generator to generate an image to be compared;
determining a perceptual loss between the two-dimensional face sample and the image to be compared; and
adjusting the model parameters of the second generator based on the perceptual loss, to obtain the first generator.
In some embodiments, training module 904 is further configured to:
extracting two-dimensional face feature points and a first five-sense organ size from a two-dimensional face sample;
processing the two-dimensional face feature points and the first five-sense organ size by adopting a fourth generator to obtain stylized two-dimensional face feature points and a second five-sense organ size;
projecting three-dimensional feature points in a stylized three-dimensional face image of a two-dimensional face sample to a two-dimensional space to obtain two-dimensional face feature points to be compared, and determining the third five-sense organ size in the stylized three-dimensional face image of the two-dimensional face sample based on the two-dimensional face feature points to be compared;
determining a first loss between the stylized two-dimensional face feature points and the two-dimensional face feature points to be compared and a second loss between the second five-sense organ size and the third five-sense organ size;
the model parameters of the second generator are adjusted based on the perceived loss, the first loss, and the second loss to obtain a first generator.
In some embodiments, the face generating module 804 is configured to generate a stylized three-dimensional face image based on the stylized texture feature, the external parameter of the photographing device, the stylized expression substrate, and the three-dimensional face shape substrate, where the stylized expression substrate is obtained by performing deformation transfer on a known real shape-expression substrate based on a triangle-mesh deformation transfer method.
In some embodiments, the conversion module 802 is configured to:
performing face detection on the two-dimensional face image to obtain a face area;
cropping the two-dimensional face image so that the face area is centered in the image, to obtain a cropped two-dimensional face image;
and converting the cropped two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device.
In summary, in the embodiment of the disclosure, a two-dimensional face image is converted into a three-dimensional face image based on the internal parameters of a shooting device, realizing the conversion from two dimensions to three dimensions; stylized texture features are then generated based on the three-dimensional face, and a stylized three-dimensional face image is obtained accordingly. The whole processing flow is completed automatically without the participation of multiple persons, so the cost of the stylized three-dimensional face generation method is low. Moreover, the overall flow of the embodiments of the disclosure is concise and places a low demand on computing power. Therefore, the embodiments of the disclosure realize a low-cost, high-efficiency three-dimensional face stylization scheme.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 10 shows a schematic block diagram of an example electronic device 1000 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the electronic device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the electronic apparatus 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Various components in the electronic device 1000 are connected to the I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and communication unit 1009 such as a network card, modem, wireless communication transceiver, etc. Communication unit 1009 allows electronic device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1001 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the stylized three-dimensional face generation method described above. In some embodiments, the stylized three-dimensional face generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the stylized three-dimensional face generation method may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the stylized three-dimensional face generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (12)
1. A stylized three-dimensional face generation method comprises the following steps:
acquiring a two-dimensional face image acquired by a shooting device;
converting the two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device;
inputting the three-dimensional face image into a first generator to obtain stylized texture features output by the first generator;
the first generator is trained and obtained by the following method, comprising:
extracting stylized texture features of a three-dimensional face sample of a two-dimensional face sample based on a second generator to obtain a stylized three-dimensional face image of the two-dimensional face sample;
forming a cycle network based on a third generator and the second generator, so as to process the stylized three-dimensional face image of the two-dimensional face sample by using the third generator and generate an image to be compared;
extracting two-dimensional face feature points and a first five-sense organ size from the two-dimensional face sample;
processing the two-dimensional face feature points and the first five-sense organ size based on a fourth generator to obtain stylized two-dimensional face feature points and a second five-sense organ size;
projecting the three-dimensional feature points in the stylized three-dimensional face image to a two-dimensional space to obtain two-dimensional face feature points to be compared, so as to determine a third five-sense organ size in the stylized three-dimensional face image;
determining a first loss between the stylized two-dimensional face feature points and the two-dimensional face feature points to be compared and a second loss between the second five-sense organ size and the third five-sense organ size;
adjusting model parameters of the second generator based on the perceptual loss between the two-dimensional face sample and the image to be compared, the first loss and the second loss, to obtain the first generator;
generating a stylized three-dimensional face image based on the stylized texture features, the external parameters of the photographing device, the stylized expression substrate and the three-dimensional face shape substrate.
2. The method of claim 1, the generating a stylized three-dimensional face image comprising:
acquiring a real face texture extracted from the two-dimensional face image;
fusing the real face texture and the stylized texture characteristics to obtain fusion characteristics;
and generating the stylized three-dimensional face image based on the fusion characteristics.
3. The method of claim 2, wherein the fusing the real face texture and the stylized texture feature to obtain a fused feature comprises:
and carrying out weighted summation on the real face texture and the stylized texture characteristics to obtain the fusion characteristics.
4. A method according to any one of claims 1-3, wherein the stylized expression substrate is obtained by deformation transfer of a known real shape-expression substrate based on a triangle-mesh deformation transfer method.
5. The method of claim 1, the converting the two-dimensional face image into a three-dimensional face image based on internal parameters of the camera, comprising:
performing face detection on the two-dimensional face image to obtain a face area;
cropping the two-dimensional face image so that the face area is centered in the image, to obtain a cropped two-dimensional face image;
and converting the cropped two-dimensional face image into the three-dimensional face image based on the internal parameters of the shooting device.
6. A stylized three-dimensional face generation device, comprising:
the acquisition module is used for acquiring the two-dimensional face image acquired by the shooting device;
the conversion module is used for converting the two-dimensional face image into a three-dimensional face image based on the internal parameters of the shooting device;
the texture generation module is used for inputting the three-dimensional face image into a first generator to obtain stylized texture characteristics output by the first generator;
the training module is used for training and obtaining the first generator by the following method, and comprises the following steps:
extracting stylized texture features of a three-dimensional face sample of a two-dimensional face sample based on a second generator to obtain a stylized three-dimensional face image of the two-dimensional face sample;
forming a cycle network based on a third generator and the second generator, so as to process the stylized three-dimensional face image of the two-dimensional face sample by using the third generator and generate an image to be compared;
extracting two-dimensional face feature points and a first five-sense organ size from the two-dimensional face sample;
processing the two-dimensional face feature points and the first five-sense organ size based on a fourth generator to obtain stylized two-dimensional face feature points and a second five-sense organ size;
projecting the three-dimensional feature points in the stylized three-dimensional face image to a two-dimensional space to obtain two-dimensional face feature points to be compared, so as to determine a third five-sense organ size in the stylized three-dimensional face image;
determining a first loss between the stylized two-dimensional face feature points and the two-dimensional face feature points to be compared and a second loss between the second five-sense organ size and the third five-sense organ size;
adjusting model parameters of the second generator based on the perceptual loss between the two-dimensional face sample and the image to be compared, the first loss and the second loss, to obtain the first generator;
and the face generation module is used for generating a stylized three-dimensional face image based on the stylized texture features, the external parameters of the shooting device, the stylized expression substrate and the three-dimensional face shape substrate.
7. The apparatus of claim 6, wherein the face generation module comprises:
the acquisition unit is used for acquiring the real face texture extracted from the two-dimensional face image;
the fusion unit is used for fusing the real face texture and the stylized texture characteristics to obtain fusion characteristics;
and the human face generating unit is used for generating the stylized three-dimensional human face image based on the fusion characteristics.
8. The apparatus of claim 7, wherein the fusion unit is configured to weight sum the real face texture and the stylized texture feature to obtain the fusion feature.
9. The apparatus of any of claims 6-8, wherein the stylized expression substrate is obtained by deformation transfer of a known real shape-expression substrate based on a triangle-mesh deformation transfer method.
10. The apparatus of claim 6, the conversion module to:
performing face detection on the two-dimensional face image to obtain a face area;
cropping the two-dimensional face image so that the face area is centered in the image, to obtain a cropped two-dimensional face image;
and converting the cropped two-dimensional face image into the three-dimensional face image based on the internal parameters of the shooting device.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210766709.7A CN115147578B (en) | 2022-06-30 | 2022-06-30 | Stylized three-dimensional face generation method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115147578A (en) | 2022-10-04
CN115147578B (en) | 2023-10-27
Family
ID=83411124
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210766709.7A CN115147578B (en) | 2022-06-30 | 2022-06-30 | Stylized three-dimensional face generation method and device, electronic equipment and storage medium
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115147578B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011075082A1 (en) * | 2009-12-14 | 2011-06-23 | Agency For Science, Technology And Research | Method and system for single view image 3 d face synthesis |
WO2016110005A1 (en) * | 2015-01-07 | 2016-07-14 | 深圳市唯特视科技有限公司 | Gray level and depth information based multi-layer fusion multi-modal face recognition device and method |
CN109978930A (en) * | 2019-03-27 | 2019-07-05 | 杭州相芯科技有限公司 | A kind of stylized human face three-dimensional model automatic generation method based on single image |
CN110163054A (en) * | 2018-08-03 | 2019-08-23 | 腾讯科技(深圳)有限公司 | A kind of face three-dimensional image generating method and device |
WO2020042345A1 (en) * | 2018-08-28 | 2020-03-05 | 初速度(苏州)科技有限公司 | Method and system for acquiring line-of-sight direction of human eyes by means of single camera |
CN112818733A (en) * | 2020-08-24 | 2021-05-18 | 腾讯科技(深圳)有限公司 | Information processing method, device, storage medium and terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541963B (en) * | 2020-11-09 | 2023-12-26 | 北京百度网讯科技有限公司 | Three-dimensional avatar generation method, three-dimensional avatar generation device, electronic equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
Gao Xiang et al. Real-time face expression migration method combining 3DMM with GAN. Computer Applications and Software. 2020, Vol. 37 (No. 4), 119-126. *
Also Published As
Publication number | Publication date |
---|---|
CN115147578A (en) | 2022-10-04 |
Similar Documents
Publication | Title
---|---
CN113643412B (en) | Virtual image generation method and device, electronic equipment and storage medium
CN113327278B (en) | Three-dimensional face reconstruction method, device, equipment and storage medium
CN115345980B (en) | Generation method and device of personalized texture map
CN112819947A (en) | Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN114820905B (en) | Virtual image generation method and device, electronic equipment and readable storage medium
JP7387202B2 (en) | 3D face model generation method, apparatus, computer device and computer program
CN115049799B (en) | Method and device for generating 3D model and virtual image
JP2024004444A (en) | Three-dimensional face reconstruction model training, three-dimensional face image generation method, and device
CN113129450A (en) | Virtual fitting method, device, electronic equipment and medium
CN113362263A (en) | Method, apparatus, medium, and program product for changing the image of a virtual idol
CN114549710A (en) | Virtual image generation method and device, electronic equipment and storage medium
CN112784765A (en) | Method, apparatus, device and storage medium for recognizing motion
CN113313631B (en) | Image rendering method and device
CN113808249B (en) | Image processing method, device, equipment and computer storage medium
CN112884889B (en) | Model training method, model training device, human head reconstruction method, human head reconstruction device, human head reconstruction equipment and storage medium
CN114140320A (en) | Image migration method and training method and device of image migration model
CN115147578B (en) | Stylized three-dimensional face generation method and device, electronic equipment and storage medium
CN116843807B (en) | Virtual image generation method, virtual image model training method, virtual image generation device, virtual image model training device and electronic equipment
CN115965735B (en) | Texture map generation method and device
CN114092616B (en) | Rendering method, rendering device, electronic equipment and storage medium
CN113327311B (en) | Virtual character-based display method, device, equipment and storage medium
CN115082298A (en) | Image generation method, image generation device, electronic device, and storage medium
CN116342782B (en) | Method and apparatus for generating avatar rendering model
CN115953553B (en) | Avatar generation method, apparatus, electronic device, and storage medium
CN116188640B (en) | Three-dimensional virtual image generation method, device, equipment and medium
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant