CN112541966B - Face replacement method based on reconstruction and generation network - Google Patents
Face replacement method based on reconstruction and generation network
- Publication number
- CN112541966B CN112541966B CN202011425921.4A CN202011425921A CN112541966B CN 112541966 B CN112541966 B CN 112541966B CN 202011425921 A CN202011425921 A CN 202011425921A CN 112541966 B CN112541966 B CN 112541966B
- Authority
- CN
- China
- Prior art keywords
- face
- network
- image
- generator
- reconstruction
- Prior art date: 2020-12-09
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06N 3/045 — Combinations of networks
- G06N 3/08 — Learning methods
- G06T 5/70 — Denoising; Smoothing
- G06T 5/73 — Deblurring; Sharpening
- G06T 2207/20081 — Training; Learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/30201 — Face
Abstract
The invention discloses a face replacement method based on a reconstruction and generation network, comprising the following steps: the source face image and the target face image are input into a position mapping network to reconstruct a 3D face, and the pose of the source face is adjusted to the pose of the target face; the mouth details of the person are generated with a mouth detail generation network; and the overall artifacts and noise of the face are removed with a global sharpening generation network, yielding a face replacement image that is hard for both the human eye and discriminative algorithms to identify as fake. The invention can generate a realistic face replacement result from only a single pair of source and target face images, improves the efficiency of face replacement, is suitable for generating large numbers of face replacement results with different identities, and can provide a large number of adversarial samples for face forgery detection algorithms.
Description
Technical Field
The invention belongs to the field of automatic control, and particularly relates to a face replacement method based on a reconstruction and generation network.
Background
Face replacement (face swap) is a popular research topic in face editing. This document is primarily concerned with high-quality face replacement for identity replacement. High-quality face replacement refers to face replacement images that are hard to identify as fake by either the human eye or a discriminative computer algorithm; face replacement for identity replacement means that the identity of the person in the source portrait is transferred to the person in the target portrait while the other characteristics of the target portrait remain unchanged.
The mainstream face replacement methods at present are DeepFakes-type methods, which follow two main ideas. The first is face replacement based on convolutional networks: a deep learning approach in which a series of convolutional networks is designed to solve the various problems encountered during replacement, such as pose, skin color and illumination. The quality of the result, however, depends on how well the tasks of these networks are designed, and poor design leaves obvious forgery traces. The second DeepFakes approach is face replacement based on a self-encoding (autoencoder) model: two encoder-decoder pairs are first trained in a self-supervised manner on a source portrait set and a target portrait set respectively, and the two decoders are then swapped at test time. Its performance is greatly improved over the first approach, especially after the introduction of a GAN as in the DeepFaceLab method, which can generate more lifelike fake results. However, some experimental samples show poor identity preservation, and by design DeepFakes-type methods need a large number of source and target portraits for training, with separate training for each source identity.
Another face replacement method is based on 3D face reconstruction; however, because it lacks processing of the texture map after 3D reconstruction, its forgery traces are still quite visible.
In short, DeepFakes-type methods are inefficient at generating face replacement results, while 3D face reconstruction methods lack effective processing of the texture map.
Disclosure of Invention
The purpose of the invention: aiming at the problems in the prior art, a face replacement method based on a reconstruction and generation network is provided that is high-quality and efficient, and is suitable for generating large-scale adversarial sample sets.
Technical scheme: to solve the above technical problems, the invention provides a face replacement method based on a reconstruction and generation network, comprising the following steps:
(1) Input the source face image and the target face image into a position mapping network to reconstruct 3D faces.
(2) Adjust the pose of the source face from step (1) to the pose of the target face.
(3) Generate the mouth details of the person on the result of step (2) using a mouth detail generation network.
(4) Remove the overall artifacts and noise of the face on the result of step (3) using a global sharpening generation network, thereby obtaining a face replacement image result that is hard for both the human eye and discriminative algorithms to identify.
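The four steps form a single feed-forward pipeline. The sketch below is a minimal Python illustration of that flow; all module and method names (prnet, mouth_gan, sharpen_gan, align_pose, render, mouth_features) are hypothetical placeholders for the patent's components, not identifiers defined by it.

```python
# Minimal sketch of the reconstruction-generation pipeline (steps 1-4).
# Every name below is a hypothetical placeholder, not from the patent.
import numpy as np


def face_swap(source_img: np.ndarray, target_img: np.ndarray,
              prnet, mouth_gan, sharpen_gan) -> np.ndarray:
    # (1) Reconstruct 3D faces with the position mapping network.
    src_mesh = prnet.reconstruct(source_img)
    tgt_mesh = prnet.reconstruct(target_img)

    # (2) Re-pose the source face to the target's pose and render it
    #     into the target frame.
    swapped = src_mesh.align_pose(tgt_mesh).render(background=target_img)

    # (3) Mouth-GAN regenerates the mouth details lost by 3D
    #     reconstruction, conditioned on the target's mouth pose features.
    swapped = mouth_gan.generate(swapped, mouth_pose=tgt_mesh.mouth_features())

    # (4) Sharpen-GAN removes the remaining global artifacts and noise.
    return sharpen_gan.generate(swapped)
```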
Further, in the face replacement method based on the reconstruction and generation network, the mouth detail generation network of step (3) is trained as follows:
(3.1) Establish a least-squares adversarial generation network and extract the discrimination error.
(3.2) Establish a VGG network to extract the reconstruction error.
(3.3) Extract the mouth pose features of the face image output in step (2).
(3.4) During training, the mouth pose features of (3.3) are input to the generator; the output of the generator and the mouth image of the target face image are input to the discriminator, and likewise to the VGG structure. Train the network with the discrimination error of (3.1) and the reconstruction error of (3.2); the loss function is as follows:
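(The formula image from the original publication is not reproduced in this text. The following is a plausible reconstruction, assuming the standard least-squares GAN objective combined with an L1 VGG content loss; it is an assumed form consistent with the symbol definitions below, not the patent's verbatim formula.)

$$L_D = \tfrac{1}{2}\,\mathbb{E}_y\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_x\big[D(G(x))^2\big]$$

$$L_G = \gamma\,\tfrac{1}{2}\,\mathbb{E}_x\big[(D(G(x))-1)^2\big] + \big\lVert V(G(x)) - V(y)\big\rVert_1$$

with x the mouth pose features and y the mouth image of the target face.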
where L_D is the discriminator loss and L_G is the generator loss; the V(x) function represents the output of x passed through the VGG structure, and the loss it produces is called the content loss; the γ parameter represents the coefficient of the adversarial loss.
Further, in the face replacement method based on the reconstruction and generation network, the global sharpening generation network of step (4) is trained as follows:
(4.1) Establish a least-squares adversarial generation network and extract the discrimination error.
(4.2) Establish a VGG network to extract the reconstruction error.
(4.3) Establish a translator network that translates a face image into the corresponding semantic image, for extracting the semantic error.
(4.4) Perform 3D face reconstruction on an original image and apply the operations of steps (1)-(3) to it, obtaining a re-rendered version of the original image.
(4.5) During training, the generator input is the re-rendered image from (4.4); the discriminator input is the output of the generator and the original image; the inputs of the translator network are the output of the generator and the original image; and the inputs of the VGG network are the output of the generator and the original image. Train the network with the discrimination error of (4.1), the reconstruction error of (4.2) and the semantic error of (4.3); the loss function is as follows:
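(Again the formula image is not reproduced here. The following is a plausible reconstruction, assuming the same least-squares adversarial term plus VGG content and translator semantic terms; an assumed form, with y the original image and P(y) its re-rendered version through PR-Net.)

$$L_D = \tfrac{1}{2}\,\mathbb{E}_y\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_y\big[D(G(P(y)))^2\big]$$

$$L_G = \alpha\,\tfrac{1}{2}\,\mathbb{E}_y\big[(D(G(P(y)))-1)^2\big] + \beta\,\big\lVert V(G(P(y))) - V(y)\big\rVert_1 + \big\lVert T(G(P(y))) - T(y)\big\rVert_1$$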
where L_D is the discriminator loss and L_G is the generator loss; the V(x) function represents the content loss of x, the P(x) function represents the result of passing x through PR-Net, and the T(x) function represents the translator output, whose loss is referred to here as the semantic loss. The coefficients α and β are the coefficients of the adversarial loss and the reconstruction loss, respectively.
The method thus provides an efficient, high-quality approach based on 3D face reconstruction: the position mapping network of the 3D face reconstruction algorithm is organically combined with generative adversarial networks for texture-map detail processing, so the method is called a Reconstruction-Generation Network (EGNet). The new structure can generate face replacement results of quality comparable to DeepFakes-type methods while retaining the efficiency of the 3D face reconstruction approach: only one pair of source and target face images is needed to generate a high-quality face replacement result, with no retraining.
Compared with the prior art, the invention has the following advantages:
1. DeepFakes face replacement methods such as DeepFaceLab must be retrained for each different source face image; the present method needs no retraining for different source face images, which greatly improves efficiency and makes it more suitable for generating diverse, large-scale data.
2. Face replacement methods based on 3D face reconstruction, such as that of Nirkin et al., handle the texture map poorly; the reconstruction-generation network of the present method inherits the efficiency of 3D face reconstruction while overcoming its weakness in texture-map processing, in particular the loss of mouth details.
3. Prior methods mainly aim to satisfy the human eye; the present method also considers algorithmic identification, so its results are difficult for mainstream face replacement detection methods to identify.
Drawings
FIG. 1 is a general flow diagram of the reconstruction-generation network in an exemplary embodiment;
FIG. 2 is a schematic diagram of the mouth detail generation network in an embodiment;
FIG. 3 is a schematic structural diagram of the global sharpening generation network in an embodiment.
Detailed Description
The invention is further elucidated below in connection with the drawings and the detailed description. The described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without inventive effort fall within the scope of the present invention.
As shown in FIGS. 1-3, the face replacement method based on the reconstruction and generation network according to the present invention has the advantages of requiring no repeated training, having a low sample-size requirement, and producing high-quality results.
The reconstruction-generation network mode comprises the following steps:
(1) Input the source face image and the target face image into the PR-Net (PRN) position mapping network to reconstruct the 3D faces, and adjust the pose of the source face model to the pose of the target face.
(2) Generate the mouth details of the person on the result of step (1) using the mouth detail generation network (Mouth-GAN), which solves the loss of mouth details in the 3D face reconstruction method. During training, the generator input is the mouth pose features; the discriminator input is the output of the generator and the mouth image of the target face image, as is the input of the VGG structure. Train the network with the discrimination error and the reconstruction error described in (3.1) and (3.2) above; the loss function is as follows:
where L_D is the discriminator loss and L_G is the generator loss; the V(x) function represents the output of x through the VGG structure (the loss it produces is the content loss), and γ is the coefficient of the adversarial loss.
(3) Remove the overall artifacts and noise of the face on the result of step (2) using the global sharpening generation network (Sharpen-GAN), thereby obtaining a face replacement image result that is hard for both the human eye and discriminative algorithms to identify. During training, the generator input is the re-rendered original image; the discriminator input is the output of the generator and the original image, as are the inputs of the translator network and the VGG network. Train the network with a discrimination error, a reconstruction error and a semantic error; the loss function is as follows:
where L_D is the discriminator loss and L_G is the generator loss; the V(x) function represents the content loss of x, the P(x) function represents the result of passing x through PR-Net, and the T(x) function represents the translator output, whose loss is referred to here as the semantic loss. The coefficients α and β are the coefficients of the adversarial loss and the reconstruction loss, respectively.
After the models of all modules are trained, only the generators are needed at test time. The test indexes are: Structural Similarity (SSIM), Pose Error (PE), Skin color Error (SE) and Identity Distance (ID). These four indexes, which measure the quality of a face replacement image, are used to comprehensively evaluate the forgery quality of fake images. SSIM (structural similarity) is a common index for measuring image quality; it evaluates whether the fake image exhibits distortion and noise compared with the target portrait. The other three are evaluation indexes proposed herein: the pose error evaluates whether the poses of the foreground and background figures in the fake image are consistent; the skin color error evaluates whether the skin colors of the foreground and background are consistent and whether obvious forgery boundaries exist; and the identity distance evaluates whether the fake image successfully migrates the identity of the source portrait, achieving the goal of identity replacement. These metrics are used in the subsequent experiments to measure the performance of the face replacement algorithms.
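As a concrete illustration of the first index, SSIM can be computed with an off-the-shelf implementation; the snippet below uses OpenCV and scikit-image, a library choice made for this sketch rather than one named in the patent, and the file names are placeholders.

```python
# Hedged example: SSIM between a face replacement result and the target
# portrait, using scikit-image. File names are placeholders.
import cv2
from skimage.metrics import structural_similarity

fake = cv2.imread("fake.png")      # face replacement result
target = cv2.imread("target.png")  # target portrait (same size assumed)

score = structural_similarity(
    cv2.cvtColor(fake, cv2.COLOR_BGR2GRAY),
    cv2.cvtColor(target, cv2.COLOR_BGR2GRAY),
)
print(f"SSIM = {score:.4f}")  # closer to 1 means less distortion and noise
```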
Pose Error (PE): if the pose of the figure in the face replacement result differs from the pose of the target person, visual flaws may result. A pose error is therefore proposed to measure the result of the algorithm's pose adjustment. The distance D between two images, defined as the pose error, is shown below.
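(The formula image is not reproduced in this text; a plausible form, assuming a γ-weighted combination of the horizontal and vertical pose differences defined below, is:)

$$D(I_1, I_2) = \gamma\,\Delta_h(I_1, I_2) + (1-\gamma)\,\Delta_v(I_1, I_2)$$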
where I_1 and I_2 are the two face images to be compared, γ is a coefficient controlling the weight of the horizontal versus the vertical direction, and Δ(x) denotes computing the difference between the two images on the x index.
Skin color Error (SE): if the skin tones of the source and target figures differ and are left unprocessed, poor results are produced. A large number of samples indicates that the skin color difference is mainly concentrated in the forehead region. The sampled face pixels F are compared with the forehead pixels T to obtain the skin tone error θ, as shown below.
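(The formula image is not reproduced here; a plausible form, given the means and normalization described below, is:)

$$\theta = \mathrm{norm}\big(\lvert \bar{F} - \bar{T} \rvert\big)$$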
where $\bar{F}$ and $\bar{T}$ denote the means of the face pixels F and the forehead pixels T respectively, and the norm function normalizes the result.
Identity Distance (ID): since the fundamental purpose of the face replacement methods described herein is the replacement of identity, a method is considered ineffective if it cannot effectively replace a person's identity. An index is therefore needed to measure identity retention, namely the identity distance between the source portrait and the replacement result. Inspired by the FaceNet face recognition method, the measurement performs regression on each face image through a deep network, mapping the face image into a Euclidean space that satisfies the following properties:
- If the identities of the two portraits are the same, their corresponding points are as close together as possible;
- If the identities of the two portraits differ, their corresponding points are as far apart as possible.
The objective function of the deep network is shown below.
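(The formula image is not reproduced here. A plausible form, assuming a contrastive objective over an embedding f_θ that pulls same-identity pairs together and pushes different-identity pairs apart, as the two properties above require, is:)

$$\mathcal{L}(\theta) = \mathbb{E}_{d \sim D}\,\mathbb{E}_{x,\,y \sim d}\big[\lVert f_\theta(x) - f_\theta(y) \rVert_2^2\big] \;-\; \gamma\,\mathbb{E}_{d_1 \ne d_2}\,\mathbb{E}_{x \sim d_1,\,y \sim d_2}\big[\lVert f_\theta(x) - f_\theta(y) \rVert_2^2\big]$$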
where θ is the target parameter of the deep network, D is the distribution formed by all face images, d denotes the distribution of the face images of a particular identity, x and y are face image samples, and γ is a parameter adjusting the emphasis between the two terms. The trained deep network can measure the identity distance between a face replacement result and the source portrait. In the experiments, the network is a model trained on VGGFace2, and a threshold of 1.1 was selected through extensive testing: if the identity distance exceeds 1.1, the face replacement algorithm is considered to perform poorly; if it is below 1.1, the algorithm is considered to perform well.
A comparison of the present method with mainstream methods is shown in Table 1.
Table 1. Comparison of the performance indexes of the method herein with the DeepFake method and the method of Nirkin et al. Indexes marked "↑" are better when higher; indexes marked "↓" are better when lower.
From the above results, the method has significant advantages over the mainstream methods on the four face replacement indexes. In particular for SSIM, which measures image quality, the method's global sharpening generation network brings a larger improvement in image quality. The method also inherits the pose accuracy, identity retention and replacement efficiency of the 3D face reconstruction approach, while the quality of the generated images is comparable to DeepFakes-type methods. The method therefore has clear application value and promise.
Claims (1)
1. A face replacement method based on a reconstruction and generation network, characterized by comprising the following steps:
(1) inputting the source face image and the target face image into a PR-Net position mapping network to reconstruct 3D faces;
(2) adjusting the pose of the source face model from step (1) to the pose of the target face;
(3) generating the mouth details of the person on the result of step (2) using a mouth detail generation network;
wherein the mouth detail generation network and the generation of the mouth details are specifically as follows:
(3.1) establishing a least-squares adversarial generation network and extracting the discrimination error;
(3.2) establishing a VGG network to extract the reconstruction error;
(3.3) extracting the mouth pose features of the target face pose output in step (2);
(3.4) during training, inputting the mouth pose features of (3.3) to the generator, inputting the output of the generator and the mouth image of the target face image to the discriminator, and inputting the output of the generator and the mouth image of the target face image to the VGG structure; training the network with the discrimination error of (3.1) and the reconstruction error of (3.2), wherein the loss function is as follows:
where L_D is the discriminator loss and L_G is the generator loss, the V(x) function represents the output of x through the VGG structure, the loss it produces is called the content loss, and the γ parameter represents the coefficient of the adversarial loss;
(4) removing the overall artifacts and noise of the face on the result of step (3) using a global sharpening generation network, thereby obtaining the face replacement image result;
wherein the global sharpening generation network and its implementation are as follows:
(4.1) establishing a least-squares adversarial generation network and extracting the discrimination error;
(4.2) establishing a VGG network to extract the reconstruction error;
(4.3) establishing a translator network that translates a face image into a corresponding semantic image, for extracting the semantic error;
(4.4) performing 3D face reconstruction on an original image to obtain a re-rendered original image;
(4.5) during training, the generator input is the image obtained in (4.4), the discriminator input is the output of the generator and the original image, the input of the translator network is the output of the generator and the original image, and the input of the VGG network is the output of the generator and the original image; training the network with the discrimination error of (4.1), the reconstruction error of (4.2) and the semantic error of (4.3), wherein the loss function is as follows:
where L_D is the discriminator loss and L_G is the generator loss, the V(x) function represents the content loss of x, the P(x) function represents the result of passing x through PR-Net, and the T(x) function represents the translator output, whose loss is referred to here as the semantic loss; the coefficients α and β are the coefficients of the adversarial loss and the reconstruction loss, respectively.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011425921.4A (CN112541966B) | 2020-12-09 | 2020-12-09 | Face replacement method based on reconstruction and generation network |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112541966A | 2021-03-23 |
| CN112541966B | 2024-08-06 |
Family
ID=75019544

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011425921.4A (CN112541966B, Active) | Face replacement method based on reconstruction and generation network | 2020-12-09 | 2020-12-09 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN112541966B (en) |
Families Citing this family (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN112734634B | 2021-03-30 | 2021-07-27 | Institute of Automation, Chinese Academy of Sciences | Face changing method and device, electronic equipment and storage medium |
| CN113240575B | 2021-05-12 | 2024-05-21 | University of Science and Technology of China | Face fake video effect enhancement method |
Citations (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN110349232A | 2019-06-17 | 2019-10-18 | CloudMinds (Beijing) Technologies Co., Ltd. | Image generation method, device, storage medium and electronic equipment |
Family Cites Families (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN110706157B | 2019-09-18 | 2022-09-30 | University of Science and Technology of China | Face super-resolution reconstruction method based on an identity-prior generative adversarial network |
| CN111861872B | 2020-07-20 | 2024-07-16 | Guangzhou Baiguoyuan Information Technology Co., Ltd. | Image face changing method, video face changing method, device, equipment and storage medium |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |