
CN112818782B - Generalized silence living body detection method based on medium sensing - Google Patents

Generalized silence living body detection method based on medium sensing

Info

Publication number
CN112818782B
Authority
CN
China
Prior art keywords
face
image
living body
body detection
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110085636.0A
Other languages
Chinese (zh)
Other versions
CN112818782A (en)
Inventor
罗杨
韦仕才
骆春波
李智
曹英杰
彭涛
刘翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110085636.0A priority Critical patent/CN112818782B/en
Publication of CN112818782A publication Critical patent/CN112818782A/en
Application granted granted Critical
Publication of CN112818782B publication Critical patent/CN112818782B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a generalized silent living body detection method based on medium sensing, and relates to the technical field of living body detection. The method comprises the steps of: acquiring a single frame of face image to be detected; extracting a face brightness image from the face image; obtaining a face reflection image from the face image and the face brightness image; constructing a living body detection neural network model and training it with an improved center loss function; and performing living body detection on the face reflection image with the trained living body detection neural network model. By extracting generalized face medium characteristics, the living body detection model outperforms other single-frame RGB silent living body detection methods in both performance and time complexity, preserves the user experience, reduces peripheral requirements, and improves detection accuracy.

Description

Generalized silence living body detection method based on medium sensing
Technical Field
The invention relates to the technical field of living body detection, and in particular to a generalized silent living body detection method based on medium sensing.
Background
In recent years, as face recognition technology has been applied more and more widely to scenes such as mobile payment and video-witnessed account opening, the related security has become an important research problem, in particular judging whether the face image to be recognized comes from a living body or from a photo or a recorded video.
Existing face living body judgment techniques can be roughly divided into two types: action-based living body detection and silent living body detection. Action-based living body detection mainly refers to liveness judgment based on actions, requiring the user to complete specified facial actions such as blinking and opening the mouth in front of the camera. However, on the one hand the user experience is poor because the user's cooperation is required, and on the other hand these facial actions can easily be reproduced by face synthesis software, so the security level is not high enough; action-based methods are therefore gradually being replaced by silent living body detection.
Silent living body detection can be divided into three categories according to the data used for the judgment: detection based on a single-frame RGB image, detection based on multi-frame images, and detection based on multiple modalities. Single-frame RGB methods use only one RGB face image and extract features such as texture to judge the authenticity of the face; they are simple and efficient, but because a static RGB face image is very easy to obtain and the texture of real and spoofed faces is strongly affected by the environment and the spoofing medium, such methods are easy to crack and not robust. Researchers subsequently proposed performing silent living body detection with multi-frame images, introducing additional information, such as the subtle movement of the face, to detect spoofing attacks. However, this approach has a serious drawback: when an attacker replays a video of a real person, the face in the video also exhibits slight movement, and multi-frame-based detection may fail. Other researchers therefore proposed improving the detection accuracy by introducing data of other modalities, such as depth maps and infrared images. This approach has two shortcomings: it is not effective against 3D living body attacks, and it usually requires a special camera to acquire the depth and infrared images, and the high cost of such cameras makes them difficult to popularize in practical application scenarios.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a generalized silent living body detection method based on medium sensing.
In order to achieve the purpose of the invention, the invention adopts the following technical scheme:
A generalized silent living body detection method based on medium sensing comprises the following steps:
s1, acquiring a single frame of face image to be detected;
s2, extracting a face brightness image according to the face image obtained in the step S1;
s3, obtaining a face reflection image according to the face image obtained in the step S1 and the face brightness image extracted in the step S2;
s4, constructing a living body detection neural network model, and training the living body detection neural network model by adopting an improved central loss function;
and S5, performing living body detection on the human face reflection image obtained in the step S3 by adopting the trained living body detection neural network model.
The beneficial effects of this scheme are as follows: by extracting the reflection image that represents the material characteristics of the face medium, the influence of ambient illumination is eliminated, and the living body detection problem can be solved without multi-frame images or data of other modalities, avoiding expensive peripherals such as 3D cameras and infrared cameras, so that the system can be conveniently deployed in various scenarios; in addition, the living body detection neural network model is trained with an improved center loss function, so that the network learns generalizable medium characteristics, and the trained model outperforms other single-frame RGB silent living body detection methods in both performance and time complexity, preserving the user experience, reducing peripheral requirements and improving detection accuracy.
Further, the step S2 is specifically:
estimating the face image obtained in the step S1 by using low-pass filters of multiple scales to obtain multiple estimation results;
and averaging the plurality of estimation results to obtain a final face brightness image.
Further, the face luminance image is represented as:
log L(x,y) = Σ_{k=1}^{K} w_k · log( F_k(x,y) * I(x,y) )
where K is the number of low-pass filters, w_k is the weighting coefficient of the k-th filter, F_k is the low-pass filter operator of the k-th low-pass filter, and * denotes convolution.
Further, the face reflection image in the step S3 is expressed as:
R(x,y) = exp( log I(x,y) - log L(x,y) )
wherein I(x,y) is the face image and L(x,y) is the face brightness image.
Further, the step S4 specifically includes the following sub-steps:
s41, constructing a live body detection ResNet neural network model;
s42, extracting generalization medium characteristics from the known human face reflection image training set sample by using a living body detection ResNet neural network model;
and S43, carrying out iterative optimization on the in-vivo detection neural network model parameters by adopting the improved central loss function.
Further, the parameters of the last three layers in the living body detection ResNet neural network model are fixed as set parameters, and a stochastic gradient descent optimizer with momentum is adopted to learn the parameters of the first layer.
Further, the improved center loss function is expressed as:
L_c = [ (1/m1) · Σ_{i=1}^{m1} ||x_i - c||² ] / [ (1/m2) · Σ_{j=1}^{m2} ||x_j - c||² ]
wherein x_i is the feature vector corresponding to the i-th real face reflection image training sample, x_j is the feature vector corresponding to the j-th spoofed face reflection image training sample, c is the cluster vector of the real face reflection image feature vectors, m1 is the number of real face reflection image training samples, and m2 is the number of spoofed face reflection image training samples.
Further, the live body detection ResNet neural network model updates the clustering vector in the training process, specifically:
randomly initializing a cluster vector as an initial cluster vector;
calculating Vc_t according to the feature vectors corresponding to the real face reflection image training samples;
And setting an updating weight and updating the clustering vector.
Further, Vc_t is calculated by the formula:
Vc_t = ( Σ_{i=1}^{m1} (c_t - x_i) ) / (1 + m1)
wherein c_t is the cluster vector of the real face reflection image feature vectors at iteration t.
Further, the update formula for updating the cluster vector is as follows:
c_{t+1} = c_t - α · Vc_t
wherein c_{t+1} is the updated cluster vector and α is the update weight.
Drawings
FIG. 1 is a schematic flow chart of the generalized silent living body detection method based on medium sensing according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of the embodiments; to those skilled in the art, various changes within the spirit and scope of the invention as defined in the appended claims are apparent, and everything produced using the inventive concept is protected.
The invention exploits the different material characteristics of the media of a real face and of spoofed faces such as video-replay faces and printed photos: the inherent, generalizable material characteristics of the media are used to solve the living body detection problem instead of the conventional multi-frame images or data of other modalities, which avoids the poor user experience and the vulnerability to replay attacks associated with multi-frame methods. The invention requires no expensive hardware peripherals and is easy to deploy in existing face recognition systems; it optimizes the user experience, reduces peripheral requirements, and performs living body detection using only a single-frame RGB image.
As shown in FIG. 1, an embodiment of the present invention provides a generalized silent living body detection method based on medium sensing, comprising the following steps S1 to S5:
s1, acquiring a single frame of face image to be detected;
In this embodiment, according to Retinex theory, the invention treats an image as the product of an incident image and a reflection image: incident light strikes a reflecting object, and the light reflected from the object enters the human eye. The acquired single-frame face image to be detected is therefore represented as:
I(x,y) = R(x,y) · L(x,y)
where I(x,y) denotes the imaging result, R(x,y) denotes the reflection image, which captures the reflection properties of the object, i.e. the intrinsic properties of the reflector, and L(x,y) denotes the incident image, i.e. the brightness image, which is influenced by the surrounding environment and determines the dynamic range that the image pixels can reach.
S2, extracting a face brightness image according to the face image obtained in the step S1;
In this embodiment, the invention applies a logarithmic transformation to the face image I(x,y) obtained in step S1 to obtain:
log I(x,y) = log R(x,y) + log L(x,y)
Since log I(x,y) is a known quantity, the reflection image of the face image can be calculated once log L(x,y), the logarithm of the brightness image, is obtained.
Assuming that the illumination is a smooth, low-frequency signal within a local area, the invention convolves the original face image with a low-pass filter operator to obtain an approximate brightness image. To improve the accuracy of the brightness estimate, the invention further uses low-pass filters of several scales to estimate the brightness from the face image obtained in step S1, producing several estimates; these estimates are then averaged to obtain the final face brightness image, expressed as:
log L(x,y) = Σ_{k=1}^{K} w_k · log( F_k(x,y) * I(x,y) )
where K is the number of low-pass filters, w_k is the weighting coefficient of the k-th filter, F_k is the low-pass filter operator of the k-th low-pass filter, and * denotes convolution. Specifically, in the invention K = 3, each w_k = 1/3, and the three filter scales are (3, 3), (5, 5) and (9, 9).
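As an illustration of this step, a minimal Python sketch follows, assuming Gaussian blurs as the low-pass operators F_k, the three scales and equal weights given above, and a small epsilon (not in the patent) to keep the logarithm finite:

```python
import cv2
import numpy as np

def estimate_log_luminance(face, kernel_sizes=((3, 3), (5, 5), (9, 9)),
                           weights=(1/3, 1/3, 1/3), eps=1e-6):
    """Estimate log L(x, y) as a weighted average of low-pass estimates of the face image."""
    face = face.astype(np.float32) + eps
    log_l = np.zeros_like(face)
    for w, ksize in zip(weights, kernel_sizes):
        low_pass = cv2.GaussianBlur(face, ksize, 0)   # F_k * I, sigma derived from kernel size
        log_l += w * np.log(low_pass + eps)           # w_k * log(F_k * I)
    return log_l
```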
S3, obtaining a face reflection image according to the face image obtained in the step S1 and the face brightness image extracted in the step S2;
In this embodiment, traditional texture and frequency features exploit the properties of the spoofing medium only indirectly and are often affected by environment and resolution, so they are not robust. The invention instead obtains a reflection image that directly characterizes the material properties, removes the brightness image that carries the environmental influence, and extracts features with a neural network, making the living body detection more robust.
For a real face and a spoofed face, the reflection images they form differ because of the difference in media; these differences are not affected by illumination or environment and are therefore robust and generalizable, so the face reflection image is used as the detection feature for living body detection.
The invention obtains the face reflection image from the face image I(x,y) obtained in step S1 and the face brightness image L(x,y) extracted in step S2, expressed as:
R(x,y) = exp( log I(x,y) - log L(x,y) )
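Continuing the sketch, the reflection image follows directly from the formula above; the estimate_log_luminance helper from the previous snippet and the per-image rescaling to [0, 1] are assumptions rather than part of the patent text:

```python
def reflection_image(face, eps=1e-6):
    """Compute R(x, y) = exp(log I(x, y) - log L(x, y)) for a single-frame face image."""
    face = face.astype(np.float32) + eps
    r = np.exp(np.log(face) - estimate_log_luminance(face))
    # Rescale to [0, 1] so the network sees a consistent input range (an assumed choice).
    return (r - r.min()) / (r.max() - r.min() + eps)
```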
s4, constructing a living body detection neural network model, and training the living body detection neural network model by adopting an improved central loss function;
in this embodiment, step S4 specifically includes the following sub-steps:
s41, constructing a live body detection ResNet neural network model;
the method adopts ResNet152 as a backbone network to construct a live body detection ResNet neural network model, fixes the parameters of the last three layer layers in the live body detection ResNet neural network model as the parameters trained by ImageNet data set, and learns the parameter of the first layer by adopting a random gradient descent optimizer with momentum.
S42, extracting generalization medium characteristics from the known human face reflection image training set sample by using a living body detection ResNet neural network model;
and S43, carrying out iterative optimization on the in-vivo detection neural network model parameters by adopting the improved central loss function.
Living body detection here works by detecting the difference between the medium of a real face and the media of spoofed faces. Because the possible spoofing media are essentially unlimited and cannot be exhaustively enumerated, clustering both classes with the conventional center loss function leads to overfitting. The invention therefore improves the center loss so that clustering is applied only to the feature vectors of real faces, while the feature vectors of spoofed faces are not clustered.
By modifying the center loss function so that only the feature vectors of real faces are clustered, the overfitting problem is avoided, although the constraint on the inter-class distance is also relaxed.
The improved center loss function of the present invention is expressed as:
L_c = [ (1/m1) · Σ_{i=1}^{m1} ||x_i - c||² ] / [ (1/m2) · Σ_{j=1}^{m2} ||x_j - c||² ]
wherein x_i is the feature vector corresponding to the i-th real face reflection image training sample, x_j is the feature vector corresponding to the j-th spoofed face reflection image training sample, c is the cluster vector of the real face reflection image feature vectors, m1 is the number of real face reflection image training samples, and m2 is the number of spoofed face reflection image training samples.
With the improved center loss function, as iteration makes the value of the loss smaller and smaller, the intra-class distance of the real-face feature vectors is forced to shrink while the distance between spoofed faces and real faces grows, so the network learns features with better generalization.
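A sketch of this loss as reconstructed above, i.e. the mean intra-class distance of the real-face features divided by the mean distance of the spoofed-face features to the real-face cluster vector; the exact algebraic form of the original equation image, the label convention (1 = real face) and the epsilon term are assumptions:

```python
import torch

def improved_center_loss(features, labels, center, eps=1e-8):
    """Pull real-face features toward the cluster vector c while pushing spoofed features away.

    features: (N, D) feature vectors from the backbone.
    labels:   (N,)   1 for real faces, 0 for spoofed faces (assumed convention).
    center:   (D,)   cluster vector c, updated separately by the rule given below.
    """
    dist = ((features - center.detach()) ** 2).sum(dim=1)   # ||x - c||^2 per sample
    real = labels == 1
    real_term = dist[real].mean() if real.any() else dist.new_zeros(())
    spoof_term = dist[~real].mean() if (~real).any() else dist.new_ones(())
    return real_term / (spoof_term + eps)
```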
For the improved center loss function, the invention adopts a cluster-center updating method to obtain and update the cluster vector, specifically comprising the following steps:
randomly initializing a cluster vector as an initial cluster vector;
calculating Vc_t according to the feature vectors corresponding to the real face reflection image training samples, expressed as:
Vc_t = ( Σ_{i=1}^{m1} (c_t - x_i) ) / (1 + m1)
wherein c_t is the cluster vector of the real face reflection image feature vectors at iteration t;
setting an update weight and updating the cluster vector, expressed as:
c_{t+1} = c_t - α · Vc_t
wherein c_{t+1} is the updated cluster vector and α is the update weight, which prevents abnormal samples from causing a large deviation of the cluster center.
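A sketch of the cluster-vector update as reconstructed above; applying it per mini-batch over the real-face features, and the example value of α, are assumptions:

```python
@torch.no_grad()
def update_center(center, real_features, alpha=0.5):
    """c_{t+1} = c_t - alpha * Vc_t, with Vc_t = sum_i (c_t - x_i) / (1 + m1)."""
    if real_features.numel() == 0:
        return center                                    # no real-face samples in this batch
    m1 = real_features.shape[0]
    vc = (center - real_features).sum(dim=0) / (1 + m1)  # damped mean offset of c_t from the samples
    return center - alpha * vc
```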
And S5, performing living body detection on the human face reflection image obtained in the step S3 by adopting the trained living body detection neural network model.
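Putting the pieces together, a hedged end-to-end inference sketch using the helpers above; the 224x224 input size, the absence of normalization, the decision threshold and the class-index convention (index 1 = real face) are all assumptions:

```python
import cv2
import numpy as np
import torch

def is_live(model, face_bgr, device="cpu", threshold=0.5):
    """Classify a single detected face crop as live (True) or spoofed (False)."""
    r = reflection_image(face_bgr)                                    # steps S2-S3
    r = cv2.resize(r, (224, 224)).astype(np.float32)
    x = torch.from_numpy(r).permute(2, 0, 1).unsqueeze(0).to(device)  # (1, 3, 224, 224)
    model.eval()
    with torch.no_grad():
        prob_live = torch.softmax(model(x), dim=1)[0, 1].item()       # step S5
    return prob_live >= threshold
```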
The performance of the generalized silent living body detection method of the invention is illustrated below with a specific example.
Training and cross-testing are performed on the publicly available living body detection datasets CASIA-FASD and Replay-Attack, and the test results are shown in Table 1.
TABLE 1. Comparison of test results on living body detection datasets
[Table 1 is provided as an image in the original publication; the numerical results are not reproduced here.]
As can be seen from Table 1, compared with existing methods, the method of the invention achieves better overall performance in terms of accuracy and time complexity.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (7)

1. A generalized silent living body detection method based on medium sensing, characterized by comprising the following steps:
s1, acquiring a single frame of face image to be detected;
s2, extracting a face brightness image according to the face image obtained in the step S1, specifically:
estimating the face image obtained in the step S1 by using low-pass filters of multiple scales to obtain multiple estimation results;
averaging a plurality of estimation results to obtain a final face brightness image, wherein the face brightness image is expressed as:
log L(x,y) = Σ_{k=1}^{K} w_k · log( F_k(x,y) * I(x,y) )
where K is the number of low-pass filters, w_k is the weighting coefficient of the k-th filter, F_k is the low-pass filter operator of the k-th low-pass filter, and * denotes convolution;
s3, obtaining a face reflection image according to the face image obtained in the step S1 and the face brightness image extracted in the step S2, wherein the face reflection image is expressed as:
R(x,y) = exp( log I(x,y) - log L(x,y) )
wherein I(x,y) is the face image and L(x,y) is the face brightness image;
s4, constructing a living body detection neural network model, and training the living body detection neural network model by adopting an improved central loss function;
and S5, performing living body detection on the human face reflection image obtained in the step S3 by adopting the trained living body detection neural network model.
2. The generalized silent living body detection method based on medium sensing according to claim 1, wherein the step S4 specifically comprises the following sub-steps:
s41, constructing a live body detection ResNet neural network model;
s42, extracting generalization medium characteristics from the known human face reflection image training set sample by using a living body detection ResNet neural network model;
and S43, carrying out iterative optimization on the in-vivo detection neural network model parameters by adopting the improved central loss function.
3. The method according to claim 2, wherein the parameters of the last three layers in the living body detection ResNet neural network model are fixed as set parameters, and a stochastic gradient descent optimizer with momentum is used to learn the parameters of the first layer.
4. The generalized silent living body detection method based on medium sensing according to claim 3, wherein the improved center loss function is expressed as:
L_c = [ (1/m1) · Σ_{i=1}^{m1} ||x_i - c||² ] / [ (1/m2) · Σ_{j=1}^{m2} ||x_j - c||² ]
wherein x_i is the feature vector corresponding to the i-th real face reflection image training sample, x_j is the feature vector corresponding to the j-th spoofed face reflection image training sample, c is the cluster vector of the real face reflection image feature vectors, m1 is the number of real face reflection image training samples, and m2 is the number of spoofed face reflection image training samples.
5. The method according to claim 4, wherein the living body detection ResNet neural network model updates the cluster vector during the training process, specifically:
randomly initializing a cluster vector as an initial cluster vector;
calculating Vc_t according to the feature vectors corresponding to the real face reflection image training samples;
And setting an updating weight and updating the clustering vector.
6. The generalized silent living body detection method based on medium sensing according to claim 5, wherein Vc_t is calculated by the formula:
Vc_t = ( Σ_{i=1}^{m1} (c_t - x_i) ) / (1 + m1)
wherein c_t is the cluster vector of the real face reflection image feature vectors at iteration t.
7. The method according to claim 6, wherein the updating formula for updating the cluster vector is as follows:
c_{t+1} = c_t - α · Vc_t
wherein c_{t+1} is the updated cluster vector and α is the update weight.
CN202110085636.0A 2021-01-22 2021-01-22 Generalized silence living body detection method based on medium sensing Active CN112818782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110085636.0A CN112818782B (en) 2021-01-22 2021-01-22 Generalized silence living body detection method based on medium sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110085636.0A CN112818782B (en) 2021-01-22 2021-01-22 Generalized silence living body detection method based on medium sensing

Publications (2)

Publication Number Publication Date
CN112818782A (en) 2021-05-18
CN112818782B (en) 2021-09-21

Family

ID=75858761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110085636.0A Active CN112818782B (en) 2021-01-22 2021-01-22 Generalized silence living body detection method based on medium sensing

Country Status (1)

Country Link
CN (1) CN112818782B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110289A (en) * 2011-03-29 2011-06-29 东南大学 Method for enhancing color image contrast ratio on basis of variation frame
CN111191549A (en) * 2019-12-23 2020-05-22 浙江大学 Two-stage face anti-counterfeiting detection method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9342732B2 (en) * 2012-04-25 2016-05-17 Jack Harper Artificial intelligence methods for difficult forensic fingerprint collection
EP3545462A1 (en) * 2016-12-23 2019-10-02 Aware, Inc. Analysis of reflections of projected light in varying colors, brightness, patterns, and sequences for liveness detection in biometric systems
CN107992794B (en) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 A kind of biopsy method, device and storage medium
CN108875508B (en) * 2017-11-23 2021-06-29 北京旷视科技有限公司 Living body detection algorithm updating method, device, client, server and system
CN108647650B (en) * 2018-05-14 2021-07-09 北京大学 Human face in-vivo detection method and system based on corneal reflection and optical coding
US11550031B2 (en) * 2019-03-18 2023-01-10 Samsung Electronics Co., Ltd. Method and apparatus for biometric authentication using face radar signal
CN109858471A (en) * 2019-04-03 2019-06-07 深圳市华付信息技术有限公司 Biopsy method, device and computer equipment based on picture quality
CN110674730A (en) * 2019-09-20 2020-01-10 华南理工大学 Monocular-based face silence living body detection method
CN111783640A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Detection method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110289A (en) * 2011-03-29 2011-06-29 东南大学 Method for enhancing color image contrast ratio on basis of variation frame
CN111191549A (en) * 2019-12-23 2020-05-22 浙江大学 Two-stage face anti-counterfeiting detection method

Also Published As

Publication number Publication date
CN112818782A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN109583342B (en) Human face living body detection method based on transfer learning
Zhu et al. A fast single image haze removal algorithm using color attenuation prior
Raja et al. Video presentation attack detection in visible spectrum iris recognition using magnified phase information
US10360465B2 (en) Liveness testing methods and apparatuses and image processing methods and apparatuses
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
WO2020258667A1 (en) Image recognition method and apparatus, and non-volatile readable storage medium and computer device
CN108710910B (en) Target identification method and system based on convolutional neural network
CN108345818B (en) Face living body detection method and device
WO2019152983A2 (en) System and apparatus for face anti-spoofing via auxiliary supervision
CN110033040B (en) Flame identification method, system, medium and equipment
CN109614910B (en) Face recognition method and device
EP4187484A1 (en) Cbd-net-based medical endoscopic image denoising method
CN112580576B (en) Face spoofing detection method and system based on multi-scale illumination invariance texture characteristics
CN111047543A (en) Image enhancement method, device and storage medium
CN108985200A (en) A kind of In vivo detection algorithm of the non-formula based on terminal device
WO2022156214A1 (en) Liveness detection method and apparatus
CN114550110A (en) Vehicle weight identification method and system based on unsupervised domain adaptation
WO2021134485A1 (en) Method and device for scoring video, storage medium and electronic device
CN110363111B (en) Face living body detection method, device and storage medium based on lens distortion principle
CN115147936A (en) Living body detection method, electronic device, storage medium, and program product
CN112818782B (en) Generalized silence living body detection method based on medium sensing
CN111126283A (en) Rapid in-vivo detection method and system for automatically filtering fuzzy human face
CN111191549A (en) Two-stage face anti-counterfeiting detection method
CN117892795A (en) Neural network model training method, device, terminal and storage medium
CN117495718A (en) Multi-scale self-adaptive remote sensing image defogging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant