CN111126366B - Method, device, equipment and storage medium for distinguishing living human face - Google Patents
Method, device, equipment and storage medium for distinguishing living human face
- Publication number
- CN111126366B CN111126366B CN202010248023.XA CN202010248023A CN111126366B CN 111126366 B CN111126366 B CN 111126366B CN 202010248023 A CN202010248023 A CN 202010248023A CN 111126366 B CN111126366 B CN 111126366B
- Authority
- CN
- China
- Prior art keywords
- living body
- face
- model
- discrimination
- quantization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The application discloses a method for discriminating a living human face, which includes the following steps: pre-training a first living body face discrimination model based on near-infrared images and a second living body face discrimination model based on visible light images; when a face image to be recognized exists, performing living body discrimination on the face image to be recognized with the first and second living body face discrimination models respectively, to obtain a first discrimination result and a second discrimination result; and determining a target discrimination result from the first and second discrimination results by a multi-modal fusion strategy. The method can thus resist video attack and improve discrimination accuracy; it also saves discrimination time, makes living body discrimination more convenient for the user, and improves the user experience. The application further discloses a device, equipment and a computer-readable storage medium for discriminating living human faces, which share the above beneficial effects.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for discriminating a living human face.
Background
In recent years, face recognition technology has been widely applied in fields such as security, access control and payment. To ensure the safety and reliability of face recognition, it is necessary to guarantee that the face captured by the face recognition system comes from a legitimate user, i.e. a living face, rather than from spoofing media such as face photos or videos. In the prior art, under a visible light scene, the user is generally required to perform corresponding actions according to system instructions of the face recognition system in order to carry out living body judgment. This approach remains vulnerable to video attack, so its judgment accuracy is low; moreover, it requires a high degree of user cooperation and is time-consuming, which degrades the user experience.
Therefore, how to judge the living human face more efficiently and improve the user experience while improving the accuracy of living face judgment is a technical problem that those skilled in the art currently need to solve.
Disclosure of Invention
In view of the above, the present invention aims to provide a method for discriminating a living human face, which can discriminate the living human face more efficiently while improving discrimination accuracy and improve the user experience; another object of the present invention is to provide an apparatus, a device and a computer-readable storage medium for discriminating a living human face, all of which have the above advantages.
In order to solve the above technical problem, the present invention provides a method for discriminating a living human face, including:
training a first living body face discrimination model based on a near infrared image and a second living body face discrimination model based on a visible light image in advance;
when a face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing the first living body face discrimination model and the second living body face discrimination model to obtain a first discrimination result and a second discrimination result;
and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
Preferably, after the pre-training of the first living body face discrimination model based on the near-infrared image and the second living body face discrimination model based on the visible light image, the method further includes:
obtaining a corresponding quantization scale through analog quantization training;
and respectively carrying out quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models.
Preferably, the process of obtaining the corresponding quantization scale through the simulation quantization training specifically includes:
determining the distribution of the activation value of each network layer in the first living body face discrimination model and the second living body face discrimination model by using a preset calibration data set;
determining corresponding activation value quantization distributions based on different thresholds, and calculating the similarity of each activation value quantization distribution and the activation value distribution of the corresponding network layer;
selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
determining the scale of an activation value of the model according to the target threshold;
and quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activation value scale to obtain the scale of the model weight.
Preferably, when there is a face image to be recognized, the process of respectively performing living body discrimination on the face image to be recognized by using the first living body face discrimination model and the second living body face discrimination model to obtain a first discrimination result and a second discrimination result specifically includes:
when the face image to be recognized exists, preprocessing the face image to be recognized;
respectively extracting the face features in the preprocessed face image to be recognized by utilizing the first living body face distinguishing model and the second living body face distinguishing model;
and respectively carrying out living body judgment on the face image to be recognized according to the extracted face features to obtain the first judgment result and the second judgment result.
Preferably, the preprocessing operation specifically includes: grayscale processing and/or image enhancement.
Preferably, after the preprocessing operation is performed on the face image to be recognized, the method further includes:
and carrying out alignment processing and size cutting on the image to be identified.
Preferably, when the target discrimination result is a non-living human face, corresponding prompt information is sent out.
In order to solve the above technical problem, the present invention further provides a device for discriminating a living human face, including:
the pre-training module is used for pre-training a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image;
the judging module is used for respectively judging the living bodies of the face images to be recognized by utilizing the first living body face judging model and the second living body face judging model when the face images to be recognized exist, so as to obtain a first judging result and a second judging result;
and the determining module is used for determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
In order to solve the above technical problem, the present invention further provides an apparatus for discriminating a living human face, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of any one of the living human face discrimination methods when the computer program is executed.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of any one of the above methods for discriminating a living human face.
The invention provides a living body face discrimination method, which is characterized in that a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image are trained in advance; then when the face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing a first living body face discrimination model and a second living body face discrimination model to obtain a first discrimination result and a second discrimination result; and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy. Therefore, living body discrimination is carried out by utilizing the first living body face discrimination model based on the near-infrared image and the second living body face discrimination model based on the visible light image, video attack can be avoided, and discrimination accuracy is improved; in addition, the method can perform living body judgment under the condition that the input is the face image to be recognized which is only a single frame, and does not need the user to perform corresponding action according to a system instruction, so that the judgment time can be saved, the convenience of the user in living body judgment is improved, and the use experience of the user is improved.
In order to solve the above technical problem, the invention also provides a device, equipment and a computer-readable storage medium for discriminating the living human face, which have the above beneficial effects.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for discriminating a living human face according to an embodiment of the present invention;
FIG. 2 is a block diagram of a deep neural network;
fig. 3 is a flowchart of another living human face discrimination method according to an embodiment of the present invention;
fig. 4 is a structural diagram of a device for discriminating a living human face according to an embodiment of the present invention;
fig. 5 is a structural diagram of an apparatus for discriminating a living human face according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The core of the embodiment of the invention is to provide a living body face distinguishing method, which can more efficiently distinguish the living body face and improve the use experience of a user on the basis of improving the distinguishing accuracy of the living body face; another core of the present invention is to provide a device, an apparatus and a computer-readable storage medium for discriminating a human face, all of which have the above advantages.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of a method for discriminating a living human face according to an embodiment of the present invention. As shown in fig. 1, the method for discriminating a living human face includes:
S10: a first living body face distinguishing model based on a near infrared image and a second living body face distinguishing model based on a visible light image are trained in advance.
Specifically, in this embodiment, first, training data sets are collected, including a first training data set for training a first living human face discrimination model based on a near-infrared image and a second training data set for training a second living human face discrimination model based on a visible light image. The first training data set comprises a positive face sample and a negative face sample in a near-infrared light scene, the wavelength of the near-infrared light is 850nm, and the negative sample comprises visible light photo printing attack, near-infrared photo printing attack, 3D mask attack and the like; the second training data set comprises positive samples and negative samples of the human face in a visible light scene, wherein the negative samples comprise visible light photo printing attacks, near infrared photo printing attacks, 3D mask attacks and the like.
Specifically, the alignment processing according to five-point normalization is as follows: the coordinates of five facial points, namely the outer corner of the left eye, the outer corner of the right eye, the tip of the nose and the two mouth corners, are detected; these coordinates are compared with the five-point coordinates of a standard face, and the detected face is aligned with the standard face to obtain a frontal face picture. A mask is used to cover the non-face regions so that the face features are more prominent. For the visible light face images in the training data set, a face detector detects the corresponding face position, the five-point normalization operation aligns the detected face, and the face region is cropped to a preset size (for example, 112 × 112); the near-infrared face images are processed in the same way, with the non-face regions removed by the mask.
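The five-point alignment step described above can be sketched as a least-squares similarity transform between the detected landmarks and a fixed template. The sketch below uses NumPy and the Umeyama method; the template coordinates are the commonly used five-point template for a 112 × 112 crop, an assumption, since the patent gives no numeric values.

```python
import numpy as np

# Five-point template for a 112x112 aligned face: left outer eye corner,
# right outer eye corner, nose tip, left mouth corner, right mouth corner.
# These coordinates are a widely used template -- an assumption, since the
# patent does not give numeric values.
TEMPLATE_112 = np.array([
    [38.29, 51.70], [73.53, 51.50], [56.02, 71.74],
    [41.55, 92.37], [70.73, 92.20],
])

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src points onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflection
    R = U @ np.diag([1.0, d]) @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    scale = (S * np.array([1.0, d])).sum() / var_src
    T = np.eye(3)                      # 3x3 homogeneous transform
    T[:2, :2] = scale * R
    T[:2, 2] = mu_d - scale * R @ mu_s
    return T
```

The first two rows of the returned matrix can then be fed to an image-warping routine (e.g. `cv2.warpAffine(img, T[:2], (112, 112))`) to produce the aligned, cropped face.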
Each image in the preprocessed training data set is then converted into a grayscale image and input into a predetermined deep neural network for training, obtaining the first living body face discrimination model based on the near-infrared image and the second living body face discrimination model based on the visible light image.
It should be noted that fig. 2 shows a structure diagram of the deep neural network. In this embodiment it is modified from the deep neural network MobileNetV2. The deep neural network based on the near-infrared image takes a single-channel 112 × 112 × 1 image as input; after the feature map reaches 4 × 4, a depthwise convolution operation is applied so that the model focuses more on the face information in the middle of the image, and the resulting 64-channel 2 × 2 feature map is flattened into a 256-dimensional vector. The deep neural network based on the visible light image is identical in structure to the deep neural network based on the near-infrared image, except that its input is an RGB image, i.e. a 3-channel 112 × 112 × 3 image. It should also be noted that in this embodiment central difference convolution is preferably used when training the first living body face discrimination model and the second living body face discrimination model; compared with the ordinary convolution of the prior art, central difference convolution better captures fine-grained face texture, so that living body face discrimination is more accurate.
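Central difference convolution can be illustrated with a minimal single-channel NumPy sketch: the output is the ordinary convolution minus θ times the kernel sum times the centre pixel, which emphasises local intensity differences over absolute intensity. The θ value and the 'valid' padding are illustrative assumptions, not fixed by the patent.

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    # Single-channel central difference convolution with 'valid' padding:
    # output = ordinary convolution - theta * (sum of kernel weights) * centre pixel.
    # theta=0.7 is a value commonly used in the CDC literature (an assumption).
    k = w.shape[0]
    c = k // 2
    out = np.zeros((x.shape[0] - k + 1, x.shape[1] - k + 1))
    w_sum = w.sum()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]
            out[i, j] = (patch * w).sum() - theta * w_sum * patch[c, c]
    return out
```

With theta=0 this reduces to an ordinary convolution; on a constant image with theta=1 the response is exactly zero, showing that the operator responds to texture rather than absolute intensity.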
Specifically, the loss function in this embodiment is preferably the Focal loss function. It can be understood that in living human face discrimination there are many types of negative samples, and using the Focal loss function effectively alleviates the imbalance between positive and negative samples. More specifically, the Focal loss function takes the form FL(y') = -α(1 - y')^γ · log(y') for positive samples and FL(y') = -(1 - α) · y'^γ · log(1 - y') for negative samples.
The α parameter balances the proportion of positive and negative sample losses, and its value is generally set to 0.25; the γ parameter makes the model pay more attention to hard-to-classify samples, and its value is generally set to 3; y' is the predicted probability that the sample to be detected is a positive sample.
S20: when a face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing a first living body face discrimination model and a second living body face discrimination model to obtain a first discrimination result and a second discrimination result;
S30: and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
Specifically, after the first living body face discrimination model based on the near-infrared image and the second living body face discrimination model based on the visible light image are trained, when a face image to be recognized exists, the face features in the face image to be recognized are extracted by the first and second living body face discrimination models respectively; living body discrimination is performed on the face image to be recognized according to the extracted face features to obtain the corresponding first discrimination result and second discrimination result; then the multi-mode fusion strategy is adopted to obtain the target discrimination result from the first and second discrimination results, the target discrimination result indicating whether the face image to be recognized is a living face or a non-living face. It should be noted that in this embodiment, if the score in the first discrimination result is greater than a first preset threshold (for example, 0.9), the target discrimination result is a living body; if the score in the first discrimination result is lower than a second preset threshold (for example, 0.1), the target discrimination result is a non-living body; and if the score in the first discrimination result lies between the first and second preset thresholds, the scores of the first and second discrimination results are fused by weighting and compared with a third preset threshold: the target is judged to be a living body if the fused score is greater than or equal to the third preset threshold, and a non-living body if it is less than the third preset threshold.
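The fusion strategy above can be sketched as follows. The 0.9 and 0.1 thresholds are the examples from the text; the fused threshold and the near-infrared weight are illustrative assumptions, since the patent does not fix them.

```python
def fuse_liveness_scores(nir_score, vis_score,
                         high=0.9, low=0.1, fused_thresh=0.5, nir_weight=0.6):
    # Multi-modal fusion: a confident near-infrared score decides outright;
    # otherwise the NIR and visible-light scores are fused by weighting.
    # high=0.9 and low=0.1 are the example thresholds from the text;
    # fused_thresh and nir_weight are illustrative assumptions.
    if nir_score > high:
        return True    # living face
    if nir_score < low:
        return False   # non-living face
    fused = nir_weight * nir_score + (1 - nir_weight) * vis_score
    return fused >= fused_thresh
```

Letting a confident near-infrared score short-circuit the decision keeps the common case fast, while ambiguous cases fall back to evidence from both modalities.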
According to the living body face distinguishing method provided by the embodiment of the invention, a first living body face distinguishing model based on a near-infrared image and a second living body face distinguishing model based on a visible light image are trained in advance; then when the face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing a first living body face discrimination model and a second living body face discrimination model to obtain a first discrimination result and a second discrimination result; and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy. Therefore, living body discrimination is carried out by utilizing the first living body face discrimination model based on the near-infrared image and the second living body face discrimination model based on the visible light image, video attack can be avoided, and discrimination accuracy is improved; in addition, the method can perform living body judgment under the condition that the input is the face image to be recognized which is only a single frame, and does not need the user to perform corresponding action according to a system instruction, so that the judgment time can be saved, the convenience of the user in living body judgment is improved, and the use experience of the user is improved.
As shown in fig. 3, a flowchart of another living human face discrimination method is further illustrated and optimized in this embodiment on the basis of the above embodiment, specifically, after a first living human face discrimination model based on a near-infrared image and a second living human face discrimination model based on a visible light image are trained in advance, the method further includes:
S40: obtaining a corresponding quantization scale through analog quantization training;
S50: and respectively carrying out quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models.
In this embodiment, after a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image are trained in advance, a corresponding quantization scale is further obtained through analog quantization training, and then the first living body face discrimination model and the second living body face discrimination model are respectively subjected to quantization compression according to the obtained quantization scale to obtain quantized compressed models; and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed model.
It should be noted that, in this embodiment, the process of obtaining the corresponding quantization scale through the analog quantization training specifically includes:
determining the distribution of the activation value of each network layer in the first living body face discrimination model and the second living body face discrimination model by using a preset calibration data set;
determining corresponding activation value quantization distributions based on different thresholds, and calculating the similarity of each activation value quantization distribution and the activation value distribution of the corresponding network layer;
selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
determining the scale of an activation value of the model according to a target threshold;
and quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activation value scale to obtain the scale of the weight of the model.
Specifically, in this embodiment, the distribution of the activation values of each network layer is obtained through a calibration data set, for example as a histogram of activation values represented by 32-bit floating point numbers; then, corresponding activation value quantization distributions are determined based on different thresholds, where the thresholds may be integers in the interval [128, 8192) and each activation value quantization distribution may be a histogram of activation values represented by 8-bit integers; and the similarity between each activation value quantization distribution and the activation value distribution of the corresponding network layer is calculated. In this embodiment, the similarity is preferably derived by calculating the relative entropy, i.e. the KL divergence (Kullback-Leibler divergence), between each activation value quantization distribution and the activation value distribution of the corresponding network layer. Then, the activation value quantization distribution with the highest similarity, i.e. the minimum KL divergence, is selected as the activation value quantization target distribution of the corresponding network layer, and the corresponding target threshold is obtained; the activation value scale of the model is then determined from the target threshold.
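The KL-divergence threshold search can be sketched as follows. The bin-merging scheme is a simplified version of the usual int8 entropy-calibration procedure, and the candidate thresholds and level count are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    # KL divergence between two histograms, normalized to distributions.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def best_threshold(hist, candidates, levels=128):
    # For each candidate clip threshold over the fp32 activation histogram:
    # fold the tail into the last kept bin, merge the kept bins into `levels`
    # quantized groups (spreading each group's mass evenly so the two
    # histograms are comparable), and keep the threshold with minimum KL.
    best_t, best_kl = None, float("inf")
    for t in candidates:
        ref = hist[:t].astype(float).copy()
        ref[-1] += hist[t:].sum()                 # outliers folded in
        edges = np.linspace(0, t, levels + 1).astype(int)
        q = np.zeros_like(ref)
        for i in range(levels):
            lo, hi = edges[i], edges[i + 1]
            if hi > lo:
                q[lo:hi] = ref[lo:hi].sum() / (hi - lo)
        kl = kl_divergence(ref, q)
        if kl < best_kl:
            best_t, best_kl = t, kl
    return best_t
```

A threshold that clips long, near-empty tails loses little mass but gives the 8-bit levels much finer resolution over the bulk of the distribution, which is why the minimum-KL threshold is usually far below the histogram's full range.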
Specifically, the activation value scale of the model may be calculated by the formula r = s × q, where r represents the model activation value, q represents the quantization target value of the model activation value, and s represents the model activation value scale.
In the simulated quantization process, the model activation value scale s is loaded and held fixed during training; the model weights of the first living body face discrimination model and the second living body face discrimination model are then quantized using this activation value scale to obtain the scale of the model weights.
Specifically, the formula s = max|r| / |Q| is used to directly quantize the model weights and obtain the scale of the model weight, where max|r| represents the maximum absolute value of the weights and |Q| represents the absolute value of the quantization range that the model is quantized to, which is set to 127 in this embodiment.
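The weight quantization formula can be sketched directly:

```python
import numpy as np

def quantize_weights(w, q_abs=127):
    # Symmetric per-tensor quantization: scale = max|r| / |Q|, with |Q| = 127
    # as in the embodiment; weights are rounded into int8.
    scale = np.abs(w).max() / q_abs
    q = np.clip(np.round(w / scale), -q_abs, q_abs).astype(np.int8)
    return q, scale
```

Dequantizing with q × scale reproduces each weight to within half a quantization step, which is the expected round-off error of symmetric int8 quantization.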
In this embodiment, the activation value scale and the weight scale of the model are quantization scales, and the first living body face discrimination model and the second living body face discrimination model are quantized and compressed by using the activation value scale and the weight value scale.
Therefore, the corresponding quantization scale is obtained through simulated quantization training; the first living body face discrimination model and the second living body face discrimination model are quantized and compressed according to the quantization scale, and updated with the quantized and compressed models, so that both models become more lightweight and can be deployed on embedded devices such as ARM (Advanced RISC Machine) or DSP (Digital Signal Processor) platforms, which makes the application more convenient.
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution, and specifically, in this embodiment, when a face image to be recognized exists, a process of performing living body discrimination on the face image to be recognized by using a first living body face discrimination model after quantization compression and a second living body face discrimination model after quantization compression respectively to obtain a first discrimination result and a second discrimination result specifically includes:
when the face image to be recognized exists, preprocessing the face image to be recognized;
respectively extracting the face features in the preprocessed face image to be recognized by using the first living body face discrimination model after quantization compression and the second living body face discrimination model after quantization compression;
and respectively carrying out living body judgment on the face image to be recognized according to the extracted face features to obtain a first judgment result and a second judgment result.
It should be noted that the main purpose of preprocessing the face image to be recognized is to eliminate irrelevant information, recover useful real information, enhance the detectability of the face information, and simplify the data as much as possible. Specifically, in this embodiment, the face image to be recognized is first preprocessed, and the quantized and compressed first and second living body face discrimination models then extract the face features from the preprocessed image, so that the extracted face features are more accurate.
As a preferred embodiment, the preprocessing operation specifically comprises: grayscale processing and/or image enhancement.
Specifically, grayscale processing refers to displaying the face image to be recognized in a grayscale color mode; it includes four methods, namely the component method, the maximum value method, the average value method and the weighted average method, and the specific grayscale processing method is not limited in this embodiment.
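The four grayscale methods can be sketched as follows. The "component" method here picks the green channel and the weighted coefficients are the usual BT.601 luma weights; both are illustrative assumptions, since the embodiment does not fix these choices.

```python
import numpy as np

def to_grayscale(rgb, method="weighted"):
    # The four grayscale methods named in the text. "component" picks the
    # green channel and the weighted coefficients are ITU-R BT.601 luma
    # weights -- both illustrative assumptions.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "component":
        return g
    if method == "max":
        return rgb.max(axis=-1)
    if method == "average":
        return rgb.mean(axis=-1)
    return 0.299 * r + 0.587 * g + 0.114 * b  # weighted average
```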
Image enhancement means purposefully emphasizing the overall or local characteristics of the face image to be recognized: an originally unclear image is made clearer, the features of the face region are emphasized, the differences between features of different regions are enlarged, and uninteresting features are suppressed. This improves recognition and feature extraction on the face image to be recognized, so that its face features can be extracted more accurately.
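The text does not specify an enhancement algorithm; one common choice, shown here only as an assumed example, is global histogram equalization on the grayscale image:

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization on an 8-bit grayscale image
    (an assumed enhancement method; the embodiment names none)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-empty gray level
    # remap each level so the cumulative distribution is roughly uniform
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```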
As a preferred embodiment, after the preprocessing operation is performed on the face image to be recognized, this embodiment further includes: performing alignment processing and size cropping on the face image to be recognized.
The alignment processing refers to detecting the size and position of the face with a face detector and then aligning the face region by a five-point normalization operation; size cropping refers to cropping the face image to be recognized to the size used by the training data set.
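The five-point alignment itself requires a landmark detector; the size-cropping step alone can be sketched as a center crop to the training-set input size (function name and interface are illustrative, not from the patent):

```python
import numpy as np

def center_crop(image, out_h, out_w):
    """Crop the aligned face image to the training data set's input size."""
    h, w = image.shape[:2]
    top = max((h - out_h) // 2, 0)
    left = max((w - out_w) // 2, 0)
    return image[top:top + out_h, left:left + out_w]
```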
In this embodiment, the face image to be recognized is further subjected to alignment processing and size cropping, which reduces the influence of non-face features on the accuracy of living body discrimination and thereby improves the discrimination accuracy.
Building on the above embodiment, this embodiment further describes and optimizes the technical solution. Specifically, in this embodiment, when the target discrimination result is a non-living human face, corresponding prompt information is sent out.
Specifically, in this embodiment, the target discrimination result has two cases: the face image to be recognized is a living face, or it is a non-living face. When the discrimination result is a non-living face, the current system is under an illegitimate attack, so a prompting device is triggered to send out corresponding prompt information. It should be noted that, in actual operation, the prompting device may be a buzzer, an indicator light, a display, or the like, and the corresponding prompt information is the output of that device, such as a buzzer sound, a flashing indicator light, or content shown on the display; the specific type of prompt information is not limited in this embodiment.
Therefore, in this embodiment, corresponding prompt information is further sent out when the target discrimination result is a non-living human face, which can further improve the user experience.
The embodiments of the method for discriminating a living human face provided by the present invention are described in detail above. The present invention also provides a device, an apparatus, and a computer-readable storage medium for discriminating a living human face corresponding to the method.
Fig. 4 is a structural diagram of an apparatus for discriminating a living body face according to an embodiment of the present invention, and as shown in fig. 4, the apparatus for discriminating a living body face includes:
a pre-training module 41, configured to pre-train a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image;
the judging module 42 is configured to, when there is a face image to be recognized, perform living body judgment on the face image to be recognized by using the first living body face judgment model and the second living body face judgment model respectively to obtain a first judgment result and a second judgment result;
and the determining module 43 is configured to determine the target discrimination result according to the first discrimination result and the second discrimination result by using a multi-mode fusion strategy.
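The embodiment leaves the concrete multi-mode fusion strategy open. One conservative rule, given purely as an assumed example, is to declare a living face only when both modalities agree:

```python
def fuse_results(first_result: bool, second_result: bool) -> bool:
    """AND-style fusion of the near-infrared and visible-light verdicts.

    This is an assumed rule for illustration; the patent does not fix
    the fusion strategy.
    """
    return first_result and second_result
```

A stricter or looser rule (e.g. weighted score averaging) could be substituted without changing the overall flow.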
The device for discriminating a living human face provided by the embodiment of the present invention provides the beneficial effects of the above method for discriminating a living human face.
As a preferred embodiment, the device for discriminating a living human face further includes:
the scale training module is used for obtaining a corresponding quantization scale through analog quantization training;
and the quantization compression module is used for respectively performing quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models.
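How the quantization compression is applied per tensor is not detailed in the text. A standard symmetric linear scheme, used here only as an assumption, maps floating-point weights to int8 with a scale factor:

```python
import numpy as np

def quantize_weights(w, scale, bits=8):
    """Symmetric linear quantization of a weight tensor (assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)

def dequantize(q, scale):
    """Recover an approximate float tensor from the quantized one."""
    return q.astype(np.float32) * scale
```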
As a preferred embodiment, the scale training module specifically includes:
the first determining unit is used for determining the distribution of the activation value of each network layer in the first living body face distinguishing model and the second living body face distinguishing model by using a preset calibration data set;
the calculation unit is used for determining corresponding activation value quantization distributions based on different threshold values and calculating the similarity between each activation value quantization distribution and the corresponding activation value distribution of the network layer;
the selection unit is used for selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
the second determining unit is used for determining the activation value scale of the model according to the target threshold;
and the quantization unit is used for quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activation value scale, to obtain the scale of the model weights.
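The similarity measure and the threshold search are not specified in the text. A common realization, given here as an assumption in the spirit of KL-divergence calibration, searches candidate clipping thresholds and keeps the one whose simulated quantized activation distribution best matches the original:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) between two histograms (used as the similarity measure)."""
    p = p / max(p.sum(), 1e-12)
    q = q / max(q.sum(), 1e-12)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12))))

def best_threshold(activations, candidates, levels=128, bins=512):
    """Pick the clipping threshold whose simulated quantized activation
    distribution is closest (smallest KL divergence) to the original."""
    hi = float(activations.max())
    ref, _ = np.histogram(activations, bins=bins, range=(0.0, hi))
    best_t, best_kl = None, float("inf")
    for t in candidates:
        step = t / levels
        # simulate uniform quantization: clip at t, snap to `levels` steps
        sim = np.round(np.clip(activations, 0.0, t) / step) * step
        q, _ = np.histogram(sim, bins=bins, range=(0.0, hi))
        kl = kl_divergence(ref, q)
        if kl < best_kl:
            best_t, best_kl = t, kl
    return best_t
```

The chosen threshold then fixes the activation value scale of the corresponding network layer.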
As a preferred embodiment, the determining module specifically includes:
the preprocessing unit is used for preprocessing the face image to be recognized when the face image to be recognized exists;
the extraction unit is used for respectively extracting the face features in the preprocessed face image to be recognized by utilizing the first living body face discrimination model after quantization compression and the second living body face discrimination model after quantization compression;
and the judging unit is used for respectively judging the living body of the face image to be recognized according to the extracted face features to obtain a first judging result and a second judging result.
As a preferred embodiment, the device for discriminating a living human face further includes:
and the prompting module is used for sending out corresponding prompting information when the target judgment result is a non-living human face.
Fig. 5 is a structural diagram of an apparatus for discriminating a living body face according to an embodiment of the present invention, and as shown in fig. 5, the apparatus for discriminating a living body face includes:
a memory 51 for storing a computer program;
and a processor 52 for implementing the steps of the above method for discriminating a living human face when executing the computer program.
The apparatus for discriminating a living human face provided by the embodiment of the present invention provides the beneficial effects of the above method for discriminating a living human face.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above method for discriminating a living human face.
The computer-readable storage medium provided by the embodiment of the present invention provides the beneficial effects of the above method for discriminating a living human face.
The method, device, apparatus, and computer-readable storage medium for discriminating a living human face provided by the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are set forth only to help understand the method and its core ideas. It should be noted that those skilled in the art can make various improvements and modifications to the present invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present invention.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Claims (8)
1. A method for discriminating a human face of a living body, comprising:
training a first living body face discrimination model based on a near infrared image and a second living body face discrimination model based on a visible light image in advance;
obtaining a corresponding quantization scale through analog quantization training; wherein the quantization scale comprises an activation value scale of the model and a scale of weights of the model;
determining the distribution of the activation value of each network layer in the first living body face discrimination model and the second living body face discrimination model by using a preset calibration data set;
determining corresponding activation value quantization distributions based on different thresholds, and calculating the similarity of each activation value quantization distribution and the activation value distribution of the corresponding network layer;
selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
determining the activation value scale of the model according to the target threshold;
respectively quantizing the model weights of the first living body face distinguishing model and the second living body face distinguishing model by using the scale of the activation value of the model to obtain the scale of the weight of the model;
respectively carrying out quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models;
when a face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing the first living body face discrimination model after quantization compression and the second living body face discrimination model after quantization compression to obtain a first discrimination result and a second discrimination result;
and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
2. The method according to claim 1, wherein, when the face image to be recognized exists, the process of performing living body discrimination on the face image to be recognized with the quantization-compressed first living body face discrimination model and the quantization-compressed second living body face discrimination model respectively, to obtain a first discrimination result and a second discrimination result, specifically comprises:
when the face image to be recognized exists, preprocessing the face image to be recognized;
respectively extracting the face features in the preprocessed face image to be recognized by using the first living body face distinguishing model after quantization compression and the second living body face distinguishing model after quantization compression;
and respectively carrying out living body judgment on the face image to be recognized according to the extracted face features to obtain the first judgment result and the second judgment result.
3. The method according to claim 2, wherein the preprocessing operation specifically comprises: grayscale processing and/or image enhancement.
4. The method according to claim 3, wherein after the preprocessing operation on the face image to be recognized, the method further comprises:
and performing alignment processing and size cropping on the face image to be recognized.
5. The method according to any one of claims 1 to 4, wherein when the target discrimination result is a non-living human face, corresponding prompt information is sent out.
6. An apparatus for discriminating a human face of a living body, comprising:
the pre-training module is used for pre-training a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image;
the scale training module is used for obtaining a corresponding quantization scale through analog quantization training; wherein the quantization scale comprises an activation value scale of the model and a scale of weights of the model;
wherein, the scale training module specifically comprises:
a first determining unit, configured to determine, by using a preset calibration data set, an activation value distribution of each network layer in the first living body face discrimination model and the second living body face discrimination model;
the calculation unit is used for determining corresponding activation value quantization distributions based on different threshold values and calculating the similarity between each activation value quantization distribution and the corresponding activation value distribution of the network layer;
the selection unit is used for selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
the second determining unit is used for determining the activation value scale of the model according to the target threshold;
the quantization unit is used for quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activation value scale, to obtain the scale of the model weights;
the quantization compression module is used for respectively performing quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models;
the judging module is used for respectively judging the living bodies of the face images to be recognized by utilizing the first living body face judging model after quantization compression and the second living body face judging model after quantization compression when the face images to be recognized exist, so as to obtain a first judging result and a second judging result;
and the determining module is used for determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
7. An apparatus for discriminating a human face of a living body, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method for discriminating a living human face according to any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, implements the steps of the method for discriminating a living body face according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010248023.XA CN111126366B (en) | 2020-04-01 | 2020-04-01 | Method, device, equipment and storage medium for distinguishing living human face |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010248023.XA CN111126366B (en) | 2020-04-01 | 2020-04-01 | Method, device, equipment and storage medium for distinguishing living human face |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111126366A CN111126366A (en) | 2020-05-08 |
CN111126366B true CN111126366B (en) | 2020-06-30 |
Family
ID=70493947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010248023.XA Active CN111126366B (en) | 2020-04-01 | 2020-04-01 | Method, device, equipment and storage medium for distinguishing living human face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111126366B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111967319B (en) * | 2020-07-14 | 2024-04-12 | 高新兴科技集团股份有限公司 | Living body detection method, device, equipment and storage medium based on infrared and visible light |
CN111931594A (en) * | 2020-07-16 | 2020-11-13 | 广州广电卓识智能科技有限公司 | Face recognition living body detection method and device, computer equipment and storage medium |
CN111881909A (en) * | 2020-07-27 | 2020-11-03 | 精英数智科技股份有限公司 | Coal and gangue identification method and device, electronic equipment and storage medium |
CN111860405A (en) * | 2020-07-28 | 2020-10-30 | Oppo广东移动通信有限公司 | Quantification method and device of image recognition model, computer equipment and storage medium |
CN112329624A (en) * | 2020-11-05 | 2021-02-05 | 北京地平线信息技术有限公司 | Living body detection method and apparatus, storage medium, and electronic device |
CN113128481A (en) * | 2021-05-19 | 2021-07-16 | 济南博观智能科技有限公司 | Face living body detection method, device, equipment and storage medium |
CN115512428B (en) * | 2022-11-15 | 2023-05-23 | 华南理工大学 | Face living body judging method, system, device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015040001A2 (en) * | 2013-09-19 | 2015-03-26 | Muehlbauer Ag | Device, system and method for identifying a person |
US9202105B1 (en) * | 2012-01-13 | 2015-12-01 | Amazon Technologies, Inc. | Image analysis for user authentication |
CN108509984A (en) * | 2018-03-16 | 2018-09-07 | 新智认知数据服务有限公司 | Activation value quantifies training method and device |
CN109766800A (en) * | 2018-12-28 | 2019-05-17 | 华侨大学 | A kind of construction method of mobile terminal flowers identification model |
CN110008783A (en) * | 2018-01-04 | 2019-07-12 | 杭州海康威视数字技术股份有限公司 | Human face in-vivo detection method, device and electronic equipment based on neural network model |
CN110276301A (en) * | 2019-06-24 | 2019-09-24 | 泰康保险集团股份有限公司 | Face identification method, device, medium and electronic equipment |
CN110659617A (en) * | 2019-09-26 | 2020-01-07 | 杭州艾芯智能科技有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
- 2020-04-01: CN CN202010248023.XA patent CN111126366B/en, status Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9202105B1 (en) * | 2012-01-13 | 2015-12-01 | Amazon Technologies, Inc. | Image analysis for user authentication |
WO2015040001A2 (en) * | 2013-09-19 | 2015-03-26 | Muehlbauer Ag | Device, system and method for identifying a person |
CN110008783A (en) * | 2018-01-04 | 2019-07-12 | 杭州海康威视数字技术股份有限公司 | Human face in-vivo detection method, device and electronic equipment based on neural network model |
CN108509984A (en) * | 2018-03-16 | 2018-09-07 | 新智认知数据服务有限公司 | Activation value quantifies training method and device |
CN109766800A (en) * | 2018-12-28 | 2019-05-17 | 华侨大学 | A kind of construction method of mobile terminal flowers identification model |
CN110276301A (en) * | 2019-06-24 | 2019-09-24 | 泰康保险集团股份有限公司 | Face identification method, device, medium and electronic equipment |
CN110659617A (en) * | 2019-09-26 | 2020-01-07 | 杭州艾芯智能科技有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
Jacob, Benoit, et al. "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference." arXiv (Learning), 2017-12-15, pp. 1-14. *
Also Published As
Publication number | Publication date |
---|---|
CN111126366A (en) | 2020-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126366B (en) | Method, device, equipment and storage medium for distinguishing living human face | |
US11195037B2 (en) | Living body detection method and system, computer-readable storage medium | |
WO2020151489A1 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
US20190034702A1 (en) | Living body detecting method and apparatus, device and storage medium | |
CN105631439B (en) | Face image processing process and device | |
WO2020207423A1 (en) | Skin type detection method, skin type grade classification method and skin type detection apparatus | |
CN104123543B (en) | A kind of eye movement recognition methods based on recognition of face | |
CN106056079B (en) | A kind of occlusion detection method of image capture device and human face five-sense-organ | |
CN106570489A (en) | Living body determination method and apparatus, and identity authentication method and device | |
CN108664840A (en) | Image-recognizing method and device | |
CN109815797B (en) | Living body detection method and apparatus | |
CN110688878B (en) | Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device | |
CN111626371A (en) | Image classification method, device and equipment and readable storage medium | |
CN109858375A (en) | Living body faces detection method, terminal and computer readable storage medium | |
CN107832721B (en) | Method and apparatus for outputting information | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
CN109325472B (en) | Face living body detection method based on depth information | |
CN110059607B (en) | Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium | |
CN112613471B (en) | Face living body detection method, device and computer readable storage medium | |
CN112183356A (en) | Driving behavior detection method and device and readable storage medium | |
CN109543635A (en) | Biopsy method, device, system, unlocking method, terminal and storage medium | |
CN112926364B (en) | Head gesture recognition method and system, automobile data recorder and intelligent cabin | |
CN115376210B (en) | Drowning behavior identification method, device, equipment and medium for preventing drowning in swimming pool | |
CN113989886B (en) | Crewman identity verification method based on face recognition | |
CN110532993A (en) | A kind of face method for anti-counterfeit, device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||