
CN112199975A - Identity verification method and device based on human face features - Google Patents


Info

Publication number: CN112199975A
Application number: CN201910611701.1A
Authority: CN (China)
Prior art keywords: face, image, certificate, sample, person
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 汤人杰
Current Assignee: China Mobile Communications Group Co Ltd; China Mobile Group Zhejiang Co Ltd (the listed assignees may be inaccurate)
Original Assignee: China Mobile Communications Group Co Ltd; China Mobile Group Zhejiang Co Ltd
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Zhejiang Co Ltd, with priority to CN201910611701.1A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an identity verification method and device based on human face features. The method comprises the following steps: acquiring certificate-holding sample images, and training a region-based convolutional neural network with the certificate-holding sample images to obtain a face detection model, wherein the face detection model outputs face pictures; training a deep convolutional neural network with the face pictures output by the face detection model to obtain a face recognition model; acquiring a certificate-holding image to be verified, wherein the certificate-holding image to be verified contains the certificate holder's face and the certificate face; inputting the certificate-holding image to be verified into the face detection model to obtain the face pictures in that image; inputting the face pictures in the certificate-holding image to be verified into the face recognition model, and calculating the face difference degree between those face pictures; and determining the identity verification result according to the face difference degree, thereby improving the accuracy of identity verification.

Description

Identity verification method and device based on human face features
Technical Field
The invention relates to the technical field of identity authentication, and in particular to an identity verification method and device based on human face features.
Background
The human face is an important biometric feature and a key basis for distinguishing different people. Compared with identifying people by comparing complex features such as fingerprints and irises, face-based identification is more natural, more direct and more convenient.
In the prior art, face detection and recognition technology is already applied very widely, for example in railway-station face-scanning ticket gates, online lending and identity verification. As the pace of life keeps accelerating, manual checking can no longer keep up, and the demand for intelligent machines grows day by day. In the many places where identity must be verified before entry, staff cannot stay fully alert and energetic for long periods, and occasional negligence is inevitable, creating safety risks. What is needed is a face detection and recognition technology that lets a machine verify a person's identity intelligently by detecting and comparing faces. Compared with manual verification, such an intelligent approach can greatly improve verification efficiency and make people's daily life and work much more convenient.
Many scenarios require verifying whether a certificate belongs to the person holding it, and such person-certificate verification is usually realized by means of face detection and face recognition technologies. Traditional face detection and recognition implementations, such as those in OpenCV and dlib, are based on classical machine learning: they train a detection model and a recognition model on features that are preset in advance, and use those models to perform identity verification.
Face detection and face recognition technologies based on traditional machine learning have the following problems:
1) Model training is performed on manually preset features, so the final detection and recognition results may be poor:
The human face has many features, but traditional machine learning can only train a model on a subset of features selected by hand. Because manually preset features carry subjective bias, they cannot reflect the characteristics of the human face well, and an image exhibiting those features is not necessarily a face, so the detection and recognition results are inaccurate.
2) Only a small number of certificate photos can be acquired, and the insufficient data set may lead to a poorly trained model:
Because certificate photos involve personal privacy, only a few samples can be obtained; with such an insufficient data set, traditional techniques cannot train the model well enough.
Owing to these defects, identity verification results remain inaccurate, so some users can still hold other people's certificates to achieve their own ends.
Disclosure of Invention
In view of the above, the present invention is proposed to provide an identity verification method and apparatus based on human face features that overcome, or at least partially solve, the above problems.
According to one aspect of the present invention, there is provided an identity verification method based on human face features, the method being performed based on a trained face detection model and a trained face recognition model, the method comprising:
acquiring certificate-holding sample images, and training a region-based convolutional neural network with the certificate-holding sample images to obtain a face detection model, wherein the face detection model outputs face pictures;
training a deep convolutional neural network with the face pictures output by the face detection model to obtain a face recognition model;
acquiring a certificate-holding image to be verified, wherein the certificate-holding image to be verified contains the certificate holder's face and the certificate face;
inputting the certificate-holding image to be verified into the face detection model to obtain the face pictures in the certificate-holding image to be verified;
inputting the face pictures in the certificate-holding image to be verified into the face recognition model, and calculating the face difference degree between those face pictures;
and determining the identity verification result according to the face difference degree.
According to another aspect of the present invention, there is provided an identity verification apparatus based on human face features, the apparatus operating based on a trained face detection model and a trained face recognition model, the apparatus comprising:
a face detection model training module, adapted to acquire certificate-holding sample images and train a region-based convolutional neural network with them to obtain a face detection model, wherein the face detection model outputs face pictures;
a face recognition model training module, adapted to train a deep convolutional neural network with the face pictures output by the face detection model to obtain a face recognition model;
an obtaining module, adapted to acquire a certificate-holding image to be verified, wherein the certificate-holding image to be verified contains the certificate holder's face and the certificate face;
a detection module, adapted to input the certificate-holding image to be verified into the face detection model to obtain the face pictures in that image;
a recognition module, adapted to input the face pictures in the certificate-holding image to be verified into the face recognition model and calculate the face difference degree between those face pictures;
and a verification module, adapted to determine the identity verification result according to the face difference degree.
According to still another aspect of the present invention, there is provided an electronic device comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above identity verification method based on human face features.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform the operations corresponding to the above identity verification method based on human face features.
According to the solution provided by the invention, certificate-holding sample images are acquired and used to train a region-based convolutional neural network, yielding a face detection model that outputs face pictures; a deep convolutional neural network is trained with the face pictures output by the face detection model, yielding a face recognition model; a certificate-holding image to be verified, containing the certificate holder's face and the certificate face, is acquired and input into the face detection model to obtain the face pictures it contains; those face pictures are input into the face recognition model, which calculates the face difference degree between them; and the identity verification result is determined according to the face difference degree. Because the face detection model and the face recognition model are trained on convolutional neural networks, training-data features are extracted automatically, avoiding the detection and recognition inaccuracy caused by subjective factors or incomplete preset features. In addition, the training samples are easy to obtain, which expands the training data and solves the problem that insufficient training data yields an insufficiently accurate model and hence inaccurate verification results. By improving the accuracy of the face detection and face recognition models, the accuracy of the identity verification result is improved: whether a certificate belongs to its holder can be identified accurately, overcoming the prior-art defect that some users achieve their ends by holding other people's certificates.
The foregoing is only an overview of the technical solutions of the present invention. In order that the technical means of the invention may be understood more clearly, and that the above and other objects, features and advantages of the invention may become more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of an identity verification method based on human face features according to an embodiment of the invention;
FIG. 2 is a flow chart of face detection model training according to an embodiment of the invention;
FIG. 3 is a flow chart of face recognition model training according to an embodiment of the invention;
FIG. 4 is a schematic structural diagram of an identity verification apparatus based on human face features according to an embodiment of the invention;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flow chart of an identity verification method based on human face features according to an embodiment of the invention. The method is performed based on a trained face detection model and a trained face recognition model and, as shown in fig. 1, comprises the following steps:
Step S101, certificate-holding sample images are obtained, and a region-based convolutional neural network is trained with the certificate-holding sample images to obtain a face detection model, where the face detection model outputs face pictures.
In real life there are many scenarios in which a user must hold up a certificate: for example, a railway-station face-scanning ticket gate requires the user to hold a certificate for ticket checking, and online lending requires the user to hold a certificate to transact business. After the certificate-holding sample images are obtained, the region-based convolutional neural network is trained with them to obtain the face detection model. The trained face detection model can detect the faces in an input certificate-holding image and finally output face pictures for all the faces in that image.
Because certificate-holding images are easy to obtain, the pool of sample images available for training the face detection model is enlarged and the training accuracy is improved, overcoming the defect of traditional techniques that an insufficient image set makes model training insufficiently accurate.
Step S102, a deep convolutional neural network is trained with the face pictures output by the face detection model to obtain a face recognition model.
In this embodiment, the training samples of the face recognition model are the face pictures output by the face detection model. Since the face detection model can accurately locate the faces in the certificate-holding sample images, it provides more accurate training samples for the face recognition model, which improves the accuracy of the trained face recognition model.
Specifically, after the face pictures are output by the face detection model in step S101, the deep convolutional neural network is trained with those face pictures to obtain the face recognition model. The trained face recognition model can measure the degree of difference between two input face pictures.
Step S103, a certificate-holding image to be verified is acquired.
The certificate-holding image to be verified is an image of a certificate holder holding up his or her certificate, collected when the holder's identity is authenticated; it contains both the certificate holder's face and the certificate face. For example, when user A passes through the face-scanning ticket gate of a railway station, user A must hold up the corresponding certificate, and this step acquires the certificate-holding image containing user A's face and the face on the certificate user A is holding. In practice, to ensure the accuracy of identity verification, no other person should be close by while one person is being verified, so the acquired certificate-holding image to be verified usually contains only the holder's face and the certificate face.
Step S104, the certificate-holding image to be verified is input into the face detection model to obtain the face pictures in that image.
The face detection model accurately detects the face pictures in the certificate-holding image to be verified. Specifically, after the certificate-holding image to be verified is acquired, it is input into the face detection model trained in step S101; the model detects the faces in the image and outputs the corresponding face pictures.
In general, if the face detection model outputs more than two face pictures, that is, more than two faces are detected, face recognition is not performed and a warning is issued directly.
Step S105, the face pictures in the certificate-holding image to be verified are input into the face recognition model, and the face difference degree between them is calculated.
After the face pictures in the certificate-holding image to be verified are detected in step S104, they are input into the face recognition model trained in step S102; the face recognition model calculates and outputs the face difference degree between them. The face difference degree represents the difference between two faces and can, for example, be represented by the Euclidean distance.
In this embodiment, the face detection model may process several certificate-holding images to be verified and output several sets of face pictures; the face pictures detected in one certificate-holding image may be stored in one folder to facilitate recognition by the face recognition model.
Step S106, the identity verification result is determined according to the face difference degree.
After the face difference degree between the face pictures in the certificate-holding image to be verified is calculated in step S105, the identity of the certificate holder can be verified from it. Specifically, the face difference degree is compared with a preset face difference threshold to determine whether the certificate belongs to its holder. If the face difference degree is smaller than or equal to the preset threshold, the certificate holder's face and the certificate face belong to the same person, the certificate belongs to the holder, and identity verification is determined to have succeeded; if the face difference degree is greater than the preset threshold, the two faces do not belong to the same person, the certificate does not belong to the holder, and identity verification is determined to have failed. Taking a preset face difference threshold of 0.8 as an example: if the face difference degree is greater than 0.8, identity verification fails; if it is smaller than or equal to 0.8, identity verification succeeds.
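As an illustration of this decision step only, the comparison can be sketched as follows. This is a minimal sketch and not part of the patent text: NumPy, the function name and the 128-dimensional embeddings are assumptions, and 0.8 is simply the example threshold above.

```python
import numpy as np

FACE_DIFF_THRESHOLD = 0.8  # the preset face difference threshold from the example above

def verify_identity(holder_embedding: np.ndarray, certificate_embedding: np.ndarray) -> bool:
    """Return True if the certificate holder's face and the certificate face are judged
    to be the same person. Inputs are the feature vectors (e.g. 128-dimensional) that
    the face recognition model produced for the two faces in one certificate-holding image."""
    # Face difference degree, represented here by the Euclidean distance.
    difference = np.linalg.norm(holder_embedding - certificate_embedding)
    # Difference <= threshold: same person, verification succeeds.
    return difference <= FACE_DIFF_THRESHOLD
```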
According to the solution provided by the invention, the face detection model and the face recognition model are trained on convolutional neural networks, so training-data features are extracted automatically and the detection and recognition inaccuracy caused by subjective factors or incomplete preset features is avoided. In addition, the training samples are easy to obtain, which expands the training data and solves the problem that insufficient training data yields an insufficiently accurate model and hence inaccurate verification results. By improving the accuracy of the face detection and face recognition models, the accuracy of the identity verification result is improved: whether a certificate belongs to its holder can be identified accurately, overcoming the prior-art defect that some users achieve their ends by holding other people's certificates.
FIG. 2 is a flow chart of face detection model training according to an embodiment of the present invention. As shown in fig. 2, the training comprises the following steps:
step S201, extracting a plurality of personal certificate sample images, and classification labeling results and boundary frame labeling results corresponding to the personal certificate sample images from a sample library, wherein the personal certificate sample images comprise certificate faces corresponding to certificate holders and faces of the certificate holders.
In actual life, many scenes needing a user to hold certificates exist, for example, a railway station face-brushing ticket-checking system needs the user to hold a certificate to check a ticket, online loan needs the user to hold the certificate to transact, and the like, and the user needs to hold the certificate to realize the system.
The embodiment collects images of the certificates held by people in various scenes, the images are easy to obtain, and are not easy to obtain due to privacy unlike the certificate images, so that the number of sample images used for training is increased, and the accuracy of model training is improved.
After the certificate-holding images are collected, the faces in them need to be labeled; this may involve classification labeling, such as marking each region as face or non-face, and bounding-box labeling.
For example, the labelImg labeling tool can be used to label the faces in the collected certificate-holding images. A bounding box containing the exact position of each face is marked with (xmin, ymin, xmax, ymax), the coordinates of the top-left and bottom-right corners of the face box, for subsequent face detection model training. After labeling is finished, the classification labeling results and bounding-box labeling results are stored in the sample library; when face detection model training is needed, a number of certificate-holding sample images and their corresponding classification and bounding-box labeling results are extracted from the sample library.
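labelImg saves annotations in Pascal VOC XML format by default, so the stored labels could be read back roughly as in the sketch below; the file layout follows the standard VOC schema, and the "face" class name is an assumption for illustration.

```python
import xml.etree.ElementTree as ET

def load_voc_boxes(annotation_path: str):
    """Parse one labelImg (Pascal VOC XML) annotation file into (label, box) pairs."""
    root = ET.parse(annotation_path).getroot()
    samples = []
    for obj in root.iter("object"):
        label = obj.find("name").text  # classification label, e.g. "face"
        box = obj.find("bndbox")
        # (xmin, ymin) is the top-left corner of the face box, (xmax, ymax) the bottom-right.
        coords = tuple(int(box.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax"))
        samples.append((label, coords))
    return samples
```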
After the certificate-holding sample images and their classification and bounding-box labeling results are extracted from the sample library, the images can be used for training: inputting a certificate-holding sample image into the region-based convolutional neural network for training yields the sample classification result and sample bounding-box result corresponding to that image. Specifically, these results can be obtained through steps S202 to S205:
and S202, extracting the characteristics of the personal certificate sample image by using the regional convolutional neural network to obtain a characteristic diagram corresponding to the personal certificate sample image.
And expressing the sample image of the certificate holding person as a tensor of h multiplied by w multiplied by d, wherein h and w express the length and width of the sample image of the certificate holding person, the unit is a pixel, d expresses the number of channels of the sample image of the certificate holding person, for example, h is 800, w is 300, and d is 3, and extracting the characteristics of the sample image of the certificate holding person by adopting a regional convolution neural network to obtain a corresponding characteristic diagram.
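The patent does not name a particular backbone for this step, so the sketch below, with PyTorch and a VGG16 trunk, is purely an assumed illustration of turning one 800 × 300 × 3 certificate-holding image into a feature map.

```python
import torch
import torchvision

# A hypothetical backbone: any convolutional trunk can play this role.
backbone = torchvision.models.vgg16(weights=None).features.eval()

# One certificate-holding sample image as an h x w x d tensor (h=800, w=300, d=3),
# rearranged into the (batch, channels, height, width) layout PyTorch expects.
image = torch.rand(1, 3, 800, 300)

with torch.no_grad():
    feature_map = backbone(image)
print(feature_map.shape)  # torch.Size([1, 512, 25, 9]) after five 2x down-samplings
```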
Step S203, region processing is performed on the feature map to obtain the target regions in the certificate-holding sample image.
An anchor point is created for each point of the feature map corresponding to the certificate-holding sample image. Different scales and aspect ratios are selected to form the anchor set, giving N × M × k anchors in total, where N and M denote the size of the feature map and k is the number of anchors selected at each feature-map point; for example, 70 × 40 × 20 = 56000 anchors in total.
Anchors that extend beyond the boundary of the certificate-holding sample image are adjusted: if an anchor is out of range, its out-of-range values are clipped to the values at the image boundary.
The feature map is processed with the region-based convolutional neural network, and each anchor outputs two predicted values: a background score and a face score. Whether an anchor is a face or background is determined from these scores, the anchors are sorted by score, and the top-ranked anchors are kept as proposal regions, i.e. the target regions in the certificate-holding sample image. The network used here is a fully convolutional network whose input is the feature map and whose output is a set of target regions.
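A minimal sketch of the anchor generation, boundary clipping and two-score proposal head described above, under assumed parameters: three scales and three aspect ratios give k = 9 anchors per feature-map point (the patent's own example uses k = 20), and the 25 × 9 feature map matches the earlier sketch.

```python
import torch
import torch.nn as nn

def make_anchors(n, m, stride, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Create k = len(scales) * len(ratios) anchor boxes at each of the N x M feature-map points."""
    anchors = []
    for y in range(n):
        for x in range(m):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride  # anchor centre in image pixels
            for s in scales:
                for r in ratios:
                    w, h = s * (r ** 0.5), s / (r ** 0.5)
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return torch.tensor(anchors)

class RpnHead(nn.Module):
    """Fully convolutional head emitting two scores (background / face) per anchor."""
    def __init__(self, in_channels=512, k=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 512, 3, padding=1)
        self.score = nn.Conv2d(512, 2 * k, 1)  # one (background, face) score pair per anchor

    def forward(self, feature_map):
        return self.score(torch.relu(self.conv(feature_map)))

anchors = make_anchors(n=25, m=9, stride=32)       # 25 * 9 * 9 = 2025 anchors
anchors[:, 0::2] = anchors[:, 0::2].clamp(0, 300)  # clip x coordinates to the image width
anchors[:, 1::2] = anchors[:, 1::2].clamp(0, 800)  # clip y coordinates to the image height

scores = RpnHead()(torch.rand(1, 512, 25, 9))        # feature map from the previous sketch
face_scores = scores[0, 1::2]                        # the "face" channel of every score pair
proposals = face_scores.flatten().topk(300).indices  # keep the top-ranked anchors as target regions
```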
Step S204, the target regions in the certificate-holding sample image are pooled to obtain the feature vectors corresponding to them.
After the target regions in the certificate-holding sample image are obtained in step S203, RoI pooling is applied to each region of interest, yielding a fixed-size feature map for each target region, which is then converted into a feature vector.
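torchvision ships a RoI pooling operator, so this step could be sketched as follows; the two proposal boxes and the 7 × 7 output grid are assumptions, and spatial_scale maps image-pixel coordinates onto the 32x down-sampled feature map of the earlier sketches.

```python
import torch
from torchvision.ops import roi_pool

feature_map = torch.rand(1, 512, 25, 9)  # backbone output from the earlier sketch
proposals = [torch.tensor([[40.0,  80.0, 200.0, 400.0],    # two hypothetical target regions
                           [60.0, 500.0, 240.0, 760.0]])]  # in image (x1, y1, x2, y2) pixels

# Pool every target region to a fixed 7 x 7 grid regardless of its original size.
pooled = roi_pool(feature_map, proposals, output_size=(7, 7), spatial_scale=1.0 / 32)
feature_vectors = pooled.flatten(start_dim=1)  # one fixed-length vector per target region
print(feature_vectors.shape)                   # torch.Size([2, 25088]) = (2, 512 * 7 * 7)
```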
Step S205, two different fully connected layers are applied to the feature vector corresponding to each target region in the certificate-holding sample image, giving the sample classification result and the sample bounding-box result corresponding to that image.
After the feature vectors corresponding to the target regions are obtained in step S204, two different fully connected layers process each vector. One fully connected layer has 2 neural units (background and face) and forms the classification layer of the region-based convolutional neural network; a softmax function classifies the content of each target region, giving the sample classification result corresponding to the certificate-holding sample image. The other fully connected layer has 4 neural units and forms the regression layer; for each target region it predicts Δx_center, Δy_center, Δwidth and Δheight, and the target region undergoes bounding-box regression again to obtain a more precise bounding box, giving the sample bounding-box result corresponding to the certificate-holding sample image.
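The two fully connected heads can be sketched as below (PyTorch assumed; the input width matches the 512 × 7 × 7 pooled vectors of the previous sketch):

```python
import torch
import torch.nn as nn

class DetectionHeads(nn.Module):
    """Two fully connected layers over each pooled region vector: a 2-unit
    classification layer (background / face) and a 4-unit regression layer."""
    def __init__(self, in_features=512 * 7 * 7):
        super().__init__()
        self.classifier = nn.Linear(in_features, 2)  # background score and face score
        self.regressor = nn.Linear(in_features, 4)   # dx_center, dy_center, dwidth, dheight

    def forward(self, feature_vectors):
        class_probs = torch.softmax(self.classifier(feature_vectors), dim=1)  # softmax classification
        box_deltas = self.regressor(feature_vectors)                          # bounding-box regression
        return class_probs, box_deltas

probs, deltas = DetectionHeads()(torch.rand(2, 512 * 7 * 7))  # two regions from the pooling sketch
```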
Step S206, the loss function of the region-based convolutional neural network is obtained from the classification loss between the sample classification results and the classification labeling results and the regression loss between the sample bounding-box results and the bounding-box labeling results, and the weight parameters of the network are updated according to this loss function.
After the sample classification result and the sample bounding-box result corresponding to the certificate-holding sample image are obtained in step S205, the loss function of the region-based convolutional neural network can be obtained from the classification loss between the sample classification result and the classification labeling result and the regression loss between the sample bounding-box result and the bounding-box labeling result; this loss function is the sum of the classification loss function and the regression loss function. The weight parameters of the network are then updated according to the loss function so that the model is adjusted continuously.
The loss function of the region-based convolutional neural network is:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)

where i is the index of an anchor in the mini-batch and p_i is the predicted probability that anchor i is a face. The ground-truth label p_i^* is 1 if the anchor is a positive sample and 0 if it is a negative sample; t_i is the vector of 4 parameterized coordinates of the predicted bounding box, and t_i^* is that of the ground-truth box associated with a positive anchor. The loss function is divided into 2 parts: the classification loss L_{cls} is the log loss over the two classes (face and background); the regression loss L_{reg} uses the smoothed L1 loss and is activated only for positive samples, calculated as follows:

L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L1}(t_i - t_i^*)

\mathrm{smooth}_{L1}(x) = \begin{cases} 0.5 x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}
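Under the formulas above, the loss could be computed roughly as in this sketch; PyTorch is assumed, the normalisation constants N_cls and N_reg are folded into the means, and λ = 1 is an assumed default.

```python
import torch
import torch.nn.functional as F

def region_cnn_loss(face_probs, labels, box_preds, box_targets, lam=1.0):
    """Sum of the classification loss and the regression loss, mirroring the
    formulas above: log loss over (face, background) plus smoothed L1 on the
    box parameters, the latter activated only for positive (face) anchors."""
    cls_loss = F.binary_cross_entropy(face_probs, labels.float())  # L_cls, log loss
    positive = labels == 1                                         # anchors with p_i* = 1
    if positive.any():
        reg_loss = F.smooth_l1_loss(box_preds[positive], box_targets[positive])  # L_reg
    else:
        reg_loss = box_preds.sum() * 0.0  # no positive anchors: zero regression loss
    return cls_loss + lam * reg_loss
```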
and step S207, iteratively executing the step S201 to the step S206 until a preset convergence condition is met, and obtaining a face detection model.
And (5) iteratively executing the step S201 to the step S206 until a preset convergence condition is met, and obtaining a face detection model. Wherein the predetermined convergence condition includes: the iteration times reach the preset iteration times; and/or the output value of the loss function of the regional convolutional neural network is smaller than a preset threshold value. For example, the preset iteration number is set to 1000, and a person skilled in the art may set the preset iteration number and the preset threshold according to actual experience, which is not specifically described herein.
FIG. 3 is a flow chart of face recognition model training according to an embodiment of the present invention. As shown in fig. 3, the training comprises the following steps:
step S301, a deep convolutional neural network is utilized to perform feature processing on a face picture output by the face detection model, and a corresponding face feature vector is obtained.
In the embodiment, a face picture in an image of a person holding certificate detected by a face detection model is used as a sample for training the face recognition model. In this embodiment, for each person image, the face image output by the face detection model is stored in one folder, where one folder represents one person and different folders represent different persons.
And (4) carrying out feature extraction on the face picture by using a deep convolutional neural network to obtain a feature map. And inputting the extracted feature map into an embedding layer, wherein the function of the embedding layer is to convert the feature map into a 128-dimensional feature vector.
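A minimal sketch of such a network, with an assumed toy trunk standing in for the deep convolutional layers; only the final 128-dimensional embedding layer is taken from the description above.

```python
import torch
import torch.nn as nn

class FaceEmbeddingNet(nn.Module):
    """Convolutional trunk plus an embedding layer that converts the extracted
    feature map into a 128-dimensional face feature vector."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(  # a deliberately small stand-in trunk
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embedding = nn.Linear(64, 128)  # the embedding layer: feature map -> 128-d vector

    def forward(self, faces):
        features = self.trunk(faces).flatten(start_dim=1)
        # L2-normalise so Euclidean distances between embeddings are comparable.
        return nn.functional.normalize(self.embedding(features), dim=1)

embeddings = FaceEmbeddingNet()(torch.rand(4, 3, 160, 160))  # four face pictures -> (4, 128)
```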
Step S302, a face feature vector is selected at random, and a same-class face feature vector and a different-class face feature vector are selected at random for it.
A face feature vector, called the anchor, is selected at random; then a face feature vector belonging to the same class (the same person) as the anchor, called the positive, and a face feature vector of a different class, called the negative (the heterogeneous face feature vector), are selected at random.
Step S303, the selected face feature vector together with its same-class and different-class face feature vectors is input into the deep convolutional neural network for training, a triplet loss function is obtained, and the weight parameters of the deep convolutional neural network are updated according to the triplet loss function.
By continuously learning to bring the anchor closer to the positive and farther from the negative, the triplet loss function is obtained, expressed as follows:
L = \sum_{i=1}^{N} \left[ \| x_i^a - x_i^p \|_2^2 - \| x_i^a - x_i^n \|_2^2 + \alpha \right]_+

where i is the index of a triplet in the mini-batch, N is the total number of triplets, x_i^a is the face feature vector of a randomly selected person (the anchor), x_i^p is a face feature vector belonging to the same person as x_i^a, x_i^n is a face feature vector of another person, and α is the margin enforced between the positive and negative distances.
The distance referred to in this step is the Euclidean distance, which can be understood as the degree of difference.
After the triplet loss function is obtained, the weight parameters of the deep convolutional neural network are updated according to it.
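One training step under this loss could look like the following sketch; PyTorch is assumed, and the margin value of 0.2 and the batch of random 128-dimensional embeddings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Batch mean of [ d(a, p)^2 - d(a, n)^2 + margin ]_+ , matching the formula above."""
    pos_dist = (anchor - positive).pow(2).sum(dim=1)  # squared Euclidean distance anchor <-> positive
    neg_dist = (anchor - negative).pow(2).sum(dim=1)  # squared Euclidean distance anchor <-> negative
    return F.relu(pos_dist - neg_dist + margin).mean()

a, p, n = (torch.rand(8, 128, requires_grad=True) for _ in range(3))
loss = triplet_loss(a, p, n)
loss.backward()  # in a real loop, followed by an optimizer step to update the weights
```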
Step S304, steps S301 to S303 are executed iteratively until a predetermined convergence condition is met, and the face recognition model is obtained.
The predetermined convergence condition includes: the number of iterations reaches a preset number of iterations. For example, the preset number of iterations may be set to 2000; those skilled in the art can set it according to practical experience, which is not detailed here.
Fig. 4 is a schematic structural diagram of an identity verification apparatus based on human face features according to an embodiment of the present invention. The apparatus operates based on a trained face detection model and a trained face recognition model and, as shown in fig. 4, comprises: a face detection model training module 401, a face recognition model training module 402, an obtaining module 403, a detection module 404, a recognition module 405 and a verification module 406.
The face detection model training module 401 is adapted to acquire certificate-holding sample images and train the region-based convolutional neural network with them to obtain a face detection model, where the face detection model outputs face pictures.
The face recognition model training module 402 is adapted to train the deep convolutional neural network with the face pictures output by the face detection model to obtain a face recognition model.
The obtaining module 403 is adapted to acquire a certificate-holding image to be verified, where the image contains the certificate holder's face and the certificate face.
The detection module 404 is adapted to input the certificate-holding image to be verified into the face detection model to obtain the face pictures in that image.
The recognition module 405 is adapted to input the face pictures in the certificate-holding image to be verified into the face recognition model and calculate the face difference degree between those face pictures.
The verification module 406 is adapted to determine the identity verification result according to the face difference degree.
Optionally, the face detection model training module is further adapted to: extract a plurality of certificate-holding sample images, together with their corresponding classification labeling results and bounding-box labeling results, from a sample library, where each certificate-holding sample image contains the certificate holder's face and the corresponding certificate face;
input the certificate-holding sample images into the region-based convolutional neural network for training, obtaining the sample classification results and sample bounding-box results corresponding to them;
obtain the loss function of the region-based convolutional neural network from the classification loss between the sample classification results and the classification labeling results and the regression loss between the sample bounding-box results and the bounding-box labeling results, and update the weight parameters of the network according to this loss function;
the face detection model training module executes iteratively until a predetermined convergence condition is met, obtaining the face detection model.
Optionally, the face detection model training module is further adapted to: extract features from the certificate-holding sample image with the region-based convolutional neural network, obtaining the feature map corresponding to that image;
perform region processing on the feature map to obtain the target regions in the certificate-holding sample image;
pool the target regions in the certificate-holding sample image to obtain the feature vectors corresponding to them;
and apply two different fully connected layers to the feature vector of each target region, obtaining the sample classification result and sample bounding-box result corresponding to the certificate-holding sample image.
Optionally, the predetermined convergence condition comprises: the number of iterations reaches a preset number of iterations; and/or the output value of the loss function of the region-based convolutional neural network is smaller than a preset threshold.
Optionally, the face recognition model training module is further adapted to: perform feature processing on the face pictures output by the face detection model with the deep convolutional neural network to obtain the corresponding face feature vectors;
select a face feature vector at random, and select at random a same-class face feature vector and a different-class face feature vector for it;
input the selected face feature vector together with its same-class and different-class face feature vectors into the deep convolutional neural network for training, obtain the triplet loss function, and update the weight parameters of the deep convolutional neural network according to the triplet loss function;
the face recognition model training module executes iteratively until a predetermined convergence condition is met, obtaining the face recognition model.
Optionally, the predetermined convergence condition comprises: the number of iterations reaches a preset number of iterations.
Optionally, the verification module is further adapted to: judge whether the face difference degree is smaller than or equal to a preset face difference threshold;
if so, determine that identity verification has succeeded; if not, determine that identity verification has failed.
According to the solution provided by the invention, the face detection model and the face recognition model are trained on convolutional neural networks, so training-data features are extracted automatically and the detection and recognition inaccuracy caused by subjective factors or incomplete preset features is avoided. In addition, the training samples are easy to obtain, which expands the training data and solves the problem that insufficient training data yields an insufficiently accurate model and hence inaccurate verification results. By improving the accuracy of the face detection and face recognition models, the accuracy of the identity verification result is improved: whether a certificate belongs to its holder can be identified accurately, overcoming the prior-art defect that some users achieve their ends by holding other people's certificates.
An embodiment of the invention further provides a non-volatile computer storage medium. The computer storage medium stores at least one executable instruction, and the computer-executable instruction can perform the identity verification method based on human face features in any of the above method embodiments.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 5, the electronic device may include: a processor (processor), a Communications Interface (Communications Interface), a memory (memory), and a Communications bus.
Wherein:
the processor, the communication interface, and the memory communicate with each other via a communication bus.
A communication interface for communicating with network elements of other devices, such as clients or other servers.
And the processor is used for executing a program, and particularly can execute related steps in the embodiment of the identity authentication method based on the human face features.
In particular, the program may include program code comprising computer operating instructions.
The processor may be a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And the memory is used for storing programs. The memory may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program may be specifically configured to cause the processor to execute the identity authentication method based on the human face feature in any of the method embodiments described above. For specific implementation of each step in the program, reference may be made to corresponding steps and corresponding descriptions in units in the above identity authentication embodiment based on the human face features, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a face feature based authentication device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. An identity verification method based on human face features, comprising:
acquiring certificate-holding sample images, and training a region-based convolutional neural network with the certificate-holding sample images to obtain a face detection model, wherein the face detection model outputs face pictures;
training a deep convolutional neural network with the face pictures output by the face detection model to obtain a face recognition model;
acquiring a certificate-holding image to be verified, wherein the certificate-holding image to be verified contains the certificate holder's face and the certificate face;
inputting the certificate-holding image to be verified into the face detection model to obtain the face pictures in the certificate-holding image to be verified;
inputting the face pictures in the certificate-holding image to be verified into the face recognition model, and calculating the face difference degree between those face pictures;
and determining the identity verification result according to the face difference degree.
2. The method of claim 1, wherein acquiring the certificate-holding sample images and training the region-based convolutional neural network with them to obtain the face detection model further comprises:
S1, extracting a plurality of certificate-holding sample images, together with the classification labeling results and bounding-box labeling results corresponding to them, from a sample library, wherein each certificate-holding sample image contains the certificate holder's face and the corresponding certificate face;
S2, inputting the certificate-holding sample images into the region-based convolutional neural network for training, obtaining the sample classification results and sample bounding-box results corresponding to them;
S3, obtaining the loss function of the region-based convolutional neural network from the classification loss between the sample classification results and the classification labeling results and the regression loss between the sample bounding-box results and the bounding-box labeling results, and updating the weight parameters of the network according to this loss function;
and executing steps S1 to S3 iteratively until a predetermined convergence condition is met, obtaining the face detection model.
3. The method of claim 2, wherein inputting the certificate-holding sample images into the region-based convolutional neural network for training and obtaining the corresponding sample classification results and sample bounding-box results further comprises:
extracting features from the certificate-holding sample image with the region-based convolutional neural network to obtain the feature map corresponding to that image;
performing region processing on the feature map to obtain the target regions in the certificate-holding sample image;
pooling the target regions in the certificate-holding sample image to obtain the feature vectors corresponding to them;
and applying two different fully connected layers to the feature vector of each target region, obtaining the sample classification result and sample bounding-box result corresponding to the certificate-holding sample image.
4. The method of claim 2 or 3, wherein the preset convergence condition comprises: the number of iterations reaches a preset number of iterations; and/or the output value of the loss function of the region convolutional neural network is smaller than a preset threshold.
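As a trivial illustration of this stopping rule (both limit values are hypothetical):

```python
def converged(iteration, loss_value, max_iters=100_000, loss_threshold=0.01):
    # Training may stop when either condition (or both) holds.
    return iteration >= max_iters or loss_value < loss_threshold
```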
5. The method of claim 2 or 3, wherein training the deep convolutional neural network with the face pictures output by the face detection model to obtain the face recognition model further comprises:
S4, performing feature processing on the face pictures output by the face detection model with the deep convolutional neural network to obtain the corresponding face feature vectors;
S5, randomly selecting a face feature vector, and randomly selecting a same-class (same person) face feature vector and a different-class (different person) face feature vector for it;
S6, inputting the selected face feature vector together with its same-class and different-class face feature vectors into the deep convolutional neural network for training to obtain a triplet loss function, and updating the weight parameters of the deep convolutional neural network according to the triplet loss function;
and iteratively executing steps S4 to S6 until a preset convergence condition is met, to obtain the face recognition model.
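Steps S4-S6 describe triplet-based embedding training in the style of FaceNet. A minimal sketch using PyTorch's built-in triplet margin loss; the toy embedding network, the margin value, and the random triplet batch are assumptions:

```python
import torch
import torch.nn as nn

# Toy deep convolutional embedding network (stand-in for the real one).
embed_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 128),                    # 128-d face feature vector (S4)
)
triplet_loss = nn.TripletMarginLoss(margin=0.2)   # margin is an assumption
optimizer = torch.optim.Adam(embed_net.parameters(), lr=1e-4)

# Toy triplet batch: anchor, same-class, and different-class face crops (S5).
anchor = torch.rand(8, 3, 112, 112)
positive = torch.rand(8, 3, 112, 112)
negative = torch.rand(8, 3, 112, 112)

for step in range(10):                     # iterate S4-S6 until convergence
    a, p, n = embed_net(anchor), embed_net(positive), embed_net(negative)
    loss = triplet_loss(a, p, n)           # S6: triplet loss on the vectors
    optimizer.zero_grad()
    loss.backward()                        # update the weight parameters
    optimizer.step()
```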
6. The method of claim 5, wherein the preset convergence condition comprises: the number of iterations reaches a preset number of iterations.
7. The method of any one of claims 1-3, wherein determining the identity verification result according to the face difference degree further comprises:
judging whether the face difference degree is smaller than or equal to a preset face difference degree threshold;
if so, determining that the identity verification succeeds; if not, determining that the identity verification fails.
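Claim 7's decision reduces to a single comparison; a hypothetical helper:

```python
def verification_result(face_difference, threshold):
    # Success iff the face difference degree is within the preset threshold.
    return "success" if face_difference <= threshold else "failure"
```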
8. An identity verification device based on face features, comprising:
a face detection model training module, adapted to acquire certificate-holding sample images and train a region convolutional neural network with the certificate-holding sample images to obtain a face detection model, wherein the face detection model outputs face pictures;
a face recognition model training module, adapted to train a deep convolutional neural network with the face pictures output by the face detection model to obtain a face recognition model;
an acquisition module, adapted to acquire a to-be-verified certificate-holding image, wherein the to-be-verified certificate-holding image comprises the certificate holder's face and the face on the certificate;
a detection module, adapted to input the to-be-verified certificate-holding image into the face detection model to obtain the face pictures in the to-be-verified certificate-holding image;
a recognition module, adapted to input the face pictures in the to-be-verified certificate-holding image into the face recognition model and calculate the face difference degree between the face pictures in the to-be-verified certificate-holding image;
and a verification module, adapted to determine an identity verification result according to the face difference degree.
9. An electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the identity verification method based on face features of any one of claims 1-7.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform the operations corresponding to the identity verification method based on face features of any one of claims 1-7.
CN201910611701.1A 2019-07-08 2019-07-08 Identity verification method and device based on human face features Pending CN112199975A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910611701.1A CN112199975A (en) 2019-07-08 2019-07-08 Identity verification method and device based on human face features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910611701.1A CN112199975A (en) 2019-07-08 2019-07-08 Identity verification method and device based on human face features

Publications (1)

Publication Number Publication Date
CN112199975A true CN112199975A (en) 2021-01-08

Family

ID=74004468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910611701.1A Pending CN112199975A (en) 2019-07-08 2019-07-08 Identity verification method and device based on human face features

Country Status (1)

Country Link
CN (1) CN112199975A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283359A (en) * 2021-06-02 2021-08-20 万达信息股份有限公司 Authentication method and system for handheld certificate photo and electronic equipment
CN114565967A (en) * 2022-04-28 2022-05-31 广州丰石科技有限公司 Worker card face detection method, terminal and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780906A (en) * 2016-12-28 2017-05-31 北京品恩科技股份有限公司 A kind of testimony of a witness unification recognition methods and system based on depth convolutional neural networks
CN107577987A (en) * 2017-08-01 2018-01-12 广州广电卓识智能科技有限公司 Identity authentication method, system and device
CN109543507A (en) * 2018-09-29 2019-03-29 深圳壹账通智能科技有限公司 Identity identifying method, device, terminal device and storage medium

Similar Documents

Publication Publication Date Title
CN109657631B (en) Human body posture recognition method and device
CN106780906B Person-certificate consistency recognition method and system based on a deep convolutional neural network
US10262190B2 (en) Method, system, and computer program product for recognizing face
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN112232117A (en) Face recognition method, face recognition device and storage medium
CN109376604B (en) Age identification method and device based on human body posture
CN106815566A Face retrieval method based on multi-task convolutional neural networks
CN107958230B (en) Facial expression recognition method and device
CN109858375B (en) Living body face detection method, terminal and computer readable storage medium
JP2022521038A (en) Face recognition methods, neural network training methods, devices and electronic devices
CN110852257B (en) Method and device for detecting key points of human face and storage medium
CN111178252A (en) Multi-feature fusion identity recognition method
CN109816634B (en) Detection method, model training method, device and equipment
CN105654035B Three-dimensional face recognition method and data processing device applying it
CN104463237A (en) Human face verification method and device based on multi-posture recognition
CN112199975A (en) Identity verification method and device based on human face features
WO2023124869A1 (en) Liveness detection method, device and apparatus, and storage medium
CN109858355B (en) Image processing method and related product
CN110688875B (en) Face quality evaluation network training method, face quality evaluation method and device
Vezzetti et al. Application of geometry to rgb images for facial landmark localisation-a preliminary approach
CN113673308A (en) Object identification method, device and electronic system
CN113869364A (en) Image processing method, image processing apparatus, electronic device, and medium
JP4510562B2 (en) Circle center position detection method, apparatus, and program
CN111062338A (en) Certificate portrait consistency comparison method and system
CN111428670B (en) Face detection method, face detection device, storage medium and equipment

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210108)