
CN107798308B - Face recognition method based on short video training method - Google Patents


Info

Publication number
CN107798308B
CN107798308B (application CN201711095734.2A)
Authority
CN
China
Prior art keywords
face
target face
target
matrix
face characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711095734.2A
Other languages
Chinese (zh)
Other versions
CN107798308A (en)
Inventor
卢荣新
王泽民
李珉
施国鹏
Current Assignee
Yishi Digital Technology Chengdu Co ltd
Original Assignee
Yishi Digital Technology Chengdu Co ltd
Priority date
Filing date
Publication date
Application filed by Yishi Digital Technology Chengdu Co ltd filed Critical Yishi Digital Technology Chengdu Co ltd
Priority to CN201711095734.2A
Publication of CN107798308A
Application granted
Publication of CN107798308B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method based on a short-video training method, comprising the following steps: acquiring a short video containing a target face; detecting and tracking the target face in the short video and extracting a plurality of target face pictures; extracting feature values from each extracted picture to generate target face feature values corresponding to the respective pictures; combining the groups of target face feature values into a target face feature matrix; and comparing the target face feature matrix with a preset reference face feature matrix to verify the target face. The invention achieves accurate face recognition through image feature-value extraction, without constructing a face model; achieves fast face matching through coarse- and fine-positioning algorithms; and provides a high-precision recognition algorithm with strong anti-spoofing properties.

Description

Face recognition method based on short video training method
Technical Field
The invention relates to the field of video monitoring, and in particular to a face recognition method based on a short-video training method.
Background
Identity recognition is a difficult problem, not only in daily life but also in national defense, scientific research, security, intelligent manufacturing, and other fields. Face recognition, one of the most important branches of identity recognition, is significant for protecting national security and people's lives and property, as well as for counter-terrorism, and is therefore a research hotspot in the industry. With the rapid development of microelectronics and computer technology, advances in digital image processing and pattern recognition, and the continuing improvement of artificial intelligence, the range of applications for face recognition keeps expanding, for example criminal identification, security verification, and rapid people counting, and these applications are gradually becoming feasible both technically and economically.
Face recognition generally comprises three steps: face detection, face feature extraction, and face recognition and verification. Current methods mainly include the following:
1) Template matching. Several standard face patterns are stored, describing the whole face and individual facial features; the correlation between the input image and the stored patterns is computed and used for detection.
2) Appearance-based methods. In contrast to template matching, models or templates are learned from a training image set and then used for detection.
Objectively, several factors limit current mainstream face recognition algorithms: the shape, size, texture, and expression of facial features vary in complex ways that are hard to describe with a single uniform pattern; accessories such as glasses, earrings, and makeup may be present on the face; changes in the imaging environment, such as illumination, cause large differences in image quality; and large variations in image background degrade the match between the input image and the template, or cause the trained face model's feature values to deviate from the true face features. As a result, no current mainstream face recognition algorithm applies perfectly in all settings.
Although, with the development of technologies such as artificial intelligence, great progress has been made in frontal static face detection and recognition, face feature extraction, and multi-pose face recognition, the mainstream approach remains to train a recognition model on face pictures and then use that model to recognize a person. Its key limitation is whether enough face pictures are available and whether a sufficiently accurate face model can be trained; it is a static recognition-model algorithm. Face recognition therefore requires that both the test image and the enrolled face image be described accurately, or nearly so, at matching time; if either description is wrong, large recognition errors follow. In particular, if the test image of the target face is artificially forged, the deception is severe and recognition fails completely. This defect is obvious.
Patent No. 201410211494.8 (publication date: 2017.06.13) discloses a video face recognition method, mainly comprising: S1: performing face detection and tracking on a video to obtain a face sequence; S2: screening the face sequence to obtain a set of typical face frames; S3: optimizing the typical-frame set using frontal-face generation and image super-resolution techniques to obtain an enhanced set of typical face frames; S4: comparing the enhanced set against a preset static face-image matching library for face recognition or verification. That scheme largely solves the problems of low single-comparison accuracy and poor anti-spoofing performance. However, before training on the pictures, it must first optimize and enhance them: typical frames whose face pose exceeds a second preset threshold are corrected with frontal-face generation; typical frames whose inter-eye distance is below 60 pixels are enhanced with image super-resolution; and the enhanced typical-frame set and the preset static matching library undergo illumination preprocessing. The base images used for training and learning are therefore unnatural face images, and the learning result contains unnatural face characteristics left over from image processing. On one hand, these extra training artifacts affect the recognition and verification of face images acquired from short video; on the other hand, they increase the workload of face training and learning and reduce verification efficiency.
Disclosure of Invention
The aim of the invention is to address the existing problems by providing a face recognition method and system based on a short-video training method, solving the following issues: 1. the low accuracy of matching and recognition based on a single template; 2. the huge demand for source face pictures when constructing a face model by a training method; 3. the need to account for external objects on the face; 4. the need to preprocess acquired images: recognition works directly on natural face images, avoiding both the extra feature values introduced by preprocessing, which would affect the recognition result, and the more complicated processing flow and lower verification efficiency that preprocessing brings; 5. the spoofing of face recognition with a photograph in place of a live person.
The technical scheme adopted by the invention is as follows:
a face recognition method based on a short video training method comprises the following steps:
s001: constructing a reference face feature matrix for the face to be searched;
s100: acquiring a short video containing a target face;
s200: identifying and tracking a target face in the short video, and extracting a plurality of target face pictures;
s300: respectively extracting characteristic values of the extracted target face pictures to generate a plurality of groups of target face characteristic values respectively corresponding to the target face pictures;
s400: combining the plurality of groups of target face characteristic values to generate a target face characteristic matrix corresponding to the target face;
s500: and comparing the target face feature matrix with the reference face feature matrix to verify the target face.
In this method, the target face is tracked and extracted from the short video, which solves the problem of sourcing face images and at the same time screens the face from all angles. Because feature values are extracted directly from the face images, no prior image enhancement or optimization is needed, which shortens the recognition pipeline and improves its stability. Building a feature matrix enables multi-aspect face recognition based on multiple face feature values, adds comparison directions, and improves recognition accuracy.
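As a minimal sketch of steps S100 through S500: the feature extractor below is a deterministic placeholder (the description later suggests a convolutional neural network), the feature length of 8, the cosine similarity, and the acceptance threshold are all illustrative assumptions, since the patent's own similarity formula is only available as an image.

```python
import random

def extract_feature(picture_id, dim=8):
    # Placeholder for the real feature extractor (e.g. a CNN); it derives
    # a deterministic pseudo-random vector from the picture identifier.
    rng = random.Random(picture_id)
    return [rng.random() for _ in range(dim)]

def cosine(a, b):
    # Illustrative similarity; the patent's formula is not reproduced here.
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def build_feature_matrix(pictures):
    """S300/S400: one feature-value group per face picture, stacked as rows."""
    return [extract_feature(p) for p in pictures]

def verify(target_matrix, reference_matrix, threshold=0.5):
    """S500: the best pairwise similarity between the two matrices decides
    whether the target face matches the face to be searched."""
    best = max(cosine(t, r) for t in target_matrix for r in reference_matrix)
    return best >= threshold, best
```

For example, a matrix compared against itself verifies with similarity 1.0; in practice the reference matrix would come from step S001.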
Further, step S400 specifically comprises:
s4001: judging the similarity between a target face feature value Q1 to be stored in the target face feature matrix and each group of target face feature values already in the matrix; if the similarity between Q1 and every target face feature value in the matrix is within a preset threshold range, marking Q1 as an effective target face feature value; otherwise, marking Q1 as an invalid target face feature value;
s4002: storing effective target face feature values into the target face feature matrix, and discarding invalid target face feature values;
s4003: judging whether the number of groups of target face feature values stored in the matrix has reached the first predetermined value; if so, executing S500; otherwise, executing S4001.
This scheme builds an effective target face feature matrix, avoiding two failure modes: if the matrix contained identical or highly similar target face feature values, the scheme would degenerate into single-template recognition; and if the similarity requirement were too low, feature values from a different person could be stored, leading to misjudged recognition results. At the same time, setting a suitable first predetermined value keeps the target face feature matrix as small as possible while still recognizing the face with high accuracy, reducing the feature-extraction workload and the subsequent number of comparisons and thus improving recognition efficiency. Preferably, at least two groups of target face feature values are written into the matrix, with 5 to 7 groups as the preferred scheme.
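The admission test of S4001 to S4003 can be sketched as follows. The similarity window (low, high), the row cap standing in for the first predetermined value (5 to 7 groups in the preferred scheme), and the cosine similarity are all illustrative assumptions:

```python
def cosine(a, b):
    # Illustrative stand-in for the patent's similarity measure.
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def admit_feature(matrix, candidate, low=0.6, high=0.95, max_rows=7):
    """S4001-S4003: admit a candidate feature-value group only if its
    similarity to EVERY stored group lies inside the preset window:
    similar enough to be the same person, yet different enough to add
    information. An empty matrix accepts the first candidate."""
    if len(matrix) >= max_rows:
        return False                      # S4003: matrix already full
    if all(low <= cosine(candidate, row) <= high for row in matrix):
        matrix.append(candidate)          # S4002: effective value, stored
        return True
    return False                          # S4002: invalid value, discarded
```

A near-duplicate of a stored group (similarity above the upper bound) is rejected, which is what prevents the matrix from collapsing into a single template.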
Further, S001 specifically is:
s0001: acquiring a face data source containing a reference face;
s0002: identifying a reference face in the face data source, and extracting a plurality of reference face pictures;
s0003: respectively extracting characteristic values of the extracted plurality of reference face pictures to generate a plurality of reference face characteristic values respectively corresponding to the plurality of reference face pictures;
s0004: and combining the plurality of groups of reference face characteristic values to generate a reference face characteristic matrix corresponding to the reference face.
Preferably, there are at least two reference face pictures; 5 to 7 face pictures with different angles and different illumination are preferred.
Building a feature matrix for the face to be searched by the same method improves the reliability of matching against the target face. The reference face feature matrix describes the face to be searched from all directions, so that even when the target face appears only briefly, the momentarily extracted features can still be matched accurately.
Preferably, the face data source is a short video or a plurality of pictures containing the reference face.
This scheme provides a feature-library construction method based on either short videos or face pictures: when a short video of the face to be searched is lacking, the feature matrix is built from face pictures instead, enabling searches from multiple kinds of face sources.
Further, step S500 specifically comprises:
s5001: searching and comparing the target face feature matrix against the reference face feature matrix (preferably by matrix cross-multiplication to improve search efficiency), and judging the similarity between the target face feature values and the reference face feature values;
s5002: confirming the verification result for the target face in the short video according to the relationship between that similarity and the second predetermined value.
By setting a similarity threshold, the recognition result for the target face can be judged quickly, improving face recognition efficiency.
Further, S5001 specifically comprises:
s5001a: performing dimension reduction on the reference face feature matrix and the target face feature matrix, i.e., reducing the dimension of each group of reference face feature values and each group of target face feature values, and comparing the similarity of each group of dimension-reduced reference face feature values against all dimension-reduced target face feature values in the target feature matrix;
s5001b: if the target face feature matrix contains a dimension-reduced target face feature value whose similarity to a dimension-reduced reference face feature value exceeds the third predetermined value, judging the corresponding pre-reduction target face feature matrix to be an effective target face feature matrix; otherwise, judging the pre-reduction target face feature matrix to be invalid;
s5001c: comparing the similarity of the reference face feature matrix and the effective target face feature matrix.
By improving the cross-search between matrices, this scheme greatly shortens the time needed for one-to-one retrieval, reducing feature-comparison time and improving face recognition efficiency. Furthermore, it performs coarse positioning of the target face, first narrowing the face to be identified to a certain range and then finely comparing the faces within that range, which markedly improves recognition efficiency. In the coarse positioning step, comparing dimension-reduced feature values greatly reduces the computation compared with comparing the full feature values.
The third predetermined value may be set according to actual requirements (the accuracy required of face recognition) and should not be regarded as unclear.
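The coarse-then-fine search of S5001a to S5001c might look like the following sketch. Simple truncation stands in for the unspecified dimension-reduction step, cosine similarity for the patent's image-only formula, and both thresholds are arbitrary:

```python
def cosine(a, b):
    # Illustrative stand-in for the patent's similarity measure.
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def coarse_then_fine(reference, candidates, reduce_dim=4,
                     coarse_thr=0.7, fine_thr=0.8):
    """S5001a: compare cheap low-dimensional projections first;
    S5001b: keep only candidate matrices clearing the coarse threshold
    (the 'effective' matrices); S5001c: run the full-dimension comparison
    on the survivors only. Returns (matrix, best_similarity, verified)."""
    reduced_ref = [row[:reduce_dim] for row in reference]
    survivors = []
    for cand in candidates:                       # coarse positioning
        reduced = [row[:reduce_dim] for row in cand]
        best = max(cosine(r, c) for r in reduced_ref for c in reduced)
        if best > coarse_thr:
            survivors.append(cand)
    results = []
    for cand in survivors:                        # fine positioning
        best = max(cosine(r, c) for r in reference for c in cand)
        results.append((cand, best, best > fine_thr))
    return results
```

The saving comes from the coarse pass: most candidate matrices are eliminated on short vectors before any full-length comparison runs.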
Further, S5002 specifically comprises:
s5002a: according to the similarity between the target face feature values and the reference face feature values, executing S5002b when the similarity exceeds the second predetermined value; otherwise, judging the target face corresponding to the target face feature matrix not to be the face to be searched;
s5002b: screening out the maximum similarities that exceed the second predetermined value and weighting them to obtain the comparison result; calculating the reliability from the comparison result according to a predetermined rule;
s5002c: outputting the comparison result and/or the reliability.
Considering that environmental factors can make individual comparison results unstable, this scheme weights the qualifying similarities to compute the reliability of the comparison result, thereby improving its credibility.
Further, S5002b specifically comprises:
s50021: calculating the ratio of the number of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds the second predetermined value, to the total number of groups of reference face feature values in the matrix; when the ratio meets a predetermined condition, executing S50022; otherwise, taking the maximum similarity between each group of target face feature values and all reference face feature values as both the comparison result and the reliability;
s50022: screening out, for each group of target face feature values in the target face feature matrix, the maximum similarity with all reference face feature values that exceeds the second predetermined value; weighting the screened similarities according to predetermined weights to obtain the comparison result; and calculating the reliability from the comparison result and the predetermined weights according to a predetermined rule.
By screening and weighting multiple groups of values whose similarity reaches the standard, this scheme further improves the reliability of the comparison result.
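The screening and weighting of S50021 and S50022 can be sketched as below; uniform averaging stands in for the patent's unspecified preset weights, the passing ratio stands in for its reliability rule, and the thresholds are arbitrary assumptions:

```python
def cosine(a, b):
    # Illustrative stand-in for the patent's similarity measure.
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def match_with_reliability(target, reference, thr=0.8, ratio_needed=0.5):
    """S50021: compute the fraction of reference groups whose best match
    in the target matrix exceeds the threshold; if the fraction meets the
    condition, average the passing maxima (S50022); otherwise fall back
    to the single best similarity. Returns (comparison_result, ratio)."""
    per_ref_best = [max(cosine(r, t) for t in target) for r in reference]
    passing = [s for s in per_ref_best if s > thr]
    ratio = len(passing) / len(per_ref_best)
    if ratio >= ratio_needed:
        score = sum(passing) / len(passing)   # weighted comparison result
    else:
        score = max(per_ref_best)             # fallback: global maximum
    return score, ratio
```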
Preferably, the method for determining the similarity comprises calculating the similarity(x, y) between the reference face feature matrix and the target face feature matrix:
[Equation rendered as image GDA0002497673030000051 in the source; not reproduced here.]
where x is the target face feature matrix, y is the reference face feature matrix, n is the number of groups of target face feature values in the target face feature matrix, and m is the length of a single group of target face feature values.
This similarity-difference-based calculation principle speeds up the similarity computation, improving the efficiency of feature-value comparison and enabling fast face comparison.
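The similarity formula itself survives only as an image in the source; one plausible reading of a "similarity difference" measure, offered purely as an assumption and not as the patent's actual formula, is a normalized mean absolute difference over all group pairs (assuming feature values in [0, 1]):

```python
def difference_similarity(x, y):
    """Assumed difference-based similarity between a target matrix x
    (n groups, each of length m) and a reference matrix y: 1 minus the
    mean absolute element difference over every pair of groups."""
    n, m = len(x), len(x[0])
    total = sum(abs(xi[j] - yi[j])
                for xi in x for yi in y for j in range(m))
    return 1.0 - total / (n * len(y) * m)
```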
Further, the quality of the face pictures extracted from the short video and the face data source must meet a preset quality requirement. The quality is calculated as:
[Equation rendered as image GDA0002497673030000052 in the source; not reproduced here.]
where Q is the image quality, A is the image, and the remaining symbol (rendered as image GDA0002497673030000053) denotes the Gaussian-filtered image of A.
Extracting features from high-quality pictures increases the separation between feature values, markedly improving how well faces can be distinguished and thus the accuracy of face comparison.
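Since the quality formula is likewise only an image, here is an assumed stand-in built from the same ingredients (the image A and its Gaussian-filtered copy): the mean squared difference between the two, a common sharpness proxy. The 3x3 kernel and the metric itself are illustrative, not the patent's:

```python
def gaussian3x3(img):
    """3x3 Gaussian blur (kernel (1/16) * [[1,2,1],[2,4,2],[1,2,1]]),
    with border pixels clamped to the image edge."""
    h, w = len(img), len(img[0])
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    acc += k[di + 1][dj + 1] * img[ii][jj]
            out[i][j] = acc / 16.0
    return out

def quality(img):
    """Assumed quality Q: mean squared difference between the image and
    its Gaussian-filtered copy (high for sharp detail, zero for flat or
    heavily blurred images)."""
    blurred = gaussian3x3(img)
    h, w = len(img), len(img[0])
    return sum((img[i][j] - blurred[i][j]) ** 2
               for i in range(h) for j in range(w)) / (h * w)
```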
Further, the number of reference face pictures extracted from the face data source is determined as follows:
if the face data source is a short video, a preset number K of face pictures meeting the preset quality requirement are extracted from the short video;
if the face data source is pictures containing the reference face, all J pictures are taken when their number J is within the preset number K; when the number of pictures containing the reference face exceeds K, K of them are taken.
Preferably, the required number of target face pictures extracted from the short video containing the target face is the same as the number of reference face pictures required when the face data source is a short video.
Further, when the number of pictures containing the reference face exceeds the preset number K, the K pictures are taken in order from high quality to low quality.
This scheme builds a highly discriminative reference face feature matrix, improving the accuracy of target face recognition and matching.
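The picture-count rules above reduce to a small selection routine; the function name, the source-type strings, and the (picture, quality) pairing are illustrative:

```python
def select_reference_pictures(source_type, pictures, k):
    """Reference-picture selection sketch. `pictures` is a list of
    (picture, quality) pairs already meeting the quality requirement;
    `k` is the preset count K (or D in the system description)."""
    if source_type == "video":
        # short video: extract up to k qualifying frames
        return [p for p, q in pictures][:k]
    # photo source: take all J pictures when J <= k, otherwise the k
    # best, ordered from high quality to low quality
    if len(pictures) <= k:
        return [p for p, q in pictures]
    ranked = sorted(pictures, key=lambda pq: pq[1], reverse=True)
    return [p for p, q in ranked[:k]]
```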
To solve all or some of the above problems, the invention further provides a face recognition system based on the short-video training method, comprising:
the data acquisition unit is used for acquiring a short video containing a target face;
the image extraction unit, which extracts a plurality of target face pictures containing the target face from the short video received by the data acquisition unit;
the face feature extraction unit is used for extracting feature values of the target face pictures and outputting the target face feature values corresponding to the target face pictures respectively;
the feature matrix construction unit is used for combining the target face feature values extracted by the face feature extraction unit to generate a target face feature matrix;
the face feature library is used for storing a reference face feature matrix corresponding to the face to be searched;
and the face verification unit is used for comparing the target face feature matrix generated by the feature matrix construction unit with the reference face feature matrix so as to verify the target face.
Further, the feature matrix construction unit combines the target face feature values as follows:
it judges the similarity between a target face feature value Q2 output by the face feature extraction unit and each group of target face feature values already in the target face feature matrix; when the similarity between Q2 and every target face feature value in the matrix falls within the second preset threshold range, it accepts Q2 and writes it into the matrix; otherwise it rejects Q2;
it further judges whether the number of groups of target face feature values written into the matrix has reached the predetermined value A, and if so, stops accepting target face feature values from the face feature extraction unit.
Preferably, the predetermined value a is an integer of at least 2, preferably 5 to 7.
Further, the face verification unit comprises:
a face matching module, which searches and compares the target face feature matrix against the reference face feature matrix and judges the similarity between the target face feature values and the reference face feature values;
a reliability determination module, which calculates the reliability from the similarity output by the face matching module according to a pre-stored rule;
and a result confirmation module, which outputs the final verification result for the target face in the short video according to the relationship between the similarity computed by the face matching module and the second predetermined value, together with the output of the reliability determination module.
Preferably, the face feature extraction unit extracts a feature value of the face image by a convolutional neural network.
Further, the face matching module compares the target face feature matrix and the reference face feature matrix as follows:
it performs dimension reduction on the reference face feature matrix and the target face feature matrix, i.e., reduces the dimension of each group of reference face feature values and each group of target face feature values, and compares the similarity of each group of dimension-reduced reference face feature values against all dimension-reduced target face feature values in the target feature matrix;
if the target face feature matrix contains a dimension-reduced target face feature value whose similarity to a dimension-reduced reference face feature value exceeds the third predetermined value, it judges the pre-reduction target face feature matrix to be an effective target face feature matrix; otherwise it judges the pre-reduction matrix to be invalid;
it then compares the similarity of the reference face feature values of the reference face feature matrix with the target face feature values of the effective target face feature matrix. This comparison uses the feature values before dimension reduction.
This scheme performs coarse positioning of the target face, first narrowing the face to be identified to a certain range and then finely comparing the faces within that range, which markedly improves recognition efficiency. In the coarse positioning, comparing dimension-reduced feature values greatly reduces the computation compared with comparing the full feature values.
The numbers attached to the predetermined threshold ranges, predetermined values, and face feature values serve only to label the technical features for clarity of description; they do not limit those features to particular threshold ranges, values, or feature values.
Further, the reliability confirmation module calculates the reliability as follows:
according to the similarity output by the face matching module, when the similarity exceeds the predetermined value B, it weights the maximum similarities exceeding B to obtain a weighted result, then computes the reliability from that result according to a predetermined rule and outputs it.
Further, the reliability confirmation module obtains the weighted result as follows:
it calculates the ratio of the number of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds the second predetermined value, to the total number of groups of reference face feature values in the matrix; when the ratio meets a predetermined condition, it weights the maximum similarities exceeding the predetermined value B according to pre-stored weights to obtain the weighted result; when the ratio does not meet the condition, it takes the maximum similarity between the reference face feature values and the target face feature values as the weighted result.
Further, the face matching module calculates the similarity between the reference face feature matrix and the target face feature matrix as follows:
[Equation rendered as image GDA0002497673030000081 in the source; not reproduced here.]
where x is the target face feature matrix, y is the reference face feature matrix, n is the number of groups of target face feature values in the target face feature matrix, and m is the length of a single group of target face feature values.
Further, the image extraction unit is provided with a preset value C, when the quality of the extracted face image is judged to be higher than the preset value C, the extracted face image is judged to be sent to the face feature extraction unit, otherwise, the extracted face image is discarded and extractedThe face picture of (1); the picture quality calculation method comprises the following steps:
[picture-quality formula (rendered only as an image in the original publication)]
where Q is the picture quality, A is the image, and Ā is the Gaussian-filtered version of image A.
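The quality formula itself survives only as an image in the source, but its ingredients (an image A and its Gaussian-filtered version Ā) describe a standard no-reference sharpness proxy: a sharp image loses more detail under Gaussian blurring than a blurry one. The sketch below is an assumption built from those ingredients, not the patent's exact formula; the 3×3 kernel and the mean-absolute-difference score are both illustrative choices.

```python
def gaussian_blur_3x3(img):
    """Apply a 3x3 Gaussian kernel (weights 1-2-1, sum 16) with edge replication."""
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # replicate border pixels
                    xx = min(max(x + dx, 0), w - 1)
                    acc += k[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = acc / 16.0
    return out

def picture_quality(img):
    """Sharpness proxy Q: mean absolute difference between the image and its
    Gaussian-blurred version. A perfectly flat image scores 0."""
    blurred = gaussian_blur_3x3(img)
    h, w = len(img), len(img[0])
    return sum(abs(img[y][x] - blurred[y][x])
               for y in range(h) for x in range(w)) / (h * w)
```

Pictures scoring below the preset value C would then be discarded before feature extraction.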
Further, the system further comprises: the face database stores a face data source containing a face to be searched; sending the face data source to the image extraction unit;
the image extraction unit is further configured to: extracting a plurality of reference face pictures containing faces to be searched from the face data source;
the face feature extraction unit is further configured to: extracting characteristic values of the plurality of reference face pictures, and outputting a plurality of reference face characteristic values respectively corresponding to the plurality of reference face pictures;
the feature matrix construction unit is further configured to: and combining the plurality of reference face characteristic values extracted by the face characteristic extraction unit to generate a reference face characteristic matrix, and outputting the reference face characteristic matrix to the face characteristic library for storage.
Preferably, the face data source is a short video containing a face to be searched or a plurality of reference face pictures containing the face to be searched.
Further, the image extraction unit extracts the reference face picture specifically as follows:
if the face data source is a short video, a predetermined number D of reference face pictures with picture quality above the preset value C are extracted from the short video;
if the face data source consists of pictures containing the reference face: when the number J of such pictures is within the predetermined number D, all of them are sent to the face feature value extraction unit; when the number exceeds the predetermined number D, D of the pictures are taken and sent to the face feature value extraction unit.
Preferably, when the number of pictures containing the reference face is greater than or equal to the predetermined number D, D pictures are taken in order of picture quality from high to low and sent to the face feature value extraction unit.
Further, the image extraction unit extracts the target face picture in the same manner as that of the reference face picture when the face data source is a short video.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
With the scheme provided by the invention, the person to be recognized can be captured intelligently from a short video, enlarging the pool of face-image data sources. Building a group of face feature values from multiple face images gives a more comprehensive description of the target face features and improves the robustness of the recognition result. Screening each extracted face feature value against a preset threshold range ensures that the data used to build the feature value group are both distinctive and effective, improving the accuracy of the recognition result. Compared with the traditional one-to-one comparison scheme, the cross-multiplication search over feature value matrices greatly shortens the feature-value matching time and thereby improves matching efficiency. Moreover, calculating feature value similarity through similarity differences greatly reduces the difficulty of comparing target and reference feature values, reduces the matching workload, and improves face recognition efficiency. Finally, because the feature value extraction and comparison scheme can adopt a deep-learning algorithm based on neural networks, it can effectively prevent a photograph from being used to impersonate a live face.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 is a flow chart of a face recognition method based on a short video training method.
Fig. 2 is a flow chart of face feature matrix construction.
Fig. 3 is a flow chart of the construction of the reference face feature matrix.
FIG. 4 is a flowchart of the comparison of the target face feature matrix with the reference face feature matrix.
FIG. 5 is a flowchart of the coarse positioning-fine positioning verification step in the comparison step between the target face feature matrix and the reference face feature matrix.
Fig. 6 is a reliability calculation flowchart.
Fig. 7 is a flowchart of reliability calculation mode determination.
Fig. 8 is a block diagram of a face recognition system based on short video training.
Fig. 9 is a structural diagram of a face authentication unit.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
As shown in fig. 1, the embodiment discloses a face recognition method based on a short video training method, which includes the following steps:
s001: constructing a reference face feature matrix for the face to be searched;
s100: acquiring a short video containing a target face;
s200: identifying and tracking a target face in the short video, and extracting a plurality of target face pictures; the quality of the extracted target face picture meets the preset quality requirement, and the quality calculation method comprises the following steps:
[picture-quality formula (rendered only as an image in the original publication)]
where Q is the picture quality, A is the image, and Ā is the Gaussian-filtered version of image A. The required number of extracted pictures can be adjusted to the use case, preferably 5 to 7 pictures with different angles and different illumination; in this embodiment, the picture quality value must exceed 60;
S300: extracting feature values from each of the extracted target face pictures to generate a plurality of groups of target face feature values, each corresponding to one target face picture. If a neural-network-based feature extraction method is used, the image is first projected in the horizontal/vertical directions and then normalized; this embodiment uses 2DPCA to extract the image feature values.
S400: combining the plurality of groups of target face characteristic values to generate a target face characteristic matrix corresponding to the target face;
s500: and comparing the target face feature matrix with the reference face feature matrix to verify the target face.
Further, referring to fig. 2, the present embodiment specifically discloses a method for constructing a face feature matrix, that is, S400 of the above embodiment:
S4001: judging the similarity between a target face feature value Q1 that is to be stored in the target face feature matrix and each group of target face feature values already in the matrix; if the similarity between Q1 and all target face feature values in the target face feature matrix falls within the first preset threshold range (e.g., 0.5-0.95), marking Q1 as an effective target face feature value; otherwise, marking Q1 as an invalid target face feature value;
s4002: storing the effective target face characteristic value into a target face characteristic matrix; discarding the invalid target face feature value;
s4003: judging whether the number of groups of the target face characteristic values stored in the target face characteristic matrix reaches a preset value (such as 5); if yes, executing S500; otherwise, S4001 is executed.
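The S4001-S4003 screening loop can be sketched as follows. The `similarity` function is passed in by the caller because the patent defines it separately; the threshold range (0.5-0.95) and group count (5) are the example values from the text.

```python
def build_feature_matrix(candidates, similarity, lo=0.5, hi=0.95, target_groups=5):
    """Screen candidate feature vectors per S4001-S4003: a candidate is kept
    only if its similarity to EVERY vector already in the matrix lies inside
    [lo, hi] -- similar enough to be the same person, distinct enough to add
    new information (different pose/illumination)."""
    matrix = []
    for vec in candidates:
        # all() over an empty matrix is True, so the first candidate is kept
        if all(lo <= similarity(vec, kept) <= hi for kept in matrix):
            matrix.append(vec)          # effective feature value (S4002)
        # else: invalid feature value, discarded (S4002)
        if len(matrix) >= target_groups:  # predetermined value one (S4003)
            break
    return matrix
```

A duplicate frame (similarity 1.0 with an existing entry) is rejected by the upper bound, while an unrelated face (similarity below the lower bound) is rejected as well.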
Referring to fig. 3, this embodiment specifically discloses a method for constructing the reference face feature matrix, that is, S001 in the above embodiment:
s0001: acquiring a face data source containing a reference face; the face data source can be a short video or a plurality of pictures containing the reference face;
S0002: identifying the reference face in the face data source and extracting a plurality of reference face pictures. The number of reference face pictures to extract is determined as follows: if the face data source is a short video, a predetermined number U (e.g., 5) of face pictures meeting the preset quality requirement are extracted from it; if the face data source consists of pictures containing the reference face, all V pictures are taken when their number V (e.g., 4) is within the predetermined number U, and when the number exceeds U, U pictures are taken in order of quality from high to low, or across different angles and illumination conditions. Preferably, 5 to 7 pictures are taken.
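The S0002 selection rule can be sketched as below, under the assumption that each candidate picture already carries a quality score (the helper name and the quality-score pairing are illustrative, not from the patent):

```python
def select_reference_pictures(candidates, qualities, U=5):
    """Keep all candidates when there are at most U of them; otherwise keep
    the U best by quality score, from high to low (S0002)."""
    if len(candidates) <= U:
        return list(candidates)
    ranked = sorted(zip(qualities, candidates), key=lambda t: t[0], reverse=True)
    return [pic for _, pic in ranked[:U]]
```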
S0003: respectively extracting characteristic values of the extracted plurality of reference face pictures to generate a plurality of reference face characteristic values respectively corresponding to the plurality of reference face pictures;
s0004: and combining the plurality of groups of reference face characteristic values to generate a reference face characteristic matrix corresponding to the reference face.
Preferably, the number of the reference face pictures is at least two, and preferably, the reference face pictures are 5 to 7 face pictures with different angles and different illumination.
This scheme generates the reference face feature matrix used for matching, which contains multiple face feature values of the same person, through the same training procedure as the target face images. This avoids the situation where different training methods derive different feature values from the same feature points and thereby distort the face recognition result.
Referring to fig. 4, this embodiment specifically discloses a similarity comparison process between the target feature matrix and the reference face feature matrix in the foregoing embodiment, that is, S500:
S5001: searching and comparing the target face feature matrix against the reference face feature matrix (preferably via matrix cross multiplication to improve search efficiency) and judging the similarity between the target face feature values and the reference face feature values;
S5002: confirming the verification result for the target face in the short video according to the relationship between the similarity of the target face feature values to the reference face feature values and the second predetermined value (e.g., 0.95).
Further, the method for judging the similarity comprises the following steps: calculating the similarity (x, y) of the reference face feature matrix and the target face feature matrix:
[similarity formula (rendered only as an image in the original publication)]
where x is the target face feature matrix, y is the reference face feature matrix, n is the number of groups of target face feature values in the target face feature matrix, and m is the length of a single group of target face feature values; preferably, m = n = 5.
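Since the similarity equation survives only as an image, its exact form cannot be reproduced here; the text does establish that it cross-compares the n groups of length-m target vectors against the reference vectors. As a stand-in, the sketch below shows the "cross multiplication search" pattern with ordinary cosine similarity as the per-pair measure (an assumption, not the patent's formula):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(p * q for p, q in zip(a, b))
    den = math.sqrt(sum(p * p for p in a)) * math.sqrt(sum(q * q for q in b))
    return num / den if den else 0.0

def matrix_similarity(x, y):
    """Cross-compare every target feature vector in x against every reference
    feature vector in y and keep the best pairwise match."""
    return max(cosine(xi, yj) for xi in x for yj in y)
```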
Further, referring to fig. 5, the embodiment specifically discloses: a method for comparing the rough localization to the fine localization of the target feature matrix and the reference face feature matrix, that is, the specific process of S5001 is as follows:
S5001a: performing dimension-reduction processing on the reference face feature matrix and the target face feature matrix, i.e., reducing the dimension of each group of reference face feature values and each group of target face feature values, and comparing each group of dimension-reduced reference face feature values for similarity against all dimension-reduced target face feature values in the target feature matrix. In this embodiment, image feature values are extracted with 2DPCA, which reduces the image matrix from 102 × 102 to 1 × 128; the dimension-reduction step then reduces it further to 1 × 32 through a slow-feature variance transformation matrix. Comparing feature values at 1 × 32 acts as coarse positioning relative to the full 1 × 128 comparison and saves a large amount of computation; in repeated tests on the same hardware configuration (i3 processor, 2 GB memory), the coarse-to-fine comparison method brings the search time to 48 ms.
S5001 b: if a dimension-reduced target face characteristic value with the similarity of the dimension-reduced reference face characteristic value exceeding a preset value three (such as 0.9) exists in the target face characteristic matrix, judging that the target face characteristic matrix before dimension reduction is an effective target face characteristic matrix; otherwise, judging the target face feature matrix before dimensionality reduction as an invalid target face feature matrix;
s5001 c: and carrying out similarity comparison on the reference human face feature matrix and the effective target human face feature matrix. The effective target face feature matrix is a matrix formed by normal feature extraction values.
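Steps S5001a-S5001c amount to a two-stage filter. A schematic version is below; `reduce_fn` stands in for the slow-feature variance transform (128 to 32 dimensions in the embodiment) and `similarity` is supplied by the caller, since both are defined elsewhere in the patent.

```python
def coarse_to_fine_match(ref_mat, tgt_mat, reduce_fn, similarity, coarse_thresh=0.9):
    """Coarse positioning on dimension-reduced vectors first (S5001a/b); run
    the expensive full-dimension comparison (S5001c) only when some reduced
    pair clears the threshold (predetermined value three, e.g. 0.9)."""
    ref_r = [reduce_fn(v) for v in ref_mat]
    tgt_r = [reduce_fn(v) for v in tgt_mat]
    if not any(similarity(r, t) > coarse_thresh for r in ref_r for t in tgt_r):
        return None  # invalid target face feature matrix: fine match skipped
    return max(similarity(r, t) for r in ref_mat for t in tgt_mat)
```

Most non-matching target matrices are rejected at the cheap 1 × 32 stage, which is where the reported speedup comes from.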
Further, referring to fig. 6, the result of the comparison is processed as follows:
s5002 a: according to the similarity between the target face feature value and the reference face feature value, when the similarity exceeds the second preset value (such as 0.95), executing S5002b, otherwise, judging that the target face corresponding to the target face feature matrix is a non-face to be searched;
s5002 b: screening out the maximum similarity with the similarity exceeding a second preset value, and weighting the screened similarity to obtain a comparison result; calculating the reliability according to a preset rule according to the comparison result;
s5002 c: and outputting the comparison result and/or the reliability.
Referring to fig. 7, in order to further improve the credibility of the reliability value, S5002b specifically includes:
S50021: calculating the ratio (C/M) of the number C of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature value exceeds the second predetermined value, to the total number M of reference face feature value groups in the reference face feature matrix; if the ratio satisfies a predetermined condition (e.g., C/M ≥ 0.4), executing S50022; otherwise, taking the maximum similarity between each group of target face feature values in the target face feature matrix and all reference face feature values in the reference face feature matrix as the comparison result and the reliability;
s50022: screening out the maximum similarity of the target face characteristic value in each group of target face characteristic matrix and all reference face characteristic values in the reference face characteristic matrix, wherein the similarity exceeds a second preset value, and weighting the screened similarity according to a preset weight to obtain a comparison result; and calculating the reliability according to a preset rule according to the comparison result and the preset weight.
For example: the total number M of reference face feature value groups is 5, and the number C of groups whose similarity exceeds 0.95 is 2, with similarities 0.952 and 0.981; C/M = 0.4 ≥ 0.4, so the predetermined condition is met. The predetermined weights have four grades, [0.95, 0.96], (0.96, 0.97], (0.97, 0.98] and (0.98, 1.00], with weights 1.0, 1.05, 1.15 and 1.25 in turn. The maximum similarity 0.981 is weighted to 0.981 × 1.25 ≈ 1.23 (rounded to two decimal places here; the rounding is not limiting) and taken as the comparison result. The reliability is then calculated from the comparison result and the weight according to the predetermined rule (comparison result × 100% / weight). Clearly, the higher the reliability obtained from the comparison based on the target face feature matrix, the higher the probability that the target face image is the face to be searched.
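Using the figures from the worked example, the S50021/S50022 decision can be sketched as below. The four tier weights are the grades from the text; reading the garbled predetermined rule as reliability = weighted result / weight (which recovers the raw similarity) is an interpretation of the example, not a certainty.

```python
TIER_WEIGHTS = [(0.95, 0.96, 1.0), (0.96, 0.97, 1.05),
                (0.97, 0.98, 1.15), (0.98, 1.00, 1.25)]

def tier_weight(s):
    """Weight grade for a similarity value; first matching tier wins."""
    for lo, hi, w in TIER_WEIGHTS:
        if lo <= s <= hi:
            return w
    return 1.0

def weighted_result(similarities, thresh=0.95, ratio_min=0.4):
    """Return (comparison_result, reliability) per S50021/S50022."""
    hits = [s for s in similarities if s > thresh]
    if hits and len(hits) / len(similarities) >= ratio_min:   # S50021 condition
        best = max(hits)
        w = tier_weight(best)
        return best * w, best * w / w   # e.g. 0.981 * 1.25, reliability 0.981
    best = max(similarities)            # ratio unmet: raw max serves as both
    return best, best
```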
As shown in fig. 8, the present embodiment discloses a face recognition system based on the short video training method, which includes:
the data acquisition unit is used for acquiring a short video containing a target face;
the image extraction unit is used for extracting a plurality of target face pictures containing the target face from the short video received by the data acquisition unit;
the face feature extraction unit is used for extracting feature values from the plurality of target face pictures and outputting the target face feature values corresponding to each target face picture. If a neural-network-based feature extraction method is used, the image is first projected in the horizontal/vertical directions and then normalized; this embodiment uses 2DPCA to extract the image feature values.
The feature matrix construction unit is used for combining the target face feature values extracted by the face feature extraction unit to generate a target face feature matrix;
the face database stores a face data source containing a face to be searched, and the face data source is a short video containing the face to be searched or a plurality of reference face pictures containing the face to be searched; the face database also sends the face data source to an image extraction unit; the image extraction unit is further configured to: extracting a plurality of reference face pictures containing faces to be searched from the face data source; the face feature extraction unit is further configured to: extracting characteristic values of the plurality of reference face pictures, and outputting a plurality of reference face characteristic values respectively corresponding to the plurality of reference face pictures; the feature matrix construction unit is further configured to: and combining the plurality of reference face characteristic values extracted by the face characteristic extraction unit to generate a reference face characteristic matrix, and outputting the reference face characteristic matrix to a face characteristic library for storage.
The face feature library is used for storing a reference face feature matrix corresponding to the face to be searched;
and the face verification unit is used for comparing the target face feature matrix generated by the feature matrix construction unit with the reference face feature matrix so as to verify the target face.
The above feature matrix construction unit combines the target face feature values in the following manner:
judging the similarity between the target face feature value Q2 output by the face feature extraction unit and each group of target face feature values in the target face feature matrix; when the similarity between Q2 and all target face feature values in the target face feature matrix falls within the second preset threshold range, accepting Q2 and writing it into the target face feature matrix; otherwise, rejecting the target face feature value Q2;
and further judging whether the group number of the target face characteristic values written into the target face characteristic matrix reaches a preset value A, if so, not receiving the target face characteristic values output by the face characteristic extraction unit.
Preferably, the predetermined value a is an integer of at least 2, preferably 5 to 7.
Referring to fig. 9, this embodiment specifically discloses a structure of the face verification unit, which includes:
the face matching module is used for searching and comparing the target face feature matrix with the reference face feature matrix and judging the similarity between the target face feature matrix and the reference face feature matrix;
The similarity calculation method here is:
[similarity formula (rendered only as an image in the original publication)]
where x is the target face feature matrix, y is the reference face feature matrix, n is the number of groups of target face feature values in the target face feature matrix, and m is the length of a single group of target face feature values; preferably, n = m = 5.
The reliability determining module is used for calculating the reliability according to the similarity between the target face characteristic value and the reference face characteristic value output by the face matching module and a pre-stored rule;
and the result confirmation module is used for outputting the final verification result for the target face in the short video according to the relationship, calculated by the face matching module, between the similarity of the target face feature values to the reference face feature values and the second predetermined value (e.g., 0.95), together with the output of the reliability determination module.
The process of comparing the target face feature matrix and the reference face feature matrix by the face matching module is as follows:
performing dimension reduction processing on the reference face characteristic matrix and the target face characteristic matrix, namely performing dimension reduction on each group of reference face characteristic values and each group of target face characteristic values, and performing similarity comparison on each group of dimension reduced reference face characteristic values and all dimension reduced target face characteristic values in the target characteristic matrix;
if a dimension-reduced target face characteristic value with the similarity of the dimension-reduced reference face characteristic value exceeding a preset value three exists in the target face characteristic matrix, judging that the target face characteristic matrix before dimension reduction is an effective target face characteristic matrix; otherwise, judging the target face feature matrix before dimensionality reduction as an invalid target face feature matrix;
and carrying out similarity comparison on the reference face characteristic value of the reference face characteristic matrix and the target face characteristic value of the effective target face characteristic matrix. The comparison here is the comparison of the characteristic values before dimensionality reduction.
Specific embodiments refer to the above-mentioned dimension reduction of the slow feature variance transformation matrix, which will not be described in detail herein.
The reliability confirming module calculates the reliability in the following way:
according to the similarity output by the face matching module: when the similarity exceeds the preset value B, the maximum similarity exceeding the preset value B is weighted to obtain a weighting result; the reliability is then calculated from the weighting result according to a predetermined rule (e.g., reliability = weighting result × 100% / weight) and output. The weighting result is obtained specifically as follows:
Calculate the ratio (C/M) of the number C of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature value exceeds the second predetermined value, to the total number M of reference face feature value groups in the reference face feature matrix. When the ratio satisfies a predetermined condition (e.g., C/M ≥ 0.4), the maximum similarity exceeding the preset value B (e.g., of the qualifying similarities 0.952 and 0.981, 0.981 is taken) is weighted by the pre-stored weights (four grades [0.95, 0.96], (0.96, 0.97], (0.97, 0.98] and (0.98, 1.00], with weights 1.0, 1.05, 1.15 and 1.25 respectively), giving the weighting result (0.981 × 1.25 ≈ 1.23). When the ratio does not satisfy the predetermined condition, the maximum similarity between the reference face feature values and the target face feature values is taken as the weighting result.
Further, the image extraction unit is provided with a preset value C: when the quality of an extracted face picture is judged to be above the preset value C, the picture is sent to the face feature extraction unit; otherwise, the extracted face picture is discarded. The picture quality calculation method is as follows:
Figure GDA0002497673030000151
wherein Q isThe image quality, a is the image,
Figure GDA0002497673030000152
is a gaussian filtered image of image a.
The image extraction unit extracts the reference face picture specifically as follows:
if the face data source is a short video, a predetermined number D of reference face pictures with picture quality above the preset value C are extracted from the short video;
if the face data source consists of pictures containing the reference face: when the number J of such pictures is within the predetermined number D, all of them are sent to the face feature value extraction unit; when the number exceeds the predetermined number D, D pictures are taken in order of picture quality from high to low and sent to the face feature value extraction unit. When the face data source is a short video, the image extraction unit extracts the target face picture in the same manner as the reference face picture.
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or any novel combination of features disclosed in this specification and any novel method or process steps or any novel combination of features disclosed.

Claims (9)

1. A face recognition method based on a short video training method is characterized by comprising the following steps:
s001: constructing a reference face feature matrix for the face to be searched;
s100: acquiring a short video containing a target face;
s200: identifying and tracking a target face in the short video, and extracting a plurality of target face pictures;
s300: respectively extracting characteristic values of the extracted target face pictures to generate a plurality of groups of target face characteristic values respectively corresponding to the target face pictures;
s400: combining the plurality of groups of target face characteristic values to generate a target face characteristic matrix corresponding to the target face;
s500: and comparing the target face feature matrix with the reference face feature matrix to verify the target face.
The S400 specifically comprises the following steps:
s4001: judging the similarity between the target face characteristic value Q1 planned to be stored in the target face characteristic matrix and each group of target face characteristic values in the target face characteristic matrix; if the similarity between the target face characteristic value Q1 and all target face characteristic values in the target face characteristic matrix is within a preset threshold range, marking the target face characteristic value Q1 as an effective target face characteristic value; otherwise, marking the target face characteristic value Q1 as an invalid target face characteristic value;
s4002: storing the effective target face characteristic value into a target face characteristic matrix; discarding the invalid target face feature value;
s4003: judging whether the group number of the target face characteristic values stored in the target face characteristic matrix reaches a preset value one; if yes, executing S500; otherwise, S4001 is executed.
2. The method according to claim 1, wherein S001 is in particular:
s0001: acquiring a face data source containing a reference face;
s0002: identifying a reference face in the face data source, and extracting a plurality of reference face pictures;
s0003: respectively extracting characteristic values of the extracted plurality of reference face pictures to generate a plurality of reference face characteristic values respectively corresponding to the plurality of reference face pictures;
s0004: and combining the plurality of groups of reference face characteristic values to generate a reference face characteristic matrix corresponding to the reference face.
3. The method according to claim 2, wherein S500 is specifically:
s5001: searching and comparing the target face feature matrix with the reference face feature matrix; judging the similarity between the target face feature matrix and the reference face feature matrix;
s5002: and confirming the verification result of the target face in the short video according to the relation between the similarity of the target face characteristic value and the reference face characteristic value and a second preset value.
4. The method according to claim 3, wherein S5001 is specifically:
s5001 a: performing dimension reduction processing on the reference face characteristic matrix and the target face characteristic matrix, namely performing dimension reduction on each group of reference face characteristic values and each group of target face characteristic values, and performing similarity comparison on each group of dimension reduced reference face characteristic values and all dimension reduced target face characteristic values in the target characteristic matrix;
s5001 b: if a dimension-reduced target face characteristic value with the similarity of the dimension-reduced reference face characteristic value exceeding a preset value three exists in the target face characteristic matrix, judging that the target face characteristic matrix before dimension reduction corresponding to the dimension-reduced target face characteristic value is an effective target face characteristic matrix; otherwise, judging the target face feature matrix before dimensionality reduction as an invalid target face feature matrix;
s5001 c: and carrying out similarity comparison on the reference human face feature matrix and the effective target human face feature matrix.
5. The method according to claim 4, wherein S5002 is specifically:
s5002 a: according to the similarity between the target face feature value and the reference face feature value, when the similarity exceeds the second preset value, S5002b is executed, otherwise, the target face corresponding to the target face feature matrix is judged to be a non-face to be searched;
s5002 b: screening out the maximum similarity with the similarity exceeding a second preset value, and weighting the screened similarity to obtain a comparison result; calculating the reliability according to a preset rule according to the comparison result;
s5002 c: and outputting the comparison result and/or the reliability.
6. The method according to claim 5, wherein S5002b is specifically:
S50021: compute the ratio of the number of groups of reference face feature values in the reference face feature matrix whose similarity to the target face feature values exceeds the second predetermined value to the total number of groups of reference face feature values in the reference face feature matrix; when the ratio meets a predetermined condition, execute S50022; otherwise, take the maximum similarity between each group of target face feature values in the target face feature matrix and all reference face feature values in the reference face feature matrix as the comparison result and the reliability;
S50022: for each group of target face feature values in the target face feature matrix, screen out its maximum similarity against all reference face feature values in the reference face feature matrix, where the similarity exceeds the second predetermined value; weight the screened similarities according to predetermined weights to obtain a comparison result; and calculate the reliability from the comparison result and the predetermined weights according to a predetermined rule.
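The screening-and-weighting step can be sketched as follows. The patent leaves the "predetermined weights" and the reliability rule unspecified, so the uniform weights below (and the function name) are illustrative assumptions:

```python
def compare_and_score(sims, threshold2, weights=None):
    """Sketch of S5002b/S50022: for each target feature group, take its
    maximum similarity against all reference groups; keep the maxima that
    exceed threshold2 and combine them with a weighted average.
    sims[i][j] = similarity of target group i vs reference group j."""
    maxima = [max(row) for row in sims]
    kept = [m for m in maxima if m > threshold2]
    if not kept:
        # No group passes the threshold: not the face being searched for.
        return None
    if weights is None:
        weights = [1.0 / len(kept)] * len(kept)  # uniform "preset" weights
    return sum(w * m for w, m in zip(weights, kept))
```

With uniform weights the comparison result is just the mean of the passing maxima; a real deployment would choose weights per the patent's unstated rule (e.g. favoring frontal frames).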
7. The method according to claim 6, wherein the similarity is determined by:
calculating the similarity between the reference face feature matrix and the target face feature matrix as:
[formula rendered as image FDA0002497673020000021 in the original]
where x is the target face feature matrix, y is the reference face feature matrix, n is the number of groups of target face feature values in the target face feature matrix, and m is the length of a single group of target face feature values.
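Since the claimed formula is rendered only as an image, the exact expression cannot be recovered here; a common choice consistent with the stated variables (n groups of length-m feature vectors) is the mean cosine similarity over all group pairs. The sketch below is only that illustrative stand-in, not the patented formula:

```python
import numpy as np

def matrix_similarity(x, y):
    """Mean pairwise cosine similarity between the groups of target
    features x (n x m) and the groups of reference features y.
    NOTE: stand-in for the patent's image-only formula (FDA...0021)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)  # row-normalize targets
    yn = y / np.linalg.norm(y, axis=1, keepdims=True)  # row-normalize references
    return float(np.mean(xn @ yn.T))                   # average over all pairs
```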
8. The method of claim 7, wherein the quality of the face pictures extracted from the short video and from the face data source meets a predetermined quality requirement, and the quality is calculated as:
[formula rendered as image FDA0002497673020000031 in the original]
where Q is the image quality, A is the image, and [image FDA0002497673020000032] is the Gaussian-filtered image of A.
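The quality formula itself is an image in the original, so only its ingredients (the image A and its Gaussian-filtered version) are known. A common sharpness-style score built from those ingredients is the energy of the high-frequency residual A − G(A); the sketch below uses that form as an assumption, with a separable Gaussian blur implemented in plain NumPy:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D normalized Gaussian kernel.
    radius = radius if radius is not None else int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(a, sigma=1.0):
    # Separable Gaussian filter applied along rows then columns (edge-padded).
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(np.asarray(a, dtype=float), r, mode="edge")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

def image_quality(a, sigma=1.0):
    """Illustrative stand-in for the patent's image-only Q formula:
    mean squared high-frequency residual of A against its Gaussian blur.
    Blurry images score near 0; sharp, detailed images score higher."""
    a = np.asarray(a, dtype=float)
    return float(np.mean((a - gaussian_blur(a, sigma)) ** 2))
```

A flat or heavily blurred face crop scores near zero and would be rejected by the quality gate, while a sharp crop retains high-frequency energy and passes.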
9. The method of claim 8, wherein the number of reference face pictures extracted from the face data source is required to be:
if the face data source is a short video, extract a predetermined number K of face pictures meeting the predetermined quality requirement from the short video;
if the face data source consists of pictures containing the reference face, take all J pictures when the number J of such pictures is within the predetermined number K; when the number of pictures containing the reference face exceeds the predetermined number K, take K of them.
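The picture-count rule of claim 9 can be sketched as follows. The claim does not say which K pictures to keep when more than K are available; keeping the K highest-quality ones (via a caller-supplied `quality_fn`, an assumption here) is one natural choice:

```python
def select_reference_pictures(pictures, k, quality_fn, q_min):
    """Claim 9 sketch: keep pictures meeting the quality requirement q_min;
    take all J of them if J <= K, otherwise take K (here: the K best)."""
    ok = [p for p in pictures if quality_fn(p) >= q_min]
    if len(ok) <= k:
        return ok                                   # J within K: take all J
    return sorted(ok, key=quality_fn, reverse=True)[:k]  # more than K: take K
```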
CN201711095734.2A 2017-11-09 2017-11-09 Face recognition method based on short video training method Active CN107798308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711095734.2A CN107798308B (en) 2017-11-09 2017-11-09 Face recognition method based on short video training method

Publications (2)

Publication Number Publication Date
CN107798308A CN107798308A (en) 2018-03-13
CN107798308B true CN107798308B (en) 2020-09-22

Family

ID=61548007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711095734.2A Active CN107798308B (en) 2017-11-09 2017-11-09 Face recognition method based on short video training method

Country Status (1)

Country Link
CN (1) CN107798308B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325964B (en) * 2018-08-17 2020-08-28 深圳市中电数通智慧安全科技股份有限公司 Face tracking method and device and terminal
CN109508701B (en) * 2018-12-28 2020-09-22 北京亿幕信息技术有限公司 Face recognition and tracking method
CN111507143B (en) * 2019-01-31 2023-06-02 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment
CN110084162A (en) * 2019-04-18 2019-08-02 上海钧正网络科技有限公司 A kind of peccancy detection method, apparatus and server
CN110415424B (en) * 2019-06-17 2022-02-11 众安信息技术服务有限公司 Anti-counterfeiting identification method and device, computer equipment and storage medium
CN116935286B (en) * 2023-08-03 2024-01-09 广州城市职业学院 Short video identification system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187975A (en) * 2007-12-25 2008-05-28 西南交通大学 A face feature extraction method with illumination robustness
CN102693418A (en) * 2012-05-17 2012-09-26 上海中原电子技术工程有限公司 Multi-pose face identification method and system
CN104008370A (en) * 2014-05-19 2014-08-27 清华大学 Video face identifying method
CN105184238A (en) * 2015-08-26 2015-12-23 广西小草信息产业有限责任公司 Human face recognition method and system
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN105426860A (en) * 2015-12-01 2016-03-23 北京天诚盛业科技有限公司 Human face identification method and apparatus
CN105956518A (en) * 2016-04-21 2016-09-21 腾讯科技(深圳)有限公司 Face identification method, device and system
CN106503691A (en) * 2016-11-10 2017-03-15 广州视源电子科技股份有限公司 Identity labeling method and device for face picture
CN106778522A * 2016-11-25 2017-05-31 江南大学 A single-sample face recognition method based on Gabor feature extraction and spatial transformation
CN106845357A * 2016-12-26 2017-06-13 银江股份有限公司 A video face detection and recognition method based on a multi-channel network
CN106971158A * 2017-03-23 2017-07-21 南京邮电大学 A pedestrian detection method based on CoLBP co-occurrence features and GSS features

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130004028A1 (en) * 2011-06-28 2013-01-03 Jones Michael J Method for Filtering Using Block-Gabor Filters for Determining Descriptors for Images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Survey – A Comparative Analysis of Face Recognition Technique"; Deepshikha Bhati et al.; International Journal of Engineering Research and General Science; 2015-04-30; Vol. 3, No. 2, pp. 597-609 *
"Weighted Adaptive Face Recognition Based on Class Matrix and Feature Fusion"; Yang Xin et al.; Journal of Image and Graphics; 2008-05-31; Vol. 13, No. 5, pp. 930-936 *

Also Published As

Publication number Publication date
CN107798308A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN107798308B (en) Face recognition method based on short video training method
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
US9189686B2 (en) Apparatus and method for iris image analysis
CN110008909B (en) Real-name system business real-time auditing system based on AI
Barnouti et al. Face recognition: A literature review
CN107967458A A face recognition method
CN107977439A A face image database construction method
CN102938065A (en) Facial feature extraction method and face recognition method based on large-scale image data
Aiadi et al. MDFNet: An unsupervised lightweight network for ear print recognition
CN108108760A A fast face recognition method
CN106096517A A face recognition method based on low-rank matrix and eigenface
CN109376717A (en) Personal identification method, device, electronic equipment and the storage medium of face comparison
Haji et al. Real time face recognition system (RTFRS)
CN111126307A (en) Small sample face recognition method of joint sparse representation neural network
Gavisiddappa et al. Multimodal biometric authentication system using modified ReliefF feature selection and multi support vector machine
Kaur et al. A novel biometric system based on hybrid fusion speech, signature and tongue
Mane et al. Novel multiple impression based multimodal fingerprint recognition system
Yu et al. Research on face recognition method based on deep learning
Liu et al. A novel high-resolution fingerprint representation method
CN111428670B (en) Face detection method, face detection device, storage medium and equipment
Barde et al. Person Identification Using Face, Ear and Foot Modalities.
Kangwanwatana et al. Improve face verification rate using image pre-processing and facenet
CN110390268B (en) Three-dimensional palmprint recognition method based on geometric characteristics and direction characteristics
Santosh et al. Recent Trends in Image Processing and Pattern Recognition: Third International Conference, RTIP2R 2020, Aurangabad, India, January 3–4, 2020, Revised Selected Papers, Part I
Khan et al. Implementation and Analysis of Fusion in Multibiometrics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant