
CN110427843B - Intelligent face recognition method - Google Patents

Intelligent face recognition method

Info

Publication number
CN110427843B
Authority
CN
China
Prior art keywords
face
image
region
recognition
person
Prior art date
Legal status
Active
Application number
CN201910651887.3A
Other languages
Chinese (zh)
Other versions
CN110427843A (en)
Inventor
金耀初
何卫灵
刘华
Current Assignee
Guangzhou Liko Technology Co ltd
Original Assignee
Guangzhou Liko Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Liko Technology Co ltd
Priority to CN201910651887.3A
Publication of CN110427843A
Application granted
Publication of CN110427843B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An intelligent face recognition method, comprising: selecting an original image when the person to be recognized cannot be identified; locating and tracking the person to be recognized; acquiring additional face images together with the original image to form an original atlas, and acquiring face images of the person to be recognized from at least one different angle to form a correction atlas; adjusting the face poses of the face images in the original atlas and the correction atlas to obtain a missing recognition sample set containing frontal face image samples and a complete face contour region; blocking the face samples in the missing recognition sample set according to the complete face contour region to form block region sets; marking the highest-resolution blocking result in each block region set as a selected block; and, after restoring the selected blocks to obtain a corrected image, performing face recognition on the corrected image. The method solves the problem in traditional face recognition that, once the face pose has been adjusted, the face of the person to be recognized cannot be correctly recognized because parts of the image are distorted or unrecognizable.

Description

Intelligent face recognition method
Technical Field
The invention relates to the field of face image recognition, in particular to an intelligent face recognition method.
Background
With the help of networked cameras and computers, or of stand-alone integrated devices, face recognition has become a very popular and common technology in everyday life, allowing people to complete face recognition remotely or automatically. Traditional face recognition, however, depends heavily on the orientation of the person to be recognized and on the camera's shooting angle staying within a certain range: in acquired images containing different face poses, the recognition rate drops sharply once the angle becomes too large.
To address these problems, the most common approach in the prior art focuses on conversion between two-dimensional and three-dimensional models of the face image; however, this approach places high demands on building the three-dimensional model, and the computational resources it requires for recognition are often unattainable in real scenarios. The prior art also proposes optimally synthesizing a multi-angle face image, but that method requires the user to identify feature positions in the image, and comparisons of a single region are often inaccurate because the images are taken at different angles. Another prior-art method performs face recognition directly on the face image after pose adjustment, but once the pose has been adjusted the face image cannot be fully restored, or part of the restored area is distorted, because of the shooting angle. The prior art therefore still lacks a good solution to the problem that, after the face pose is adjusted, the face of the person to be recognized cannot be correctly recognized because parts of the image are distorted or unrecognizable.
Disclosure of Invention
The invention aims to overcome the above problems in the prior art and provides an intelligent face recognition method for solving the problem that, in traditional face recognition, the face of the person to be recognized cannot be correctly recognized after the face pose is adjusted, because parts of the image are distorted or unrecognizable.
An intelligent face recognition method, comprising:
acquiring an image to be identified and identifying a person to be identified in the image;
under the condition that a person to be identified cannot be identified, selecting a face image of the person to be identified as an original image;
locating and tracking the person to be identified through the image acquisition source of the original image;
based on the acquisition time of the original image and using the same image acquisition mode, acquiring at least one additional face image of the person to be recognized closest in time to the original image, which together with the original image forms an original atlas; and, according to the locating and tracking direction, acquiring face images of the person to be recognized from at least one different angle with an auxiliary face collector to form a correction atlas;
obtaining a missing identification sample set containing frontal face image samples, together with a complete face contour region, by adjusting the face poses of the face images of the original atlas and the correction atlas;
according to the complete face contour region, blocking the face samples in the missing identification sample set to form a block region set;
marking the sample blocking result with the highest resolution in each block region set as a selected block;
and restoring the face image in the complete face contour region by using the selected blocks and the symmetry of the face to obtain a corrected image, and then carrying out face recognition on the corrected image.
By restoring the first recognition image of the face of the person to be recognized with face images taken from other angles before recognizing it, the method solves the problem that the face cannot be recognized correctly because single-angle pose adjustment distorts the image or leaves part of the recognition area unrecognizable. It also provides the chosen Facenet model with a more reliable image source containing less interference from the background and from faces other than that of the person to be recognized, so the face in the image can be recognized more efficiently.
The method first adjusts the face pose and only then blocks the adjusted face image. This avoids the uneven sample blocks and large blocking-position errors, and hence the large subsequent sample recognition errors, that arise when a non-frontal image is measured for angle and blocked directly, and it removes the need for a separate step of measuring and calculating the angle of each block region.
Preferably, the step of locating and tracking the person to be identified by the image acquisition source of the original image specifically includes:
the method comprises the steps of obtaining a lens focal length and a lens orientation of a face image of a person to be identified through an image obtaining source of an original image, and determining a basic orientation of the person to be identified;
and accurately positioning and tracking the person in the basic orientation through at least one camera in other observation angles.
The person to be recognized is first located through the first image acquisition source, and other auxiliary cameras then track the person continuously to support the subsequent processing of the recognition samples. This greatly reduces the interference of other people and of the background in the recognition samples and meets the method's requirement for clean, uncontaminated recognition samples.
Preferably, the step of obtaining the missing identification sample set including the front face image sample by adjusting the face poses of the face images of the original atlas and the correction atlas specifically includes:
separating the faces of the persons to be recognized in the face images of the original atlas and the correction atlas into a recognition sample set;
determining the human face gestures of the recognition sample set;
defining a basic front face outline area through the face pose and the original image set;
supplementing the basic frontal face contour region into a complete face contour region based on the symmetry of the face midline, and then extracting the face contour of the person to be identified in the correction atlas to correct the complete face contour region;
and calibrating the postures of the face samples in the original image set and the corrected image set to the front side through the complete face contour region to obtain a missing recognition sample set.
Extracting and defining an effective face contour is one of the main means of improving the recognition effectiveness of the method. Based on the data set defined for the Facenet model, the main data features and parameters of the face contour can be extracted and identified effectively, and from the analysis of the input images using these features and parameters, the Euclidean distances between the feature vectors of the face contour of the person to be identified can be calculated and the face contour region can be restored to a region on the frontal plane.
Preferably, the step of obtaining the missing recognition sample set by calibrating the postures of the face samples in the original image set and the corrected image set to the front through the complete face contour region specifically includes:
calculating face pose parameters of the person to be recognized through the complete face contour region, wherein the face pose parameters are the set of Euler angles that determines the plane region of the frontal face contour region, together with the average projection distance of the facial feature points of the person to be recognized in the recognition sample set onto the plane region determined by those Euler angles;
and correcting the pose of the recognition sample set by using the face pose parameters to obtain frontal faces as the missing recognition sample set.
Because the Facenet-based recognition model works on image samples, the recognition process is easily affected by uncertainty in the face pose; yet if the face pose is ignored during recognition, the feature distances become confused and recognition errors arise from the differences in completeness between samples. The method therefore insists on using pose-corrected images as the recognition samples and provides a reliable pose-adjustment procedure to guarantee the reliability of the recognition samples and the accuracy of the recognition.
The missing identification sample set is the complete face contour region obtained after the face poses in the recognition sample set have been adjusted; it comprises the frontal projection region of the pose-adjusted original samples and the missing region lying outside that frontal projection.
Preferably, the step of blocking the face samples in the missing identification sample set to form a block area set specifically includes:
forming block regions based on the complete face contour region, and blocking the face samples of the original atlas and of the correction atlas contained in the missing identification sample set according to these block regions;
marking the partitioning result based on the partitioning region;
a set of each block region is formed based on the labeling result.
Classifying the blocking results by block region directly removes the need to establish pixel coordinates for classification. Because the method operates on pose-adjusted face images, whose resolution or sharpness is not uniformly distributed across the image plane, defining block regions from the face contour region separates the image regions more effectively and keeps the regions independent of one another.
Preferably, the step of marking the sample blocking result with the highest resolution in each block region set as the selected block specifically includes:
verifying, for each optimal blocking result, namely the blocking result with the highest resolution in a block region set that contains blocking results from both the original atlas samples and the correction atlas samples, whether it belongs to the block region set of the correction atlas and whether, compared with the blocking result of the original atlas sample in the same block region set, it meets the minimum similarity required to be identified as a similar block, the similarity being measured by the Bhattacharyya distance between the pixel gray-level distributions of the pair of images after image-pyramid downsampling to the lower of the two resolutions and conversion to grayscale;
and, if the optimal blocking result meets the minimum similarity required to be identified as a similar block when compared with the blocking result of the original atlas sample in the same block region set, marking it as the selected block of the block region to which it belongs; in each remaining block region set in which no selected block has been marked, the blocking result with the highest resolution is marked as the selected block.
Preferably, after restoring the face image in the complete face contour region by using the selected blocks and the symmetry of the face to obtain a corrected image, the step of performing face recognition on the corrected image specifically includes:
determining, according to the block region set to which each selected block belongs, the restoration position of the selected block in the complete face contour region;
restoring the face image by using the selected blocks according to the restoration position;
adopting a Facenet deep convolution network to convert the restored face image into a characteristic vector;
calculating the Euclidean distances between the normalized feature vector of the restored face image and the feature vectors stored in the identification library, and obtaining the corresponding triplets from these Euclidean distances;
and forming the face recognition result from the triplets.
The images processed by the method can be used effectively for recognition by the Facenet model, and the restored images improve the accuracy and the range of applicability of Facenet recognition.
Compared with the prior art, the invention has the beneficial effects that:
1. the problem in traditional face recognition that the face of the person to be recognized cannot be correctly recognized, because adjusting the face pose distorts parts of the image or leaves them unrecognizable, is solved;
2. the Facenet-based recognition method gives higher recognition accuracy;
3. multi-angle face images are used for preferential complementation, so that the restored frontal-pose face image is easier to recognize with a Facenet-based recognition model;
4. locating and tracking the person to be identified with multiple cameras keeps the recognition samples clean and independent;
5. blocking based on the face contour region avoids the uneven sample blocking caused by blocking schemes that must rely on pixel coordinates.
Drawings
FIG. 1 is a flow chart of the method.
FIG. 2 is a schematic diagram of the attitude adjustment and blocking of the method.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1 and fig. 2, the present embodiment provides an intelligent face recognition method, where the method includes:
acquiring an image to be identified and identifying a person to be identified in the image;
under the condition that a person to be identified cannot be identified, selecting a face image of the person to be identified as an original image;
locating and tracking the person to be identified through the image acquisition source of the original image;
based on the acquisition time of the original image and using the same image acquisition mode, acquiring at least one additional face image of the person to be recognized closest in time to the original image, which together with the original image forms an original atlas; and, according to the locating and tracking direction, acquiring face images of the person to be recognized from at least one different angle with an auxiliary face collector to form a correction atlas;
obtaining a missing identification sample set containing frontal face image samples, together with a complete face contour region, by adjusting the face poses of the face images of the original atlas and the correction atlas;
according to the complete face contour region, blocking the face samples in the missing identification sample set to form a block region set;
marking the sample blocking result with the highest resolution in each block region set as a selected block;
and restoring the face image in the complete face contour region by using the selected blocks and the symmetry of the face to obtain a corrected image, and then carrying out face recognition on the corrected image.
By restoring the first recognition image of the face of the person to be recognized with face images taken from other angles before recognizing it, the method solves the problem that the face cannot be recognized correctly because single-angle pose adjustment distorts the image or leaves part of the recognition area unrecognizable. It also provides the chosen Facenet model with a more reliable image source containing less interference from the background and from faces other than that of the person to be recognized, so the face in the image can be recognized more efficiently.
The method first adjusts the face pose and only then blocks the adjusted face image. This avoids the uneven sample blocks and large blocking-position errors, and hence the large subsequent sample recognition errors, that arise when a non-frontal image is measured for angle and blocked directly, and it removes the need for a separate step of measuring and calculating the angle of each block region.
In a specific implementation process, the step of locating and tracking the person to be identified through the image acquisition source of the original image specifically includes:
the method comprises the steps of obtaining a lens focal length and a lens orientation of a face image of a person to be identified through an image obtaining source of an original image, and determining a basic orientation of the person to be identified;
and accurately positioning and tracking the person in the basic orientation through at least one camera in other observation angles.
The person to be recognized is first located through the first image acquisition source, and other auxiliary cameras then track the person continuously to support the subsequent processing of the recognition samples. This greatly reduces the interference of other people and of the background in the recognition samples and meets the method's requirement for clean, uncontaminated recognition samples.
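As a rough geometric illustration of how the basic orientation could be derived from the lens focal length and lens orientation, the following sketch assumes a simple pinhole-camera model; the function and its parameters are illustrative placeholders, not part of the patent.

```python
import math

def basic_orientation(pan_deg, tilt_deg, focal_px, face_cx, face_cy, img_w, img_h):
    """Estimate the approximate bearing of the detected face relative to the primary
    camera under a pinhole model: pan_deg/tilt_deg give the lens orientation,
    focal_px the focal length in pixels, (face_cx, face_cy) the face-box centre and
    (img_w, img_h) the frame size."""
    dx = face_cx - img_w / 2.0            # horizontal offset from the optical axis
    dy = face_cy - img_h / 2.0            # vertical offset from the optical axis
    yaw_offset = math.degrees(math.atan2(dx, focal_px))
    pitch_offset = math.degrees(math.atan2(dy, focal_px))
    # Basic orientation of the person as seen from the primary camera.
    return pan_deg + yaw_offset, tilt_deg + pitch_offset

# Example: a face centred slightly to the right of the optical axis.
print(basic_orientation(30.0, -5.0, 1000.0, 1100, 540, 1920, 1080))
```

Auxiliary cameras covering the resulting bearing can then be tasked with the continuous tracking described above.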
Specifically, the step of obtaining the missing identification sample set including the front face image sample by adjusting the face postures of the face images of the original atlas and the correction atlas specifically includes:
separating the faces of the persons to be recognized in the face images of the original atlas and the correction atlas into a recognition sample set;
determining the human face gestures of the recognition sample set;
defining a basic front face outline area through the face pose and the original image set;
supplementing the basic frontal face contour region into a complete face contour region based on the symmetry of the face midline, and then extracting the face contour of the person to be identified in the correction atlas to correct the complete face contour region;
and calibrating the postures of the face samples in the original image set and the corrected image set to the front side through the complete face contour region to obtain a missing recognition sample set.
Extracting and defining an effective face contour is one of the main means of improving the recognition effectiveness of the method. Based on the data set defined for the Facenet model, the main data features and parameters of the face contour can be extracted and identified effectively, and from the analysis of the input images using these features and parameters, the Euclidean distances between the feature vectors of the face contour of the person to be identified can be calculated and the face contour region can be restored to a region on the frontal plane.
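The midline-symmetry supplementation of the basic frontal contour mentioned above can be pictured with the minimal sketch below; it simply mirrors contour points about a vertical midline, and both the input format and the helper itself are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def complete_contour_by_symmetry(contour_pts, midline_x):
    """contour_pts: (N, 2) array of contour points covering the visible side of the
    face; midline_x: x-coordinate of the vertical face midline. Returns an
    approximate complete contour obtained by mirroring about the midline."""
    pts = np.asarray(contour_pts, dtype=float)
    mirrored = pts.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]   # reflect x about the midline
    full = np.vstack([pts, mirrored])
    # Drop near-duplicate points (e.g. points lying on the midline itself).
    return np.unique(np.round(full, 1), axis=0)

# Example: three contour points on the left half of a face with midline x = 100.
print(complete_contour_by_symmetry([[60, 80], [70, 120], [85, 160]], midline_x=100))
```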
Specifically, the step of calibrating the postures of the face samples in the original image set and the corrected image set to the front side through the complete face contour region to obtain the missing recognition sample set specifically includes:
calculating face pose parameters of the person to be recognized through the complete face contour region, wherein the face pose parameters are the set of Euler angles that determines the plane region of the frontal face contour region, together with the average projection distance of the facial feature points of the person to be recognized in the recognition sample set onto the plane region determined by those Euler angles;
and correcting the pose of the recognition sample set by using the face pose parameters to obtain frontal faces as the missing recognition sample set.
Because the Facenet-based recognition model works on image samples, the recognition process is easily affected by uncertainty in the face pose; yet if the face pose is ignored during recognition, the feature distances become confused and recognition errors arise from the differences in completeness between samples. The method therefore insists on using pose-corrected images as the recognition samples and provides a reliable pose-adjustment procedure to guarantee the reliability of the recognition samples and the accuracy of the recognition.
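One common way to obtain the Euler-angle part of such face pose parameters is to fit a generic 3D landmark model to the detected 2D landmarks and decompose the resulting rotation. The sketch below does this with OpenCV's solvePnP; it is an assumed, illustrative choice, and the six-landmark set, the generic 3D model and the rough focal-length guess are placeholders, not values prescribed by the patent.

```python
import cv2
import numpy as np

def face_euler_angles(landmarks_2d, img_w, img_h):
    """Estimate yaw/pitch/roll (degrees) of a face from six 2D landmarks
    (nose tip, chin, left/right eye outer corner, left/right mouth corner)
    using a generic 3D face model and a pinhole camera approximation."""
    model_3d = np.array([              # generic model coordinates (arbitrary units)
        (0.0, 0.0, 0.0),               # nose tip
        (0.0, -330.0, -65.0),          # chin
        (-225.0, 170.0, -135.0),       # left eye outer corner
        (225.0, 170.0, -135.0),        # right eye outer corner
        (-150.0, -150.0, -125.0),      # left mouth corner
        (150.0, -150.0, -125.0),       # right mouth corner
    ], dtype=np.float64)
    focal = float(img_w)               # rough focal-length guess in pixels
    cam = np.array([[focal, 0, img_w / 2.0],
                    [0, focal, img_h / 2.0],
                    [0, 0, 1]], dtype=np.float64)
    dist = np.zeros((4, 1))            # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_3d,
                                  np.asarray(landmarks_2d, dtype=np.float64),
                                  cam, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)
    # Decompose the rotation matrix into Euler angles (x-y-z convention).
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(-R[2, 0], sy))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return yaw, pitch, roll
```

The recognition samples can then be rotated towards zero yaw/pitch/roll before the projection-distance average described above is evaluated.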
The missing recognition sample set is the complete face contour region obtained after the face poses in the recognition sample set have been adjusted; it comprises the frontal projection region of the pose-adjusted original samples and the missing region lying outside that frontal projection, the missing region being the area so marked in FIG. 2.
Specifically, the step of blocking the face samples in the missing identification sample set to form a block area set specifically includes:
forming block regions based on the complete face contour region, and blocking the face samples of the original atlas and of the correction atlas contained in the missing identification sample set according to these block regions;
marking the partitioning result based on the partitioning region;
a set of each block region is formed based on the labeling result.
Classifying the blocking results by block region directly removes the need to establish pixel coordinates for classification. Because the method operates on pose-adjusted face images, whose resolution or sharpness is not uniformly distributed across the image plane, defining block regions from the face contour region separates the image regions more effectively and keeps the regions independent of one another.
Wherein, if the face contour region is divided into n block regions, the marked samples are sorted into the n block region sets $\{S_1, S_2, S_3, \dots, S_n\}$ according to the block region by which they were divided. For example, the set $S_1$, containing m samples, is $S_1 = \{p_1^1, p_1^2, \dots, p_1^m\}$, where $p_1^m$ is defined as the average or maximum resolution (or clarity) of the m-th sample in block region set 1; if the block region corresponding to $p_1^m$ contains a missing area, $p_1^m = 0$ is defined.
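As a rough illustration of how such per-block values might be computed, the sketch below scores each block with the variance of the Laplacian as a sharpness measure and assigns zero to blocks that overlap the missing area; the specific clarity measure and the mask-based inputs are assumptions, since the patent leaves the resolution/clarity metric open.

```python
import cv2
import numpy as np

def block_scores(face_gray, block_masks, missing_mask):
    """face_gray: a pose-adjusted face sample as a grayscale image.
    block_masks: list of boolean masks, one per block region of the face contour.
    missing_mask: boolean mask of the missing (unrecoverable) area.
    Returns one clarity score per block; 0 if the block overlaps the missing area."""
    lap = cv2.Laplacian(face_gray, cv2.CV_64F)
    scores = []
    for mask in block_masks:
        if np.any(mask & missing_mask):
            scores.append(0.0)                     # block contains a missing area
        else:
            scores.append(float(lap[mask].var()))  # sharpness: variance of Laplacian
    return scores
```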
Specifically, the step of marking the sample blocking result with the highest resolution in each block region set as the selected block specifically includes:
verifying, for each optimal blocking result, namely the blocking result with the highest resolution in a block region set that contains blocking results from both the original atlas samples and the correction atlas samples, whether it belongs to the block region set of the correction atlas and whether, compared with the blocking result of the original atlas sample in the same block region set, it meets the minimum similarity required to be identified as a similar block, the similarity being measured by the Bhattacharyya distance between the pixel gray-level distributions of the pair of images after image-pyramid downsampling to the lower of the two resolutions and conversion to grayscale;
and, if the optimal blocking result meets the minimum similarity required to be identified as a similar block when compared with the blocking result of the original atlas sample in the same block region set, marking it as the selected block of the block region to which it belongs; in each remaining block region set in which no selected block has been marked, the blocking result with the highest resolution is marked as the selected block.
Wherein the selected block $p'_n$ of the n-th block region set is selected as $p'_n = \arg\max_{p_n^i \in S_n} p_n^i$, i.e. the blocking result with the highest resolution (or clarity) value in $S_n$ that also satisfies the similarity condition described above.
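The similarity test behind this selection can be sketched roughly as follows: both blocks are pyramid-downsampled towards the resolution of the coarser one, converted to grayscale, and their gray-level histograms compared with the Bhattacharyya distance. The OpenCV calls and the 3-channel input assumption are illustrative; the patent's exact pre-processing may differ.

```python
import cv2

def bhattacharyya_distance(block_a, block_b):
    """Pyramid-downsample the larger block towards the size of the smaller one,
    convert both blocks (assumed 3-channel BGR) to grayscale and return the
    Bhattacharyya distance between their gray-level histograms (0 = identical)."""
    a, b = block_a, block_b
    while a.shape[0] > 1.5 * b.shape[0]:
        a = cv2.pyrDown(a)
    while b.shape[0] > 1.5 * a.shape[0]:
        b = cv2.pyrDown(b)
    ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)
    ha = cv2.calcHist([ga], [0], None, [256], [0, 256])
    hb = cv2.calcHist([gb], [0], None, [256], [0, 256])
    cv2.normalize(ha, ha)
    cv2.normalize(hb, hb)
    return cv2.compareHist(ha, hb, cv2.HISTCMP_BHATTACHARYYA)
```

A candidate block from the correction atlas would then be accepted as the selected block only if this distance stays within the minimum-similarity requirement described above.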
Specifically, after restoring the face image in the complete face contour region by using the selected blocks and the symmetry of the face to obtain a corrected image, the step of performing face recognition on the corrected image specifically includes:
determining, according to the block region set to which each selected block belongs, the restoration position of the selected block in the complete face contour region;
restoring the face image by using the selected blocks according to the restoration position;
adopting a Facenet deep convolution network to convert the restored face image into a characteristic vector;
calculating the Euclidean distances between the normalized feature vector of the restored face image and the feature vectors stored in the identification library, and obtaining the corresponding triplets from these Euclidean distances;
and forming the face recognition result from the triplets.
The images processed by the method can be used effectively for recognition by the Facenet model, and the restored images improve the accuracy and the range of applicability of Facenet recognition.
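Purely for illustration, the matching step against the identification library might look like the sketch below. Here embed stands for any Facenet-style deep convolutional network that maps an aligned face image to a feature vector; the gallery format and the distance threshold are assumptions, not values from the patent.

```python
import numpy as np

def recognize(restored_face, embed, gallery, threshold=1.0):
    """restored_face: the corrected (restored) face image.
    embed: a callable mapping a face image to a Facenet-style feature vector.
    gallery: dict mapping identity -> stored, already normalized feature vector.
    Returns the closest identity, or None if no gallery vector is near enough."""
    v = np.asarray(embed(restored_face), dtype=float)
    v /= np.linalg.norm(v)                      # L2 normalization, as in Facenet
    best_id, best_dist = None, float("inf")
    for identity, ref in gallery.items():
        d = np.linalg.norm(v - ref)             # Euclidean distance between embeddings
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None
```

The triplets mentioned in the method appear to refer to Facenet's triplet-based representation; this sketch shows only the nearest-neighbour comparison of normalized embeddings.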
The average resolution of the restored image is calculated, and the resolution of the restored image is adjusted with a Laplacian pyramid and a Gaussian pyramid on the basis of the face database, so that the pixels of the restored image are evenly distributed and meet the requirements of face recognition.
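A hedged sketch of the resolution adjustment with pyramid operations follows; pyrDown/pyrUp are the Gaussian-pyramid steps from which a Laplacian pyramid is built, and the target size and thresholds are assumptions rather than values given in the patent.

```python
import cv2

def match_resolution(img, target_h, target_w):
    """Bring img roughly to (target_h, target_w) using Gaussian-pyramid steps
    (pyrDown to shrink, pyrUp to enlarge), then resize to the exact size so that
    the restored image has an even pixel distribution."""
    h, w = img.shape[:2]
    while h > 1.5 * target_h and w > 1.5 * target_w:
        img = cv2.pyrDown(img)
        h, w = img.shape[:2]
    while h < 0.75 * target_h and w < 0.75 * target_w:
        img = cv2.pyrUp(img)
        h, w = img.shape[:2]
    return cv2.resize(img, (target_w, target_h), interpolation=cv2.INTER_LINEAR)
```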
It should be understood that the above embodiments of the present invention are only examples given to illustrate the technical solutions of the invention clearly and are not intended to limit its specific embodiments. Any modification, equivalent replacement or improvement made within the spirit and principles of the claims of the present invention shall fall within the protection scope of the claims.

Claims (6)

1. An intelligent face recognition method, characterized in that the method comprises:
acquiring an image to be identified and identifying a person to be identified in the image;
under the condition that a person to be identified cannot be identified, selecting a face image of the person to be identified as an original image;
locating and tracking the person to be identified through the image acquisition source of the original image;
based on the acquisition time of the original image and using the same image acquisition mode, acquiring at least one additional face image of the person to be recognized closest in time to the original image, which together with the original image forms an original atlas; and, according to the locating and tracking direction, acquiring face images of the person to be recognized from at least one different angle with an auxiliary face collector to form a correction atlas;
obtaining a missing identification sample set containing frontal face image samples, together with a complete face contour region, by adjusting the face poses of the face images of the original atlas and the correction atlas;
according to the complete face contour region, blocking the face samples in the missing identification sample set to form a block region set;
marking the sample blocking result with the highest resolution in each block region set as a selected block;
restoring a face image in a complete face contour region by using the selected blocks and the symmetry of the face to obtain a corrected image, and then carrying out face recognition on the corrected image;
the step of marking the sample blocking result with the highest resolution in each block region set as the selected block specifically comprises:
verifying, for each optimal blocking result, namely the blocking result with the highest resolution in a block region set that contains blocking results from both the original atlas samples and the correction atlas samples, whether it belongs to the block region set of the correction atlas and whether, compared with the blocking result of the original atlas sample in the same block region set, it meets the minimum similarity required to be identified as a similar block, the similarity being measured by the Bhattacharyya distance between the pixel gray-level distributions of the pair of images after image-pyramid downsampling based on the lower of the two resolutions, histogram equalization and conversion to a grayscale image;
and, if the optimal blocking result meets the minimum similarity required to be identified as a similar block when compared with the blocking result of the original atlas sample in the same block region set, marking it as the selected block of the block region to which it belongs; in each block region set in which no selected block has been marked, the blocking result with the highest resolution is marked as the selected block.
2. The intelligent face recognition method according to claim 1, wherein the step of locating and tracking the person to be recognized through an image acquisition source of an original image specifically comprises:
acquiring, through the image acquisition source of the original image, the lens focal length and lens orientation used during the acquisition of the face image of the person to be recognized, and determining the basic orientation of the person to be recognized;
and accurately positioning and tracking the person in the basic orientation through at least one camera in other observation angles.
3. The intelligent face recognition method according to claim 1, wherein the step of obtaining the missing recognition sample set including the frontal face image samples by adjusting the face poses of the face images of the original atlas and the rectified atlas specifically comprises:
separating the faces of the persons to be recognized in the face images of the original atlas and the correction atlas into a recognition sample set;
determining the human face gestures of the recognition sample set;
defining a basic front face outline area through the face pose and the original image set;
supplementing the basic frontal face contour region into a complete face contour region based on the symmetry of the face midline, and then extracting the face contour of the person to be identified in the correction atlas to correct the complete face contour region;
and calibrating the postures of the face samples in the original image set and the corrected image set to the front side through the complete face contour region to obtain a missing recognition sample set.
4. The intelligent face recognition method according to claim 3, wherein the step of obtaining the missing recognition sample set by calibrating the poses of the face samples in the original atlas and the rectified atlas to the front through the complete face contour region specifically comprises:
calculating face pose parameters of the person to be recognized through the complete face contour region, wherein the face pose parameters are the set of Euler angles that determines the plane region of the frontal face contour region, together with the average projection distance of the facial feature points of the person to be recognized in the recognition sample set onto the plane region determined by those Euler angles;
and correcting the pose of the recognition sample set by using the face pose parameters to obtain frontal faces as the missing recognition sample set.
5. The intelligent face recognition method according to claim 1, wherein the step of blocking the face samples in the missing recognition sample set to form a block area set specifically comprises:
forming block regions based on the complete face contour region, and blocking the face samples of the original atlas and of the correction atlas contained in the missing identification sample set according to these block regions;
marking the partitioning result based on the partitioning region;
a set of each block region is formed based on the labeling result.
6. The intelligent face recognition method according to claim 1, wherein the step of performing face recognition on the corrected image after performing face image restoration in the complete face contour region using the selected blocks and the symmetry of the face to obtain the corrected image specifically comprises:
determining the restoration position of the selected block in the complete face contour region according to the block region set of the selected block;
restoring the face image by using the selected blocks according to the restoration position;
adopting a Facenet deep convolution network to convert the restored face image into a characteristic vector;
after Euclidean distances between the feature vectors of the restored face image after normalization processing and feature vectors stored in an identification library are calculated, corresponding triplets are obtained through the Euclidean distances between the feature vectors;
and forming a face recognition result through the triple.
CN201910651887.3A 2019-07-18 2019-07-18 Intelligent face recognition method Active CN110427843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910651887.3A CN110427843B (en) 2019-07-18 2019-07-18 Intelligent face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910651887.3A CN110427843B (en) 2019-07-18 2019-07-18 Intelligent face recognition method

Publications (2)

Publication Number Publication Date
CN110427843A CN110427843A (en) 2019-11-08
CN110427843B true CN110427843B (en) 2021-07-13

Family

ID=68411149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910651887.3A Active CN110427843B (en) 2019-07-18 2019-07-18 Intelligent face recognition method

Country Status (1)

Country Link
CN (1) CN110427843B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000163592A (en) * 1998-11-26 2000-06-16 Victor Co Of Japan Ltd Recognizing method for face image
CN103605965A (en) * 2013-11-25 2014-02-26 苏州大学 Multi-pose face recognition method and device
KR101491832B1 (en) * 2014-05-23 2015-02-12 동국대학교 산학협력단 Apparatus and method for selecting image
CN108564053A (en) * 2018-04-24 2018-09-21 南京邮电大学 Multi-cam dynamic human face recognition system based on FaceNet and method
CN109284722A (en) * 2018-09-29 2019-01-29 佳都新太科技股份有限公司 Image processing method, device, face recognition device and storage medium
CN109377518A (en) * 2018-09-29 2019-02-22 佳都新太科技股份有限公司 Target tracking method, device, target tracking equipment and storage medium
CN109360170A (en) * 2018-10-24 2019-02-19 北京工商大学 Face restorative procedure based on advanced features
CN109598210A (en) * 2018-11-16 2019-04-09 三星电子(中国)研发中心 A kind of image processing method and device
CN109918998A (en) * 2019-01-22 2019-06-21 东南大学 A kind of big Method of pose-varied face
CN109902657A (en) * 2019-03-12 2019-06-18 哈尔滨理工大学 A kind of face identification method indicated based on piecemeal collaboration

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Face recognition under pose variation with local Gabor features enhanced by Active Shape and Statistical Models";Leonardo A. Cament等;《Pattern Recognition》;20150529;第3371-3384页 *
"Symmetry-Aware Face Completion with Generative Adversarial Networks";Jiawan Zhang等;《ACCV 2018: Computer Vision》;20190525;第289-304页 *
"多姿态人脸图像识别系统";王亚南;《中国优秀硕士学位论文全文数据库 信息科技辑》;20150615;第2015年卷(第6期);I138-533 *
"多姿态人脸识别的研究与实现";谢鹏程;《中国优秀硕士学位论文全文数据库 信息科技辑》;20181015;第2018年卷(第10期);I138-828 *
"破损区域分块划分的图像修复";翟东海等;《中国图象图形学报》;20140630;第835-842页 *

Also Published As

Publication number Publication date
CN110427843A (en) 2019-11-08


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant