
CN106778489A - The method for building up and equipment of face 3D characteristic identity information banks - Google Patents

The method for building up and equipment of face 3D characteristic identity information banks

Info

Publication number
CN106778489A
CN106778489A (application CN201611032737.7A)
Authority
CN
China
Prior art keywords
face
information
identity information
feature
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611032737.7A
Other languages
Chinese (zh)
Inventor
黄源浩
肖振中
许宏淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201611032737.7A priority Critical patent/CN106778489A/en
Publication of CN106778489A publication Critical patent/CN106778489A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/653Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a method and equipment for establishing a face 3D feature identity information bank. The method comprises the following steps: collecting a person's face RGBD atlas, where the person's identity information is known; obtaining the person's face 3D feature information from the face RGBD atlas; and labeling the person's face 3D feature information with the person's identity information to obtain personal information, then storing the personal information to form a face 3D feature identity information bank. The equipment includes a first collection module, a first information acquisition module and an information base module. The invention improves the accuracy of face recognition; because the stored information is 3D, recognition is not readily affected by non-geometric appearance changes of the face such as pose, expression, illumination and facial makeup, or by changes such as the face becoming fatter or thinner.

Description

Method and equipment for establishing face 3D characteristic identity information base
Technical Field
The invention relates to the field of establishment methods of a face 3D characteristic identity information base, in particular to an establishment method and equipment of a face 3D characteristic identity information base.
Background
Information security has attracted widespread attention across society. The main approach to ensuring information security is to accurately identify the identity of the information user and then, based on the identification result, judge whether the user is authorized to obtain the information, thereby ensuring that information is not leaked and safeguarding the user's legitimate rights and interests. Reliable identity recognition is therefore very important and essential.
Face recognition is a biometric technology that identifies a person based on facial feature information, and it is receiving increasing attention as a safer and more convenient personal identification technology. Traditional face recognition is 2D face recognition, which has no depth information and is easily affected by non-geometric appearance changes such as pose, expression, illumination and facial makeup, making accurate face recognition difficult.
Disclosure of Invention
The invention provides a method and equipment for establishing a face 3D characteristic identity information base, which can solve the problem that the prior art is difficult to accurately identify faces.
In order to solve the technical problems, the invention adopts a technical scheme that: a method for establishing a face 3D characteristic identity information base is provided, which comprises the following steps: collecting an individual face RGBD atlas, wherein the identity information of the individual is known; acquiring human face 3D characteristic information of the person through the human face RGBD atlas; and identifying the identity information of the individual to the face 3D feature information corresponding to the individual to obtain individual information, and storing the individual information to form a face 3D feature identity information base.
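As a rough illustration, the three steps above (collect an RGBD atlas with known identity, extract face 3D feature information, label and store it) can be sketched in Python. The function names, the trivial averaging "feature extractor", and the random data are hypothetical stand-ins, not the patent's actual algorithm:

```python
import numpy as np

def extract_3d_features(rgbd_atlas):
    """Toy stand-in for step 2: reduce a person's RGBD atlas to a feature vector.
    Here we simply average each image's channels; the patent's real pipeline
    builds a colored 3D mesh from facial feature points (see steps S121-S124)."""
    return np.mean([img.mean(axis=(0, 1)) for img in rgbd_atlas], axis=0)

def build_identity_base(people):
    """Step 3: attach known identity information to each person's 3D features
    and store the labeled records, forming the identity information base."""
    base = {}
    for identity, atlas in people.items():
        base[identity] = {"identity": identity,
                          "features": extract_3d_features(atlas)}
    return base

# Two people, each with two 4-channel (R, G, B, D) images of size 4x4.
rng = np.random.default_rng(0)
people = {"alice": [rng.random((4, 4, 4)) for _ in range(2)],
          "bob":   [rng.random((4, 4, 4)) for _ in range(2)]}
base = build_identity_base(people)
print(sorted(base))                      # ['alice', 'bob']
print(base["alice"]["features"].shape)   # (4,)
```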
And the human face 3D characteristic identity information base carries out hierarchical classification management on the identity information.
Wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
Wherein, after the step of identifying the identity information of the individual to the face 3D feature information corresponding to the individual to obtain individual information and storing the individual information to form a face 3D feature identity information base, the method further comprises: and carrying out face recognition training on the face 3D characteristic identity information base.
The step of performing face recognition training on the face 3D feature identity information base comprises the following steps: collecting a face RGBD (red, green, blue and depth) atlas of a test person with known identity information; acquiring the face 3D feature information of the test person from the test person's face RGBD atlas; comparing the acquired face 3D feature information of the test person with the face 3D feature information in the face 3D feature identity information base; and if the comparison result is correct, storing the test person's face RGBD atlas, the corresponding face 3D feature information and the identity information into the face 3D feature identity information base.
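A minimal sketch of this training loop, assuming a nearest-neighbor comparison with a hypothetical distance threshold (the patent does not specify how the feature comparison is performed):

```python
import numpy as np

def train_on_test_person(base, name, features, threshold=1.0):
    """Sketch of the training step: compare a test person's 3D features against
    the base; if the nearest stored identity matches the known identity (a
    'correct' comparison result), store the new sample under that identity."""
    if base:
        nearest = min(base,
                      key=lambda k: np.linalg.norm(base[k]["features"] - features))
        correct = (nearest == name and
                   np.linalg.norm(base[nearest]["features"] - features) < threshold)
    else:
        correct = False
    if correct:
        base[name].setdefault("samples", []).append(features)
    return bool(correct)

base = {"alice": {"features": np.array([0.0, 0.0])},
        "bob":   {"features": np.array([5.0, 5.0])}}
print(train_on_test_person(base, "alice", np.array([0.1, 0.1])))  # True
print(train_on_test_person(base, "bob",   np.array([0.2, 0.0])))  # False: nearest match is alice
```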
Wherein the test person comprises an individual who has stored personal information in the face 3D feature identity information base and an individual who has not stored personal information in the face 3D feature identity information base.
Wherein the step of collecting the RGBD atlas of the face of the person further comprises: collecting the face RGB atlas of the person; the step of obtaining the 3D feature information of the individual face through the RGBD atlas further includes: acquiring face 2D characteristic information of the person through the face RGB atlas; the step of identifying the identity information of the individual to the face 3D feature information corresponding to the individual to obtain individual information and storing the individual information to form a face 3D feature identity information base further includes: and identifying the identity information of the individual to the face 3D feature information and the face 2D feature information corresponding to the individual to obtain individual information, and storing the individual information to form a face 3D feature identity information base.
The step of obtaining the 3D characteristic information of the personal face through the face RGBD atlas comprises the following steps: collecting characteristic points of a human face through the RGBD human face image; establishing a face color 3D grid according to the feature points; measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points; and analyzing the characteristic values and the connection relation to acquire the 3D characteristic information of the human face.
In order to solve the technical problem, the invention adopts another technical scheme that: the equipment for establishing the face 3D characteristic identity information base comprises a first acquisition module, a first information acquisition module and an information base module; the first acquisition module is used for acquiring a human face RGBD atlas of an individual, wherein the identity information of the individual is known; the first information acquisition module is connected with the first acquisition module and used for acquiring the human face 3D characteristic information of the person through the human face RGBD atlas; the information base module comprises a storage module, the storage module is connected with the first acquisition module and the first information acquisition module and is used for identifying the identity information of the individual to the face 3D feature information corresponding to the individual to acquire personal information and storing the personal information to form a face 3D feature identity information base.
The information base module further comprises a management module, the management module is connected with the storage module, and the management module is used for carrying out hierarchical classification management on the identity information.
Wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
The equipment for establishing the face 3D feature identity information base further comprises a training module, wherein the training module is connected with the first acquisition module, the first information acquisition module and the information base module and is used for carrying out face recognition training on the face 3D feature identity information base.
The training module comprises a control module and a comparison module; the control module is used for controlling the first acquisition module to acquire a human face RGBD atlas of a tester with known identity information, and controlling the first information acquisition module to acquire human face 3D characteristic information of the tester from the human face RGBD atlas of the tester; the comparison module is connected with the control module and is used for comparing the acquired human face 3D characteristic information of the test person with human face 3D characteristic information in the human face 3D characteristic identity information base; the control module is further connected with the storage module and is used for controlling the storage module to store the face RGBD atlas of the test person, the corresponding face 3D feature information and the identity information into the face 3D feature identity information base when the comparison result is correct.
Wherein the test person comprises a person who has stored personal information in the face 3D feature identity information base and a person who has not stored personal information in the face 3D feature identity information base.
The equipment further comprises a second acquisition module and a second information acquisition module; the second acquisition module is used for acquiring the face RGB atlas of the person; the second information acquisition module is connected with the second acquisition module and used for acquiring the 2D characteristic information of the face of the person through the face RGB atlas; the storage module is further connected with the second acquisition module and the second information acquisition module, and is used for storing the 2D feature information of the face of the person and the 3D feature information of the face identified with the identity information into the 3D feature identity information base of the face.
The first information acquisition module further comprises a third acquisition module, a grid establishment module, a calculation module and an analysis module. The third acquisition module is connected with the first acquisition module and is used for acquiring the feature points of the human face from the RGBD face image; the grid establishment module is connected with the third acquisition module and is used for establishing a face color 3D grid according to the feature points; the calculation module is connected with the grid establishment module and is used for measuring the feature values of the feature points from the face color 3D grid and calculating the connection relations between the feature points; and the analysis module is connected with the calculation module and is used for analyzing the feature values and connection relations to acquire the 3D feature information of the human face.
The invention has the following beneficial effects. Unlike the prior art, the invention obtains face 3D feature information from a face RGBD atlas, labels the person's identity information onto the corresponding face 3D feature information, and stores them together to form a face 3D feature identity information base for face recognition. Because the face 3D feature information includes both color information and depth information, a facial skeleton can be established, so the face information in the 3D information atlas is more comprehensive and face recognition can be more accurate. And because the face information in the 3D information atlas is 3D, recognition is not affected by changes in the non-geometric appearance of the face, such as pose, expression, illumination and facial makeup, or by changes such as the face becoming fatter or thinner.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for establishing a face 3D feature identity information base according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a face 3D feature identity information base according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of identity information hierarchical classification management of a face 3D feature identity information base according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of hierarchical classification management of face 3D feature information of a single person in a face 3D feature identity information base according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of a method for establishing a face 3D feature identity information base according to another embodiment of the present invention;
FIG. 6 is a schematic flow chart of step S24 in FIG. 5;
fig. 7 is a schematic diagram of a face 3D feature identity information base according to an embodiment of the present invention during recognition training;
fig. 8 is a schematic diagram of another human face 3D feature identity information base according to an embodiment of the present invention during recognition training;
fig. 9 is a schematic flowchart of a method for establishing a face 3D feature identity information base according to another embodiment of the present invention;
fig. 10 is a schematic structural diagram of an apparatus for creating a face 3D feature identity information base according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another apparatus for creating a face 3D feature identity information base according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of another apparatus for creating a face 3D feature identity information base according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an entity apparatus for creating a face 3D feature identity information base according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for establishing a face 3D feature identity information base according to an embodiment of the present invention.
The method for establishing the face 3D characteristic identity information base comprises the following steps:
s11: an RGBD atlas of faces of an individual is acquired, where the identity information of the individual is known.
In step S11, the face RGBD atlas may be acquired by a Kinect sensor. Each face RGBD image includes color information (RGB) and depth information (Depth) of the face, adding depth compared with a conventional 2D image. The atlas comprises a plurality of face RGBD images, and the atlas of a single person may include RGBD images of the face captured at multiple angles.
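For illustration, an RGBD atlas can be represented as a set of color-plus-depth frames indexed by capture angle. The resolution, depth units and dictionary layout below are assumptions for the sketch, not requirements of the patent:

```python
import numpy as np

H, W = 480, 640  # a common RGBD sensor resolution (assumed)

def capture_rgbd():
    """Stand-in for one sensor frame: color (RGB) plus per-pixel depth."""
    rgb = np.zeros((H, W, 3), dtype=np.uint8)      # color information (RGB)
    depth = np.full((H, W), 800, dtype=np.uint16)  # depth information, in mm
    return rgb, depth

# One person's atlas: several frames captured at different head angles.
angles = [-30, 0, 30]
atlas = {angle: capture_rgbd() for angle in angles}

rgb0, depth0 = atlas[0]
print(rgb0.shape, depth0.shape)   # (480, 640, 3) (480, 640)
```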
The personal identity information may include personal basic information such as the name, sex, age, nationality, native place, contact address, work unit, department, unit address, etc. of the person.
S12: and acquiring the human face 3D characteristic information of the person through the human face RGBD atlas.
Specifically, step S12 includes:
s121: and collecting the characteristic points of the human face through the RGBD human face image. In this step, feature points are collected by collecting face elements, wherein the face elements include: one or more of eyebrows, eyes, nose, mouth, cheeks, and chin. The feature points can be obtained by manually marking the eyes, nose, and other five sense organs, cheeks, mandible, edges thereof, and the like of the human face.
For example, one method of locating the key feature points of the face selects 9 feature points whose distribution has angle invariance: the 2 eyeball center points, 4 eye corner points, the midpoint between the two nostrils, and the 2 mouth corner points. On this basis, the organ characteristics of the face and the extended positions of other feature points can easily be obtained and used in further recognition algorithms.
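The angle invariance claimed for this nine-point distribution can be checked numerically: under an in-plane rotation, the ratios of inter-point distances are unchanged. The coordinates below are hypothetical values chosen only for the demonstration:

```python
import numpy as np

# The nine key feature points named in the text (hypothetical 2D coordinates):
# 2 eyeball centers, 4 eye corners, the nostril midpoint, 2 mouth corners.
landmarks = np.array([
    [-30.0, 40.0], [30.0, 40.0],                                # eyeball centers
    [-45.0, 40.0], [-15.0, 40.0], [15.0, 40.0], [45.0, 40.0],   # eye corners
    [0.0, 0.0],                                                 # nostril midpoint
    [-20.0, -30.0], [20.0, -30.0],                              # mouth corners
])

def pairwise_ratios(pts):
    """Ratios of inter-point distances, which in-plane rotation leaves unchanged."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    d = d[np.triu_indices(len(pts), k=1)]
    return d / d[0]

theta = np.deg2rad(25)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = landmarks @ rot.T

assert np.allclose(pairwise_ratios(landmarks), pairwise_ratios(rotated))
print("distance ratios unchanged under rotation")
```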
When extracting face features, traditional edge-detection operators cannot reliably extract features such as the outlines of the eyes or mouth, because they cannot effectively organize local edge information. Starting instead from the characteristics of human vision and making full use of edge and corner features to locate the key feature points of the face greatly improves the reliability of face feature extraction.
A SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is selected for extracting the edge and corner features of local areas. By its nature, the SUSAN operator can be used both to detect edges and to extract corners. Compared with edge-detection operators such as Sobel and Canny, the SUSAN operator is therefore better suited to extracting features such as the eyes and mouth of a face, and especially to automatically locating eye corner points and mouth corner points.
The following is an introduction to the Susan operator:
The image is traversed with a circular template: if the difference between the gray value of any other pixel in the template and the gray value of the pixel at the template's center (the nucleus) is less than a given threshold, that pixel is considered to have the same (or similar) gray value as the nucleus. The region composed of pixels meeting this condition is called the Univalue Segment Assimilating Nucleus (USAN) region. Associating each pixel in the image with a local area of similar gray values is the basis of the SUSAN criterion.
During detection, the circular template scans the whole image. The gray value of each pixel in the template is compared with that of the central pixel, and a threshold decides whether the pixel belongs to the USAN region:

c(r, r0) = 1, if |I(r) − I(r0)| ≤ t; c(r, r0) = 0, otherwise

where c(r, r0) is the discriminant function for pixels in the template belonging to the USAN region, I(r0) is the gray value of the template's central pixel (the nucleus), I(r) is the gray value of any other pixel in the template, and t is the gray-difference threshold. The threshold t affects the number of detected corners: reducing t captures more subtle changes in the image and yields relatively more detections, so t must be chosen according to factors such as the contrast and noise of the image. The size of the USAN region at a point r0 is then

n(r0) = Σ_r c(r, r0)

and the initial corner response is

R(r0) = g − n(r0), if n(r0) < g; R(r0) = 0, otherwise

where g is a geometric threshold, which affects the shape of the detected corners: the smaller g is, the sharper the detected corners. The threshold g sets the maximum USAN area for which a point is reported as a corner, i.e. a point is judged to be a corner as long as its USAN area is smaller than g. The size of g thus determines both how many corners can be extracted from the image and, as noted, how sharp they are, so g can be fixed as a constant once the desired corner quality (sharpness) is determined. The threshold t represents the minimum contrast of detectable corners and the maximum tolerance for negligible noise; it mainly determines the number of extractable features, and the smaller t is, the more features can be extracted from low-contrast images. Different t values should therefore be used for images with different contrast and noise conditions. An outstanding advantage of the SUSAN operator is its insensitivity to local noise and strong noise immunity. This is because it does not rely on the results of earlier image segmentation and avoids gradient calculations; moreover, the USAN region is accumulated from template pixels whose gray values are similar to the nucleus, which is in effect an integration process with good suppression of Gaussian noise.
The final stage of SUSAN two-dimensional feature detection is to find the local maxima of the initial corner response, i.e. non-maximum suppression, to obtain the final corner positions. As the name implies, non-maximum suppression works within a local area: if the initial response of the central pixel is the maximum in that area, its value is retained; otherwise it is deleted, leaving only the local maxima.
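The SUSAN response and non-maximum suppression described above can be sketched as follows. The template radius, the thresholds t and g, and the synthetic test image are illustrative choices, not values from the patent:

```python
import numpy as np

def susan_response(img, t=27.0, g=None, radius=3):
    """Initial SUSAN corner response for a grayscale image.
    t: gray-difference threshold; g: geometric threshold (default: half the
    maximum USAN area, a common choice for corner detection)."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys**2 + xs**2 <= radius**2) & ~((ys == 0) & (xs == 0))
    offsets = list(zip(ys[mask], xs[mask]))   # circular template, nucleus excluded
    if g is None:
        g = len(offsets) / 2.0
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = img[y, x]
            # USAN area: template pixels with gray value similar to the nucleus
            n = sum(abs(img[y + dy, x + dx] - nucleus) <= t for dy, dx in offsets)
            if n < g:
                R[y, x] = g - n
    return R

def non_max_suppression(R, radius=3):
    """Keep a response only if it is the maximum of its local neighborhood."""
    out = np.zeros_like(R)
    h, w = R.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = R[y - radius:y + radius + 1, x - radius:x + radius + 1]
            if R[y, x] > 0 and R[y, x] == patch.max():
                out[y, x] = R[y, x]
    return out

# Synthetic test: a bright square on a dark background has four corners.
img = np.zeros((20, 20))
img[5:15, 5:15] = 100.0
corners = non_max_suppression(susan_response(img))
print(np.argwhere(corners > 0))   # the four corners of the square
```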
(1) Automatic positioning of the eyeballs and eye corners. In the automatic positioning of the eyeballs and eye corners, the face is first roughly located using normalized template matching, which determines the approximate face region in the whole image. General eye-positioning algorithms locate the eyes from their valley-point property; here, valley-point search is combined with directional projection and the symmetry of the eyeballs, and the correlation between the two eyes is used to improve the accuracy of eye positioning. Integral projection of the gradient map is performed on the upper-left and upper-right parts of the face region, and the projection histogram is normalized. The approximate position of the eyes in the y direction is determined from the valley points of the horizontal projection; x is then varied over a wide range to search for valley points in that area, and the detected points are taken as the eyeball center points of the two eyes.
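The integral-projection idea can be sketched on a synthetic image: the dark eye band produces a valley in the row-wise projection, and column-wise projection within that band separates the two eyes. The image, band height and split column are illustrative only:

```python
import numpy as np

def horizontal_integral_projection(img):
    """Row-wise sum of gray values; dark eye rows show up as a valley."""
    return img.sum(axis=1)

# Hypothetical 100x100 face patch: bright 'skin' with two dark 'eyes' near y=30.
face = np.full((100, 100), 200.0)
face[28:34, 25:40] = 40.0   # left eye
face[28:34, 60:75] = 40.0   # right eye

proj = horizontal_integral_projection(face)
eye_y = int(np.argmin(proj))          # valley of the projection curve
print(eye_y)                          # a row inside the eye band (28..33)

# Column-wise projection within the eye band separates the two eyes in x.
band = face[eye_y - 3:eye_y + 4, :]
col = band.sum(axis=0)
left_x = int(np.argmin(col[:50]))
right_x = 50 + int(np.argmin(col[50:]))
print(left_x, right_x)                # columns inside the two eye regions
```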
On the basis of the two eyeball positions, the eye region is processed: an adaptive binarization method first determines a threshold to obtain an automatically binarized image of the eye region, and then, combined with the SUSAN operator, an edge and corner detection algorithm accurately locates the inner and outer eye corner points within the eye region.
Corner extraction is then performed on the edge curves of the eye-region edge image obtained by this algorithm, yielding the accurate positions of the inner and outer corner points of both eyes.
(2) Automatic positioning of nose-area feature points. The key feature point of the nose area is taken as the midpoint of the line connecting the two nostril centers, i.e. the nose-lip center point. The position of the nose-lip center point of a face is relatively stable, and it can also serve as a reference point when the face image is normalized during preprocessing.
And determining the positions of the two nostrils by adopting a regional gray scale integral projection method based on the found positions of the two eyeballs.
First, a strip-shaped region whose width spans the two pupils is extracted, integral projection in the Y direction is performed, and the projection curve is analyzed. Searching downward along the projection curve from the y coordinate of the eyeball positions, the first valley point is found (choosing a suitable peak-valley delta value so that burrs possibly caused by facial scars, glasses and the like are ignored) and taken as the y-coordinate reference of the nostril positions. Second, a region whose width is bounded by the x coordinates of the two eyeballs and whose height extends a few pixels above and below the nostril y coordinate (for example, [nostril y coordinate − eyeball y coordinate] × 0.06) is selected for X-direction integral projection. Analyzing this projection curve, the search starts from the x coordinate of the midpoint between the two pupils and proceeds to the left and right; the first valley point found on each side is the x coordinate of the left or right nostril center. The midpoint of the two nostrils is then calculated and taken as the nose-lip midpoint, giving the accurate position of the nose-lip midpoint and delimiting the nose region.
(3) Automatic positioning of the mouth corners. Different facial expressions may change the mouth shape greatly, and the mouth area is easily disturbed by factors such as beards, so the accuracy of mouth feature-point extraction has a great influence on recognition. Because the mouth corner points move relatively little under the influence of expressions and can be located accurately, the two mouth corner points are adopted as the key feature points of the mouth region.
On the basis of the feature points of the eye and nose regions, the regional gray-scale integral projection method is first used to find the first valley point of the Y-coordinate projection curve below the nostrils (again removing, via a suitable peak-valley delta value, burrs caused by beards, moles and the like) as the y position of the mouth. A mouth region is then selected and processed with the SUSAN operator to obtain the mouth edge image. Finally, corner extraction yields the accurate positions of the two mouth corners.
S122: and establishing a face color 3D grid according to the feature points.
S123: and measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points. The color information can measure the relevant characteristic value of the characteristic point of the human face characteristic, wherein the characteristic value is the measurement of one or more of position, distance, shape, size, angle, radian and curvature of the human face characteristic on the 2D plane, and further comprises the measurement of color, brightness, texture and the like. For example, the central pixel point of the iris extends to the periphery, so as to obtain all the pixel positions of the eye, the shape of the eye, the inclination radian of the eye corner, the color of the eye and the like. By combining the color information and the depth information, the connection relationship between the feature points can be calculated, and the connection relationship can be the topological connection relationship and the space geometric distance between the feature points, or can also be the dynamic connection relationship information of various combinations of the feature points, and the like. According to the measurement and calculation of the face color 3D grid, local information including plane information of each element of the face and the spatial position relation of the feature point on each element and overall information of the spatial position relation between each element can be obtained. The local information and the overall information respectively reflect the information and the structural relation hidden on the human face RGBD image from the local part and the overall part.
S124: analyzing the feature values and the connection relations to acquire the face 3D feature information. Through analysis of the feature values and the connection relations, stereoscopic face shape information can be obtained, from which the face 3D feature information is acquired.
For example, in step S124, the feature values, the topological connection relationships between the feature points, and the spatial geometric distances may be analyzed by using a finite element analysis method to obtain the 3D spatial distribution feature information of the feature points.
In particular, the face color 3D mesh may be surface-deformed using finite element analysis. Finite element analysis (FEA) is a method that simulates a real physical system (its geometry and load conditions) by mathematical approximation: using simple, interacting elements (units), a finite number of unknowns can approximate a real system with infinitely many unknowns.
For example, after deformation-energy analysis is performed on each line unit of the face color 3D mesh, the unit stiffness equation of the line unit can be established. Constraint units are then introduced, such as point, line, tangent-vector and normal-vector constraint types. Because the surface must meet requirements on shape, position, size, and continuity with adjacent surfaces in the design, these requirements are realized through constraints. This embodiment handles the constraints by a penalty function method, finally obtaining the stiffness matrix and equivalent load array of each constraint unit.
The data structure of the deformation curve or surface is extended so that, in addition to geometric parameters such as order, control vertices and knot vectors, it also contains parameters indicating physical characteristics and external loads. The deformation curve or surface can therefore integrally represent some complicated shape representations, greatly simplifying the geometric model of the face. Moreover, the physical parameters and constraint parameters in the data structure uniquely determine the configuration geometric parameters of the face.
The deformation curve or surface is solved by finite elements through program design: unit entry routines are set for the different constraint-unit types, so that the stiffness matrix and load array of any constraint unit can be calculated. Exploiting the symmetry, banded structure and sparsity of the overall stiffness matrix, it is stored as a variable-bandwidth one-dimensional array. When the linear algebraic equation system is assembled, the stiffness matrices of the line or surface units as well as those of the constraint units are added into the overall stiffness matrix by index matching, the equivalent load arrays of the constraint units are added into the overall load array, and finally the linear system is solved by Gaussian elimination.
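The assembly of unit stiffness matrices into an overall stiffness matrix by index matching, followed by Gaussian elimination, can be sketched as below. This is a generic FEA-style sketch, not the patent's actual program; the variable-bandwidth storage is omitted for clarity and a dense matrix is used instead:

```python
import numpy as np

def assemble_global(n_dof, elements):
    # Each element is (dof_indices, k_local); entries of the local
    # stiffness matrix are added into the global matrix at the matching
    # row/column indices (index matching).
    K = np.zeros((n_dof, n_dof))
    for dofs, k_local in elements:
        for a, i in enumerate(dofs):
            for b, j in enumerate(dofs):
                K[i, j] += k_local[a][b]
    return K

def gauss_solve(K, f):
    # Plain Gaussian elimination with partial pivoting on the augmented
    # matrix [K | f], followed by back-substitution.
    A = np.hstack([np.asarray(K, float), np.asarray(f, float).reshape(-1, 1)])
    n = A.shape[0]
    for col in range(n):
        piv = col + int(np.argmax(np.abs(A[col:, col])))
        A[[col, piv]] = A[[piv, col]]
        for row in range(col + 1, n):
            A[row] -= A[row, col] / A[col, col] * A[col]
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):
        x[row] = (A[row, -1] - A[row, row + 1:n] @ x[row + 1:]) / A[row, row]
    return x
```

Two one-dimensional spring-like units sharing a degree of freedom illustrate how overlapping contributions accumulate on the shared index.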
For example, the modeling method of the curved surface of the human face can be described by a mathematical model as follows:
the obtained deformation curve
u∈Ω=[0,1]Or curved surfaces
(u,v)∈Ω=[0,1]×[0,1]Is a solution to the extreme problem
Wherein,the energy functional function of the curved surface reflects the deformation characteristic of the curved surface to a certain extent and endows the curved surface with physical characteristics. f1, f2, f3, f4 are functions relating to the variables in (-) and,is the boundary of the parameter definition domain, is the curve in the parameter domain of the curved surface, (mu)0,v0) The method is characterized in that the method is a parameter value in a parameter domain, the condition (1) is a boundary interpolation constraint, the condition (2) is a continuity constraint at a boundary, the condition (3) is a constraint of a characteristic line in a curved surface, and the condition (4) is a constraint of an inner point of the curved surface.In application, an energy functionalTaking the following form:
the curve:
surface bending:
wherein α, β, γ represent the stretching, play-out, and distortion coefficients of the curve, respectively, and α ij and β ij are the stretching and play-out coefficients of the curved surface locally in the μ, v direction at (μ, v), respectively.
It can be seen from the mathematical model that the deformation curve/surface modeling method treats the various constraints in a uniform and coordinated way, satisfying local control while ensuring overall fairness and smoothness. Using the variational principle, solving the above extremum problem can be converted into solving the following equations:
Here $\delta$ denotes the first-order variation. Equation (5) is a differential equation; because it is complicated and an exact analytical solution is difficult to obtain, it is solved numerically, for example by the finite element method.
The finite element method can be viewed as first choosing a suitable interpolation form according to the requirements and then solving for the combination parameters, so that the obtained solution is in continuous form; moreover, the mesh generated by preprocessing lays the foundation for the finite element analysis.
In the recognition stage, the similarity measure between the unknown face image and the known face template is given by:
in the formula: ciXjRespectively the characteristics of the face to be recognized and the characteristics of the face in the face library, i1,i2,j1,j2,k1,k2Is a 3D mesh vertex feature. The first term in the formula is to select the corresponding local feature X in the two vector fieldsjAnd CiThe second term is to calculate the local position relationship and the matching order, so that the best match is the one with the minimum energy function.
The face color 3D mesh is surface-deformed by the above finite element method so that each point of the mesh continuously approaches the feature points of the real face, thereby obtaining stereoscopic face shape information and, further, the 3D spatial distribution feature information of the face feature points.
In addition, a wavelet transformation texture analysis method can be adopted to analyze the dynamic connection relation between the characteristic values and the characteristic points so as to obtain the 3D space distribution characteristic information of the characteristic points.
Specifically, the dynamic connection relation is a dynamic connection relation of various combinations of feature points. The wavelet transform is a local transform in time and frequency; it has multi-resolution analysis characteristics and can characterize the local features of a signal in both the time and frequency domains. In this embodiment, texture features are extracted, classified and analyzed by wavelet-transform texture analysis and combined with the face feature values and dynamic connection-relation information (specifically including the color information and depth information) to obtain stereoscopic face shape information. From this, face shape information that remains invariant under subtle expression changes is analyzed and extracted to encode face shape model parameters; these model parameters can serve as geometric features of the face, yielding the 3D spatial distribution feature information of the face feature points.
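A single level of 2D wavelet decomposition, as used in such texture analysis, can be sketched with a hand-rolled Haar transform (an average-based variant chosen for brevity; a real system would typically use a wavelet library):

```python
import numpy as np

def haar_dwt2(img):
    # One level of a 2D Haar wavelet transform: returns the low-frequency
    # approximation cA and the horizontal/vertical/diagonal detail
    # sub-bands. Averages are used instead of the orthonormal scaling
    # for readability.
    img = np.asarray(img, dtype=float)
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    cA = (a[:, 0::2] + a[:, 1::2]) / 2.0
    cH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    cV = (d[:, 0::2] + d[:, 1::2]) / 2.0
    cD = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return cA, (cH, cV, cD)

def texture_energy(band):
    # A simple texture feature: mean absolute coefficient of a sub-band.
    return float(np.abs(band).mean())
```

The detail sub-bands carry the texture information; a perfectly flat region yields zero detail energy.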
The method for acquiring face 3D feature information provided in some other embodiments is also compatible with acquiring face 2D feature information, which may be obtained by various methods conventional in the art. In these embodiments, both the face 3D feature information and the face 2D feature information are obtained, so that 3D and 2D recognition of the face are performed at the same time, further improving the accuracy of face recognition.
For example, the basis of a three-dimensional wavelet transform is as follows:
where $A_{J_1}$ is the projection operator of the function $f(x, y, z)$ onto the space $V^3_{J_1}$, and $Q_n$ is a combination of $H_x, H_y, H_z$ and $G_x, G_y, G_z$. Let the matrices $H = (H_{m,k})$ and $G = (G_{m,k})$, where $H_x, H_y, H_z$ denote $H$ acting on the three-dimensional signal in the $x$, $y$ and $z$ directions respectively, and $G_x, G_y, G_z$ denote $G$ acting in the $x$, $y$ and $z$ directions respectively.
In the recognition stage, after wavelet transformation of an unknown face image, its low-frequency, low-resolution sub-image is mapped to the face space to obtain feature coefficients. The Euclidean distance can be used to compare the feature coefficients to be classified with the feature coefficients of each person, combined with the PCA algorithm according to the formula:
where $K$ is the person best matching the unknown face, $N$ is the number of people in the database, $Y$ is the $m$-dimensional vector obtained by mapping the unknown face onto the subspace spanned by the eigenfaces, and $Y_k$ is the $m$-dimensional vector obtained by mapping a known face in the database onto that subspace.
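The Euclidean-distance matching in the eigenface subspace described above can be sketched as:

```python
import numpy as np

def nearest_face(y, Y_known):
    # y: m-dimensional projection of the unknown face onto the subspace
    # spanned by the eigenfaces; Y_known: (N, m) projections of the N
    # known faces. Returns the index K of the closest person by
    # Euclidean distance, i.e. K = argmin_k ||y - Y_k||.
    y = np.asarray(y, float)
    Y_known = np.asarray(Y_known, float)
    return int(np.argmin(np.linalg.norm(Y_known - y, axis=1)))
```

Computing the projections themselves (the PCA step) is assumed to have been done beforehand.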
It can be understood that, in another embodiment, a 3D face recognition method based on two-dimensional wavelet features may also be used for recognition. Two-dimensional wavelet features must first be extracted; the two-dimensional wavelet basis function $g(x, y)$ is defined by
$g_{mn}(x, y) = a^{-mn} g(x', y'), \quad a > 1, \ m, n \in \mathbb{Z}$
where $\sigma$ is the size of the Gaussian window. A self-similar filter bank can be obtained from the function $g_{mn}(x, y)$ by appropriately dilating and rotating $g(x, y)$. Based on the above functions, the wavelet features of an image $I(x, y)$ can be defined as
The two-dimensional wavelet extraction algorithm of the face image comprises the following implementation steps:
(1) A wavelet representation of the face is obtained through wavelet analysis, and the corresponding features in the original image $I(x, y)$ are converted into a wavelet feature vector $F$ ($F \in \mathbb{R}^m$).
(2) Using a fractional power polynomial (FPP) kernel model $k(x, y) = (x \cdot y)^d$ ($0 < d < 1$), the $m$-dimensional wavelet feature space $\mathbb{R}^m$ is projected into a higher $n$-dimensional space $\mathbb{R}^n$.
(3) Based on the kernel Fisher discriminant analysis (KFDA) algorithm, the between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$ are constructed in the space $\mathbb{R}^n$, and the orthonormal eigenvectors $\alpha_1, \alpha_2, \ldots, \alpha_n$ of $S_w$ are computed.
(4) The salient discriminating feature vectors of the face image are extracted. Let $P_1 = (\alpha_1, \alpha_2, \ldots, \alpha_q)$, where $\alpha_1, \alpha_2, \ldots, \alpha_q$ are the $q$ eigenvectors of $S_w$ with positive eigenvalues and $q = \operatorname{rank}(S_w)$. The eigenvectors $\beta_1, \beta_2, \ldots, \beta_L$ ($L \le c - 1$) corresponding to the $L$ largest eigenvalues are computed, where $c$ is the number of face classes. The salient feature vector is $f_{\mathrm{regular}} = B^T P_1^T y$, where $y \in \mathbb{R}^n$ and $B = (\beta_1, \beta_2, \ldots, \beta_L)$.
(5) The non-salient discriminating feature vectors of the face image are extracted. The eigenvectors $\gamma_1, \gamma_2, \ldots, \gamma_L$ ($L \le c - 1$) corresponding to the largest eigenvalues are computed. Let $P_2 = (\alpha_{q+1}, \alpha_{q+2}, \ldots, \alpha_m)$; the non-salient discriminating feature vector is obtained analogously.
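Step (2) of the list above relies on a fractional power polynomial kernel; a minimal sketch follows, in which keeping the sign of the inner product (so that negative inner products remain defined under a fractional power) is an assumption of this sketch:

```python
import numpy as np

def fpp_kernel(x, y, d=0.5):
    # Fractional power polynomial kernel k(x, y) = (x . y)^d, 0 < d < 1.
    # The sign of the inner product is preserved so the fractional power
    # is well defined for negative values (an assumption of this sketch).
    s = float(np.dot(x, y))
    return float(np.sign(s) * abs(s) ** d)
```

Evaluating this kernel over all pairs of wavelet feature vectors yields the Gram matrix on which the KFDA step operates.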
The steps included in the 3D face recognition stage are as follows:
(1) The frontal face is detected, and the key face feature points in the frontal face image are located, such as the contour feature points of the face, the left and right eyes, the mouth and the nose.
(2) A three-dimensional face model is reconstructed from the extracted two-dimensional Gabor feature vectors and a common 3D face database. To reconstruct the three-dimensional face model, a 3D face database containing 100 detected face images is used; each face model in the database has approximately 70000 vertices. A feature transformation matrix P is determined: in the original three-dimensional face recognition method this matrix is usually the subspace-analysis projection matrix obtained by a subspace analysis method, composed of the eigenvectors of the sample covariance matrix corresponding to the first m largest eigenvalues. The extracted wavelet discriminating feature vectors, corresponding to the eigenvectors of the m largest eigenvalues, form a main feature transformation matrix P', which is more robust to factors such as illumination, pose and expression than the original feature matrix P; that is, the features it represents are more accurate and stable.
(3) The newly generated face model is processed by template matching and Fisher linear discriminant analysis (FLDA); the intra-class and inter-class differences of the model are extracted to further optimize the final recognition result.
S13: identifying the identity information of the individual to the face 3D feature information corresponding to that individual to obtain personal information, and storing the personal information to form a face 3D feature identity information base.
The personal information of this embodiment includes the personal identity information and the face 3D feature information. After the identity information is identified to the face 3D feature information, when face recognition is performed later and a person with matching face 3D feature information is recognized, the identity information corresponding to that face 3D feature information can be obtained. In some embodiments, the personal information includes the identity information, the face 3D feature information and a corresponding RGBD atlas, as shown in fig. 2, which is a schematic diagram of a face 3D feature identity information library provided in an embodiment of the present invention.
Different from the prior art, the invention obtains the face 3D feature information from the face RGBD atlas, then identifies the personal identity information to the corresponding face 3D feature information and stores them together to form a face 3D feature identity information base for face recognition. Because the face 3D feature information includes both color information and depth information, a face skeleton can be established; the face information in the 3D information gallery is therefore more comprehensive and can be recognized more accurately. Moreover, because the stored information is 3D, changes of non-geometric appearance such as face pose, expression, illumination and makeup, as well as changes such as facial weight gain or loss, do not affect the face recognition.
In one embodiment, the identity information is subjected to hierarchical classification management by the face 3D feature identity information base, where the hierarchies include a personal attribute hierarchy and a group attribute hierarchy.
Referring to fig. 3, fig. 3 is a schematic diagram of identity information hierarchical classification management of a face 3D feature identity information base according to an embodiment of the present invention.
For example, the personal attribute hierarchy includes a collection of unique information such as a person's name, gender, age and identification number. The group attribute hierarchy includes non-unique, group-level information such as employees of the same company or of the same office building.
As shown in FIG. 3, person A and person B work at the same company "one", person E works at company "three", and companies "one" and "three" are in the same office building "1". Person C and person D work at the same company "two", and company "two" is in office building "2".
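The two-level organisation of FIG. 3 can be sketched as a simple record layout; all field names and values here are illustrative, not prescribed by the patent:

```python
# Hypothetical record layout for hierarchical identity management:
# each person carries a personal-attribute hierarchy (unique) and a
# group-attribute hierarchy (shared, non-unique).
people = {
    "A": {"personal": {"name": "A", "id_number": "0001"},
          "group": {"company": "one", "building": "1"}},
    "E": {"personal": {"name": "E", "id_number": "0005"},
          "group": {"company": "three", "building": "1"}},
    "C": {"personal": {"name": "C", "id_number": "0003"},
          "group": {"company": "two", "building": "2"}},
}

def in_building(person_key, building):
    # Group-attribute-level query: answered without touching the
    # personal-attribute hierarchy at all.
    return people[person_key]["group"]["building"] == building
```

An access control system only needs the group-level query, whereas a payment system would consult the personal hierarchy.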
Referring to fig. 4, fig. 4 is a schematic diagram of a hierarchical classification management of individual human face 3D feature information of a human face 3D feature identity information base according to an embodiment of the present invention.
In other embodiments, the face 3D feature information can also be hierarchically classified and managed: for example, the 3D feature information of the five sense organs (eyes, nose, mouth and the like) as one hierarchy; the 3D feature information of the cheeks, chin and the like as another; and the 3D feature information of the face shape or the entire head shape as yet another.
It can be understood that the hierarchical division of the identity information and the hierarchical division of the face 3D feature information are only one hierarchical division manner in the present embodiment, and other division manners may be available in other embodiments.
The hierarchical classification management can make the face recognition more convenient and faster.
For example, when person A uses a payment system, it must be accurately confirmed whether the user is person A himself. In this case, face recognition requires comprehensive face 3D feature information of person A, for example the face 3D feature information of the facial features, cheeks, chin, eyes, nose and mouth, so that the personal-attribute information of the user can be determined accurately, thereby accurately recognizing whether the user is person A himself.
For another example, the access control system of office building "1" only needs to determine whether an identified person has the group attribute of working in office building "1"; it does not need to determine personal attributes such as the person's name or age. Therefore, the face RGBD image acquired when someone passes the access control system need not be fine-grained: perhaps only an RGBD image of person A's side face is acquired; only an RGBD image of the upper half of person B's face is acquired when person B passes with head lowered; or person E walks quickly, so that the system only acquires an RGBD image in which person E's five sense organs are blurred. In such cases, group-attribute-level identity information in the face 3D feature identity information base is invoked for identification. For instance, the access control system acquires the 3D feature information of person A's side face and nose bridge from the side-face RGBD image; it cannot determine who person A is, but according to the information stored in the face 3D feature identity information base, a worker in office building "1" has 3D feature information with the same side face and nose bridge, so person A is allowed to enter office building "1". Likewise, the system acquires the 3D feature information of the upper half of person B's face from the corresponding RGBD image; according to the stored information, a worker in office building "1" has the same upper-half-face 3D feature information, and although it cannot be certain that this worker is person B, person B can be allowed to enter office building "1".
Similarly, the access control system acquires the 3D feature information of person E's face from the RGBD image with blurred five sense organs and determines from the stored information that a worker in office building "1" has 3D feature information of the same face; although it cannot determine who person E is, person E is allowed to enter office building "1". If, however, the system acquires a blurred RGBD image of person C's face, obtains the 3D feature information from it, and finds from the face 3D feature identity information base that no worker in office building "1" has similar face 3D feature information, the system does not yet allow the identified person to enter office building "1" and must further acquire face RGBD images to obtain more face 3D feature information for judgment.
Therefore, after the face 3D feature identity information base is hierarchically classified and managed, identity information at the required hierarchy can be called according to different needs without unnecessary work. For example, the access control system of an office building does not need to identify specific personal-attribute identity information; this saves resources and time and makes face recognition more convenient and faster.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for establishing a face 3D feature identity information base according to another embodiment of the present invention.
S21: an RGBD atlas of faces of an individual is acquired, where the identity information of the individual is known.
S22: acquiring the face 3D feature information of the individual through the face RGBD atlas.
S23: identifying the identity information of the individual to the corresponding face 3D feature information to obtain personal information, and storing the personal information to form a face 3D feature identity information base.
S24: performing face recognition training on the face 3D feature identity information base.
The difference between this embodiment and the above embodiment is that step S24 is added: performing face recognition training on the face 3D feature identity information base improves the richness of its information resources and thus the accuracy of face recognition.
As shown in fig. 6, fig. 6 is a schematic flowchart of step S24 in fig. 5. Specifically, step S24 includes:
S241: collecting a face RGBD atlas of a test person whose identity information is known.
The test persons include individuals whose personal information has been stored in the face 3D feature identity information base and individuals whose personal information has not been stored there.
That the identity information of the test person is known may mean that it is partially or completely known: for example, the personal-attribute-level identity information may be completely or partially known while the group-attribute-level identity information is unknown; the group-attribute-level identity information may be partially or completely known while the personal-attribute-level identity information is unknown; or both the personal-attribute-level and the group-attribute-level identity information may be completely known.
S242: acquiring the face 3D feature information of the test person from the test person's face RGBD atlas.
The method for acquiring the 3D feature information of the human face of the test person in step S242 is the same as the method in step S12 in the above embodiment, and is not described herein again.
S243: comparing the acquired face 3D feature information of the test person with the face 3D feature information in the face 3D feature identity information base.
For example, the face 3D feature information of the test person is compared with that in the face 3D feature identity information base, and the similarity between the test person's face 3D feature information and that of person X in the base reaches a predetermined threshold; the test person is then determined to be person X, who is stored in the base. If the predetermined threshold is not reached, it is determined that the test person is not stored in the base.
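The threshold comparison can be sketched as follows; cosine similarity is one possible measure (the patent does not fix a specific one), and the threshold value is illustrative:

```python
import numpy as np

def similarity(f1, f2):
    # Cosine similarity between two face 3D feature vectors; one simple
    # choice of similarity measure among many.
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def identify(query, database, threshold=0.9):
    # Return the stored person whose features best match `query` if the
    # similarity reaches the predetermined threshold; otherwise None,
    # i.e. the person is judged not to be stored in the base.
    best, best_sim = None, -1.0
    for name, feats in database.items():
        s = similarity(query, feats)
        if s > best_sim:
            best, best_sim = name, s
    return best if best_sim >= threshold else None
```

A `None` result corresponds to the "not stored in the base" branch of the comparison.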
For example, when the test person's personal information is stored in the face 3D feature identity information base: if the comparison result is that the test person corresponds to personal information stored in the base, the comparison result is correct, and the process goes to step S244; if the comparison result is that the test person's personal information is not stored in the base, the comparison result is wrong, so the information stored for that person in the base needs to be corrected and the personal information further enriched.
When the test person's personal information is not stored in the face 3D feature identity information base: if the comparison result is that no information on the test person exists in the base, the comparison result is correct, and step S244 is performed to collect the test person's personal information; if the comparison result is that the test person matches a person in the base, the comparison result is wrong, so that person's information in the base needs to be corrected and further enriched, and at the same time the test person's personal information is also stored in the base, improving the richness of its information resources.
S244: storing the face RGBD atlas of the test person, the corresponding face 3D feature information and the identity information into the face 3D feature identity information base.
The RGBD atlases, face 3D feature information and identity information of the test persons collected in the recognition training are stored into the corresponding personal information in the face 3D feature identity information base, so that the information resources of the base become richer, which benefits the accuracy of later face recognition.
For example, in a preliminarily established face 3D feature identity information base, partial identity information of 500 persons is manually identified to the face RGBD pictures and face 3D feature information corresponding to those 500 persons and stored in the base. In the recognition training process, RGBD atlases of 5000, 50000 or even more persons are collected for recognition training; at least part of the personal identity information is identified on each test person's RGBD atlas, and a large amount of test persons' personal information is stored. For persons already in the face 3D feature identity information base, their RGBD atlases, face 3D feature information and identity information are continuously supplemented.
As shown in fig. 7, fig. 7 is a schematic diagram of a face 3D feature identity information base provided in an embodiment of the present invention during recognition training. The RGBD atlas of person G, the face 3D feature information acquired from it, and identity information about personal attributes are originally stored in the face 3D feature identity information base. During recognition training, the acquired face RGBD atlas of person G may include RGBD images from more angles, from which more face 3D feature information can be acquired, and group-attribute identity information such as person G's work unit and the building where it is located is identified in person G's face RGBD atlas. Because the test person is identified during training as the same person as the person G originally stored in the base, the face RGBD atlas, the face 3D feature information and the identified group-attribute identity information acquired during the recognition training are all stored into person G's personal information in the base, making that personal information richer.
For another example, when the face 3D feature identity information base does not store any personal information of person H, a face RGBD atlas of person H is acquired during the recognition training, the face 3D feature information is acquired from it, and at least part of person H's identity information is identified in the RGBD atlas, as shown in fig. 8, which is a schematic diagram of another face 3D feature identity information base provided in an embodiment of the present invention during recognition training. In the recognition training process, the comparison result indicates that person H is not stored in the base; therefore the personal information of person H, including the face RGBD atlas, the face 3D feature information and the identity information, is stored in the base, and a file for person H is established there.
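The two training outcomes above (enriching person G's existing record, or creating a new file for person H) amount to update-or-insert logic, sketched here with an illustrative record layout (the keys "rgbd", "features", "identity" are assumptions of this sketch):

```python
def training_update(db, person_key, rgbd_images, features, identity):
    # Update-or-insert during recognition training: enrich an existing
    # person's record, or open a new file for an unknown person.
    if person_key in db:
        rec = db[person_key]
        rec["rgbd"].extend(rgbd_images)        # more viewing angles
        rec["features"].extend(features)       # more face 3D features
        rec["identity"].update(identity)       # e.g. new group attributes
    else:
        db[person_key] = {"rgbd": list(rgbd_images),
                          "features": list(features),
                          "identity": dict(identity)}
    return db
```

In a real system `person_key` would come from the feature comparison of step S243 rather than being given directly.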
Referring to fig. 9, fig. 9 is a flowchart illustrating a method for establishing a face 3D feature identity information base according to another embodiment of the present invention.
S31: an RGBD atlas and an RGB atlas of a person's face are acquired, where the identity information of the person is known.
S32: acquiring the face 3D feature information of the individual through the face RGBD atlas, and acquiring the face 2D feature information of the individual through the face RGB atlas.
S33: identifying the identity information of the individual to the corresponding face 3D feature information and face 2D feature information to obtain personal information, and storing the personal information to form a face 3D feature identity information base.
The difference between this embodiment and the above embodiments is that a face RGB atlas is acquired together with the face RGBD atlas, so that not only can a face skeleton be established, but face texture information, skin color information and the like can also be acquired.
Specifically, this embodiment may apply to the following case: when both the personal-attribute and the group-attribute identity information of an identified person must be acquired, the acquired face 3D feature information may suffice only to identify the group-attribute identity information but not the personal-attribute identity information. In that case the face 2D feature information must be combined: the face skeleton, face skin color, texture information and the like are identified through the face 2D feature information and the face 3D feature information together to obtain the personal-attribute identity information of the identified person.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an apparatus for creating a face 3D feature identity information base according to an embodiment of the present invention.
The device for establishing the face 3D feature identity information base of this embodiment includes a first acquisition module 10, a first information acquisition module 11, and an information base module 12.
In particular, the first acquisition module 10 is configured to acquire an RGBD atlas of a face of an individual, where identity information of the individual is known.
The first information acquisition module 11 is connected to the first acquisition module 10, and is configured to acquire face 3D feature information of the person through the face RGBD atlas.
The information base module 12 comprises a storage module 120, and the storage module 120 is connected to the first acquisition module 10 and the first information acquisition module 11, and is configured to identify the identity information of the individual to face 3D feature information corresponding to the individual to obtain personal information, and store the personal information to form a face 3D feature identity information base.
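A minimal object-level sketch of how these three modules could be wired together; the class and method names are assumptions made for illustration, and the real modules would wrap a depth camera and a 3D feature extractor:

```python
class FirstAcquisitionModule:
    def acquire(self, person_id):
        # stand-in for capturing a face RGBD atlas of the person
        return [{"person": person_id, "frame": i} for i in range(3)]

class FirstInformationAcquisitionModule:
    def extract(self, atlas):
        # stand-in for deriving face 3D feature information from the atlas
        return {"frames_used": len(atlas)}

class StorageModule:
    def __init__(self):
        self.base = {}                      # the identity information base
    def store(self, person_id, identity, features):
        self.base[person_id] = {"identity": identity, "features": features}

class EstablishingDevice:
    """Composes the modules as in fig. 10: acquire -> extract -> store."""
    def __init__(self):
        self.acquisition = FirstAcquisitionModule()
        self.information = FirstInformationAcquisitionModule()
        self.storage = StorageModule()

    def enroll(self, person_id, identity):
        atlas = self.acquisition.acquire(person_id)
        features = self.information.extract(atlas)
        self.storage.store(person_id, identity, features)
        return self.storage.base

device = EstablishingDevice()
info_base = device.enroll("P1", {"name": "P1"})
```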
Referring to fig. 11, fig. 11 is a schematic structural diagram of another apparatus for establishing a face 3D feature identity information base according to an embodiment of the present invention.
The device for establishing the face 3D feature identity information base of this embodiment includes a first acquisition module 20, a first information acquisition module 21, an information base module 22, and a training module 23.
Specifically, the first acquisition module 20 is configured to acquire an RGBD atlas of a face of an individual, where identity information of the individual is known.
The first information acquisition module 21 is connected to the first acquisition module 20, and is configured to acquire the face 3D feature information of the person through the face RGBD atlas.
The first information acquisition module 21 includes a third acquisition module 210, a mesh establishing module 211, a calculation module 212, and an analysis module 213.
The third acquisition module 210 is connected to the first acquisition module 20, and is configured to acquire feature points of the face from the RGBD face image.
The mesh establishing module 211 is connected to the third acquisition module 210, and is configured to establish a face color 3D mesh according to the feature points.
The calculation module 212 is connected to the mesh establishing module 211, and is configured to measure feature values of the feature points according to the face color 3D mesh and calculate a connection relationship between the feature points.
The analysis module 213 is connected to the calculation module 212, and is configured to analyze the feature values and the connection relationship to obtain the 3D feature information of the human face.
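The four-step pipeline (feature points → color 3D mesh → feature values and connection relations → analysis) can be illustrated with a toy sketch. Here the "mesh" is simply the fully connected graph over the feature points, edge lengths serve as feature values, and the analysis step sorts them into an order-independent vector — an assumption for illustration, not the patent's algorithm:

```python
from itertools import combinations

def build_mesh(points):
    """Connection relations: every pair of feature points becomes an edge.
    Feature values: the 3D length of each edge."""
    edges = {}
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        edges[(i, j)] = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return edges

def analyze(edges):
    # sort the feature values so the vector does not depend on point order
    return sorted(edges.values())

# three toy feature points of a face in 3D space
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
feature_vector = analyze(build_mesh(points))
```

A production pipeline would use dozens of landmark points and richer per-edge measurements, but the data flow from points to mesh to feature vector is the same.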
The information base module 22 includes a storage module 220 and a management module 221.
The storage module 220 is connected to the first acquisition module 20 and the analysis module 213, and is configured to identify the identity information of the individual to the face 3D feature information corresponding to the individual to obtain personal information, and store the personal information to form a face 3D feature identity information base.
The management module 221 is connected to the storage module 220, and the management module 221 is configured to perform hierarchical classification management on the identity information. Wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
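This two-level organisation could be modelled as nested records, sketched below; the field names (`id_number`, `age_range`, and so on) are hypothetical:

```python
identity_base = {
    "H": {
        "personal": {"name": "H", "id_number": "0001"},       # personal attributes
        "group": {"gender": "female", "age_range": "20-30"},  # group attributes
    }
}

def query(base, person_id, level):
    """Return only the requested hierarchy level of a person's identity."""
    return base.get(person_id, {}).get(level, {})

group_info = query(identity_base, "H", "group")
personal_info = query(identity_base, "H", "personal")
```

Splitting the levels this way lets the management module answer a group-attribute query (e.g. for statistics) without exposing the personal-attribute record.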
The training module 23 is connected to the first acquisition module 20, the first information acquisition module 21, and the information base module 22, and is configured to perform face recognition training on a face 3D feature identity information base.
Specifically, the training module 23 includes a control module 230 and a comparison module 231.
The control module 230 is configured to control the first acquisition module 20 to acquire an RGBD atlas of a face of a test person with known identity information, and control the first information acquisition module 21 to acquire 3D feature information of the face of the test person from the RGBD atlas of the face of the test person. The test persons comprise persons with personal information stored in the face 3D characteristic identity information base and persons without personal information stored in the face 3D characteristic identity information base.
The comparing module 231 is connected to the control module 230, and is configured to compare the acquired human face 3D feature information of the test person with human face 3D feature information in the human face 3D feature identity information base.
The storage module 220 is further configured to store the RGBD atlas of the face of the test person, the corresponding face 3D feature information, and the identity information into the face 3D feature identity information base when the comparison result is correct.
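A compact sketch of this training loop, under the assumption that "comparison result is correct" means the nearest stored record carries the tester's known identity (names and the threshold are illustrative):

```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_step(base, tester_id, features, threshold=0.5):
    """Compare the tester's face 3D features against the base; when the match
    agrees with the known identity, append the new sample to enrich the base."""
    match = None
    for pid, record in base.items():
        if any(distance(f, features) < threshold for f in record["samples"]):
            match = pid
            break
    correct = match == tester_id
    if correct:
        base[tester_id]["samples"].append(features)
    return correct

base = {"A": {"samples": [[0.0, 0.0]]}}
ok = train_step(base, "A", [0.1, 0.0])
```

Each correct comparison grows the person's sample set, which is how repeated training passes make the information base progressively more robust.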
Referring to fig. 12, fig. 12 is a schematic structural diagram of another apparatus for creating a face 3D feature identity information base according to an embodiment of the present invention.
The difference between the apparatus for creating a human face 3D feature identity information base in this embodiment and the above embodiments is that the apparatus in this embodiment further includes a second acquisition module 24 and a second information acquisition module 25.
In particular, the second acquisition module 24 is configured to acquire a face RGB atlas of the individual.
The second information acquisition module 25 is connected to the second acquisition module 24, and is configured to acquire the face 2D feature information of the person through the face RGB atlas.
The storage module 220 is further connected to the second acquisition module 24 and the second information acquisition module 25, and is configured to store the 2D feature information of the face of the person and the 3D feature information of the face identified with the identity information in a 3D feature identity information library of the face.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an entity apparatus for establishing a face 3D feature identity information base according to an embodiment of the present invention. The apparatus of this embodiment can execute the steps in the method, and for related content, please refer to the detailed description in the method, which is not described herein again.
The entity apparatus of this embodiment is an intelligent electronic device comprising a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 is used for storing an operating system and set programs, as well as the face RGBD atlas, the face 3D feature information, and the identity information.
The processor 61 is configured to acquire a face RGBD atlas of an individual, where the identity information of the individual is known; acquire the face 3D feature information of the person through the face RGBD atlas; and identify the identity information of the individual to the face 3D feature information corresponding to the individual to obtain personal information, and store the personal information to form a face 3D feature identity information base.
The processor 61 is further configured to perform face recognition training on the face 3D feature identity information base.
The processor 61 is also used for collecting a human face RGBD atlas of a tester with known identity information; acquiring human face 3D characteristic information of the tested person from the human face RGBD atlas of the tested person; comparing the acquired human face 3D characteristic information of the test person with human face 3D characteristic information in the human face 3D characteristic identity information base; and if the comparison result is correct, storing the face RGBD atlas of the test person, the corresponding face 3D characteristic information and the identity information into the face 3D characteristic identity information base.
The processor 61 is also used for collecting the face RGB atlas of the person; acquiring face 2D characteristic information of the person through the face RGB atlas; and identifying the identity information of the individual to the face 3D feature information and the face 2D feature information corresponding to the individual to obtain individual information, and storing the individual information to form a face 3D feature identity information base.
The processor 61 is further configured to collect feature points of a human face through the RGBD human face image; establishing a face color 3D grid according to the feature points; measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points; and analyzing the characteristic values and the connection relation to acquire the 3D characteristic information of the human face.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be substantially or partially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, the present invention obtains face 3D feature information from a face RGBD atlas, and then identifies and stores the individual's identity information with the corresponding face 3D feature information to form a face 3D feature identity information base for face recognition. This improves the accuracy of face recognition and makes it less susceptible to changes in non-geometric appearance such as pose, expression, illumination, and facial makeup, as well as to changes such as facial weight gain or loss.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (16)

1. A method for establishing a face 3D characteristic identity information base is characterized by comprising the following steps:
collecting an individual face RGBD atlas, wherein the identity information of the individual is known;
acquiring human face 3D characteristic information of the person through the human face RGBD atlas;
and identifying the identity information of the individual to the face 3D feature information corresponding to the individual to obtain individual information, and storing the individual information to form a face 3D feature identity information base.
2. The method of claim 1, wherein the identity information is hierarchically categorized and managed by the face 3D feature identity information base.
3. The method of claim 2, wherein the hierarchy includes a personal attribute hierarchy and a group attribute hierarchy.
4. The method of claim 1, wherein after the steps of identifying the identity information of the individual to the face 3D feature information corresponding to the individual to obtain individual information and storing the individual information to form a face 3D feature identity information base, further comprising:
and carrying out face recognition training on the face 3D characteristic identity information base.
5. The method of claim 4, wherein the step of performing face recognition training on the face 3D feature identity information base comprises:
collecting a human face RGBD (red, green, blue, and depth) atlas of a test person with known identity information;
acquiring human face 3D characteristic information of the tested person from the human face RGBD atlas of the tested person;
comparing the acquired human face 3D characteristic information of the test person with human face 3D characteristic information in the human face 3D characteristic identity information base;
and if the comparison result is correct, storing the face RGBD atlas of the test person, the corresponding face 3D characteristic information and the identity information into the face 3D characteristic identity information base.
6. The method of claim 5, wherein the test person comprises a person who has stored personal information in the face 3D feature identity information base and a person who has not stored personal information in the face 3D feature identity information base.
7. The method of claim 1, wherein the step of acquiring an RGBD atlas of a person's face further comprises: collecting the face RGB atlas of the person;
the step of obtaining the 3D feature information of the individual face through the RGBD atlas further includes: acquiring face 2D characteristic information of the person through the face RGB atlas;
the step of identifying the identity information of the individual to the face 3D feature information corresponding to the individual to obtain individual information and storing the individual information to form a face 3D feature identity information base further includes: and identifying the identity information of the individual to the face 3D feature information and the face 2D feature information corresponding to the individual to obtain individual information, and storing the individual information to form a face 3D feature identity information base.
8. The method of claim 1, wherein the step of obtaining the 3D feature information of the human face through the RGBD atlas of the human face comprises:
collecting characteristic points of a human face through the RGBD human face image;
establishing a face color 3D grid according to the feature points;
measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation between the characteristic points;
and analyzing the characteristic values and the connection relation to acquire the 3D characteristic information of the human face.
9. An apparatus for creating a 3D characteristic identity information base of a human face, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a human face RGBD (red, green and blue) atlas of an individual, and the identity information of the individual is known;
the first information acquisition module is connected with the first acquisition module and used for acquiring the human face 3D characteristic information of the person through the human face RGBD atlas;
and the information base module comprises a storage module, and the storage module is connected with the first acquisition module and the first information acquisition module and is used for identifying the identity information of the individual to the face 3D feature information corresponding to the individual to acquire personal information and storing the personal information to form a face 3D feature identity information base.
10. The device according to claim 9, wherein the information base module further comprises a management module, the management module is connected with the storage module, and the management module is configured to perform hierarchical classification management on the identity information.
11. The apparatus of claim 10, wherein the hierarchy comprises a personal attribute hierarchy and a group attribute hierarchy.
12. The device according to claim 11, wherein the device for establishing the face 3D feature identity information base further comprises a training module, and the training module is connected to the first acquisition module, the first information acquisition module, and the information base module, and is configured to perform face recognition training on the face 3D feature identity information base.
13. The apparatus of claim 12, wherein the training module comprises a control module and a comparison module;
the control module is used for controlling the first acquisition module to acquire a human face RGBD atlas of a tester with known identity information, and controlling the first information acquisition module to acquire human face 3D characteristic information of the tester from the human face RGBD atlas of the tester;
the comparison module is connected with the control module and is used for comparing the acquired human face 3D characteristic information of the test person with human face 3D characteristic information in the human face 3D characteristic identity information base;
the control module is further connected with the storage module and is used for controlling the storage module to store the face RGBD atlas of the test person, the corresponding face 3D feature information and the identity information into the face 3D feature identity information base when the comparison result is correct.
14. The apparatus of claim 13, wherein the test person comprises a person who has stored personal information in the face 3D feature identity information base and a person who has not stored personal information in the face 3D feature identity information base.
15. The apparatus of claim 9, further comprising:
the second acquisition module is used for acquiring the face RGB atlas of the person;
the second information acquisition module is connected with the second acquisition module and used for acquiring the 2D characteristic information of the face of the person through the face RGB atlas;
the storage module is further connected with the second acquisition module and the second information acquisition module, and is used for storing the 2D feature information of the face of the person and the 3D feature information of the face identified with the identity information into the 3D feature identity information base of the face.
16. The apparatus of claim 9, wherein the first information obtaining module further comprises:
the third acquisition module is connected with the first acquisition module and is used for acquiring the characteristic points of the human face through the RGBD human face image;
the grid establishing module is connected with the third acquisition module and used for establishing a face color 3D grid according to the feature points;
the calculation module is connected with the grid establishment module and used for measuring the characteristic values of the characteristic points according to the face color 3D grid and calculating the connection relation among the characteristic points;
and the analysis module is connected with the calculation module and used for analyzing the characteristic values and the connection relation to acquire the 3D characteristic information of the human face.
CN201611032737.7A 2016-11-14 2016-11-14 The method for building up and equipment of face 3D characteristic identity information banks Pending CN106778489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611032737.7A CN106778489A (en) 2016-11-14 2016-11-14 The method for building up and equipment of face 3D characteristic identity information banks

Publications (1)

Publication Number Publication Date
CN106778489A true CN106778489A (en) 2017-05-31

Family

ID=58970639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611032737.7A Pending CN106778489A (en) 2016-11-14 2016-11-14 The method for building up and equipment of face 3D characteristic identity information banks

Country Status (1)

Country Link
CN (1) CN106778489A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399247A (en) * 2018-03-01 2018-08-14 深圳羚羊极速科技有限公司 A kind of generation method of virtual identity mark
CN108416312A (en) * 2018-03-14 2018-08-17 天目爱视(北京)科技有限公司 A kind of biological characteristic 3D data identification methods and system taken pictures based on visible light
CN109214339A (en) * 2018-09-07 2019-01-15 北京相貌空间科技有限公司 Face shape of face, the calculation method of face's plastic operation and computing device
CN110399763A (en) * 2018-04-24 2019-11-01 深圳奥比中光科技有限公司 Face identification method and system
CN110533426A (en) * 2019-08-02 2019-12-03 深圳蚂里奥技术有限公司 A kind of method of payment and system
CN111105881A (en) * 2019-12-26 2020-05-05 昆山杜克大学 Database system for 3D measurement of human phenotype
CN112990101A (en) * 2021-04-14 2021-06-18 深圳市罗湖医院集团 Facial organ positioning method based on machine vision and related equipment
CN113254491A (en) * 2021-06-01 2021-08-13 平安科技(深圳)有限公司 Information recommendation method and device, computer equipment and storage medium
CN113656422A (en) * 2021-08-17 2021-11-16 北京百度网讯科技有限公司 Method and device for updating human face base
CN113869092A (en) * 2020-06-30 2021-12-31 广州慧睿思通科技股份有限公司 Method and device for recognizing face image and readable storage medium
CN118522061A (en) * 2024-07-24 2024-08-20 支付宝(杭州)信息技术有限公司 Face recognition control method, effect monitoring method thereof, related device and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221672A (en) * 2008-01-30 2008-07-16 北京中星微电子有限公司 Automatic registration method and system based on network
CN102004908A (en) * 2010-11-30 2011-04-06 汉王科技股份有限公司 Self-adapting face identification method and device
CN102164113A (en) * 2010-02-22 2011-08-24 深圳市联通万达科技有限公司 Face recognition login method and system
CN103871106A (en) * 2012-12-14 2014-06-18 韩国电子通信研究院 Method of fitting virtual item using human body model and system for providing fitting service of virtual item
CN104573634A (en) * 2014-12-16 2015-04-29 苏州福丰科技有限公司 Three-dimensional face recognition method
CN105022982A (en) * 2014-04-22 2015-11-04 北京邮电大学 Hand motion identifying method and apparatus
CN105046219A (en) * 2015-07-12 2015-11-11 上海微桥电子科技有限公司 Face identification system
CN105184280A (en) * 2015-10-10 2015-12-23 东方网力科技股份有限公司 Human body identity identification method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAHRAM IZADI等: "KinectFusion: Real-time 3D Reconstruction and Interaction", 《2011 ACM》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531