CN102637251A - Face recognition method based on reference features - Google Patents
- Publication number: CN102637251A
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method based on reference features. The method comprises the following steps: scale-invariant features and local binary pattern features are extracted from the face image to be recognized; principal component analysis is used for dimensionality reduction to obtain the image feature of the face image to be recognized; the similarity of the image feature to each cluster center is computed to obtain the reference feature of the face image to be recognized; and the similarity between this reference feature and the reference features of the training data set is computed to obtain the recognition result. Because the reference feature of a face image combines the texture information and the structure information of the face image, the method characterizes a face more comprehensively than prior-art methods that represent only the texture or only the structure of the face. The feature extraction process is simple and easy to implement, the recognition accuracy is high, and a high recognition rate is achieved for different facial poses of the same person.
Description
Technical field
The invention belongs to the field of computer vision and relates to a face recognition method based on reference features.
Background technology
As public safety draws ever greater attention from society, research on face recognition technology has received great attention from academia, industry and government. Face recognition is an emerging technology: research on it began in the late 1960s, only some fifty years ago. Face recognition truly gained attention with the appearance of commercial face recognition systems in the late 1990s, which opened a new era of face recognition research and application. In China, research on face recognition started relatively late and commercial face recognition technology is not yet mature; however, owing to the social impact of several major public safety incidents in recent years, face recognition has successively gained the attention of government agencies, industry and academia, and its research has developed at an unprecedented pace. The use of face recognition at the 2008 Beijing Olympic Games and the 2011 Shenzhen Universiade achieved good public security results and a positive social effect; more and more companies and research institutes have added face recognition to their research agendas, and it has become a new strategic focus of major companies and research institutes.
Compared with traditional biometric technologies, face recognition has great advantages. The first is its naturalness: this recognition method relies on the same biological characteristic that humans (and even other species) use to identify individuals, since humans also distinguish and confirm identity by observing and comparing faces; fingerprint recognition, iris recognition and the like are not natural in this sense, because neither humans nor other species distinguish individuals by such features. The second is its imperceptibility, which is also very important for a recognition method: a method that is hard to notice is unobtrusive and not easily deceived. Face recognition has this property, since it can acquire face image information entirely with visible light; it differs from fingerprint recognition, which must collect fingerprints with an electronic pressure sensor, and from iris recognition, which must capture the iris image at close range. Such special acquisition procedures are easily noticed and therefore more likely to be fooled by disguise. This property makes face recognition especially suitable for tracking fugitives. The third is that it is contactless: face recognition can obtain the face image of the subject without touching the subject, which distinguishes it from traditional biometric technologies and makes it convenient.
The scale-invariant feature is a feature describing local image information. It is invariant to scale, translation and rotation, and is therefore widely used for local image description. Extracting a scale-invariant feature comprises three steps: determining the position and scale of a key point, determining the principal gradient direction of the key point's neighborhood, and constructing the descriptor. The local binary pattern feature is a feature describing image texture information. Its principle is to represent a pixel by a binary string generated from the pixel and its neighboring pixels: taking the pixel as the center, a circle of fixed radius yields a set of neighboring pixels; starting from a chosen pixel, each pixel of the neighborhood set is compared in turn with the center pixel, and the corresponding position of the binary string is set to 1 for a pixel greater than the center and to 0 for a pixel smaller than the center; the binary string so obtained is the local binary pattern feature of that pixel. The message passing model clustering method is a widely used clustering method that analyzes unorganized input data through a message passing model and finally yields cluster centers.
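As a concrete illustration of the binary-string construction just described, the sketch below (our own; the 3×3 patch values are hypothetical) computes the local binary pattern code of one pixel, assuming the common 8-neighbour variant with neighbours read clockwise starting from the pixel directly above and a "greater than or equal" comparison:

```python
import numpy as np

def lbp_code(patch):
    """Local binary pattern of the centre pixel of a 3x3 patch.

    Neighbours are read clockwise starting from the pixel directly
    above the centre; each neighbour >= centre contributes a 1 bit,
    otherwise a 0 bit.  The 8-bit string is returned as a decimal.
    """
    c = patch[1, 1]
    # clockwise order starting straight above the centre
    coords = [(0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
    bits = ''.join('1' if patch[r, cc] >= c else '0' for r, cc in coords)
    return int(bits, 2)  # decimal value in 0..255

patch = np.array([[10, 200, 30],
                  [40,  50, 60],
                  [70,  80, 90]])
print(lbp_code(patch))  # binary string '10111100' -> 188
```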
Summary of the invention
The object of the present invention is to provide a face recognition method based on reference features whose feature extraction process is simple and whose recognition accuracy is high.
A face recognition method based on reference features provided by the invention is characterized in that the method comprises the following steps:
(1) Obtain the image feature of the face image:
For the face image to be recognized, first extract the scale-invariant features and the local binary pattern features of the face image, then reduce their dimensionality by principal component analysis to obtain the image feature of the face image to be recognized;
(2) Obtain the reference feature of the face image:
Using the image feature thus obtained, compute the similarity of the image feature to each cluster center to obtain the reference feature of the face image to be recognized;
(3) Decision analysis
Analyze the reference feature of the face image to be recognized against the reference features of the training data set with linear discriminant classifiers to obtain the recognition result.
As an improvement of the above scheme, the detailed process of obtaining the reference feature of the face image in step (2) is:
(2.1) Compute the similarity of the image feature of the face to be recognized to each cluster center of the training data set.
Denote the image feature of the face image to be recognized by Y, and the set of cluster centers of the training data set by C, C = {C_1, C_2, ..., C_N}, where N is the number of cluster centers and ranges from 150 to 250. To compute the similarity between Y and C_1, first take Y as the positive sample and C - {C_1} as negative samples, and train a linear discriminant classifier yClassifier_1:
yClassifier_1 = LDA(+: Y, -: C - {C_1})
Take C_1 as the input of the linear discriminant classifier yClassifier_1 to obtain the decision score yScore_1:
yScore_1 = yClassifier_1(C_1)
yScore_1 measures the similarity of C_1 to the image feature Y.
Then take C_1 as the positive sample and C - {C_1} as negative samples, and train a linear discriminant classifier yClassifier_2:
yClassifier_2 = LDA(+: C_1, -: C - {C_1})
Take Y as the input of the linear discriminant classifier yClassifier_2 to obtain the decision score yScore_2:
yScore_2 = yClassifier_2(Y)
yScore_2 measures the similarity of the image feature Y to C_1.
The invention therefore defines the similarity of the image feature Y to the cluster center C_1 as
S(Y, C_1) = (yScore_1 + yScore_2) / 2
Finally, compute the similarity of Y to every cluster center in the set C; the resulting similarity vector is denoted
S(Y) = [S(Y, C_1), S(Y, C_2), ..., S(Y, C_N)]
(2.2) Normalize the similarity vector S(Y); the normalized S_N(Y) is defined as the reference feature of the face image to be recognized, so that S_N(Y) is the reference feature of the face image Y to be recognized.
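The two-classifier similarity of step (2.1) and the normalization of step (2.2) can be sketched with a hand-rolled two-class Fisher discriminant. Everything below is our own illustration: the small ridge term, the L2 normalization, the toy cluster centres and all function names are assumptions, since the patent does not specify the LDA implementation or the normalization used.

```python
import numpy as np

def lda_score(pos, neg, query):
    """Decision score of a two-class Fisher discriminant: positive
    samples `pos`, negative samples `neg`; a score > 0 leans toward
    the positive class.  A small ridge keeps the scatter invertible."""
    mp, mn = pos.mean(axis=0), neg.mean(axis=0)
    Sw = (pos - mp).T @ (pos - mp) + (neg - mn).T @ (neg - mn)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(mp)), mp - mn)
    return float(w @ (query - (mp + mn) / 2))

def similarity(Y, C, i):
    """S(Y, C_i): average of the two symmetric scores of step (2.1)."""
    rest = np.delete(C, i, axis=0)              # C \ {C_i} as negatives
    s1 = lda_score(Y[None, :], rest, C[i])      # +: Y,   input C_i
    s2 = lda_score(C[i][None, :], rest, Y)      # +: C_i, input Y
    return (s1 + s2) / 2

rng = np.random.default_rng(2)
C = rng.normal(size=(10, 6))                    # toy cluster centres
Y = C[3] + 0.01 * rng.normal(size=6)            # image feature near centre 3
ref = np.array([similarity(Y, C, i) for i in range(10)])
ref /= np.linalg.norm(ref)                      # normalised reference feature
print(ref.shape)
```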
As a further improvement of the above scheme, the detailed process of obtaining the image feature of the face image in step (1) is:
(3.1) Mark 13 key points of the face image to be recognized, located at the two ends of each eyebrow, the two corners of each eye, the key region of the nose and the corners of the mouth; the marking process records the two-dimensional coordinates of these key positions in top-to-bottom, left-to-right order;
(3.2) For the face image to be recognized, extract the scale-invariant feature of each marked point;
(3.3) Extract the local binary pattern feature of the face image to be recognized;
(3.4) Reduce the dimensionality of the scale-invariant features and the local binary pattern features of the face image to be recognized separately by principal component analysis; the concatenation of the dimensionality-reduced scale-invariant features and local binary pattern features is the image feature of the face image to be recognized.
The invention discloses a face recognition method based on reference features. Given a face image from the user, the invention can recognize the face in the image, and it recognizes the person in the image on the basis of the reference feature of the face image.
For the face image to be recognized, first extract its scale-invariant features and local binary pattern features and reduce their dimensionality by principal component analysis to obtain the image feature of the face image; then compute the similarity of the image feature to each cluster center to obtain the reference feature of the face image to be recognized; finally compute the similarity of this reference feature to the reference features of the training data set to reach the recognition decision.
In summary, compared with the prior art, the present invention has the following technical effects:
1. The reference feature of a face image combines the texture information and the structure information of the face image, and thus characterizes the face more comprehensively than existing methods that reflect only the texture or only the structure of the face;
2. The feature extraction process of the method is simple and easy to implement;
3. The recognition accuracy of the method is high;
4. The method achieves a high recognition rate for different facial poses of the same person.
The face images in the training data set used by the method can be obtained as follows: first extract the scale-invariant features and local binary pattern features, then reduce their dimensionality by principal component analysis. Cluster the dimensionality-reduced image features of the training data set with the message passing model method to obtain the cluster centers. Then compute the similarity of each training face image to the cluster centers, and finally obtain the reference features of the training face images.
Description of drawings
Fig. 1 is a schematic diagram of the face recognition method based on reference features.
Fig. 2 is a schematic diagram of the key point positions of a face image.
Fig. 3 is a schematic diagram of the equal division of a face image.
Embodiment
The face recognition technology provided by the invention proposes a new feature, the reference feature of a face image, and a new way of representing a face in face recognition with this feature. First, for the face images of the training data set, extract the scale-invariant features (SIFT features) and the local binary pattern features (LBP features), reduce their dimensionality by principal component analysis, and concatenate them; then obtain the cluster centers of the training data set by message passing model clustering, and from them the reference feature of each face image; then extract the reference feature of the test face image in the same way; finally compute the similarity between the reference feature of the test face image and the reference features of the training database images, and analyze the result to draw the final conclusion. The invention recognizes the person in an image on the basis of the reference feature of the face image. The learning method of the invention differs from existing methods.
Training data set X: the training data set is a collection of face images, obtained by capturing frontal face images of each person with a high-definition camera. Unlike ordinary photographs, the captured images use a 1:1 aspect ratio to keep the face region in the image, and the shooting distance of the camera is controlled so that the size of the face region is consistent. At least two photographs are captured for each person to reduce the error caused by camera imperfections. The training data set X should preferably contain about 1000 images: an oversized data set makes the computation very costly, while a smaller data set gives less than ideal results. For an access control system, for example, capturing images of every employee suffices to form the data set X. Each image of the data set X is denoted X_i, where the subscript i is the index of the image.
The method comprises two stages: the first stage is learning from the training set, and the second stage is recognition of the test image.
(1) Learning from the training set
1. Extract image features
1.1 Mark the face images of the training data set X
The face images of the training data set are captured by a high-definition camera. For each image X_i, mark 13 key points of the face, located at the two ends of each eyebrow, the two corners of each eye, the key region of the nose and the corners of the mouth; see Fig. 2. The marking process records the two-dimensional coordinates of these key positions in top-to-bottom, left-to-right order.
1.2 Extract the scale-invariant features of the marked points
Every image in the training data set has been marked; for a face image X_i, extract the scale-invariant feature of each of its marked points.
The extraction process is explained below with a marked point P as an example; the process for the remaining points is identical. Centered at P, take the neighboring pixels to obtain a region of 16 × 16 pixels. Divide this 16 × 16 region evenly into 16 parts, obtaining cells of 4 × 4 pixels. For each 4 × 4 cell, compute the gradient magnitude and principal direction of each pixel. Centered at the pixel, divide 2π into 8 equal sectors, namely [0, π/4), ..., [7π/4, 2π), obtaining eight bisector directions. Within a 4 × 4 cell, regard the gradient G of each pixel as a vector in two-dimensional space, find the bisector nearest to it in angle, and compute the projected length of the vector onto that bisector. Summing, over the 16 gradients of the cell, the projected lengths on each of the 8 directions gives a vector v representing the sums of projected lengths in the 8 directions.
Computing the 16 cells of 4 × 4 pixels in top-to-bottom, left-to-right order yields 16 such vectors; concatenating these 16 vectors in the same order gives a 128-dimensional vector V:
V = [v_1, v_2, ..., v_16]
where v_k (k = 1 ... 16) is the vector of the k-th cell in top-to-bottom, left-to-right order.
V is the scale-invariant feature vector of the key point, i.e. the scale-invariant feature of the marked point.
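The descriptor construction of section 1.2 can be sketched as follows. This is our own simplified illustration: it votes each gradient's full magnitude into its nearest of the 8 sectors rather than projecting onto the nearest bisector as described above, and the 16 × 16 patch values are hypothetical.

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-d descriptor from a 16x16 patch: 16 cells of 4x4 pixels,
    each summarised by an 8-bin, magnitude-weighted orientation
    histogram; cells are read top-to-bottom, left-to-right."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)      # orientation in [0, 2*pi)
    bins = (ang / (np.pi / 4)).astype(int) % 8  # 8 equal sectors
    desc = []
    for i in range(0, 16, 4):                   # top-to-bottom
        for j in range(0, 16, 4):               # left-to-right
            h = np.zeros(8)
            for b, m in zip(bins[i:i+4, j:j+4].ravel(),
                            mag[i:i+4, j:j+4].ravel()):
                h[b] += m                       # magnitude-weighted vote
            desc.append(h)
    return np.concatenate(desc)                 # shape (128,)

patch = np.arange(256).reshape(16, 16)          # hypothetical patch
d = sift_like_descriptor(patch)
print(d.shape)
```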
1.3 Extract the local binary pattern feature of the face image
Each face image of the training data set is composed of pixels, with the face located at the center of the image. To compute the local binary pattern feature [H_1, H_2, ..., H_14] of the face image, first divide the face image into 14 equal parts; see Fig. 3.
Suppose the pixel size of the face image X_i is w × h, where w is the width and h is the height of the image. Halve the image along the width, i.e. [1, w/2], [w/2+1, w], and divide the image into seven equal parts along the height, i.e. [0, h/7], [h/7+1, 2h/7], ..., [6h/7+1, h]. Connecting these division points with vertical and horizontal lines divides the image into 14 parts, processed in top-to-bottom, left-to-right order. Take the first region Q_1 as an example; the remaining regions are handled similarly and are not repeated. For each pixel q in the region, denote the 8 neighboring pixels centered at q by l_1, l_2, ..., l_8, where the initial pixel is defined as the pixel directly above q and the pixels are ordered clockwise. For a neighboring pixel l_a (a = 1, ..., 8), if its value satisfies l_a ≥ q, the corresponding digit d_a of the binary string is 1; otherwise, if l_a < q, the corresponding digit d_a of the binary string is 0.
Since a binary string of length 8 corresponds to the decimal range 0–255, each pixel value can be represented by a number between 0 and 255, denoted Dec. Apply the same processing to every pixel in the region, then compute the histogram H_1 of all the pixel values of the region over 0–255:
H_1 = histogram({Dec})    (1.3-2)
That is, count the number of occurrences of each of the 256 values from 0 to 255, so Q_1 is represented as a 256-dimensional vector H_1. Process all the regions Q_1, Q_2, ..., Q_14 in the same way in top-to-bottom, left-to-right order. The matrix [H_1, H_2, ..., H_14] is the local binary pattern feature of the training image.
1.4 Extract the image feature
Every image in the training data set now has scale-invariant features and local binary pattern features, denoted [v_1, v_2, ..., v_13] and [h_1, h_2, ..., h_14] respectively. For the scale-invariant features [v_1, v_2, ..., v_13], regard each column as one group of observations and the rows as variables. The description vectors generated by the scale-invariant feature and local binary pattern methods have the defect of excessively high dimensionality, which sharply increases the computational complexity of the clustering about to be performed. At the same time, there is a certain correlation between the dimensions of a high-dimensional description vector, so that information overlaps between different dimensions of the vector, further increasing the complexity of the computation. Principal component analysis (PCA) is a mature and widely used data mining method with a dimensionality reduction effect. Use principal component analysis to reduce the scale-invariant features from 128 × 13 dimensions to 150 dimensions, denoted by the vector S:
S = princomp([v_1, v_2, ..., v_13]), dim = 150    (2-1)
Likewise, for the local binary pattern features [h_1, h_2, ..., h_14], use principal component analysis, regarding each column as one group of observations and the rows as variables, to reduce the local binary pattern features from 256 × 14 dimensions to 150 dimensions, denoted by the vector H:
H = princomp([h_1, h_2, ..., h_14]), dim = 150    (2-2)
The dimensionality reduction of the scale-invariant features and local binary pattern features is not limited to 150 dimensions; it may be between 100 and 200 dimensions.
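The reductions (2-1) and (2-2) followed by concatenation can be sketched with a plain SVD-based PCA. The toy sizes below (50 samples, 20 components per feature) are our own stand-ins for the patent's roughly 1000 images and 150 components, since the number of components cannot exceed the number of samples.

```python
import numpy as np

def pca_reduce(X, dim):
    """Project the rows of X onto its top-`dim` principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

rng = np.random.default_rng(0)
# toy stand-ins: 50 "images", SIFT part 128*13-d, LBP part 256*14-d
sift = pca_reduce(rng.normal(size=(50, 128 * 13)), 20)  # patent: 150 dims
lbp = pca_reduce(rng.normal(size=(50, 256 * 14)), 20)   # patent: 150 dims
image_feature = np.hstack([sift, lbp])                  # I = [S; H]
print(image_feature.shape)
```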
2. Extract the reference features
Every image in the training data set now has a scale-invariant feature S and a local binary pattern feature H; denote its image feature by I, given by I = [S; H].
For each image X_i of the training data set X, the extracted image feature is in essence a high-dimensional (300-dimensional) vector. Take all the image features of the training data set X as the input of the clustering algorithm, and cluster them with the message passing model method (Message Passing Model) with the number of classes equal to 200 (the number of cluster classes is not limited to 200; it may be between 150 and 250) to obtain the cluster centers of the data set, also called "central points". Denote the cluster centers by C, where C = {C_1, C_2, ..., C_200} and C_m is the "central point" of class m (m = 1, 2, ..., 200).
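The patent clusters the image features with a message passing model method; purely as an illustration of obtaining cluster centres from feature vectors, the sketch below substitutes a minimal k-means — the substitution, the toy data and the fixed iteration count are ours, not the patent's.

```python
import numpy as np

def kmeans_centres(X, k, iters=50, seed=0):
    """Minimal k-means; returns the k cluster centres of the rows of X."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared distance of every point to every centre
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return centres

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))     # toy image features
C = kmeans_centres(X, k=5)        # patent: ~200 centres from ~1000 images
print(C.shape)
```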
For each image X_t of the training data set X, compute its similarity to the cluster centers C. First compute the similarity of X_t to C_i. The similarity computation of the invention is realized with linear discriminant classifiers. First, take X_t as the positive sample and the set X - {X_t} as negative samples, and train a linear discriminant classifier Classifier_1:
Classifier_1 = LDA(+: X_t, -: X - {X_t})    (3-2)
Take C_i as the input of the classifier and denote the decision score of Classifier_1 by Score_1:
Score_1 = Classifier_1(C_i)    (3-3)
Then take C_i as the positive sample and the set X - {X_t} as negative samples, and train a linear discriminant classifier Classifier_2:
Classifier_2 = LDA(+: C_i, -: X - {X_t})    (3-4)
Take X_t as the input of the classifier and denote the decision score of Classifier_2 by Score_2:
Score_2 = Classifier_2(X_t)    (3-5)
The invention then defines the similarity of the face image X_t to the cluster center C_i as
S(X_t, C_i) = (Score_1 + Score_2) / 2    (3-6)
For each image X_t of the training data set X, compute by the above formulas its similarity to C_i (i = 1, 2, ..., 200); denote the result S(X_t), where S(X_t) = [S(X_t, C_1), ..., S(X_t, C_200)]. Then normalize S(X_t) and denote the result S_N(X_t). The invention defines the normalized S_N(X_t) as the reference feature of the image X_t. The reference feature is obtained on the basis of the cluster centers of the training data set X. Finally, compute the reference feature of every image X_t of the training data set X, denoted R_t.
(2) Testing the image to be recognized
1. Extract the image feature, as in the training process.
2. Extract the reference feature; apart from omitting the clustering step, the process is the same as in training.
The above two steps are the same as in the learning process: for the face image supplied by the user, first densely extract the scale-invariant features and local binary pattern features, then reduce their dimensionality by principal component analysis to obtain the image feature. After obtaining the image feature, compute its similarities over the cluster centers to obtain the reference feature.
3. Test the face image to be recognized
Finally, compute the similarity between the reference feature of the test image and the reference features of the training data set, and draw the final conclusion.
For a face image Y captured by the high-definition camera, apply the above two steps to it and denote the resulting reference feature R_Y. Compare the reference feature R_Y with each reference feature R_t of the training data set, where X_t ∈ X. The similarity vector is denoted
S(R_Y, R_t) = exp(-(R_Y - R_t)^2)    (4-1)
This formula means that for each corresponding dimension of the vectors R_Y and R_t, the difference is computed, squared, negated and exponentiated; this gives the value s of the corresponding element of S(R_Y, R_t), so S(R_Y, R_t) is a vector of the same dimension as R_Y (or R_t).
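Formula (4-1) is element-wise, which a one-line sketch makes concrete; the toy reference features below are our own values.

```python
import numpy as np

def ref_similarity(r_y, r_t):
    """Element-wise similarity of formula (4-1): exponential of the
    negated squared difference in each dimension."""
    return np.exp(-(r_y - r_t) ** 2)

r_y = np.array([0.2, 0.5, 0.9])   # toy reference feature of the test image
r_t = np.array([0.2, 0.1, 0.4])   # toy reference feature of a training image
s = ref_similarity(r_y, r_t)
print(s)  # first element is exp(0) = 1: those dimensions match exactly
```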
Arrange the elements of the similarity vector S(R_Y, R_t) in descending order:
S(R_Y, R_t) = [s_1, s_2, ..., s_M], s_1 ≥ s_2 ≥ ... ≥ s_M    (4-2)
s_1 is the maximum of all the elements of the vector S(R_Y, R_t), s_2 is the second largest value, and so on.
If s_1 > 1.5 s_2 is satisfied, then the identity of the person in the face image Y is judged to be the same as the person X corresponding to the best match; this is because the similarity of the face image Y to the face image X is far greater than its similarity to the other face images, so the face image Y and the face image X represent the same person, and the invention adopts this criterion to decide the person's identity. Conversely, if s_1 ≤ 1.5 s_2, the identity of the person in the face image Y cannot be confirmed; this is because the face image Y retains a certain similarity to all the face images and no identity can be concluded, and the invention adopts this criterion to improve the accuracy of the result.
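The s_1 > 1.5 s_2 acceptance criterion can be sketched as a small decision function; treating the sorted scores as per-candidate similarities is our reading of the garbled source, and the example values are hypothetical.

```python
import numpy as np

def decide(similarities, margin=1.5):
    """Accept the best match only if its score exceeds `margin` times
    the runner-up (the s_1 > 1.5*s_2 criterion); otherwise the
    identity cannot be confirmed and None is returned."""
    order = np.argsort(similarities)[::-1]      # indices, best first
    s1, s2 = similarities[order[0]], similarities[order[1]]
    return int(order[0]) if s1 > margin * s2 else None

print(decide(np.array([0.9, 0.2, 0.5])))  # 0.9 > 1.5*0.5 -> 0 (match)
print(decide(np.array([0.9, 0.8, 0.5])))  # 0.9 <= 1.5*0.8 -> None
```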
The present invention is not limited to the above embodiment. According to the content disclosed by the invention, those skilled in the art can implement the invention in various other ways; therefore, any simple variation or modification adopting the idea of the present invention falls within the scope of protection of the present invention.
Claims (4)
1. A face recognition method based on reference features, characterized in that the method comprises the following steps:
(1) Obtain the image feature of the face image:
For the face image to be recognized, first extract the scale-invariant features and the local binary pattern features of the face image, then reduce their dimensionality by principal component analysis to obtain the image feature of the face image to be recognized;
(2) Obtain the reference feature of the face image:
Using the image feature thus obtained, compute the similarity of the image feature to each cluster center to obtain the reference feature of the face image to be recognized;
(3) Decision analysis
Analyze the reference feature of the face image to be recognized against the reference features of the training data set with linear discriminant classifiers to obtain the recognition result.
2. The face recognition method based on reference features according to claim 1, characterized in that the detailed process of obtaining the reference feature of the face image in step (2) is:
(2.1) compute the similarity of the image feature of the face to be recognized to each cluster centre of the training dataset
Denote the image feature of the face image to be recognized as Y, and the set of cluster centres of the training dataset as C = {C_1, C_2, ..., C_N}, where N is the number of cluster centres and ranges from 150 to 250. To compute the similarity between Y and C_1, first take Y as the positive sample and C - {C_1} as the negative samples, and train a linear discriminant classifier yClassifier_1:
yClassifier_1 = LDA(+: Y, -: C - {C_1})
With C_1 as the input of yClassifier_1, obtain the decision score yScore_1:
yScore_1 = yClassifier_1(C_1)
yScore_1 measures the similarity of C_1 to the face image feature Y.
Then take C_1 as the positive sample and C - {C_1} as the negative samples, and train a linear discriminant classifier yClassifier_2:
yClassifier_2 = LDA(+: C_1, -: C - {C_1})
With Y as the input of yClassifier_2, obtain the decision score yScore_2:
yScore_2 = yClassifier_2(Y)
yScore_2 measures the similarity of the face image feature Y to C_1.
The similarity of the face image feature Y to the cluster centre C_1 is then defined as
S(Y, C_1) = (yScore_1 + yScore_2)/2
Finally, compute in the same way the similarity of Y to every cluster centre in the set C; the resulting similarity vector is denoted
S(Y) = [S(Y, C_1), S(Y, C_2), ..., S(Y, C_N)]
(2.2) normalize the similarity vector S(Y); the normalized vector, denoted S_N(Y), is defined as the reference feature of the face image to be recognized, so S_N(Y) is the reference feature of the face image Y to be recognized.
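Step (2.1) trains, per cluster centre, two one-vs-rest classifiers and averages their scores. A minimal NumPy sketch follows, under two stated assumptions: the LDA is a ridge-regularized Fisher discriminant (the claim does not fix the LDA variant), and the normalization of step (2.2) is taken to be L2 (the claim does not name the norm).

```python
import numpy as np

def lda_direction(pos, neg, eps=1e-3):
    """Fit a minimal Fisher LDA for LDA(+:pos, -:neg) and return its
    scoring function. A single positive sample contributes no
    within-class scatter, so only the negatives' scatter is used,
    ridge-regularized by eps."""
    pos, neg = np.atleast_2d(pos), np.atleast_2d(neg)
    mu_p, mu_n = pos.mean(axis=0), neg.mean(axis=0)
    d = neg.shape[1]
    Sw = np.cov(neg, rowvar=False) * (len(neg) - 1) + eps * np.eye(d)
    w = np.linalg.solve(Sw, mu_p - mu_n)
    b = -float(w @ (mu_p + mu_n)) / 2.0
    return lambda x: float(x @ w) + b

def similarity(Y, C):
    """S(Y, C_i) = (yScore_1 + yScore_2)/2 for every cluster centre C_i."""
    N = len(C)
    S = np.empty(N)
    for i in range(N):
        rest = np.delete(C, i, axis=0)
        yscore1 = lda_direction(Y, rest)(C[i])    # +:Y,   -:C-{C_i}, input C_i
        yscore2 = lda_direction(C[i], rest)(Y)    # +:C_i, -:C-{C_i}, input Y
        S[i] = (yscore1 + yscore2) / 2.0
    return S

def reference_feature(Y, C):
    """Normalized similarity vector S_N(Y); L2 normalization is an
    assumption -- the claim does not name the norm."""
    S = similarity(Y, C)
    return S / (np.linalg.norm(S) + 1e-12)
```

The two scores are deliberately asymmetric (Y-as-positive scored on C_i, and C_i-as-positive scored on Y); averaging them symmetrizes the similarity.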
3. The face recognition method based on reference features according to claim 1 or 2, characterized in that the detailed process of obtaining the image feature of the face image in step (1) is:
(3.1) mark 13 key points of the face image to be recognized, located respectively at the two ends of the eyebrows, the two ends of the eyes, the key region of the nose and the corners of the mouth; the marking process records the two-dimensional coordinates of these key positions in order from left to right and from top to bottom;
(3.2) for the face image to be recognized, extract the scale-invariant feature at each marked point;
(3.3) extract the local binary pattern feature of the face image to be recognized;
(3.4) use principal component analysis to reduce the dimensionality of the scale-invariant features and the local binary pattern feature of the face image to be recognized separately; the concatenation of the dimension-reduced scale-invariant features and local binary pattern feature is the image feature of the face image to be recognized.
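Steps (3.2)-(3.4) combine SIFT descriptors at the 13 marked points with an LBP feature and reduce each block by PCA before concatenation. A NumPy-only sketch: the plain 8-neighbour LBP histogram and SVD-based PCA below are generic stand-ins (the patent fixes neither the LBP variant nor the PCA implementation), and the SIFT descriptors are assumed precomputed.

```python
import numpy as np

def lbp_histogram(img):
    """Plain 8-neighbour local binary pattern histogram (256 bins);
    a generic stand-in for the claimed LBP feature."""
    h, w = img.shape
    c = img[1:-1, 1:-1]                     # interior pixels
    code = np.zeros(c.shape, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def pca_reduce(X, k):
    """Project the rows of X onto their first k principal components
    (PCA via SVD of the centred data matrix)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def image_feature(sift_block, lbp_block, k_sift, k_lbp):
    """Step (3.4): reduce each feature block separately, then
    concatenate. Rows index images; blocks are assumed precomputed."""
    return np.hstack([pca_reduce(sift_block, k_sift),
                      pca_reduce(lbp_block, k_lbp)])
```

In practice the PCA projections would be fitted once on the training set and reused for the image to be recognized; the sketch fits them on whatever batch it is given.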
4. The face recognition method based on reference features according to claim 1 or 2, characterized in that the reference features in the training dataset described in step (3) are obtained through the following process:
(4.1) mark the face images in the training dataset X
For each face image in the training dataset X, mark 13 key points as marker points; the key points comprise the two ends of the eyebrows, the two ends of the eyes, the key region of the nose and the corners of the mouth; record the two-dimensional coordinates of these key points in order from left to right and from top to bottom;
(4.2) extract the scale-invariant feature at each marker point;
(4.3) extract the local binary pattern feature of the face image;
(4.4) extract the face image feature
Use principal component analysis to reduce the dimensionality of the scale-invariant features and the local binary pattern feature separately; the concatenation of the dimension-reduced scale-invariant features and local binary pattern feature is the face image feature;
(4.5) extract the reference features
For each image X_i in the training dataset X, extract its image feature as above. With all the image features in the training dataset X as input to the clustering algorithm, perform clustering with the message-passing model method with the number of classes set equal to N, and obtain the cluster centres of the dataset, denoted C = {C_1, C_2, ..., C_N}, where C_i is the centre of the i-th class, i = 1, 2, ..., N. For each image X_t in the training dataset X, compute its similarity to the cluster centres C. First compute the similarity of X_t to C_1; the similarity computation is implemented with linear discriminant classifiers. Take X_t as the positive sample and the dataset X - {X_t} as the negative samples, and train a linear discriminant classifier Classifier_1:
Classifier_1 = LDA(+: X_t, -: X - {X_t})
Set the input of the classifier to C_1, and denote the decision score of Classifier_1 as Score_1:
Score_1 = Classifier_1(C_1)
Then take C_1 as the positive sample and the dataset X - {X_t} as the negative samples, and train a linear discriminant classifier Classifier_2:
Classifier_2 = LDA(+: C_1, -: X - {X_t})
Set the input of the classifier to X_t, and denote the decision score of Classifier_2 as Score_2:
Score_2 = Classifier_2(X_t)
The similarity of the face image X_t to the cluster centre C_1 is then defined as
S(X_t, C_1) = (Score_1 + Score_2)/2
For each image X_t in the training dataset X, compute according to the above formulas the similarities of X_t to all the cluster centres, and denote the result S(X_t) = [S(X_t, C_1), ..., S(X_t, C_N)]. Then normalize S(X_t), denoting the result S_N(X_t); S_N(X_t) is the reference feature of the image X_t, obtained based on the cluster centres of the training dataset X. Finally, compute in this way the reference feature of every image X_t in the training dataset X.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201210074224 CN102637251B (en) | 2012-03-20 | 2012-03-20 | Face recognition method based on reference features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102637251A true CN102637251A (en) | 2012-08-15 |
CN102637251B CN102637251B (en) | 2013-10-30 |
Family
ID=46621640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201210074224 Expired - Fee Related CN102637251B (en) | 2012-03-20 | 2012-03-20 | Face recognition method based on reference features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102637251B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016026135A1 (en) | 2014-08-22 | 2016-02-25 | Microsoft Technology Licensing, Llc | Face alignment with shape regression |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080304755A1 (en) * | 2007-06-08 | 2008-12-11 | Microsoft Corporation | Face Annotation Framework With Partial Clustering And Interactive Labeling |
CN101840510A (en) * | 2010-05-27 | 2010-09-22 | 武汉华博公共安全技术发展有限公司 | Adaptive enhancement face authentication method based on cost sensitivity |
CN102169581A (en) * | 2011-04-18 | 2011-08-31 | 北京航空航天大学 | Feature vector-based fast and high-precision robustness matching method |
Non-Patent Citations (1)
Title |
---|
CONG YANG et al.: "Class-specific object contour detection by iteratively combining context information", 2011 8th International Conference on Information, Communications and Signal Processing * |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034840B (en) * | 2012-12-05 | 2016-05-04 | 山东神思电子技术股份有限公司 | A kind of gender identification method |
CN103034840A (en) * | 2012-12-05 | 2013-04-10 | 山东神思电子技术股份有限公司 | Gender identification method |
CN103268653A (en) * | 2013-05-30 | 2013-08-28 | 苏州福丰科技有限公司 | Face identification method for access control system |
CN103955667A (en) * | 2013-05-31 | 2014-07-30 | 华北电力大学 | SIFT human face matching method based on geometrical constraint |
CN103955667B (en) * | 2013-05-31 | 2017-04-19 | 华北电力大学 | SIFT human face matching method based on geometrical constraint |
CN104036259A (en) * | 2014-06-27 | 2014-09-10 | 北京奇虎科技有限公司 | Face similarity recognition method and system |
CN104036259B (en) * | 2014-06-27 | 2016-08-24 | 北京奇虎科技有限公司 | Human face similarity degree recognition methods and system |
CN104376312A (en) * | 2014-12-08 | 2015-02-25 | 广西大学 | Face recognition method based on word bag compressed sensing feature extraction |
CN104376312B (en) * | 2014-12-08 | 2019-03-01 | 广西大学 | Face identification method based on bag of words compressed sensing feature extraction |
CN106529377A (en) * | 2015-09-15 | 2017-03-22 | 北京文安智能技术股份有限公司 | Age estimating method, age estimating device and age estimating system based on image |
CN105335753A (en) * | 2015-10-29 | 2016-02-17 | 小米科技有限责任公司 | Image recognition method and device |
CN105320948A (en) * | 2015-11-19 | 2016-02-10 | 北京文安科技发展有限公司 | Image based gender identification method, apparatus and system |
CN105469059A (en) * | 2015-12-01 | 2016-04-06 | 上海电机学院 | Pedestrian recognition, positioning and counting method for video |
CN105740378A (en) * | 2016-01-27 | 2016-07-06 | 北京航空航天大学 | Digital pathology whole slice image retrieval method |
CN105740378B (en) * | 2016-01-27 | 2020-07-21 | 北京航空航天大学 | Digital pathology full-section image retrieval method |
CN105740808A (en) * | 2016-01-28 | 2016-07-06 | 北京旷视科技有限公司 | Human face identification method and device |
CN105740808B (en) * | 2016-01-28 | 2019-08-09 | 北京旷视科技有限公司 | Face identification method and device |
CN105913050A (en) * | 2016-05-25 | 2016-08-31 | 苏州宾果智能科技有限公司 | Method and system for face recognition based on high-dimensional local binary pattern features |
CN107463865A (en) * | 2016-06-02 | 2017-12-12 | 北京陌上花科技有限公司 | Face datection model training method, method for detecting human face and device |
CN106127170B (en) * | 2016-07-01 | 2019-05-21 | 重庆中科云从科技有限公司 | A kind of training method, recognition methods and system merging key feature points |
CN106127170A (en) * | 2016-07-01 | 2016-11-16 | 重庆中科云丛科技有限公司 | A kind of merge the training method of key feature points, recognition methods and system |
CN106250821A (en) * | 2016-07-20 | 2016-12-21 | 南京邮电大学 | The face identification method that a kind of cluster is classified again |
CN106314356A (en) * | 2016-08-22 | 2017-01-11 | 乐视控股(北京)有限公司 | Control method and control device of vehicle and vehicle |
CN107944431A (en) * | 2017-12-19 | 2018-04-20 | 陈明光 | A kind of intelligent identification Method based on motion change |
CN107944431B (en) * | 2017-12-19 | 2019-04-26 | 天津天远天合科技有限公司 | A kind of intelligent identification Method based on motion change |
CN108197282A (en) * | 2018-01-10 | 2018-06-22 | 腾讯科技(深圳)有限公司 | Sorting technique, device and the terminal of file data, server, storage medium |
CN108388141A (en) * | 2018-03-21 | 2018-08-10 | 特斯联(北京)科技有限公司 | A kind of wisdom home control system and method based on recognition of face |
CN108629283A (en) * | 2018-04-02 | 2018-10-09 | 北京小米移动软件有限公司 | Face tracking method, device, equipment and storage medium |
CN108416336B (en) * | 2018-04-18 | 2019-01-18 | 特斯联(北京)科技有限公司 | A kind of method and system of intelligence community recognition of face |
CN108416336A (en) * | 2018-04-18 | 2018-08-17 | 特斯联(北京)科技有限公司 | A kind of method and system of intelligence community recognition of face |
CN109815887A (en) * | 2019-01-21 | 2019-05-28 | 浙江工业大学 | A kind of classification method of complex illumination servant's face image based on Multi-Agent Cooperation |
CN109815887B (en) * | 2019-01-21 | 2020-10-16 | 浙江工业大学 | Multi-agent cooperation-based face image classification method under complex illumination |
CN110334763A (en) * | 2019-07-04 | 2019-10-15 | 北京字节跳动网络技术有限公司 | Model data file generation, image-recognizing method, device, equipment and medium |
CN110334763B (en) * | 2019-07-04 | 2021-07-23 | 北京字节跳动网络技术有限公司 | Model data file generation method, model data file generation device, model data file identification device, model data file generation apparatus, model data file identification apparatus, and model data file identification medium |
CN111259822A (en) * | 2020-01-19 | 2020-06-09 | 杭州微洱网络科技有限公司 | Method for detecting key point of special neck in E-commerce image |
CN117373100A (en) * | 2023-12-08 | 2024-01-09 | 成都乐超人科技有限公司 | Face recognition method and system based on differential quantization local binary pattern |
CN117373100B (en) * | 2023-12-08 | 2024-02-23 | 成都乐超人科技有限公司 | Face recognition method and system based on differential quantization local binary pattern |
Also Published As
Publication number | Publication date |
---|---|
CN102637251B (en) | 2013-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102637251B (en) | Face recognition method based on reference features | |
CN107609497B (en) | Real-time video face recognition method and system based on visual tracking technology | |
CN107273845B (en) | Facial expression recognition method based on confidence region and multi-feature weighted fusion | |
CN104866829B (en) | A kind of across age face verification method based on feature learning | |
CN102938065B (en) | Face feature extraction method and face identification method based on large-scale image data | |
Zhan et al. | Face detection using representation learning | |
CN101419671B (en) | Face gender identification method based on fuzzy support vector machine | |
CN105373777B (en) | A kind of method and device for recognition of face | |
CN102156871B (en) | Image classification method based on category correlated codebook and classifier voting strategy | |
CN105512624A (en) | Smile face recognition method and device for human face image | |
Hasan | An application of pre-trained CNN for image classification | |
CN102156887A (en) | Human face recognition method based on local feature learning | |
CN106778487A (en) | A kind of 2DPCA face identification methods | |
Wang et al. | Multi-scale feature extraction algorithm of ear image | |
Qin et al. | Finger-vein quality assessment by representation learning from binary images | |
CN104008375A (en) | Integrated human face recognition mehtod based on feature fusion | |
CN105574475A (en) | Common vector dictionary based sparse representation classification method | |
CN103955671A (en) | Human behavior recognition method based on rapid discriminant common vector algorithm | |
Gao et al. | A novel face feature descriptor using adaptively weighted extended LBP pyramid | |
Ebrahimian et al. | Automated person identification from hand images using hierarchical vision transformer network | |
CN103632145A (en) | Fuzzy two-dimensional uncorrelated discriminant transformation based face recognition method | |
CN107194314A (en) | The fuzzy 2DPCA and fuzzy 2DLDA of fusion face identification method | |
Liu et al. | Gender identification in unconstrained scenarios using self-similarity of gradients features | |
Sadeghzadeh et al. | Triplet loss-based convolutional neural network for static sign language recognition | |
CN105550642B (en) | Gender identification method and system based on multiple dimensioned linear Differential Characteristics low-rank representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20131030 |