
CN103699897A - Robust face alignment method and device - Google Patents

Robust face alignment method and device

Info

Publication number
CN103699897A
CN103699897A
Authority
CN
China
Prior art keywords
registration
image
prime
subject
rotation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310672116.5A
Other languages
Chinese (zh)
Inventor
邓亮
樊春玲
张冠军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201310672116.5A priority Critical patent/CN103699897A/en
Publication of CN103699897A publication Critical patent/CN103699897A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention, applicable to the field of face recognition, provides a robust face alignment method and device. The method comprises: obtaining the rotation angle θ of an image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in a template image; rotating the image to be aligned by the angle θ; and scaling and translating the rotated image in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image. By first rotating through a similarity transformation and only then translating and scaling, the method keeps the basic shape of the facial features unchanged while giving the alignment greater freedom, which helps improve alignment accuracy.

Description

Robust face alignment method and device
Technical field
The invention belongs to the field of face recognition, and in particular relates to a robust face alignment method and device.
Background
Face recognition technology has undergone a long period of research and development, and with the rapid advance of computer technology in particular it has attracted increasing attention and understanding. Although face recognition is not yet fully mature, it has been applied in a number of settings, such as widely used access-control systems, the face verification system of the 2008 Beijing Olympic Games, and the face retrieval systems currently in active use in intelligent transportation.
A major reason face recognition is harder than other biometric techniques such as fingerprint recognition is that the environment of a face can be complex and variable: ambient illumination, facial expression, head pose, and so on. Any one of these factors may keep a face recognition system from working effectively, and a single system that solves all of these problems is difficult to build. For example, under illumination or expression changes the facial feature points are often located inaccurately, and inaccurate feature localization directly causes face alignment and feature extraction to fail, so the recognition system cannot work.
A crucial early stage of a face recognition system is the extraction of facial landmarks. Facial feature points play an important role in face recognition, and many researchers have made great progress in this area in recent years, but when lighting is poor or facial expression changes drastically, localization tends to deviate. Traditional face alignment methods use the detected facial feature points and a similarity or affine transform to map them onto the corresponding template points, thereby aligning the face. However, when the feature points are sparse, the detection error of a single feature point can strongly perturb the mapping matrix and even cause the alignment to fail outright.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a robust face alignment method, to solve the prior-art problem that, when similarity or affine transforms are used for face alignment with sparse feature points, the detection error of a single feature point strongly affects the mapping matrix and may cause alignment errors.
The embodiments of the present invention are realized as a robust face alignment method, the method comprising:
obtaining the rotation angle θ of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image;
rotating the image to be aligned by the angle θ;
scaling and translating the rotated image to be aligned in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image.
In another aspect, a robust face alignment device comprises:
a rotation-angle acquisition unit, for obtaining the rotation angle θ of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image;
a rotation unit, for rotating the image to be aligned by the angle θ;
a scaling-and-translation unit, for scaling and translating the rotated image in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image.
In the embodiments of the present invention, the rotation angle of the image to be aligned is obtained by similarity transformation from the positional relationship between feature points in the image to be aligned and in the template image; the image is rotated by this angle to obtain the rotated image; and the rotated image is then scaled and translated in the horizontal and vertical directions according to the positional relationship between its feature points and those of the template image. Rotating first by the similarity-transform angle and only then translating and scaling both keeps the basic shape of the facial features unchanged and gives the alignment considerable freedom, which helps improve alignment accuracy.
Brief description of the drawings
Fig. 1 is a flowchart of the robust face alignment method provided by the first embodiment of the invention;
Fig. 2 is a schematic diagram of the template image feature points provided by the first embodiment;
Fig. 3 is a schematic diagram of the image to be aligned provided by the first embodiment;
Fig. 4 is a schematic diagram of the rotated image to be aligned provided by the first embodiment;
Fig. 5 is a schematic diagram of the image to be aligned after scaling and translation, provided by the first embodiment;
Fig. 6 is a structural block diagram of the robust face alignment device provided by the second embodiment of the invention.
Detailed description of the embodiments
To make the purpose, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
Traditional face alignment algorithms fall into two kinds: similarity transforms and affine transforms. A similarity transform keeps the shape proportions of the face unchanged and operates on four degrees of freedom: uniform scale, rotation angle, and translation in the horizontal and vertical directions. An affine transform adds two degrees of freedom, allowing different scales in the horizontal and vertical directions together with rotation. The advantage of the similarity transform is that it preserves the shape of the face and the proportions between facial features, but because facial proportions differ between people, it is hard to align different faces onto a single unified template. The affine transform adjusts the proportions of the face comprehensively; its advantage is that it can map the feature points onto the template with minimal error, but it can introduce erroneous deformation elsewhere. With sparse feature points, both traditional transforms let small feature-detection errors produce large alignment errors. The robust face alignment method of the embodiments of the present invention inherits the advantage of the similarity transform, keeping the basic shape of the face largely unchanged, while retaining some of the extra freedom of the affine transform, so that the face alignment error is smaller. The embodiments are described in detail below.
Fig. 1 shows the flow of the robust face alignment method provided by the first embodiment of the invention, detailed as follows:
In step S101, the rotation angle θ of the image to be aligned is obtained by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image.
Optionally, the step of obtaining the rotation angle of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image, comprises: according to the rotation-error formula

E(a, b, t_x, t_y) = \sum_i w_i \left( (a x_i - b y_i + t_x - x'_i)^2 + (b x_i + a y_i + t_y - y'_i)^2 \right)

computing the rotation angle value θ at which the error is smallest;
wherein the parameters a, b are related to the rotation angle θ by \cos\theta = a / \sqrt{a^2 + b^2}, \sin\theta = b / \sqrt{a^2 + b^2}; the parameters t_x, t_y are, when the weighted mean of the matched feature points of the image to be aligned is zero, the means of the abscissae and ordinates of the template feature points respectively; E(a, b, t_x, t_y) is the rotation-error value; w_i is the weight of the i-th feature point, with 0 \le w_i \le 1 and \sum_i w_i = 1; x_i, y_i are the coordinates of the feature points of the image to be aligned; and x'_i, y'_i are the coordinates of the template feature points.
Concretely, the feature-point positions on the template image are X'_i = [x'_i, y'_i] and those on the image to be aligned are X_i = [x_i, y_i], where i = 1, 2, ..., N and N is the number of feature points. The similarity transform T(X_i) maps the features X_i of the image to X''_i, i.e. X''_i = T(X_i), so as to minimize the following error:

E = \sum_{i=1}^{N} w_i \left( (x''_i - x'_i)^2 + (y''_i - y'_i)^2 \right)

where w_i is the weight of the i-th control point of the shape, 0 \le w_i \le 1, \sum_i w_i = 1. The two-dimensional similarity transform, which has only four parameters, is defined as follows:

T\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a & -b \\ b & a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}

The error then becomes E(a, b, t_x, t_y) = \sum_i w_i ((a x_i - b y_i + t_x - x'_i)^2 + (b x_i + a y_i + t_y - y'_i)^2), and minimizing this error gives the optimal transform parameters. To obtain a simple result, assume the mean of the feature points to be matched is zero (this can be achieved by a simple initialization of the input sequence), i.e. S_x = \sum_i w_i x_i / N = 0 and S_y = \sum_i w_i y_i / N = 0. Optimization then yields the final parameter values:
t_x = S_{x'}
t_y = S_{y'}
a = (S_{xx'} + S_{yy'}) / (S_{xx} + S_{yy}) = X \cdot X' / |X|^2
b = (S_{xy'} - S_{yx'}) / (S_{xx} + S_{yy}) = (S_{xy'} - S_{yx'}) / |X|^2

where S_{x'} = \sum_i w_i x'_i / N and S_{y'} = \sum_i w_i y'_i / N are the means of the template feature points, and S_{xx} = \sum_i w_i x_i x_i / N, S_{yy} = \sum_i w_i y_i y_i / N, S_{xx'} = \sum_i w_i x_i x'_i / N, S_{yy'} = \sum_i w_i y_i y'_i / N, S_{xy'} = \sum_i w_i x_i y'_i / N, S_{yx'} = \sum_i w_i y_i x'_i / N.
The rotation angle θ is related to the parameters a, b by \cos\theta = a / \sqrt{a^2 + b^2} and \sin\theta = b / \sqrt{a^2 + b^2}.
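As a concrete illustration of the closed-form solution above, the weighted moments and the angle θ can be computed directly. The sketch below is a minimal implementation assuming NumPy and zero-weighted-mean input points, as stated in the text; the function and variable names are illustrative, not from the patent, and the common 1/N factor in the moments cancels in a and b and is therefore omitted.

```python
import numpy as np

def similarity_rotation_angle(X, Xp, w=None):
    """Weighted similarity fit of step S101: estimate the rotation
    angle theta (and the parameters a, b) mapping points X
    (N x 2, zero weighted mean) toward template points Xp (N x 2)."""
    X = np.asarray(X, dtype=float)
    Xp = np.asarray(Xp, dtype=float)
    n = len(X)
    w = np.full(n, 1.0 / n) if w is None else np.asarray(w, dtype=float)
    x, y = X[:, 0], X[:, 1]
    xp, yp = Xp[:, 0], Xp[:, 1]
    denom = np.sum(w * (x * x + y * y))          # S_xx + S_yy
    a = np.sum(w * (x * xp + y * yp)) / denom    # (S_xx' + S_yy') / (S_xx + S_yy)
    b = np.sum(w * (x * yp - y * xp)) / denom    # (S_xy' - S_yx') / (S_xx + S_yy)
    # cos(theta) = a / sqrt(a^2 + b^2), sin(theta) = b / sqrt(a^2 + b^2)
    theta = np.arctan2(b, a)
    return theta, a, b
```

For points related by an exact similarity motion the recovered angle is exact; for noisy landmarks it is the weighted least-squares optimum.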
The weighted similarity transform of this step can further improve the accuracy of the angle estimate; those skilled in the art will understand that the rotation angle of the similarity transform can equally be computed without weights.
In step S102, the image to be aligned is rotated by the rotation angle θ.
Concretely, the step of rotating the image to be aligned by the rotation angle is:
according to the angle θ, rotate the image to be aligned by the rotation formula

\begin{bmatrix} x_i^{ang} \\ y_i^{ang} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix}

where x_i^{ang}, y_i^{ang} are the coordinates of the rotated feature points, x_i, y_i are the coordinates of the feature points of the image to be aligned, and θ is the rotation angle.
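The rotation formula of step S102 can be sketched as a small NumPy helper; the names are illustrative, not from the patent.

```python
import numpy as np

def rotate_points(pts, theta):
    """Step S102: apply [x_ang; y_ang] = R(theta) [x; y] to an
    N x 2 array of feature-point coordinates (row vectors)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return np.asarray(pts, dtype=float) @ R.T
```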
In step S103, the rotated image to be aligned is scaled and translated in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image.
The step of scaling and translating the rotated image in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, comprises: according to the scaling-and-translation error formula

E'(a', b', t'_x, t'_y) = \sum_i \left( w_i^x (a' x''_i + t'_x - x'_i)^2 + w_i^y (b' y''_i + t'_y - y'_i)^2 \right)

computing the scale and translation values at which the error is smallest, and scaling and translating the rotated image accordingly to obtain the aligned image;
wherein the parameters a', b' are the horizontal and vertical scale values respectively; the parameters t'_x, t'_y are, when the weighted mean of the matched feature points of the rotated image is zero, the means of the abscissae and ordinates of the template feature points respectively, i.e. the horizontal and vertical translation values; E'(a', b', t'_x, t'_y) is the error value after the scaling-and-translation transform; w_i^x, w_i^y are the weights of the i-th feature point in the horizontal and vertical directions respectively, with 0 \le w_i^x \le 1, 0 \le w_i^y \le 1, \sum_i w_i^x = 1, and \sum_i w_i^y = 1; x''_i, y''_i are the coordinates of the rotated feature points; and x'_i, y'_i are the coordinates of the template feature points.
Step S103 differs from the prior-art affine transform in that it performs no angular rotation and can apply independent scale factors in the horizontal and vertical directions, so the resulting transform preserves the basic shape of the face. The transform is shown below:

T\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a' & 0 \\ 0 & b' \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix}

The error formula then becomes

E'(a', b', t'_x, t'_y) = \sum_i \left( w_i^x (a' x''_i + t'_x - x'_i)^2 + w_i^y (b' y''_i + t'_y - y'_i)^2 \right)

where different weights w^x and w^y are used in the horizontal and vertical (x and y) directions. Again assuming the weighted mean of the input feature points is zero, i.e. S_x = \sum_i w_i^x x''_i / N = 0 and S_y = \sum_i w_i^y y''_i / N = 0, optimizing the error yields the optimal parameters:
t'_x = S_{x'}
t'_y = S_{y'}
a' = S_{xx'} / S_{xx}
b' = S_{yy'} / S_{yy}

where S_{x'} = \sum_i w_i^x x'_i / N and S_{y'} = \sum_i w_i^y y'_i / N are the means of the template feature points, and S_{xx} = \sum_i w_i^x x''_i x''_i / N, S_{yy} = \sum_i w_i^y y''_i y''_i / N, S_{xx'} = \sum_i w_i^x x''_i x'_i / N, S_{yy'} = \sum_i w_i^y y''_i y'_i / N. The weights w_i^x, w_i^y here differ from those of the similarity transform in step S101.
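A minimal sketch of the rotation-free fit of step S103, assuming NumPy and the zero-weighted-mean initialization stated in the text; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def axis_scale_translation(Xr, Xp, wx=None, wy=None):
    """Rotation-free fit of step S103: independent horizontal and
    vertical scales a', b' and translations t'_x, t'_y mapping rotated
    points Xr (N x 2, zero weighted mean per axis) onto template
    points Xp (N x 2), with separate per-axis weights wx, wy."""
    Xr = np.asarray(Xr, dtype=float)
    Xp = np.asarray(Xp, dtype=float)
    n = len(Xr)
    wx = np.full(n, 1.0 / n) if wx is None else np.asarray(wx, dtype=float)
    wy = np.full(n, 1.0 / n) if wy is None else np.asarray(wy, dtype=float)
    x, y = Xr[:, 0], Xr[:, 1]
    xp, yp = Xp[:, 0], Xp[:, 1]
    ap = np.sum(wx * x * xp) / np.sum(wx * x * x)   # a' = S_xx' / S_xx
    bp = np.sum(wy * y * yp) / np.sum(wy * y * y)   # b' = S_yy' / S_yy
    tx = np.sum(wx * xp)                            # t'_x = S_x' (weights sum to 1)
    ty = np.sum(wy * yp)                            # t'_y = S_y'
    return ap, bp, tx, ty
```

Because the two axes are decoupled, each scale is a one-dimensional weighted least-squares fit, which is what keeps the basic facial shape intact.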
After rotation, scaling, and translation, the aligned face we obtain is:

\begin{bmatrix} x'''_i \\ y'''_i \end{bmatrix} = \begin{bmatrix} a' & 0 \\ 0 & b' \end{bmatrix} \begin{bmatrix} x_i^{ang} \\ y_i^{ang} \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix} = \begin{bmatrix} a'\cos\theta & -a'\sin\theta \\ b'\sin\theta & b'\cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix}
As in step S101, the weighting in this step can further improve accuracy; those skilled in the art will understand that the parameters can equally be computed without weights.
Fig. 2 shows the layout of the feature points of the template image, where the labels of the right-eye feature points correspond to those of the left eye. To match the image to be aligned shown in Fig. 3 against the template of Fig. 2, we take six feature-point positions in the Fig. 2 template, numbered 1, 2, 25, 26, 62, and 63. Because the feature points are sparse, a rotation angle determined from the two eyes alone is not accurate enough, so the weighted similarity transform is used to determine the rotation angle, with the weights of all points set equal, i.e. 1/6 each.
The rotation angle thus obtained is used to rotate the facial feature points to be aligned, giving the rotated image shown in Fig. 4. The rotation-free constrained affine transform is then used to obtain the scale factors a', b' and the translation parameters t'_x, t'_y, with horizontal weights w^x = [0.25, 0.25, 0.25, 0.25, 0, 0] and vertical weights w^y = [0.125, 0.125, 0.125, 0.125, 0.25, 0.25].
The synthesized final transform is as follows:

\begin{bmatrix} x'''_i \\ y'''_i \end{bmatrix} = \begin{bmatrix} a' & 0 \\ 0 & b' \end{bmatrix} \begin{bmatrix} x_i^{ang} \\ y_i^{ang} \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix} = \begin{bmatrix} a'\cos\theta & -a'\sin\theta \\ b'\sin\theta & b'\cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} t'_x \\ t'_y \end{bmatrix}

Using this transform relation to align the facial image to be aligned yields the aligned face shown in Fig. 5.
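The three steps can be chained end to end. The sketch below, assuming NumPy, centers the input points, estimates θ, and fits the independent per-axis scales and translations, returning the composite matrix and translation of the synthesized transform above. All names are illustrative, not from the patent, and the per-axis closed forms are exact only under the stated zero-weighted-mean assumptions.

```python
import numpy as np

def align_face(X, Xp, w=None, wx=None, wy=None):
    """End-to-end sketch of the method: (1) rotation angle from a
    weighted similarity fit, (2) rotation, (3) independent horizontal
    and vertical scaling and translation.  Returns the composite
    matrix M and translation t with  [x'''; y'''] = M [x; y] + t
    for the centered input points, plus the aligned points."""
    X = np.asarray(X, dtype=float)
    Xp = np.asarray(Xp, dtype=float)
    n = len(X)
    w = np.full(n, 1.0 / n) if w is None else np.asarray(w, dtype=float)
    wx = w if wx is None else np.asarray(wx, dtype=float)
    wy = w if wy is None else np.asarray(wy, dtype=float)
    # zero-weighted-mean initialization of the points to align
    X = X - (w[:, None] * X).sum(axis=0)
    x, y = X[:, 0], X[:, 1]
    xp, yp = Xp[:, 0], Xp[:, 1]
    # step S101: rotation angle (only the direction of (a, b) matters)
    a = np.sum(w * (x * xp + y * yp))
    b = np.sum(w * (x * yp - y * xp))
    theta = np.arctan2(b, a)
    c, s = np.cos(theta), np.sin(theta)
    # step S102: rotate the centered points
    xr, yr = c * x - s * y, s * x + c * y
    # step S103: per-axis scales and translations (zero-mean assumption)
    ap = np.sum(wx * xr * xp) / np.sum(wx * xr * xr)
    bp = np.sum(wy * yr * yp) / np.sum(wy * yr * yr)
    t = np.array([np.sum(wx * xp), np.sum(wy * yp)])
    # composite transform  [[a'cos, -a'sin], [b'sin, b'cos]]
    M = np.array([[ap * c, -ap * s],
                  [bp * s,  bp * c]])
    return M, t, X @ M.T + t
```

For a face whose landmarks really were rotated, scaled per axis, and translated relative to the template, the composite transform is recovered exactly (up to the zero-mean assumptions); otherwise it is the staged least-squares estimate the method describes.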
In the embodiment of the present invention, the rotation angle of the image to be aligned is obtained by similarity transformation from the positional relationship between feature points in the image to be aligned and in the template image; the image is rotated by this angle to obtain the rotated image; and the rotated image is then scaled and translated in the horizontal and vertical directions according to the positional relationship between its feature points and those of the template image. Rotating first by the similarity-transform angle and only then translating and scaling both keeps the basic shape of the facial features unchanged and gives the alignment considerable freedom, which helps improve alignment accuracy.
Embodiment two:
Fig. 6 shows the structural diagram of the robust face alignment device provided by the second embodiment of the invention, detailed as follows:
The robust face alignment device of the embodiment of the present invention comprises:
a rotation-angle acquisition unit, for obtaining the rotation angle θ of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image;
a rotation unit, for rotating the image to be aligned by the angle θ;
a scaling-and-translation unit, for scaling and translating the rotated image in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image.
Optionally, the rotation-angle acquisition unit is configured to compute, according to the rotation-error formula

E(a, b, t_x, t_y) = \sum_i w_i \left( (a x_i - b y_i + t_x - x'_i)^2 + (b x_i + a y_i + t_y - y'_i)^2 \right)

the rotation angle value θ at which the error is smallest;
wherein the parameters a, b are related to the rotation angle value θ by \cos\theta = a / \sqrt{a^2 + b^2}, \sin\theta = b / \sqrt{a^2 + b^2}; the parameters t_x, t_y are, when the weighted mean of the matched feature points of the image to be aligned is zero, the means of the abscissae and ordinates of the template feature points respectively; E(a, b, t_x, t_y) is the rotation-error value; w_i is the weight of the i-th feature point, with 0 \le w_i \le 1 and \sum_i w_i = 1; x_i, y_i are the coordinates of the feature points of the image to be aligned; and x'_i, y'_i are the coordinates of the template feature points.
Optionally, the rotation unit is configured to rotate the image to be aligned according to the angle θ by the rotation formula

\begin{bmatrix} x_i^{ang} \\ y_i^{ang} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix}

where x_i^{ang}, y_i^{ang} are the coordinates of the rotated feature points, x_i, y_i are the coordinates of the feature points of the image to be aligned, and θ is the rotation angle.
Optionally, the scaling-and-translation unit is configured to compute, according to the scaling-and-translation error formula

E'(a', b', t'_x, t'_y) = \sum_i \left( w_i^x (a' x''_i + t'_x - x'_i)^2 + w_i^y (b' y''_i + t'_y - y'_i)^2 \right)

the scale and translation values at which the error is smallest, and to scale and translate the rotated image accordingly to obtain the aligned image;
wherein the parameters a', b' are the horizontal and vertical scale values respectively; the parameters t'_x, t'_y are, when the weighted mean of the matched feature points of the rotated image is zero, the means of the abscissae and ordinates of the template feature points respectively, i.e. the horizontal and vertical translation values; E'(a', b', t'_x, t'_y) is the error value after the scaling-and-translation transform; w_i^x, w_i^y are the weights of the i-th feature point in the horizontal and vertical directions respectively, with 0 \le w_i^x \le 1, 0 \le w_i^y \le 1, \sum_i w_i^x = 1, and \sum_i w_i^y = 1; x''_i, y''_i are the coordinates of the rotated feature points; and x'_i, y'_i are the coordinates of the template feature points.
The robust face alignment device of this embodiment of the present invention corresponds to the robust face alignment method of embodiment one and is not described again here.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (8)

1. A robust face alignment method, characterized in that the method comprises:
obtaining the rotation angle θ of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image;
rotating the image to be aligned by the angle θ;
scaling and translating the rotated image to be aligned in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image.
2. The method according to claim 1, characterized in that the step of obtaining the rotation angle of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image, comprises: according to the rotation-error formula

E(a, b, t_x, t_y) = \sum_i w_i \left( (a x_i - b y_i + t_x - x'_i)^2 + (b x_i + a y_i + t_y - y'_i)^2 \right)

computing the rotation angle value θ at which the error is smallest;
wherein the parameters a, b are related to the rotation angle θ by \cos\theta = a / \sqrt{a^2 + b^2}, \sin\theta = b / \sqrt{a^2 + b^2}; the parameters t_x, t_y are, when the weighted mean of the matched feature points of the image to be aligned is zero, the means of the abscissae and ordinates of the template feature points respectively; E(a, b, t_x, t_y) is the rotation-error value; w_i is the weight of the i-th feature point, with 0 \le w_i \le 1 and \sum_i w_i = 1; x_i, y_i are the coordinates of the feature points of the image to be aligned; and x'_i, y'_i are the coordinates of the template feature points.
3. The method according to claim 1, characterized in that the step of rotating the image to be aligned by the rotation angle is:
according to the angle θ, rotating the image to be aligned by the rotation formula

\begin{bmatrix} x_i^{ang} \\ y_i^{ang} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix}

wherein x_i^{ang}, y_i^{ang} are the coordinates of the rotated feature points, x_i, y_i are the coordinates of the feature points of the image to be aligned, and θ is the rotation angle.
4. The method according to claim 1, characterized in that the step of scaling and translating the rotated image in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, comprises: according to the scaling-and-translation error formula

E'(a', b', t'_x, t'_y) = \sum_i \left( w_i^x (a' x''_i + t'_x - x'_i)^2 + w_i^y (b' y''_i + t'_y - y'_i)^2 \right)

computing the scale and translation values at which the error is smallest, and scaling and translating the rotated image accordingly to obtain the aligned image;
wherein the parameters a', b' are the horizontal and vertical scale values respectively; the parameters t'_x, t'_y are, when the weighted mean of the matched feature points of the rotated image is zero, the means of the abscissae and ordinates of the template feature points respectively, i.e. the horizontal and vertical translation values; E'(a', b', t'_x, t'_y) is the error value after the scaling-and-translation transform; w_i^x, w_i^y are the weights of the i-th feature point in the horizontal and vertical directions respectively, with 0 \le w_i^x \le 1, 0 \le w_i^y \le 1, \sum_i w_i^x = 1, and \sum_i w_i^y = 1; x''_i, y''_i are the coordinates of the rotated feature points; and x'_i, y'_i are the coordinates of the template feature points.
5. A robust face alignment device, characterized in that the device comprises:
a rotation-angle acquisition unit, for obtaining the rotation angle θ of the image to be aligned by similarity transformation, according to the positional relationship between feature points in the image to be aligned and in the template image;
a rotation unit, for rotating the image to be aligned by the angle θ;
a scaling-and-translation unit, for scaling and translating the rotated image in the horizontal and vertical directions, according to the positional relationship between feature points in the rotated image and in the template image, to obtain the aligned image.
6. The device according to claim 5, characterized in that the rotation-angle acquisition unit is configured to compute, according to the rotation-error formula

E(a, b, t_x, t_y) = \sum_i w_i \left( (a x_i - b y_i + t_x - x'_i)^2 + (b x_i + a y_i + t_y - y'_i)^2 \right)

the rotation angle value θ at which the error is smallest;
wherein the parameters a, b are related to the rotation angle value θ by \cos\theta = a / \sqrt{a^2 + b^2}, \sin\theta = b / \sqrt{a^2 + b^2}; the parameters t_x, t_y are, when the weighted mean of the matched feature points of the image to be aligned is zero, the means of the abscissae and ordinates of the template feature points respectively; E(a, b, t_x, t_y) is the rotation-error value; w_i is the weight of the i-th feature point, with 0 \le w_i \le 1 and \sum_i w_i = 1; x_i, y_i are the coordinates of the feature points of the image to be aligned; and x'_i, y'_i are the coordinates of the template feature points.
7. The device according to claim 5, characterized in that the rotation unit is configured to rotate the image to be aligned according to the angle θ by the rotation formula

\begin{bmatrix} x_i^{ang} \\ y_i^{ang} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix}

wherein x_i^{ang}, y_i^{ang} are the coordinates of the rotated feature points, x_i, y_i are the coordinates of the feature points of the image to be aligned, and θ is the rotation angle.
8. The device according to claim 5, characterized in that the zooming and panning unit is configured to calculate, according to the zooming-and-panning error formula
E′(a′, b′, t′_x, t′_y) = Σ_i (w_i^x·(a′·x″_i + t′_x − x′_i)² + w_i^y·(b′·y″_i + t′_y − y′_i)²),
the scale values and translation values corresponding to the minimum of the error, and to zoom and pan the rotated image to be registered accordingly, obtaining the registered image;
wherein the parameters a′ and b′ are respectively the scale values in the horizontal and vertical directions; when the mean of the matched feature points of the rotated image to be registered is zero, the parameters t′_x and t′_y are respectively the means of the abscissas and the ordinates of the template image feature points, i.e. the horizontal and vertical translation values; E′(a′, b′, t′_x, t′_y) is the error value after the zooming-and-panning transform; w_i^x and w_i^y are respectively the weights of the i-th feature point in the horizontal and vertical directions, with 0 ≤ w_i^x ≤ 1, 0 ≤ w_i^y ≤ 1, Σ_i w_i^x = 1 and Σ_i w_i^y = 1; x″_i and y″_i are the coordinates of the feature points of the rotated image to be registered; and x′_i and y′_i are the coordinates of the template image feature points.
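As an illustrative sketch only (the helper name `fit_axis` is an assumption), the claim-8 error separates per axis, so each axis reduces to a one-dimensional weighted linear regression with the closed-form solution below; calling it once on the x″/x′ pairs yields (a′, t′_x) and once on the y″/y′ pairs yields (b′, t′_y):

```python
import numpy as np

def fit_axis(v, vp, w):
    """Minimize sum_i w_i*(s*v_i + t - vp_i)^2 over (s, t).
    v:  rotated feature coordinates on one axis (x'' or y''),
    vp: template coordinates on the same axis (x' or y'),
    w:  per-point weights for that axis."""
    w = w / w.sum()                # normalize so the weights sum to 1
    mv, mvp = w @ v, w @ vp        # weighted means
    # Weighted linear regression: slope from weighted covariance/variance.
    s = (w @ ((v - mv) * (vp - mvp))) / (w @ ((v - mv) ** 2))
    t = mvp - s * mv
    return s, t
```

Treating the axes independently is what lets the device use different horizontal and vertical scale values a′ and b′, rather than the single scale of a similarity transform.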
CN201310672116.5A 2013-12-10 2013-12-10 Robust face alignment method and device Pending CN103699897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310672116.5A CN103699897A (en) 2013-12-10 2013-12-10 Robust face alignment method and device

Publications (1)

Publication Number Publication Date
CN103699897A true CN103699897A (en) 2014-04-02

Family

ID=50361419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310672116.5A Pending CN103699897A (en) 2013-12-10 2013-12-10 Robust face alignment method and device

Country Status (1)

Country Link
CN (1) CN103699897A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1538562A1 (en) * 2003-04-17 2005-06-08 Seiko Epson Corporation Generation of still image from a plurality of frame images
CN102005047A (en) * 2010-11-15 2011-04-06 Wuxi Vimicro Corp. Image registration system and method thereof
CN102075679A (en) * 2010-11-18 2011-05-25 Wuxi Vimicro Corp. Method and device for acquiring image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
T.F. COOTES et al.: "Statistical Models of Appearance for Computer Vision", www.face-rec.org/algorithms/aam/app-models.pdf *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284017A (en) * 2014-09-04 2015-01-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Information prompting method and device
CN106254845A (en) * 2015-10-20 2016-12-21 Shenzhen Super Perfect Optics Ltd. Naked-eye stereoscopic display method, device and electronic equipment
CN106254845B (en) * 2015-10-20 2017-08-25 Shenzhen Super Perfect Optics Ltd. Naked-eye stereoscopic display method, device and electronic equipment
CN107566822A (en) * 2015-10-20 2018-01-09 Shenzhen Super Perfect Optics Ltd. Naked-eye stereoscopic display method, device and electronic equipment
CN107566822B (en) * 2015-10-20 2019-03-26 SuperD Co., Ltd. Naked-eye stereoscopic display method, device and electronic equipment

Similar Documents

Publication Publication Date Title
CN111046125A (en) Visual positioning method, system and computer readable storage medium
Liu et al. Target localization in local dense mapping using RGBD SLAM and object detection
Wang et al. ModeT: Learning deformable image registration via motion decomposition transformer
Cheng et al. Real-time and efficient 6-D pose estimation from a single RGB image
Armagan et al. Learning to align semantic segmentation and 2.5D maps for geolocalization
Chidsin et al. AR-based navigation using RGB-D camera and hybrid map
CN110389995A (en) Lane information detection method, device, equipment and medium
Zou et al. Robust RGB-D SLAM using point and line features for low textured scene
CN107392847A (en) A kind of fingerprint image joining method based on minutiae point and range image
Yu et al. Monocular urban localization using street view
Li et al. Pose estimation of non-cooperative space targets based on cross-source point cloud fusion
Armagan et al. Accurate Camera Registration in Urban Environments Using High-Level Feature Matching.
US20220327740A1 (en) Registration method and registration apparatus for autonomous vehicle
CN103699897A (en) Robust face alignment method and device
Wang et al. An illumination-invariant shadow-based scene matching navigation approach in low-altitude flight
CN107463871A (en) A kind of point cloud matching method based on corner characteristics weighting
Xiang et al. Semantic-structure-aware multi-level information fusion for robust global orientation optimization of autonomous mobile robots
CN105976314B (en) Take the different reference ellipsoid projection plane coordinates transform methods of same central meridian into account
Yu et al. Image matching algorithm with color information based on SIFT
Li et al. Multimodal Feature Association-based Stereo Visual SLAM Method
Wang et al. Fisheye‐Lens‐Based Visual Sun Compass for Perception of Spatial Orientation
Cheng et al. Research on Positioning Method in Underground Complex Environments Based on Fusion of Binocular Vision and IMU
Wu et al. Indoor Map Boundary Correction Based on Normalized Total Least Squares of Condition Equation
Wei et al. Synthetic velocity measurement algorithm of monocular vision based on square-root cubature Kalman filter
CN103699789B (en) Method and device for shape correspondence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140402