CN105809085B - Human eye positioning method and device - Google Patents
- Publication number: CN105809085B
- Application number: CN201410837111.8A
- Authority: CN (China)
- Prior art keywords: corner point, image, eye, mask, human eye
- Prior art date: 2014-12-29
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Abstract
The invention discloses a human eye positioning method and a human eye positioning device. The method comprises the following steps: establishing a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point; acquiring a mask image based on a pre-acquired eye image and the mask plate; acquiring position information of strong corner points from the eye image according to the mask image; and acquiring the positions of the feature points of the human eye in the eye image based on rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points. The invention can accurately position the feature points of the human eye.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a human eye positioning method and device.
Background art
In the field of human-computer interaction, face recognition technology plays an increasingly important role. In particular, richly shaped facial regions such as the eyes and the mouth can be analyzed to implement expression recognition, age estimation and the like. At present, eye positioning mainly relies on processing edges or contours, for example positioning a contour with added left and right corner points. A contour, however, is easily disturbed by external factors; in particular, when a person wears glasses, the glasses affect the eye contour and make the positioning inaccurate. Moreover, when the background is complex or the illumination changes, eye positioning is further affected and the result is unsatisfactory.
Summary of the invention
The main purpose of the present invention is to provide a human eye positioning method and device, intended to solve the technical problem that eye positioning is not sufficiently accurate.
To achieve the above object, the present invention provides a human eye positioning method comprising the following steps:
establishing a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point;
acquiring a mask image based on a pre-acquired eye image and the mask plate;
acquiring position information of strong corner points from the eye image according to the mask image;
acquiring the positions of the feature points of the human eye in the eye image based on rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points.
Preferably, the step of establishing a mask plate according to the symmetry of the feature points of the human eye comprises:
establishing a mask plate corresponding to the shape of the human eye, wherein the number of pixels at the position corresponding to the outer corner point forms an arithmetic progression from the edge of the mask plate at the outer corner position toward the center of the mask plate, and the number of pixels at the position corresponding to the inner corner point forms an arithmetic progression from the edge of the mask plate at the inner corner position toward the center of the mask plate.
Preferably, the step of acquiring a mask image based on the pre-acquired eye image and the mask plate comprises:
moving the mask plate over the eye image while comparing pixel values, and obtaining the valid pixels of the eye image according to the comparison results;
counting the valid pixels and applying thresholding and non-maximum suppression to obtain a mask image;
calculating the center position and the centroid position of the mask image, and correcting the mask image according to the center position and the centroid position.
Preferably, the step of acquiring position information of strong corner points from the eye image according to the mask image comprises:
acquiring, from the eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixels of the mask image and a preset strong corner algorithm.
Preferably, the step of acquiring the positions of the feature points of the human eye in the eye image based on the rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points comprises:
calculating the center position of the pupil based on the rough position information of the feature points acquired from the eye image in advance;
obtaining the distance between each strong corner point and the line through the corresponding feature point and the pupil center, and taking the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the eye image.
In addition, to achieve the above object, the present invention also provides a human eye positioning device comprising:
an establishing module, configured to establish a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point;
a first acquisition module, configured to acquire a mask image based on a pre-acquired eye image and the mask plate;
a strong corner acquisition module, configured to acquire position information of strong corner points from the eye image according to the mask image;
a second acquisition module, configured to acquire the positions of the feature points of the human eye in the eye image based on rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points.
Preferably, the establishing module is further configured to establish a mask plate corresponding to the shape of the human eye, wherein the number of pixels at the position corresponding to the outer corner point forms an arithmetic progression from the edge of the mask plate at the outer corner position toward the center of the mask plate, and the number of pixels at the position corresponding to the inner corner point forms an arithmetic progression from the edge of the mask plate at the inner corner position toward the center of the mask plate.
Preferably, the first acquisition module comprises:
a comparison unit, configured to move the mask plate over the eye image while comparing pixel values and to obtain the valid pixels of the eye image according to the comparison results;
a processing unit, configured to count the valid pixels and apply thresholding and non-maximum suppression to obtain a mask image;
a correction unit, configured to calculate the center position and the centroid position of the mask image and to correct the mask image according to the center position and the centroid position.
Preferably, the strong corner acquisition module is specifically configured to acquire, from the eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixels of the mask image and a preset strong corner algorithm.
Preferably, the second acquisition module comprises:
a calculation unit, configured to calculate the center position of the pupil based on the rough position information of the feature points acquired from the eye image in advance;
an acquisition unit, configured to obtain the distance between each strong corner point and the line through the corresponding feature point and the pupil center, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the eye image.
With the human eye positioning method and device of the present invention, the selected feature points of the human eye include the outer corner point, the inner corner point, the highest corner point and the lowest corner point, and a mask plate is established according to the symmetry of these four corner points, so that the shape of the mask plate is closer to the shape of the eye. A mask image is then obtained from the eye image and the mask plate, strong corner points are obtained from the eye image according to the mask image, and the best pixel is selected according to the rough positions of the feature points and the positions of the strong corner points; its position is taken as the position of the feature point, thereby accomplishing the positioning of the eye feature points. Because the present invention establishes the mask plate from several corner points and the mask plate is smaller than the eye image, the feature points inside the spectacle frame can still be obtained through the mask plate even under external interference or when the person wears glasses, so that the feature points of the human eye can be positioned accurately.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the human eye positioning method of the present invention;
Fig. 2 is a schematic diagram of the feature points of the human eye in the present invention;
Fig. 3 is a schematic diagram of the mask plate established by the present invention;
Fig. 4 is a detailed flow diagram of step S102 in Fig. 1;
Fig. 5 is a detailed flow diagram of step S104 in Fig. 1;
Fig. 6 is a functional block diagram of an embodiment of the human eye positioning device of the present invention;
Fig. 7 is a detailed functional block diagram of the first acquisition module in Fig. 6;
Fig. 8 is a detailed functional block diagram of the second acquisition module in Fig. 6.
The realization of the objects, functions and advantages of the present invention will be further described with reference to the accompanying drawings and the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a human eye positioning method. Referring to Fig. 1, in one embodiment the method comprises the following steps:
Step S101: establishing a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point.
In this embodiment, when the input eye image is analyzed, the position of each eye is described by four feature points. As shown in Fig. 2, taking the right eye as an example, the outer corner point is 01, the inner corner point is 02, the highest corner point is 03 and the lowest corner point is 04; the left eye is designed and processed in the same way as the right eye. The present invention aims to position these four feature points accurately.
In this embodiment the mask plate is designed according to the physical shape of the human eye, i.e. its symmetry. Specifically, a template window corresponding to the shape of the eye is built from pixels, in which the number of pixels at the positions corresponding to the outer and inner corner points forms an arithmetic progression from the edge toward the center. The completed template window is the mask plate, and the size of the mask plate is smaller than the size of the eye image.
As shown in Fig. 3, the mask plate of this embodiment is approximately rhombic (the shaded area) and corresponds to the shape of the eye; 01, 02, 03 and 04 correspond to the outer, inner, highest and lowest corner points respectively. The mask plate is a binary image of a preset, fixed size; the relative positions of the four corner points within the template are fixed and conform to the shape of the eye. The region enclosed by the four corner points is the valid pixel region with pixel value 1, and the pixel value outside this region is 0, so that the valid pixels remove the interference of non-eye regions in the eye image. In the mask plate, the number of pixels in the region corresponding to the outer corner point forms an arithmetic progression from the edge of the mask plate at the outer corner position toward the center of the mask plate (the part framed by the dashed line), and the number of pixels in the region corresponding to the inner corner point likewise forms an arithmetic progression from the edge at the inner corner position toward the center. For the outer corner point 01 (or the inner corner point) of the right eye, the first column has 1 pixel, the second column has 3 pixels and the third column has 5 pixels, forming an arithmetic progression; the first column is closest to the edge at the outer corner position, and the third column is closest to the center of the mask plate (i.e. the pupil position on the mask plate). For the highest corner point (or the lowest corner point), the numbers of pixels per row may or may not form an arithmetic progression. In addition, when designing the mask plate, the number of pixel columns is made greater than the number of pixel rows, mainly because the arc of the eye at the highest (or lowest) corner point is greater than the arc at the outer (or inner) corner point.
In this embodiment, a single mask plate containing the outer, inner, highest and lowest corner points may be established, or a separate mask plate may be established for each corner point, e.g. a mask plate for the outer corner point, one for the inner corner point, one for the highest corner point and one for the lowest corner point, four mask plates in total, which together correspond to the shape of the eye when merged. Likewise, in the two mask plates corresponding to the outer and inner corner points, the number of pixels forms an arithmetic progression from the edge of the mask plate at the corner position toward the center.
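As a concrete illustration of this construction (not part of the patent text), the sketch below builds a small binary mask whose columns nearest the outer and inner corner points contain 1, 3 and 5 pixels, matching the arithmetic progression described above; the overall dimensions, the progression step and the handling of the central columns are assumptions of the sketch.

```python
import numpy as np

def build_eye_mask(height=7, width=21, corner_cols=3):
    """Binary eye-shaped mask: 1 inside the region spanned by the four
    corner points, 0 outside. The columns nearest the outer and inner
    corners carry 1, 3, 5, ... pixels (an arithmetic progression) so the
    mask narrows toward the two eye corners."""
    mask = np.zeros((height, width), dtype=np.uint8)
    mid = height // 2                      # row of the outer/inner corner points

    for c in range(width):
        # distance of this column from the nearest vertical edge
        d = min(c, width - 1 - c)
        if d < corner_cols:
            half = d                       # (2*d + 1) pixels: 1, 3, 5, ...
        else:
            half = mid                     # full height near the center
        mask[mid - half: mid + half + 1, c] = 1
    return mask

if __name__ == "__main__":
    print(build_eye_mask())                # rows of 0/1 tapering to 1 pixel at each corner
```

Printing the result shows a band that narrows to a single pixel at each corner and has more columns than rows, consistent with the design choices stated above.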
Step S102: acquiring a mask image based on the pre-acquired eye image and the mask plate.
In this embodiment, the mask plate is moved over the eye image, and the pixels of the mask plate are compared with the pixel values of the eye image to obtain the valid pixels in the eye image; the set of valid pixels of each corner point forms the valid pixel region corresponding to that corner point in the eye image.
Then, thresholding is applied to the valid pixels of the eye image to obtain a mask image. Because this mask image still contains a large number of pseudo features, it must be corrected so that the final mask image contains only a small number of valid pixels.
Step S103: acquiring the position information of strong corner points from the eye image according to the mask image.
This step can obtain the position information of the strong corner points in two ways.
In a first embodiment, the first way is: acquiring, from the eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixels of the mask image and a preset strong corner algorithm. That is, the positions of the valid pixels in the eye image are determined from the pixel positions of the valid pixels in the mask image, and the preset strong corner algorithm is then applied to the pixels at those positions in the eye image to determine the position information of the strong corner points in the eye image.
In a second embodiment, based on the first, the second way additionally removes pseudo features from the mask image before the strong corner positions are obtained from the eye image. Specifically, the mask image is processed with the predetermined strong corner algorithm to compute its response function values, and the response values are compared with a preset threshold to decide whether each valid pixel of the mask image is a strong corner; the valid pixels are thereby screened further and a large number of pseudo corners are rejected, yielding the positions of the strong corners in the mask image. The strong corners of this de-falsified mask image are then used as its valid pixels, and together with the preset strong corner algorithm the position information of the strong corner points of the eye image is obtained from the eye image.
In both ways, acquiring the position information of the strong corner points corresponding to the feature points from the eye image with the preset strong corner algorithm comprises: for the respective rough positions of the outer, inner, highest and lowest corner points of the eye, computing corner response function values with the preset strong corner algorithm (e.g. the Harris algorithm), defining the local range of non-maximum suppression according to the mask image and performing non-maximum suppression, and finally determining the strong corner points of the eye image according to whether the response function value of a pixel meets a preset corner threshold.
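One possible realization of this step, assuming the Harris response mentioned above, is sketched below: the response is computed over the eye image, restricted to the valid pixels of the mask image, locally non-maximum-suppressed, and compared with a corner threshold. The window size, the Harris parameter k, the suppression radius and the threshold ratio are illustrative assumptions, not values given in the patent.

```python
import numpy as np
import cv2

def strong_corners(eye_gray, mask_img, nms_radius=3, k=0.04, thresh_ratio=0.01):
    """Harris response restricted to the valid pixels of the mask image,
    followed by local non-maximum suppression and a corner threshold.
    mask_img is assumed to be a binary image of the same size as eye_gray."""
    response = cv2.cornerHarris(np.float32(eye_gray), blockSize=3, ksize=3, k=k)
    response[mask_img == 0] = 0            # keep responses only at valid mask pixels

    corners = []
    thresh = thresh_ratio * response.max() if response.max() > 0 else 0
    ys, xs = np.nonzero(response > thresh)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - nms_radius), y + nms_radius + 1
        x0, x1 = max(0, x - nms_radius), x + nms_radius + 1
        if response[y, x] >= response[y0:y1, x0:x1].max():
            corners.append((x, y))         # local maximum above the threshold
    return corners
```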
Step S104: acquiring the positions of the feature points of the human eye in the eye image based on the rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points.
In this embodiment, the respective rough positions of the outer, inner, highest and lowest corner points can be obtained with various existing techniques. For example, the face position can first be obtained with a Haar-feature face detection algorithm, the facial landmarks can then be obtained with the AAM (Active Appearance Model) algorithm, yielding the rough position information of the eight feature points of the left and right eyes as well as the pupil centers. Alternatively, other methods can be used, for example training cascaded eye-corner detectors with adaBoost (an iterative algorithm) to locate the eye corners (i.e. the outer corner points), obtaining the position information of the inner and outer corner points from the classifiers, and then obtaining the positions of the highest and lowest corner points by back projection.
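The coarse stage itself is left to existing techniques (Haar-feature face detection, AAM, cascaded adaBoost corner detectors). Purely to illustrate such a coarse stage, the sketch below uses the stock Haar cascades shipped with OpenCV to find a face and an eye rectangle and derives rough corner positions and a rough pupil center from that rectangle; it is a simplified stand-in for the AAM/adaBoost variants named above, not the patented step itself.

```python
import cv2

def rough_eye_landmarks(image_bgr):
    """Very coarse eye localization via OpenCV's stock Haar cascades.
    Returns a rough pupil center and rough corner positions derived from
    the detected eye rectangle (which edge is 'outer' vs 'inner' depends
    on which eye was detected)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            cx, cy = fx + ex + ew // 2, fy + ey + eh // 2   # rough pupil center
            return {"pupil": (cx, cy),
                    "corner_left": (fx + ex, cy),           # left edge of eye box
                    "corner_right": (fx + ex + ew, cy),     # right edge of eye box
                    "top": (cx, fy + ey),
                    "bottom": (cx, fy + ey + eh)}
    return None
```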
In this embodiment, for each feature point a line is drawn between the pupil center and the rough position of that feature point; according to the point-to-line distances between this line and the strong corner points, the best pixel is selected from the strong corner points and its position is taken as the position of the feature point, the strong corner point with the smallest distance being the feature point of the eye.
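For reference, writing the line through the pupil center and a rough feature position in slope-intercept form y = kx + b (as in the detailed embodiment below), the point-to-line distance used for a candidate strong corner point (x_c, y_c) is the standard formula:

```latex
L \;=\; \frac{\lvert k\,x_c - y_c + b \rvert}{\sqrt{k^{2} + 1}}
```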
Compared with the prior art, in this embodiment the selected feature points of the eye include the outer, inner, highest and lowest corner points, and the mask plate is established according to the symmetry of these four corner points, so that its shape is closer to the shape of the eye. A mask image is then obtained from the eye image and the mask plate, strong corner points are obtained from the eye image according to the mask image, and the best pixel is selected according to the rough positions of the feature points and the positions of the strong corner points; its position is taken as the position of the feature point, thereby accomplishing the positioning of the eye feature points. Because this embodiment establishes the mask plate from several corner points and the mask plate is smaller than the eye image, the feature points inside the spectacle frame can still be obtained through the mask plate even under external interference or when the person wears glasses, so that the feature points of the eye can be positioned accurately.
In a preferred embodiment, as shown in Fig. 4 and based on the embodiment of Fig. 1, step S102 comprises:
Step S1021: moving the mask plate over the eye image while comparing pixel values, and obtaining the valid pixels of the eye image according to the comparison results;
Step S1022: counting the valid pixels and applying thresholding and non-maximum suppression to obtain a mask image;
Step S1023: calculating the center position and the centroid position of the mask image, and correcting the mask image according to the center position and the centroid position.
In this embodiment the mask plate and the eye image use the same coordinate system, so that their pixel positions correspond. The mask plate is moved over the eye image, and during the movement all pixel values of the eye image under the valid pixels of the mask plate are compared with the pixel value at each candidate feature position of the eye image. Here I(i) is the pixel value of the eye image at the position corresponding to the i-th valid pixel of the mask plate, I(o) is the pixel value of the eye image at the position of the feature point currently being evaluated, and C(i) is the comparison result for the i-th valid pixel:
C(i) = 1 if |I(i) - I(o)| <= t, and C(i) = 0 otherwise,
where i ranges over the pixel positions of the mask plate and t is a pixel-difference threshold; a smaller t is generally chosen for regions of relatively low contrast, and conversely a larger t can be chosen. In this embodiment, a pixel with C(i) = 1 is a valid pixel.
The valid pixels are then summed, i.e.
n(o) = sum over i of C(i).
Then thresholding and non-maximum suppression are applied to obtain the value mask(o) of the mask image at the position of the current feature point:
mask(o) = 1 if n(o) >= g and n(o) is a local maximum, and mask(o) = 0 otherwise,
where g is a threshold in the interval [1, 55].
The mask image mask(o) obtained in this way contains a large number of pseudo features, so it must be corrected according to its center position and its centroid position. The centroid is calculated as
x0 = (1/m) * sum of the x coordinates of the pixel points, y0 = (1/m) * sum of the y coordinates of the pixel points,
where (x0, y0) is the centroid, x is the abscissa, y is the ordinate, and m is the total number of pixel points of the mask image mask(o); the center position can be calculated with prior-art methods.
A threshold g is set (g being a number of valid pixels); when the distance between the center position and the centroid position of the mask image mask(o) is greater than this threshold, the mask image mask(o) is corrected.
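Taken together, the comparison C(i), the sum n(o), the thresholding/non-maximum suppression and the centroid check describe a small pipeline. The sketch below follows that reading; the suppression radius, the thresholds t and g, and what a flagged mask image's correction looks like are assumptions of the sketch rather than values fixed by the patent.

```python
import numpy as np

def susan_like_mask(eye_gray, mask_plate, t=15, g=20, nms_radius=2):
    """Slide the mask plate over the eye image. At each placement, C(i) = 1
    for every valid mask pixel whose intensity differs from the pixel under
    the mask center I(o) by at most t; n(o) is the sum of C(i). After
    thresholding (n(o) >= g) and local non-maximum suppression, mask(o) = 1
    at the corresponding center position of a full-size mask image."""
    H, W = eye_gray.shape
    h, w = mask_plate.shape
    img = eye_gray.astype(np.int32)
    counts = np.zeros((H, W), dtype=np.int32)

    for top in range(H - h + 1):
        for left in range(W - w + 1):
            window = img[top:top + h, left:left + w]
            center_val = window[h // 2, w // 2]                         # I(o)
            c = (np.abs(window - center_val) <= t) & (mask_plate == 1)  # C(i)
            counts[top + h // 2, left + w // 2] = int(c.sum())          # n(o)

    mask_img = np.zeros((H, W), dtype=np.uint8)
    ys, xs = np.nonzero(counts >= g)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - nms_radius), y + nms_radius + 1
        x0, x1 = max(0, x - nms_radius), x + nms_radius + 1
        if counts[y, x] >= counts[y0:y1, x0:x1].max():                  # non-maximum suppression
            mask_img[y, x] = 1                                          # mask(o) = 1
    return mask_img

def centroid_check(mask_img, dist_thresh=3.0):
    """Compare the geometric center of the mask image with the centroid
    (x0, y0) of its valid pixels; a gap larger than dist_thresh signals
    pseudo features and means the mask image should be corrected."""
    ys, xs = np.nonzero(mask_img)
    if len(xs) == 0:
        return False
    x0, y0 = xs.mean(), ys.mean()                                       # centroid over the valid pixels
    cx, cy = (mask_img.shape[1] - 1) / 2.0, (mask_img.shape[0] - 1) / 2.0
    return float(np.hypot(x0 - cx, y0 - cy)) > dist_thresh              # True -> correct mask_img
```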
In a preferred embodiment, as shown in Fig. 5 and based on the embodiment of Fig. 4, step S104 comprises:
Step S1041: calculating the center position of the pupil based on the rough position information of the feature points acquired from the eye image in advance;
Step S1042: obtaining the distance between each strong corner point and the line through the corresponding feature point and the pupil center, and taking the position of the strong corner point with the smallest distance as the position of the feature point of the eye in the eye image.
In this embodiment, after the strong corner points have been obtained, relatively few candidate pixels remain, and the optimal corner point is next extracted from these candidates as the eye feature point. This comprises: obtaining the pupil center from the feature image, and selecting the feature point of the eye according to the distance relation between each strong corner point and the line joining the pupil center to the respective rough feature position.
In this embodiment, the pupil position is known from the above AAM algorithm. A line equation is established between the pupil center and each rough feature position, e.g. the line through the pupil center and the highest corner point is y = kx + b. The distance L from a strong corner point to this line serves as the weight of the candidate eye feature point: the larger L is, the lower the weight, and the candidate can be removed; the smaller L is, the higher the weight, and the corresponding strong corner point is retained.
The difference between a retained strong corner point's distance L to the pupil and the eye width is then calculated; a strong corner point whose difference lies within a certain error margin becomes the best strong corner point, and its position is obtained. If there is no best strong corner point, the corresponding rough position is used as the optimal eye feature point.
In this embodiment, the rough positions of the inner corner points of the left and right eyes are obtained and the distance d between them is calculated; the width of the eye can be calculated from this value.
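Under the reading of this selection step given above, one hypothetical implementation is sketched below: candidates far from the pupil-to-rough-position line are discarded, the survivor whose distance to the pupil best matches the eye width is kept, and the rough position is used as a fallback when no candidate qualifies. The tolerances and the scale factor relating the inner-corner distance d to the eye width are assumptions of the sketch.

```python
import math

def point_line_distance(pt, p1, p2):
    """Distance from pt to the line through p1 and p2 (pupil center and the
    rough feature position)."""
    (x, y), (x1, y1), (x2, y2) = pt, p1, p2
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den if den else math.hypot(x - x1, y - y1)

def select_feature_point(candidates, rough_pt, pupil, eye_width,
                         line_tol=3.0, width_tol=5.0):
    """Pick the best strong corner for one feature point: keep candidates
    close to the pupil->rough-position line, require the distance to the
    pupil to differ from the reference eye width by at most width_tol,
    and fall back to the rough position if nothing qualifies."""
    retained = [c for c in candidates
                if point_line_distance(c, pupil, rough_pt) <= line_tol]
    best, best_diff = None, None
    for cx, cy in retained:
        d_pupil = math.hypot(cx - pupil[0], cy - pupil[1])
        diff = abs(d_pupil - eye_width)
        if diff <= width_tol and (best_diff is None or diff < best_diff):
            best, best_diff = (cx, cy), diff
    return best if best is not None else rough_pt

def eye_width_from_inner_corners(left_inner, right_inner, scale=0.5):
    """Eye width derived from the rough inner-corner distance d of the two
    eyes; the patent only says the width can be calculated from d, so the
    scale factor here is an assumption."""
    d = math.hypot(right_inner[0] - left_inner[0], right_inner[1] - left_inner[1])
    return scale * d
```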
The present invention further provides a human eye positioning device. Referring to Fig. 6, in one embodiment the device comprises:
An establishing module 101, configured to establish a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point.
In this embodiment, when the input eye image is analyzed, the position of each eye is described by four feature points. As shown in Fig. 2, taking the right eye as an example, the outer corner point is 01, the inner corner point is 02, the highest corner point is 03 and the lowest corner point is 04; the left eye is designed and processed in the same way as the right eye. The present invention aims to position these four feature points accurately.
In this embodiment the mask plate is designed according to the physical shape of the human eye, i.e. its symmetry. Specifically, a symmetric template window corresponding to the shape of the eye is built from pixels, in which the number of pixels at the positions corresponding to the outer and inner corner points forms an arithmetic progression from the edge toward the center. The completed template window is the mask plate, and the area of the mask plate is smaller than the area of the eye image.
As shown in Fig. 3, the mask plate of this embodiment is approximately rhombic (the shaded area) and corresponds to the shape of the eye; 01, 02, 03 and 04 correspond to the outer, inner, highest and lowest corner points respectively. The mask plate is a binary image of a preset, fixed size; the relative positions of the four corner points within the template are fixed and conform to the shape of the eye. The region enclosed by the four corner points is the valid pixel region with pixel value 1, and the pixel value outside this region is 0, so that the valid pixels remove the interference of non-eye regions in the eye image. In the mask plate, the number of pixels in the region corresponding to the outer corner point forms an arithmetic progression from the edge of the mask plate at the outer corner position toward the center of the mask plate (the part framed by the dashed line), and the number of pixels in the region corresponding to the inner corner point likewise forms an arithmetic progression from the edge at the inner corner position toward the center. For the outer corner point 01 (or the inner corner point) of the right eye, the first column has 1 pixel, the second column has 3 pixels and the third column has 5 pixels, forming an arithmetic progression; the first column is closest to the edge at the outer corner position, and the third column is closest to the center of the mask plate (i.e. the pupil position on the mask plate). For the highest corner point (or the lowest corner point), the numbers of pixels per row may or may not form an arithmetic progression. In addition, when designing the mask plate, the number of pixel columns is made greater than the number of pixel rows, mainly because the arc of the eye at the highest (or lowest) corner point is greater than the arc at the outer (or inner) corner point.
In this embodiment, a single mask plate containing the outer, inner, highest and lowest corner points may be established, or a separate mask plate may be established for each corner point, e.g. a mask plate for the outer corner point, one for the inner corner point, one for the highest corner point and one for the lowest corner point, four mask plates in total, which together correspond to the shape of the eye when merged. Likewise, in the two mask plates corresponding to the outer and inner corner points, the number of pixels forms an arithmetic progression from the edge of the mask plate at the corner position toward the center.
A first acquisition module 102, configured to acquire a mask image based on the pre-acquired eye image and the mask plate.
In this embodiment, the mask plate is moved over the eye image, and the pixels of the mask plate are compared with the pixel values of the eye image to obtain the valid pixels in the eye image; the set of valid pixels of each corner point forms the valid pixel region corresponding to that corner point in the eye image.
Then, thresholding is applied to the valid pixels of the eye image to obtain a mask image. Because this mask image still contains a large number of pseudo features, it must be corrected so that the final mask image contains only a small number of valid pixels.
A strong corner acquisition module 103, configured to acquire the position information of strong corner points from the eye image according to the mask image.
The strong corner acquisition module 103 can obtain the position information of the strong corner points in two ways.
In a first embodiment, the first way is: acquiring, from the eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixels of the mask image and a preset strong corner algorithm. That is, the positions of the valid pixels in the eye image are determined from the pixel positions of the valid pixels in the mask image, and the preset strong corner algorithm is then applied to the pixels at those positions in the eye image to determine the position information of the strong corner points in the eye image.
In a second embodiment, based on the first, the second way is: the strong corner acquisition module 103 first removes pseudo features from the mask image. Specifically, the mask image is processed with the predetermined strong corner algorithm to compute its response function values, and the response values are compared with a preset threshold to decide whether each valid pixel of the mask image is a strong corner; the valid pixels are thereby screened further and a large number of pseudo corners are rejected, yielding the positions of the strong corners in the mask image. The strong corners of this de-falsified mask image are then used as its valid pixels, and together with the preset strong corner algorithm the position information of the strong corner points of the eye image is obtained from the eye image.
In both ways, acquiring the position information of the strong corner points corresponding to the feature points from the eye image with the preset strong corner algorithm comprises: for the respective rough positions of the outer, inner, highest and lowest corner points of the eye, computing corner response function values with the Harris algorithm, defining the local range of non-maximum suppression according to the mask image and performing non-maximum suppression, and finally determining the strong corner points of the eye image according to whether the response function value of a pixel meets a preset corner threshold.
A second acquisition module 104, configured to acquire the positions of the feature points of the human eye in the eye image based on the rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points.
In this embodiment, the respective rough positions of the outer, inner, highest and lowest corner points can be obtained with various existing techniques. For example, the face position can first be obtained with a Haar-feature face detection algorithm, the facial landmarks can then be obtained with the AAM (Active Appearance Model) algorithm, yielding the rough position information of the eight feature points of the left and right eyes as well as the pupil centers. Alternatively, other methods can be used, for example training cascaded eye-corner detectors with adaBoost (an iterative algorithm) to locate the eye corners (i.e. the outer corner points), obtaining the position information of the inner and outer corner points from the classifiers, and then obtaining the positions of the highest and lowest corner points by back projection.
In this embodiment, for each feature point a line is drawn between the pupil center and the rough position of that feature point; according to the point-to-line distances between this line and the strong corner points, the best pixel is selected from the strong corner points and its position is taken as the position of the feature point, the strong corner point with the smallest distance being the feature point of the eye.
In a preferred embodiment, as shown in Fig. 7 and based on the embodiment of Fig. 6, the first acquisition module 102 comprises:
a comparison unit 1021, configured to move the mask plate over the eye image while comparing pixel values and to obtain the valid pixels of the eye image according to the comparison results;
a processing unit 1022, configured to count the valid pixels and apply thresholding and non-maximum suppression to obtain a mask image;
a correction unit 1023, configured to calculate the center position and the centroid position of the mask image and to correct the mask image according to the center position and the centroid position.
In this embodiment the mask plate and the eye image use the same coordinate system, so that their pixel positions correspond. The mask plate is moved over the eye image, and during the movement all pixel values of the eye image under the valid pixels of the mask plate are compared with the pixel value at each candidate feature position of the eye image. Here I(i) is the pixel value of the eye image at the position corresponding to the i-th valid pixel of the mask plate, I(o) is the pixel value of the eye image at the position of the feature point currently being evaluated, and C(i) is the comparison result for the i-th valid pixel:
C(i) = 1 if |I(i) - I(o)| <= t, and C(i) = 0 otherwise,
where i ranges over the pixel positions of the mask plate and t is a pixel-difference threshold; a smaller t is generally chosen for regions of relatively low contrast, and conversely a larger t can be chosen. In this embodiment, a pixel with C(i) = 1 is a valid pixel.
The valid pixels are then summed, i.e.
n(o) = sum over i of C(i).
Then thresholding and non-maximum suppression are applied to obtain the value mask(o) of the mask image at the position of the current feature point:
mask(o) = 1 if n(o) >= g and n(o) is a local maximum, and mask(o) = 0 otherwise,
where g is a threshold in the interval [1, 55].
The mask image mask(o) obtained in this way contains a large number of pseudo features, so it must be corrected according to its center position and its centroid position. The centroid is calculated as
x0 = (1/m) * sum of the x coordinates of the pixel points, y0 = (1/m) * sum of the y coordinates of the pixel points,
where (x0, y0) is the centroid, x is the abscissa, y is the ordinate, and m is the total number of pixel points of the mask image mask(o); the center position can be calculated with prior-art methods.
A threshold g is set (g being a number of valid pixels); when the distance between the center position and the centroid position of the mask image mask(o) is greater than this threshold, the mask image mask(o) is corrected.
In a preferred embodiment, as shown in Fig. 8 and based on the embodiment of Fig. 6, the second acquisition module 104 comprises:
a calculation unit 1041, configured to calculate the center position of the pupil based on the rough position information of the feature points acquired from the eye image in advance;
an acquisition unit 1042, configured to obtain the distance between each strong corner point and the line through the corresponding feature point and the pupil center, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the eye in the eye image.
In this embodiment, after the strong corner points have been obtained, relatively few candidate pixels remain, and the optimal corner point is next extracted from these candidates as the eye feature point. This comprises: obtaining the pupil center from the feature image, and selecting the feature point of the eye according to the distance relation between each strong corner point and the line joining the pupil center to the respective rough feature position.
In this embodiment, the pupil position is known from the above AAM algorithm. A line equation is established between the pupil center and each rough feature position, e.g. the line through the pupil center and the highest corner point is y = kx + b. The distance L from a strong corner point to this line serves as the weight of the candidate eye feature point: the larger L is, the lower the weight, and the candidate can be removed; the smaller L is, the higher the weight, and the corresponding strong corner point is retained.
The difference between a retained strong corner point's distance L to the pupil and the eye width is then calculated; a strong corner point whose difference lies within a certain error margin becomes the best strong corner point, and its position is obtained. If there is no best strong corner point, the corresponding rough position is used as the optimal eye feature point.
In this embodiment, the rough positions of the inner corner points of the left and right eyes are obtained and the distance d between them is calculated; the width of the eye can be calculated from this value.
The above are only preferred embodiments of the present invention and do not limit the scope of the invention. Any equivalent structural or procedural transformation made using the contents of the specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (8)
1. A human eye positioning method, characterized in that the method comprises the following steps:
establishing a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point;
acquiring a mask image based on a pre-acquired eye image and the mask plate;
acquiring position information of strong corner points from the eye image according to the mask image;
acquiring the positions of the feature points of the human eye in the eye image based on rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points;
wherein the step of acquiring a mask image based on the pre-acquired eye image and the mask plate comprises:
moving the mask plate over the eye image while comparing pixel values, and obtaining the valid pixels of the eye image according to the comparison results;
counting the valid pixels and applying thresholding and non-maximum suppression to obtain a mask image;
calculating the center position and the centroid position of the mask image, and correcting the mask image according to the center position and the centroid position.
2. The human eye positioning method according to claim 1, characterized in that the step of establishing a mask plate according to the symmetry of the feature points of the human eye comprises:
establishing a mask plate corresponding to the shape of the human eye, wherein the number of pixels at the position corresponding to the outer corner point forms an arithmetic progression from the edge of the mask plate at the outer corner position toward the center of the mask plate, and the number of pixels at the position corresponding to the inner corner point forms an arithmetic progression from the edge of the mask plate at the inner corner position toward the center of the mask plate.
3. The human eye positioning method according to claim 1, characterized in that the step of acquiring position information of strong corner points from the eye image according to the mask image comprises:
acquiring, from the eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixels of the mask image and a preset strong corner algorithm.
4. The human eye positioning method according to claim 1, characterized in that the step of acquiring the positions of the feature points of the human eye in the eye image based on the rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points comprises:
calculating the center position of the pupil based on the rough position information of the feature points acquired from the eye image in advance;
obtaining the distance between each strong corner point and the line through the corresponding feature point and the pupil center, and taking the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the eye image.
5. A human eye positioning device, characterized in that the device comprises:
an establishing module, configured to establish a mask plate according to the symmetry of the feature points of the human eye, the feature points comprising an outer corner point, an inner corner point, a highest corner point and a lowest corner point;
a first acquisition module, configured to acquire a mask image based on a pre-acquired eye image and the mask plate, the first acquisition module comprising:
a comparison unit, configured to move the mask plate over the eye image while comparing pixel values and to obtain the valid pixels of the eye image according to the comparison results;
a processing unit, configured to count the valid pixels and apply thresholding and non-maximum suppression to obtain a mask image;
a correction unit, configured to calculate the center position and the centroid position of the mask image and to correct the mask image according to the center position and the centroid position;
a strong corner acquisition module, configured to acquire position information of strong corner points from the eye image according to the mask image;
a second acquisition module, configured to acquire the positions of the feature points of the human eye in the eye image based on rough position information of the feature points acquired from the eye image in advance and the position information of the strong corner points.
6. The human eye positioning device according to claim 5, characterized in that the establishing module is further configured to establish a mask plate corresponding to the shape of the human eye, wherein the number of pixels at the position corresponding to the outer corner point forms an arithmetic progression from the edge of the mask plate at the outer corner position toward the center of the mask plate, and the number of pixels at the position corresponding to the inner corner point forms an arithmetic progression from the edge of the mask plate at the inner corner position toward the center of the mask plate.
7. The human eye positioning device according to claim 5, characterized in that the strong corner acquisition module is specifically configured to acquire, from the eye image, the position information of the strong corner points corresponding to the feature points according to the valid pixels of the mask image and a preset strong corner algorithm.
8. The human eye positioning device according to claim 5, characterized in that the second acquisition module comprises:
a calculation unit, configured to calculate the center position of the pupil based on the rough position information of the feature points acquired from the eye image in advance;
an acquisition unit, configured to obtain the distance between each strong corner point and the line through the corresponding feature point and the pupil center, and to take the position of the strong corner point with the smallest distance as the position of the feature point of the human eye in the eye image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410837111.8A CN105809085B (en) | 2014-12-29 | 2014-12-29 | Human eye positioning method and device |
PCT/CN2014/095742 WO2016106617A1 (en) | 2014-12-29 | 2014-12-31 | Eye location method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410837111.8A CN105809085B (en) | 2014-12-29 | 2014-12-29 | Human eye positioning method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105809085A CN105809085A (en) | 2016-07-27 |
CN105809085B true CN105809085B (en) | 2019-07-26 |
Family
ID=56283892
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201410837111.8A (CN105809085B, Active) | Human eye positioning method and device | 2014-12-29 | 2014-12-29
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105809085B (en) |
WO (1) | WO2016106617A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107240112B (en) * | 2017-06-28 | 2021-06-22 | 北京航空航天大学 | Individual X corner extraction method in complex scene |
CN107808397B (en) * | 2017-11-10 | 2020-04-24 | 京东方科技集团股份有限公司 | Pupil positioning device, pupil positioning method and sight tracking equipment |
CN110443203B (en) * | 2019-08-07 | 2021-10-15 | 中新国际联合研究院 | Confrontation sample generation method of face spoofing detection system based on confrontation generation network |
CN111783621B (en) * | 2020-06-29 | 2024-01-23 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for facial expression recognition and model training |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100452081C (en) * | 2007-06-01 | 2009-01-14 | 华南理工大学 | Human eye positioning and human eye state recognition method |
EP2701122B1 (en) * | 2011-04-19 | 2020-02-26 | Aisin Seiki Kabushiki Kaisha | Eyelid detection device, eyelid detection method, and program |
CN102831399A (en) * | 2012-07-30 | 2012-12-19 | 华为技术有限公司 | Method and device for determining eye state |
CN103136512A (en) * | 2013-02-04 | 2013-06-05 | 重庆市科学技术研究院 | Pupil positioning method and system |
- 2014-12-29: CN application CN201410837111.8A filed (granted as CN105809085B, status Active)
- 2014-12-31: PCT application PCT/CN2014/095742 filed (published as WO2016106617A1, status Application Filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120308124A1 (en) * | 2011-06-02 | 2012-12-06 | Kriegman-Belhumeur Vision Technologies, Llc | Method and System For Localizing Parts of an Object in an Image For Computer Vision Applications |
CN103514452A (en) * | 2013-07-17 | 2014-01-15 | 浙江大学 | Method and device for detecting shape of fruit |
CN103839050A (en) * | 2014-02-28 | 2014-06-04 | 福州大学 | ASM positioning algorithm based on feature point expansion and PCA feature extraction |
CN104063700A (en) * | 2014-07-04 | 2014-09-24 | 武汉工程大学 | Method for locating central points of eyes in natural lighting front face image |
Non-Patent Citations (2)

Title
---|
Cui Xu et al., "Semantic feature extraction for accurate eye corner detection", 19th International Conference on Pattern Recognition, 2008-12-11, pp. 1-4 *
Huang Zhiwu et al., "Human eye feature detection method based on near-infrared light and deformable templates" (基于近红外光和可变形模板的人眼特征检测方法), Computer Knowledge and Technology (电脑知识与技术), 2008-05-20, pp. 1300-1308 *
Also Published As
Publication number | Publication date |
---|---|
WO2016106617A1 (en) | 2016-07-07 |
CN105809085A (en) | 2016-07-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |