
CN103218605B - A fast human-eye positioning method based on integral projection and edge detection - Google Patents

A fast human-eye positioning method based on integral projection and edge detection Download PDF

Info

Publication number
CN103218605B
CN103218605B, CN201310119843.9A, CN201310119843A
Authority
CN
China
Prior art keywords
image
gray
point
pixel
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310119843.9A
Other languages
Chinese (zh)
Other versions
CN103218605A (en)
Inventor
路小波
陈伍军
曾维理
杜一君
祁慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201310119843.9A priority Critical patent/CN103218605B/en
Publication of CN103218605A publication Critical patent/CN103218605A/en
Application granted granted Critical
Publication of CN103218605B publication Critical patent/CN103218605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fast human-eye positioning method based on integral projection and edge detection. It mainly comprises the following steps. First, the detected face image is converted to gray scale and smoothed and denoised with a filter. Second, the approximate eye position is obtained by horizontal integral projection. Third, edges are extracted from the image of the first step and the image is binarized. Fourth, row and column complexity functions are computed to locate the eye positions precisely. Fifth, the result is corrected to obtain the final eye positions. The method is computationally fast and effectively suppresses the influence of ornaments in the face image on eye localization; it has the advantage of being fast and stable.

Description

A fast human-eye positioning method based on integral projection and edge detection
Technical field
The invention belongs to the feature-point positioning methods in pattern recognition, and in particular relates to a fast human-eye positioning method based on integral projection and edge detection: a fast and convenient solution to the eye localization problem in an input image, capable of locating the eyes quickly and effectively.
Background technology
Computer face recognition has been a very active research field in recent years. Its applications are wide, including sex and age analysis, identity verification in security systems, expression analysis, and video conferencing. It mainly comprises the steps of face detection, feature location and extraction, and feature recognition. As a key feature of the face, whether the eyes can be located accurately has a tremendous influence on the results of feature extraction and feature recognition.
Based on the gray-level information and the edge information of the image, a fast and effective human-eye detection algorithm is proposed herein. The algorithm can locate the eyes quickly and suppresses well the influence of illumination and jewelry on the positioning result.
Summary of the invention
The invention provides a concise and highly accurate fast human-eye positioning method based on integral projection and edge detection.
To achieve this goal, the invention adopts the following technical scheme:
Step 1: initialization; read in a captured image I1 containing a face.
Step 2: apply the Adaboost algorithm to the captured digital image I1 to perform face detection, and extract the face image I2 from it.
Step 3: pre-process the face image I2 acquired in Step 2 as follows:
Step 3.1: convert the face image I2 into its gray-level image, and normalize the gray-level image to a W×H image I3, where W and H are positive integers denoting the number of rows and columns of I3.
Step 3.2: smooth and denoise the W×H image I3 with a Gaussian filter. Concretely: first, the gray value of each boundary pixel of the smoothed, denoised image I4 is set equal to the gray value of the corresponding boundary pixel of I3; secondly, for every non-boundary pixel of I3, take the 3×3 Gaussian template centered on that pixel and obtain the gray value of the center pixel, that is:

g4(x,y) = {g3(x-1,y-1) + g3(x-1,y+1) + g3(x+1,y-1) + g3(x+1,y+1) + 2·[g3(x-1,y) + g3(x,y-1) + g3(x+1,y) + g3(x,y+1)] + 4·g3(x,y)} / 16

In this formula, y denotes the row coordinate and x the column coordinate; g3(x,y) is the gray value of point (x,y) in image I3; g3(x-1,y-1), g3(x-1,y+1), g3(x+1,y-1) and g3(x+1,y+1) are the gray values of the lower-left, upper-left, lower-right and upper-right corner points of the 3×3 grid centered on (x,y); g3(x-1,y), g3(x,y-1), g3(x+1,y) and g3(x,y+1) are the gray values of the points immediately to the left of, below, to the right of and above (x,y); g4(x,y) is the gray value at point (x,y) after Gaussian filtering. Traversing all non-boundary pixels of the W×H image I3 yields the processed image I4, whose size is still W×H.
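For concreteness, a minimal NumPy sketch of this smoothing step follows; the function name and the zero-indexed, row-major array convention are ours, not the patent's:

```python
import numpy as np

def gaussian_smooth_3x3(I3: np.ndarray) -> np.ndarray:
    """Smooth a W x H gray image with the 3x3 Gaussian template above;
    boundary pixels keep their original values, as the patent specifies."""
    g = I3.astype(np.float64)
    I4 = g.copy()                       # boundary rows/cols stay equal to I3
    # g4 = (corners + 2 * edge-neighbours + 4 * centre) / 16
    I4[1:-1, 1:-1] = (
        g[:-2, :-2] + g[:-2, 2:] + g[2:, :-2] + g[2:, 2:]
        + 2 * (g[:-2, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] + g[2:, 1:-1])
        + 4 * g[1:-1, 1:-1]
    ) / 16
    return I4
```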
Step 4: apply horizontal integral projection to image I4; the projection formula is as follows:

l(r) = Σ_{s=1}^{H} g4(s, r),  1 ≤ r ≤ W

where l(r) is the horizontal integral projection of row r, g4(s,r) is the gray value at point (s,r) of image I4, W is the image height and H is the image width.
From the W row projections, choose the minimum integral projection value and let c be the row number r at which l(r) attains its minimum; from image I4, select the region whose vertical coordinate lies in the range (c-δ, c+δ) as the candidate region in which the eyes may lie, where δ = ⌊H/12⌋ (H/12 rounded down) is the estimation parameter for the vertical interval in which the eyes may lie.
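A sketch of Step 4 under the same conventions (the eye and eyebrow band tends to be the darkest band of the face, which is why the minimum of the projection is taken; names are ours):

```python
import numpy as np

def locate_eye_band(I4: np.ndarray) -> tuple[int, int]:
    """Return (c, delta): the row of minimum horizontal integral
    projection and the half-height of the candidate eye band."""
    W, H = I4.shape              # W rows (height), H columns (width)
    l = I4.sum(axis=1)           # l(r) = sum of gray values along row r
    c = int(np.argmin(l))        # row where the projection is minimal
    delta = H // 12              # floor(H / 12), per the patent's choice
    return c, delta
```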
Step 5: perform edge extraction on image I4; the specific process is as follows:
Step 5.1: first, the gray value of each boundary point of the image I5 corresponding to the gradient-magnitude matrix is set equal to the gray value of the corresponding boundary point of image I4 before the gradient computation; secondly, for each non-boundary pixel (i,j) of I4, choose the edge detection operators

S_x = | -1  1 |      S_y = |  1   1 |
      | -1  1 |            | -1  -1 |

where S_x is the horizontal-direction edge detection operator and S_y the vertical-direction edge detection operator, and compute the first-order partial derivatives in the horizontal and vertical directions at (i,j), together with the gradient magnitude and gradient direction there:

P(i,j) = [f(i,j+1) - f(i,j) + f(i+1,j+1) - f(i+1,j)] / 2
Q(i,j) = [f(i,j) - f(i+1,j) + f(i,j+1) - f(i+1,j+1)] / 2
M(i,j) = √( P(i,j)² + Q(i,j)² )
θ(i,j) = arctan[ Q(i,j) / P(i,j) ]

In these formulas, f(i,j) denotes the gray value of the non-boundary pixel (i,j) of image I4, i being the horizontal coordinate and j the vertical coordinate of the point; f(i+1,j), f(i,j+1) and f(i+1,j+1) are the gray values of the pixels at the lower-left, upper-right and lower-right corners of the 2×2 grid whose top-left element is the point (i,j); P(i,j) and Q(i,j) are the first-order partial derivatives in the horizontal and vertical directions at (i,j); M(i,j) is the gradient magnitude and θ(i,j) the gradient direction of (i,j).
Step 5.2: discretize each gradient direction value θ(i,j) obtained in Step 5.1 to obtain the new gradient direction value θ'(i,j); take the 3×3 window centered on the non-boundary pixel (i,j) and process the value of M(i,j) as follows:

M'(i,j) = 0,  if M(i,j) < ω1 or M(i,j) < ω2;   M'(i,j) = M(i,j),  otherwise

where ω1 and ω2 are the gradient magnitudes of the two pixels lying along the direction θ'(i,j) within the 3×3 window centered on (i,j), and M'(i,j) is the gradient magnitude after this processing; with the processed magnitudes M'(i,j) as central elements and the boundary-point gray values of image I4 as boundary elements, the resulting matrix yields image I5.
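A sketch of Step 5 under our reading: the 2×2 operators give P, Q, M and θ, and non-maximum suppression along the discretized direction keeps only local ridge maxima. The four-way discretization of θ below is our assumption, since the discretization standard of Fig. 2 is not reproduced here:

```python
import numpy as np

def edge_extract(I4: np.ndarray) -> np.ndarray:
    """Gradient magnitude with non-maximum suppression (Step 5 sketch)."""
    f = I4.astype(np.float64)
    W, H = f.shape
    P = np.zeros_like(f)
    Q = np.zeros_like(f)
    # 2x2 operators: P ~ horizontal derivative, Q ~ vertical derivative
    P[:-1, :-1] = (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]) / 2
    Q[:-1, :-1] = (f[:-1, :-1] - f[1:, :-1] + f[:-1, 1:] - f[1:, 1:]) / 2
    M = np.hypot(P, Q)                 # gradient magnitude M(i,j)
    theta = np.arctan2(Q, P)           # gradient direction theta(i,j)
    I5 = f.copy()                      # boundary keeps I4's gray values
    # Assumed 4-way discretization of theta; Fig. 2 may differ in detail.
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for i in range(1, W - 1):
        for j in range(1, H - 1):
            d = int(np.round(theta[i, j] / (np.pi / 4))) % 4
            di, dj = offsets[d]
            w1 = M[i + di, j + dj]     # neighbour along the direction
            w2 = M[i - di, j - dj]     # neighbour opposite the direction
            I5[i, j] = 0.0 if (M[i, j] < w1 or M[i, j] < w2) else M[i, j]
    return I5
```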
Step 6: binarize the image I5 obtained in Step 5.2; the specific method is as follows:
Step 6.1: determine the threshold by the maximum between-class variance method; the process is as follows:
Count the gray values of all pixels of image I5 and take the highest gray value as the maximum gray level m, m being an integer, so that the gray range of the image consists of the integers in the interval [0, m]. Let N_t be the number of pixels at gray level t; then the total number of pixels is N = Σ_{t=0}^{m} N_t and the probability of each gray level is p_t = N_t / N. An integer k (0 ≤ k ≤ m) divides the gray levels into two groups G1 = {0, 1, …, k} and G2 = {k+1, …, m}, G1 being the group of gray levels less than or equal to k and G2 the group of gray levels greater than k and less than or equal to m. The variance between the two gray-level groups is:

σ²(k) = [ μ·ω(k) - μ(k) ]² / ( ω(k)·[1 - ω(k)] )

where μ = Σ_{t=0}^{m} t·p_t is the mean gray value of the whole image, μ(k) = Σ_{t=0}^{k} t·p_t is the mean of group G1, and ω(k) = Σ_{t=0}^{k} p_t is the probability of group G1. Take each integer in [0, m] in turn as k, compute the corresponding variance σ²(k), select the maximum among the m+1 variance values, and take the k corresponding to the maximum variance as the threshold T.
Step 6.2: binarize image I5 with the threshold T obtained in Step 6.1: pixels of I5 whose gray value is greater than or equal to T have their gray value set to 255, and pixels whose gray value is below T are set to 0, yielding the binary image I6.
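Step 6.1 is the classical maximum between-class variance (Otsu) method; a compact sketch, vectorized over all m+1 candidate thresholds (function name ours):

```python
import numpy as np

def otsu_binarize(I5: np.ndarray) -> np.ndarray:
    """Otsu threshold (Step 6 sketch): maximize
    sigma^2(k) = [mu*w(k) - mu(k)]^2 / (w(k) * (1 - w(k)))."""
    g = np.round(I5).astype(int)
    m = int(g.max())                                       # max gray level
    p = np.bincount(g.ravel(), minlength=m + 1) / g.size   # p_t
    t = np.arange(m + 1)
    w = np.cumsum(p)                    # omega(k): probability of group G1
    mu_k = np.cumsum(t * p)             # mu(k): partial mean up to level k
    mu = mu_k[-1]                       # whole-image mean
    denom = w * (1 - w)
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma2 = np.where(denom > 0, (mu * w - mu_k) ** 2 / denom, 0.0)
    T = int(np.argmax(sigma2))          # k maximizing the variance
    return np.where(g >= T, 255, 0).astype(np.uint8)
```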
Step 7: determine the horizontal and vertical positions of the eyes precisely; the specific process is as follows:
Step 7.1: for the binary image I6, compute the row and column complexity functions:

f_H(p) = Σ_{q=1}^{H} I6(p, q),   c - δ ≤ p ≤ c + δ
f_L(q) = Σ_{p=c-δ}^{c+δ} I6(p, q),   1 ≤ q ≤ H

where f_H(p) is the row complexity function, f_L(q) is the column complexity function, I6(p,q) is the value of the pixel located at (p,q) in image I6, and c, δ are the face-region parameters obtained in Step 4.
Step 7.2: apply a known mean filter separately to the row complexity function f_H(p) and the column complexity function f_L(q), performing one-dimensional low-pass filtering to obtain the new row complexity function f_H'(p) and the new column complexity function f_L'(q); determine the eye position coordinates from the maximum points of f_H'(p) and f_L'(q). The computation yields one maximum point p1 of f_H'(p) and two maximum points q1 and q2 of f_L'(q), giving the pixels (p1, q1) and (p1, q2).
Step 7.3: in the image I4 obtained in Step 3.2, take the 6×6 neighborhoods of the pixels (p1, q1) and (p1, q2) obtained in Step 7.2; the two pixels of minimum gray value within these neighborhoods are set as the eye centers, and the position coordinates of these two minimum-gray pixels are defined as the eye coordinates (μ1, ν1) and (μ2, ν2).
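A sketch of Step 7 reusing the band parameters c and δ from Step 4. The patent does not fix the mean-filter width or how the two maxima of f_L'(q) are picked, so the width of 5 and the one-peak-per-half-face rule below are our assumptions:

```python
import numpy as np

def locate_eyes(I6: np.ndarray, I4: np.ndarray, c: int, delta: int):
    """Step 7 sketch: maxima of the smoothed complexity functions,
    then a 6x6 gray-minimum refinement in I4."""
    band = slice(max(c - delta, 0), c + delta + 1)
    fH = I6[band, :].sum(axis=1)                # row complexity f_H(p)
    fL = I6[band, :].sum(axis=0)                # column complexity f_L(q)
    kernel = np.ones(5) / 5                     # mean filter (width 5 assumed)
    fH2 = np.convolve(fH, kernel, mode="same")  # f_H'(p)
    fL2 = np.convolve(fL, kernel, mode="same")  # f_L'(q)
    p1 = band.start + int(np.argmax(fH2))       # eye row p1
    half = len(fL2) // 2                        # assume one eye per half face
    q1 = int(np.argmax(fL2[:half]))             # left-eye column q1
    q2 = half + int(np.argmax(fL2[half:]))      # right-eye column q2
    eyes = []
    for q in (q1, q2):                          # 6x6 neighbourhood refinement
        r0, c0 = max(p1 - 3, 0), max(q - 3, 0)
        patch = I4[r0:r0 + 6, c0:c0 + 6]
        dr, dc = np.unravel_index(int(np.argmin(patch)), patch.shape)
        eyes.append((r0 + int(dr), c0 + int(dc)))  # eye centre (mu, nu)
    return eyes
```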
Compared with the prior art, the features of the invention are:
First, the algorithm applies integral projection to the image before precisely locating the eyes. This roughly determines the region in which the eyes may lie and eliminates the interference brought by other facial regions such as the mouth and nose, which greatly reduces the computation time and also improves the final eye-positioning result. In addition, because the eye region varies in a more complicated way and its edges vary more strongly than the other facial organs, the edge detection approach identifies the eye region simply and effectively.
Brief description of the drawings
Fig. 1 is the flow chart of the fast human-eye location algorithm.
Fig. 2 is the standard diagram for the discretization of gradient direction angles.
Fig. 3 is the horizontal integral projection diagram of a face image.
Fig. 4 shows an eye region and the edge detection result of that eye region.
Fig. 5 is a schematic diagram of the row complexity function of the eye-region edge detection result.
Fig. 6 is a schematic diagram of the column complexity function of the eye-region edge detection result.
Fig. 7 is a schematic diagram of the row complexity function of the eye-region edge detection result after low-pass filtering.
Fig. 8 is a schematic diagram of the column complexity function of the eye-region edge detection result after low-pass filtering.
Embodiment
In this particular embodiment, the implementation of the fast human-eye location algorithm is described clearly and fully with reference to the accompanying drawings.
A fast human-eye positioning method, characterized in that it is carried out according to the following steps:
Step 1: initialization; read in a captured image I1 containing a face.
Step 2: apply the Adaboost algorithm to the captured digital image I1 to perform face detection, and extract the face image I2 from it.
Step 3: pre-process the face image I2 acquired in Step 2 as follows:
Step 3.1: convert the face image I2 into its gray-level image using the standard luminance weighting

Y = 0.299·R + 0.587·G + 0.114·B

where Y is the brightness value of the processed pixel and R, G, B are the relative intensities of the three primary colors; then normalize the gray-level image to a W×H image I3, where W and H are positive integers denoting the number of rows and columns of I3.
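A one-line sketch of this conversion, assuming the BT.601 weights given above and an RGB array with the channel axis last:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted sum of the three primaries (BT.601 luma weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
```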
Steps 3.2 through 7.3 then proceed exactly as set forth in the Summary of the invention above: Gaussian smoothing (Step 3.2), horizontal integral projection to find the candidate eye band (Step 4), edge extraction (Step 5, the gradient direction θ(i,j) being discretized according to the discretization standard of Fig. 2), binarization by the maximum between-class variance method (Step 6), and precise eye localization via the row and column complexity functions (Step 7).
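Putting the sketches together, a hedged end-to-end driver: OpenCV's Haar cascade (an Adaboost-based detector) stands in for the Step 2 detector, and the 100×100 normalization size is our choice; it reuses the functions from the sketches above and assumes one detectable face:

```python
import cv2
import numpy as np

def locate_eyes_in_image(path: str):
    """End-to-end driver reusing to_gray, gaussian_smooth_3x3,
    locate_eye_band, edge_extract, otsu_binarize and locate_eyes."""
    I1 = cv2.imread(path)                                    # Step 1
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y, w, h = detector.detectMultiScale(
        cv2.cvtColor(I1, cv2.COLOR_BGR2GRAY))[0]             # Step 2
    rgb = cv2.cvtColor(I1[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    I3 = cv2.resize(to_gray(rgb), (100, 100))                # Step 3, W = H = 100
    I4 = gaussian_smooth_3x3(I3)                             # Step 3.2
    c, delta = locate_eye_band(I4)                           # Step 4
    I5 = edge_extract(I4)                                    # Step 5
    I6 = otsu_binarize(I5)                                   # Step 6
    return locate_eyes(I6, I4, c, delta)                     # Step 7
```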

Claims (1)

1. A fast human-eye positioning method based on integral projection and edge detection, characterized in that it is carried out according to the following steps:
Step 1: initialization; read in a captured image I1 containing a face.
Step 2: apply the Adaboost algorithm to the captured digital image I1 to perform face detection, and extract the face image I2 from it.
Step 3: pre-process the face image I2 acquired in Step 2 as follows:
Step 3.1: convert the face image I2 into its gray-level image, and normalize the gray-level image to a W×H image I3, where W and H are positive integers denoting the number of rows and columns of I3.
Step 3.2: smooth and denoise the W×H image I3 with a Gaussian filter. Concretely: first, the gray value of each boundary pixel of the smoothed, denoised image I4 is set equal to the gray value of the corresponding boundary pixel of I3; secondly, for every non-boundary pixel of I3, take the 3×3 Gaussian template centered on that pixel and obtain the gray value of the center pixel, that is:
g4(x,y) = {g3(x-1,y-1) + g3(x-1,y+1) + g3(x+1,y-1) + g3(x+1,y+1) + 2·[g3(x-1,y) + g3(x,y-1) + g3(x+1,y) + g3(x,y+1)] + 4·g3(x,y)} / 16
In this formula, y denotes the row coordinate and x the column coordinate; g3(x,y) is the gray value of point (x,y) in image I3; g3(x-1,y-1), g3(x-1,y+1), g3(x+1,y-1) and g3(x+1,y+1) are the gray values of the lower-left, upper-left, lower-right and upper-right corner points of the 3×3 grid centered on (x,y); g3(x-1,y), g3(x,y-1), g3(x+1,y) and g3(x,y+1) are the gray values of the points immediately to the left of, below, to the right of and above (x,y); g4(x,y) is the gray value at point (x,y) after Gaussian filtering. Traversing all non-boundary pixels of the W×H image I3 yields the processed image I4, whose size is still W×H.
Step 4: apply horizontal integral projection to image I4; the projection formula is as follows:
l(r) = Σ_{s=1}^{H} g4(s, r),  1 ≤ r ≤ W
where l(r) is the horizontal integral projection of row r, g4(s,r) is the gray value at point (s,r) of image I4, W is the image height and H is the image width.
From the W row projections, choose the minimum integral projection value and let c be the row number r at which l(r) attains its minimum; from image I4, select the region whose vertical coordinate lies in the range (c-δ, c+δ) as the candidate region in which the eyes may lie, where δ = ⌊H/12⌋ (H/12 rounded down) is the estimation parameter for the vertical interval in which the eyes may lie.
Step 5: perform edge extraction on image I4; the specific process is as follows:
Step 5.1: first, the gray value of each boundary point of the image I5 corresponding to the gradient-magnitude matrix is set equal to the gray value of the corresponding boundary point of image I4 before the gradient computation; secondly, for each non-boundary pixel (i,j) of I4, choose the edge detection operators

S_x = | -1  1 |      S_y = |  1   1 |
      | -1  1 |            | -1  -1 |

where S_x is the horizontal-direction edge detection operator and S_y is the vertical-direction edge detection operator, and compute the first-order partial derivatives in the horizontal and vertical directions at the non-boundary pixel (i,j), together with the gradient magnitude and gradient direction there, by the following formulas:
P(i,j) = [f(i,j+1) - f(i,j) + f(i+1,j+1) - f(i+1,j)] / 2
Q(i,j) = [f(i,j) - f(i+1,j) + f(i,j+1) - f(i+1,j+1)] / 2
M(i,j) = √( P(i,j)² + Q(i,j)² )
θ(i,j) = arctan[ Q(i,j) / P(i,j) ]
In these formulas, f(i,j) denotes the gray value of the non-boundary pixel (i,j) of image I4, i being the horizontal coordinate and j the vertical coordinate of the point; f(i+1,j), f(i,j+1) and f(i+1,j+1) are the gray values of the pixels at the lower-left, upper-right and lower-right corners of the 2×2 grid whose top-left element is the point (i,j); P(i,j) and Q(i,j) are the first-order partial derivatives in the horizontal and vertical directions at (i,j); M(i,j) is the gradient magnitude and θ(i,j) the gradient direction of the non-boundary pixel (i,j).
Step 5.2: discretize each θ(i,j) obtained in Step 5.1 to obtain the new gradient direction value θ'(i,j); take the 3×3 window centered on the non-boundary pixel (i,j) and process the value of M(i,j) by the following formula:
M'(i,j) = 0,  if M(i,j) < ω1 or M(i,j) < ω2;   M'(i,j) = M(i,j),  otherwise
where ω1 and ω2 are the gradient magnitudes of the two pixels lying along the direction θ'(i,j) within the 3×3 window centered on the non-boundary pixel (i,j), and M'(i,j) is the gradient magnitude after the processing above; with the processed magnitudes M'(i,j) as central elements and the boundary-point gray values of image I4 as boundary elements, the resulting matrix yields image I5.
Step 6: binarize the image I5 obtained in Step 5.2; the specific method is as follows:
Step 6.1: determine the threshold by the maximum between-class variance method; the process is as follows:
Count the gray values of all pixels of image I5 and take the highest gray value as the maximum gray level m, m being an integer, so that the gray range of the image consists of the integers in the interval [0, m]. Let N_t be the number of pixels at gray level t; then the total number of pixels is N = Σ_{t=0}^{m} N_t and the probability of each gray level is p_t = N_t / N. An integer k (0 ≤ k ≤ m) divides the gray levels into two groups G1 = {0, 1, …, k} and G2 = {k+1, …, m}, G1 being the group of gray levels less than or equal to k and G2 the group of gray levels greater than k and less than or equal to m. The variance between the two gray-level groups is computed by the following formula:
σ²(k) = [ μ·ω(k) - μ(k) ]² / ( ω(k)·[1 - ω(k)] )
where μ = Σ_{t=0}^{m} t·p_t is the mean gray value of the whole image, μ(k) = Σ_{t=0}^{k} t·p_t is the mean of group G1, and ω(k) = Σ_{t=0}^{k} p_t is the probability of group G1; take each integer in the interval [0, m] in turn as the value of k, compute the corresponding variance σ²(k) for each k, select the maximum among the m+1 variance values, and take the k corresponding to the maximum variance as the threshold T.
Step 6.2: binarize image I5 with the threshold T obtained in Step 6.1: the pixels of I5 whose gray value is greater than or equal to T have their gray value set to 255, and the pixels whose gray value is below T are set to 0, yielding the binary image I6.
Step 7: determine the horizontal and vertical positions of the eyes precisely; the specific process is as follows:
Step 7.1: for the binary image I6, compute the row and column complexity functions by the following formulas:
f_H(p) = Σ_{q=1}^{H} I6(p, q),   c - δ ≤ p ≤ c + δ
f_L(q) = Σ_{p=c-δ}^{c+δ} I6(p, q),   1 ≤ q ≤ H
where f_H(p) is the row complexity function, f_L(q) is the column complexity function, I6(p,q) is the value of the pixel located at (p,q) in image I6, and c, δ are the face-region parameters obtained in Step 4.
Step 7.2: apply a mean filter separately to the row complexity function f_H(p) and the column complexity function f_L(q), performing one-dimensional low-pass filtering to obtain the new row complexity function f_H'(p) and the new column complexity function f_L'(q); determine the eye position coordinates from the maximum points of f_H'(p) and f_L'(q). The computation yields one maximum point p1 of f_H'(p) and two maximum points q1 and q2 of f_L'(q), giving the pixels (p1, q1) and (p1, q2).
Step 7.3: in the image I4 obtained in Step 3.2, take the 6×6 neighborhoods of the pixels (p1, q1) and (p1, q2) obtained in Step 7.2; the two pixels of minimum gray value within these neighborhoods are set as the eye centers, and the position coordinates of these two minimum-gray pixels are defined as the eye coordinates (μ1, ν1), (μ2, ν2).
CN201310119843.9A 2013-04-09 2013-04-09 A fast human-eye positioning method based on integral projection and edge detection Active CN103218605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310119843.9A CN103218605B (en) 2013-04-09 2013-04-09 A fast human-eye positioning method based on integral projection and edge detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310119843.9A CN103218605B (en) 2013-04-09 2013-04-09 A fast human-eye positioning method based on integral projection and edge detection

Publications (2)

Publication Number Publication Date
CN103218605A CN103218605A (en) 2013-07-24
CN103218605B true CN103218605B (en) 2016-01-13

Family

ID=48816374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310119843.9A Active CN103218605B (en) 2013-04-09 2013-04-09 A fast human-eye positioning method based on integral projection and edge detection

Country Status (1)

Country Link
CN (1) CN103218605B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617638B (en) * 2013-12-05 2017-03-15 北京京东尚科信息技术有限公司 The method and device of image procossing
CN104484679B (en) * 2014-09-17 2017-12-22 北京邮电大学 Non- standard rifle shooting warhead mark image automatic identifying method
CN106407657A (en) * 2016-08-31 2017-02-15 无锡雅座在线科技发展有限公司 Method and device for capturing event
CN108303420A (en) * 2017-12-30 2018-07-20 上饶市中科院云计算中心大数据研究院 A kind of domestic type sperm quality detection method based on big data and mobile Internet
CN108648206B (en) * 2018-04-28 2022-09-16 成都信息工程大学 Robert edge detection film computing system and method
CN109241862A (en) * 2018-08-14 2019-01-18 广州杰赛科技股份有限公司 Target area determines method and system, computer equipment, computer storage medium
CN109063689B (en) * 2018-08-31 2022-04-05 江苏航天大为科技股份有限公司 Face image hairstyle detection method
CN110070017B (en) * 2019-04-12 2021-08-24 北京迈格威科技有限公司 Method and device for generating human face artificial eye image
CN110288540B (en) * 2019-06-04 2021-07-06 东南大学 Carbon fiber wire X-ray image online imaging standardization method
CN110516649B (en) * 2019-09-02 2023-08-22 南京微小宝信息技术有限公司 Face recognition-based alumni authentication method and system
CN111814795B (en) * 2020-06-05 2024-08-27 嘉楠明芯(北京)科技有限公司 Character segmentation method, device and computer readable storage medium
CN111860423B (en) * 2020-07-30 2024-04-30 江南大学 Improved human eye positioning method by integral projection method
CN114913440A (en) * 2022-06-10 2022-08-16 国网江苏省电力有限公司泰州供电分公司 Method for accurately positioning boundary features of unmanned aerial vehicle inspection image
CN115331269B (en) * 2022-10-13 2023-01-13 天津新视光技术有限公司 Fingerprint identification method based on gradient vector field and application
CN116363736B (en) * 2023-05-31 2023-08-18 山东农业工程学院 Big data user information acquisition method based on digitalization
CN118628502A (en) * 2024-08-15 2024-09-10 大连亚明汽车部件股份有限公司 Die-casting clock surface identification method and system based on machine vision


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5538909B2 (en) * 2010-01-05 2014-07-02 キヤノン株式会社 Detection apparatus and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1475961A (en) * 2003-07-14 2004-02-18 中国科学院计算技术研究所 Human eye location method based on GaborEge model
CN102968624A (en) * 2012-12-12 2013-03-13 天津工业大学 Method for positioning human eyes in human face image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Human eye location method based on adaptive edge extraction; Ouyang; Microcomputer Information (《微计算机信息》); 2008-12-31; Vol. 24, No. 5-3; pp. 100-101 *

Also Published As

Publication number Publication date
CN103218605A (en) 2013-07-24

Similar Documents

Publication Publication Date Title
CN103218605B (en) A fast human-eye positioning method based on integral projection and edge detection
CN103310194B (en) Pedestrian based on crown pixel gradient direction in a video shoulder detection method
WO2018024030A1 (en) Saliency-based method for extracting road target from night vision infrared image
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
US10466797B2 (en) Pointing interaction method, apparatus, and system
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
CN104123529A (en) Human hand detection method and system thereof
CN103413303A (en) Infrared target segmentation method based on joint obviousness
CN105809173B (en) A kind of image RSTN invariable attribute feature extraction and recognition methods based on bionical object visual transform
CN104794440B (en) A kind of false fingerprint detection method based on the multiple dimensioned LBP of more piecemeals
CN106203375A (en) A kind of based on face in facial image with the pupil positioning method of human eye detection
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
CN105138983B (en) The pedestrian detection method divided based on weighting block model and selective search
CN105138990A (en) Single-camera-based gesture convex hull detection and palm positioning method
CN103425970A (en) Human-computer interaction method based on head postures
CN104318559A (en) Quick feature point detecting method for video image matching
CN105225216A (en) Based on the Iris preprocessing algorithm of space apart from circle mark rim detection
CN104102904A (en) Static gesture identification method
CN104794449A (en) Gait energy image acquisition method based on human body HOG (histogram of oriented gradient) features and identity identification method
CN106297492A (en) A kind of Educational toy external member and utilize color and the method for outline identification programming module
CN107808524A (en) A kind of intersection vehicle checking method based on unmanned plane
CN103914829A (en) Method for detecting edge of noisy image
CN102509293A (en) Method for detecting consistency of different-source images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant