
JPH0944676A - Face detector - Google Patents

Face detector

Info

Publication number
JPH0944676A
JPH0944676A, JP7196806A, JP19680695A
Authority
JP
Japan
Prior art keywords
face
points
candidate
image
correlation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP7196806A
Other languages
Japanese (ja)
Inventor
Makoto Nishida
誠 西田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toyota Motor Corp
Original Assignee
Toyota Motor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Motor Corp filed Critical Toyota Motor Corp
Priority to JP7196806A priority Critical patent/JPH0944676A/en
Publication of JPH0944676A publication Critical patent/JPH0944676A/en
Pending legal-status Critical Current


Landscapes

  • Auxiliary Drives, Propulsion Controls, And Safety Devices (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To detect the feature points of a face image accurately by scanning the face image with templates corresponding to the facial feature points, performing a correlation calculation, taking points whose degree of correlation exceeds a prescribed threshold as candidate points, and selecting from the combinations of those candidate points the combination whose positional relationship is similar to the positional relationship of the actual feature points.

SOLUTION: An image pickup means M1 images the face of a subject to obtain a face image. A correlation calculation means M2 then performs correlation calculations using a plurality of templates prepared in advance, one for each facial feature point. A positional relationship comparing means M3 detects the feature points of the face image by finding, among the combinations of candidate points, the combination whose positional relationship is similar to the previously prepared positional relationship of the facial feature points. Since a plurality of candidate points are permitted for each feature point, no feature point is missed, and since only a combination of candidate points whose positional relationship resembles that of the actual feature points is accepted, erroneous detection is prevented.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION: 1. Field of the Invention. The present invention relates to a face detection apparatus, and more particularly to a face detection apparatus for detecting facial feature points from a face image of a subject obtained by imaging.

[0002]

2. Description of the Related Art. Conventionally, in order to detect the driving state of a vehicle driver, it has been proposed to image the driver's face and monitor the direction of the face or of the line of sight. For example, in Japanese Patent Laid-Open No. 6-243367, the driver's face is imaged and four points, the inner corners of both eyes and the two corners of the mouth, are extracted from the face image as facial feature points; based on the fact that the face is symmetrical about its center line, the face direction is calculated by determining the orientation of the isosceles trapezoid formed by the four extracted points.

[0003]

PROBLEMS TO BE SOLVED BY THE INVENTION: When extracting the feature points of a face image with the conventional technique, one conceivable approach is to use templates of the facial feature points, namely an eye template and a mouth template, scan the face image with each template in turn, compute the correlation between the image and the template, and extract the eye and mouth feature points where the degree of correlation exceeds a predetermined threshold.

[0004] However, when the driver changes, when the face orientation changes, when the distance from the camera to the driver changes, or when the ambient light changes, the degree of correlation between the face image and the templates also changes. If the correlation threshold is set high, the feature points can no longer be extracted; if it is lowered to avoid this, features such as the eyebrows or the nose may be mistakenly extracted as eyes.

[0005] The present invention has been made in view of the above points, and its object is to provide a face detection apparatus that detects facial feature points with high accuracy, neither overlooking them nor detecting them in error, by finding, among the combinations of the candidate points obtained for each feature point by the correlation calculation, the combination whose positional relationship is similar to the positional relationship of the facial feature points.

[0006]

MEANS FOR SOLVING THE PROBLEM: As shown in FIG. 1, the invention of claim 1 comprises: an image pickup means M1 for imaging the face of a subject to obtain a face image; a correlation calculation means M2 that scans the face image using a plurality of templates prepared in advance, each corresponding to one of a plurality of facial feature points, performs a correlation calculation, and takes a point as a candidate point for a feature point when the degree of correlation exceeds a predetermined threshold; and a positional relationship comparing means M3 that detects the feature points in the face image by finding, among the combinations of the candidate points of the feature points, a combination of candidate points whose positional relationship is similar to the previously prepared positional relationship of the plurality of facial feature points.

[0007] Since a plurality of candidate points are thus permitted for each of the plurality of feature points, no feature point is overlooked, and the feature points of the face image can be detected without error from the combination of candidate points whose positional relationship is similar to that of the actual feature points. The invention of claim 2 is the face detection apparatus of claim 1, wherein the correlation calculation means uses a different correlation threshold for each of the plurality of feature points.

[0008] This allows an optimum threshold to be set for each facial feature point according to the image contrast of that feature and the degree of individual variation, improving the extraction accuracy of the feature points.

[0009]

DESCRIPTION OF THE PREFERRED EMBODIMENT: FIG. 2 shows a block diagram of the present invention. In the figure, a camera 10 serving as the image pickup means M1 for imaging the driver's face is mounted at a predetermined position in the vehicle; the image signal of the driver's face captured by the camera 10 is digitized by the D/A converter 11 and stored in the image memory 12. The ROM 13 stores in advance the eye, nose, and mouth templates shown in FIGS. 3(A), 3(B), and 3(C) and the positional relationship of the eyes, nose, and mouth shown in FIG. 3(D).

[0010] The image processing circuit 14 has a normalization section and a correlation calculation section. After performing grayscale normalization of the face image stored in the image memory 12, it scans the face image with the templates loaded from the ROM 13, performs the correlation calculations, and extracts candidate points for the eyes, nose, and mouth; it then extracts from these candidate points the ones that satisfy the positional relationship of the eyes, nose, and mouth, and so determines the positions of the eyes, nose, and mouth.

[0011] FIGS. 4 and 5 show flowcharts of the eye detection processing executed by the image processing circuit. This processing is executed when the ignition is switched on and when eye tracking has failed. In step S10, the 32 × 16-pixel left-eye template, the 32 × 16-pixel nose template, and the 64 × 16-pixel mouth template shown in FIGS. 3(A), 3(B), and 3(C) are loaded from the ROM 13. Next, in step S12, the reference coordinates (Xn, Yn) of the nose and (Xm, Ym) of the mouth, with the eye position of FIG. 3(D) taken as the origin (0, 0), are loaded from the ROM 13.
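For concreteness, the data loaded in steps S10 and S12 might be held as below. This is a minimal Python sketch: the template pixel contents and the numeric reference coordinates are placeholders, since the patent specifies only the template dimensions and defers the coordinates to FIG. 3(D).

```python
import numpy as np

# Step S10: template sizes are given as width x height in the text;
# the arrays here are (rows, cols) = (height, width).  Pixel contents
# are placeholders -- in the device they come from the ROM 13.
eye_template = np.zeros((16, 32))    # 32x16-pixel left-eye template
nose_template = np.zeros((16, 32))   # 32x16-pixel nose template
mouth_template = np.zeros((16, 64))  # 64x16-pixel mouth template

# Step S12: reference coordinates of the nose (Xn, Yn) and mouth
# (Xm, Ym), with the eye position as the origin (0, 0) per FIG. 3(D).
# The numeric values below are illustrative, not from the patent.
REF_NOSE = (40.0, 30.0)   # (Xn, Yn)
REF_MOUTH = (40.0, 60.0)  # (Xm, Ym)
```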

[0012] Next, in step S14, a 512 × 512-pixel face image as shown in FIG. 6 is input, and the face image is scanned with each of the eye, nose, and mouth templates of FIGS. 3(A), 3(B), and 3(C) to perform the correlation calculation, yielding the eye correlation distribution shown in FIG. 7, the nose correlation distribution shown in FIG. 8, and the mouth correlation distribution shown in FIG. 9. Here, the correlation distribution γ(x, y) at the coordinates (x, y) on the face image is calculated by the following formula.

[0013]

[Equation 1] (the formula appears only as an image in the original; the symbols are defined in paragraph [0014])

[0014] Here, I(x, y) is the density value at the coordinates (x, y) on the face image, with 0 < x < ωx and 0 < y < ωy, where ωx and ωy are the numbers of pixels in the horizontal and vertical directions of the template, and T(x', y') is the density value at the coordinates (x', y') on the template block. In FIGS. 7, 8, and 9, portions where the correlation distribution value is small (the degree of correlation is high) are displayed dark, and portions where it is large (the degree of correlation is low) are displayed bright; that is, the darker a point, the higher its correlation.
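[Equation 1] survives only as an image reference, so its exact form is not reproduced above. From the surrounding definitions (γ(x, y) is computed from the image densities I(x, y) and the template densities T(x', y') over a ωx × ωy window, and smaller values mean better matches), a sum of absolute differences is one plausible reading. A sketch under that assumption:

```python
import numpy as np

def correlation_map(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Slide `template` over `image` and return gamma(x, y) at each offset.

    Smaller values mean higher correlation, matching the convention of
    paragraph [0014] (dark regions of FIGS. 7-9 are good matches).  The
    sum-of-absolute-differences form is an assumption standing in for
    the patent's [Equation 1].
    """
    h, w = image.shape
    th, tw = template.shape  # omega_y rows, omega_x columns
    gamma = np.empty((h - th + 1, w - tw + 1))
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            window = image[y:y + th, x:x + tw].astype(np.float64)
            gamma[y, x] = np.abs(window - template).sum()
    return gamma
```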

[0015] Next, in step S18, candidate points are extracted from the eye, nose, and mouth correlation distributions of FIGS. 7, 8, and 9 and are numbered separately for the eyes, nose, and mouth. Here, a candidate point is a point satisfying all of the following conditions (2) to (6):

γ(x, y) < THR …(2)
γ(x−1, y) − γ(x, y) > THR' …(3)
γ(x+1, y) − γ(x, y) > THR' …(4)
γ(x, y−1) − γ(x, y) > THR' …(5)
γ(x, y+1) − γ(x, y) > THR' …(6)

where THR is the threshold on the correlation distribution and THR' is the threshold on its differences. That is, a point whose correlation distribution value is below THR and below each adjacent pixel's value by more than THR', i.e. a conspicuously dark point in FIGS. 7, 8, and 9, is taken as a candidate point for the eye, nose, or mouth.
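Conditions (2) to (6) select sufficiently deep local minima of the correlation distribution. A direct Python rendering (function and variable names are ours, not the patent's):

```python
def candidate_points(gamma, thr, thr_d):
    """Return (x, y) points satisfying conditions (2)-(6) of [0015]:
    the correlation value is below THR, and each of the four
    neighbouring values exceeds it by more than THR'."""
    points = []
    h, w = gamma.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = gamma[y, x]
            if (g < thr                                # (2)
                    and gamma[y, x - 1] - g > thr_d    # (3)
                    and gamma[y, x + 1] - g > thr_d    # (4)
                    and gamma[y - 1, x] - g > thr_d    # (5)
                    and gamma[y + 1, x] - g > thr_d):  # (6)
                points.append((x, y))
    return points
```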

[0016] For example, when the correlation-distribution threshold for the eyes and nose is set to 10000, the corresponding difference threshold is set to about 200. Since the image contrast of the mouth is lower than that of the eyes and nose and its individual variation is larger, the correlation-distribution threshold for the mouth is set to 5000 and its difference threshold to 100. Further, since the nose, unlike the eyes and mouth, does not open and close, the correlation-distribution threshold for the nose may be set higher than that for the eyes.
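The per-feature thresholds of this paragraph, which are the substance of claim 2, can be collected in one table; holding them in a plain dictionary is this sketch's choice, with the values as stated above:

```python
# THR / THR' per feature, from paragraph [0016].
THRESHOLDS = {
    "eye":   {"thr": 10000, "thr_d": 200},
    "nose":  {"thr": 10000, "thr_d": 200},  # may be raised above the eye's
    "mouth": {"thr": 5000,  "thr_d": 100},  # lower contrast, more variation
}

# e.g. eye_points = candidate_points(eye_gamma, **THRESHOLDS["eye"])
```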

[0017] As a result, the eye candidate points e1 to e10 indicated by arrows in FIG. 7, the nose candidate points n1 to n10 indicated by arrows in FIG. 8, and the mouth candidate points m1 to m9 indicated by arrows in FIG. 9 are extracted. Steps S16 and S18 above correspond to the correlation calculation means M2. In this way, an optimum threshold can be set for each of the facial features, the eyes, nose, and mouth, according to the image contrast of that feature and its individual variation, improving the extraction accuracy of the feature points.

[0018] Thereafter, the eye discovery candidate count ns is reset to 0 in step S20, the eye candidate counter ie is reset to 0 in step S22, the nose candidate counter in is reset to 0 in step S24, and the mouth candidate counter im is reset to 0 in step S26.

[0019] In the next step S28, all combinations are formed by selecting one candidate point each for the eye, nose, and mouth from their respective one or more candidate points. For each combination, with the eye position of FIG. 3(D) superimposed on the eye candidate point, the error En of the nose candidate point relative to the nose reference coordinates (Xn, Yn) and the error Em of the mouth candidate point relative to the mouth reference coordinates (Xm, Ym) are calculated by equations (7) to (9). Since the spacing between the eye candidate point and the nose and mouth candidate points varies with the distance from the camera to the face, the X-direction spacing between the eye and nose candidate points is normalized to match the X-direction spacing of the eye and nose in FIG. 3(D); the coefficient g is used for this purpose.

[0020]

g = Xn / [xn(in) − xe(ie)] …(7)
En = [{Xn − g·(xn(in) − xe(ie))}² + {Yn − g·(yn(in) − ye(ie))}²]^1/2 …(8)
Em = [{Xm − g·(xm(im) − xe(ie))}² + {Ym − g·(ym(im) − ye(ie))}²]^1/2 …(9)

where xe(ie) and ye(ie) are the x and y coordinates on the face image of the ie-th eye candidate point, xn(in) and yn(in) are those of the in-th nose candidate point, and xm(im) and ym(im) are those of the im-th mouth candidate point.
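Equations (7) to (9) translate directly into code. A sketch (the tuple conventions and the function name are ours):

```python
import math

def position_errors(eye, nose, mouth, ref_nose, ref_mouth):
    """Equations (7)-(9) of paragraph [0020].

    `eye`, `nose`, `mouth` are candidate (x, y) points on the face
    image; `ref_nose` = (Xn, Yn) and `ref_mouth` = (Xm, Ym) are the
    FIG. 3(D) reference coordinates with the eye at the origin.
    Assumes nose[0] != eye[0], as equation (7) requires.
    """
    Xn, Yn = ref_nose
    Xm, Ym = ref_mouth
    g = Xn / (nose[0] - eye[0])                    # (7) scale factor
    en = math.hypot(Xn - g * (nose[0] - eye[0]),   # (8) nose error
                    Yn - g * (nose[1] - eye[1]))
    em = math.hypot(Xm - g * (mouth[0] - eye[0]),  # (9) mouth error
                    Ym - g * (mouth[1] - eye[1]))
    return en, em
```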

[0021] Next, in step S30, it is determined whether the error En is below a predetermined threshold THE and the error Em is also below THE. If both En and Em are below THE, the combination is regarded as a strong eye candidate: the process proceeds to step S32 of FIG. 5, where the x and y coordinates of the eye candidate point and the error En + Em are stored; the eye discovery candidate count ns is incremented by 1 in step S34, and the process proceeds to step S36.

[0022] If En ≥ THE or Em ≥ THE in step S30, the combination of candidate points is judged not to match the positional relationship of FIG. 3(D), and the process proceeds to step S36 to change the candidate points. In step S36 the mouth candidate counter im is incremented by 1, and in step S38 it is determined whether im is less than the total number Nm of mouth candidate points; if im < Nm, the process returns to step S28 and continues. If im ≥ Nm, the nose candidate counter in is incremented by 1 in step S40, and in step S42 it is determined whether in is less than the total number Nn of nose candidate points; if in < Nn, the process returns to step S26 and the mouth candidate counter im is reset to 0.

[0023] If in ≥ Nn, the eye candidate counter ie is incremented by 1 in step S44, and in step S46 it is determined whether ie is less than the total number Ne of eye candidate points; if ie < Ne, the process returns to step S24 and the nose candidate counter in is reset to 0; if ie ≥ Ne, the process proceeds to step S48. Thus, by the time step S48 is reached, steps S28 and S30 have been executed for every combination of the eye, nose, and mouth candidates.

[0024] In step S48 it is determined whether the eye discovery candidate count ns is 0. If ns = 0, no combination of eye, nose, and mouth candidate points satisfied the positional relationship of FIG. 3(D), that is, there was no strong eye candidate and the eye could not be detected; the process therefore returns to step S14, a face image is input again, and the above processing is repeated. If ns ≥ 1, the process proceeds to step S50, where among the strong candidates counted by ns the eye candidate point with the smallest error En + Em is recognized as the eye; in step S52 the coordinates of this eye candidate point are passed to the subsequent tracking process, the eye having been detected, and the processing ends. Steps S20 to S50 above correspond to the positional relationship comparing means M3.
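Steps S20 to S50 amount to an exhaustive search over every (eye, nose, mouth) candidate combination. The sketch below uses position_errors from the previous sketch; where the flowchart stores every strong candidate and selects the minimum at step S50, this version tracks the running minimum, which is equivalent:

```python
def detect_eye(eye_pts, nose_pts, mouth_pts, ref_nose, ref_mouth, the):
    """Return the eye candidate whose combination best matches FIG. 3(D),
    or None when no combination passes the test of step S30 (ns = 0)."""
    best, best_err = None, float("inf")
    for eye in eye_pts:              # counter ie, steps S22/S44
        for nose in nose_pts:        # counter in, steps S24/S40
            for mouth in mouth_pts:  # counter im, steps S26/S36
                en, em = position_errors(eye, nose, mouth,
                                         ref_nose, ref_mouth)
                if en < the and em < the:    # step S30
                    if en + em < best_err:   # steps S32 / S50
                        best, best_err = eye, en + em
    return best  # coordinates passed on to tracking in step S52
```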

[0025] In this way, since a plurality of candidate points are permitted for each of the eyes, nose, and mouth, no feature point is overlooked; and since the combination of candidate points whose positional relationship is similar to that of the actual feature points is found, the eyes, nose, and mouth, the feature points of the face image, can be detected without error.

[0026]

EFFECTS OF THE INVENTION: As described above, the invention of claim 1 comprises an image pickup means for imaging the face of a subject to obtain a face image; a correlation calculation means that scans the face image using a plurality of templates prepared in advance, each corresponding to one of a plurality of facial feature points, performs a correlation calculation, and takes a point as a candidate point for a feature point when the degree of correlation exceeds a predetermined threshold; and a positional relationship comparing means that detects the feature points in the face image by finding, among the combinations of the candidate points of the feature points, a combination whose positional relationship is similar to the previously prepared positional relationship of the plurality of facial feature points. Since a plurality of candidate points are permitted for each of the plurality of feature points, no feature point is overlooked, and the feature points of the face image can be detected without error from the combination of candidate points whose positional relationship is similar to that of the actual feature points.

[0027] The invention of claim 2 is the face detection apparatus of claim 1 in which the correlation calculation means uses a different correlation threshold for each of the plurality of feature points. Among the facial feature points, an optimum threshold can therefore be set for each point according to its image contrast and its individual variation, improving the extraction accuracy of the feature points; this is extremely useful in practice.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of the principle of the present invention.

FIG. 2 is a block diagram of the present invention.

FIG. 3 is a diagram showing the templates and the positional relationship of the eyes, nose, and mouth.

FIG. 4 is a flowchart of the eye detection processing.

FIG. 5 is a flowchart of the eye detection processing.

FIG. 6 is a diagram showing a face image.

FIG. 7 is a diagram showing the eye correlation distribution.

FIG. 8 is a diagram showing the nose correlation distribution.

FIG. 9 is a diagram showing the mouth correlation distribution.

EXPLANATION OF REFERENCE NUMERALS

10 camera; 11 D/A converter; 12 image memory; 13 ROM; 14 image processing circuit; M1 image pickup means; M2 correlation calculation means; M3 positional relationship comparing means


[Procedural Amendment]

[Submission date] December 5, 1995

[Procedural Amendment 1]

[Document to be amended] Specification

[Item to be amended] Brief description of drawings

[Method of amendment] Change

[Content of amendment]

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of the principle of the present invention.

FIG. 2 is a block diagram of the present invention.

FIG. 3 is a diagram showing the templates and the positional relationship of the eyes, nose, and mouth.

FIG. 4 is a flowchart of the eye detection processing.

FIG. 5 is a flowchart of the eye detection processing.

FIG. 6 is a photograph showing a face image as displayed on a display.

FIG. 7 is a photograph showing the eye correlation distribution as displayed on a display.

FIG. 8 is a photograph showing the nose correlation distribution as displayed on a display.

FIG. 9 is a photograph showing the mouth correlation distribution as displayed on a display.

EXPLANATION OF REFERENCE NUMERALS: 10 camera; 11 D/A converter; 12 image memory; 13 ROM; 14 image processing circuit; M1 image pickup means; M2 correlation calculation means; M3 positional relationship comparing means

Claims (2)

1. A face detection apparatus comprising: an image pickup means for imaging the face of a subject to obtain a face image; a correlation calculation means that scans the face image using a plurality of templates prepared in advance, each corresponding to one of a plurality of facial feature points, performs a correlation calculation, and takes a point as a candidate point for a feature point when the degree of correlation exceeds a predetermined threshold; and a positional relationship comparing means that detects the feature points in the face image by finding, among the combinations of the candidate points of the feature points, a combination of candidate points whose positional relationship is similar to the previously prepared positional relationship of the plurality of facial feature points.
2. The face detection apparatus according to claim 1, wherein the correlation calculation means uses a different correlation threshold for each of the plurality of feature points.
JP7196806A 1995-08-01 1995-08-01 Face detector Pending JPH0944676A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP7196806A JPH0944676A (en) 1995-08-01 1995-08-01 Face detector

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP7196806A JPH0944676A (en) 1995-08-01 1995-08-01 Face detector

Publications (1)

Publication Number Publication Date
JPH0944676A true JPH0944676A (en) 1997-02-14

Family

ID=16363965

Family Applications (1)

Application Number Title Priority Date Filing Date
JP7196806A Pending JPH0944676A (en) 1995-08-01 1995-08-01 Face detector

Country Status (1)

Country Link
JP (1) JPH0944676A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000132693A (en) * 1998-10-27 2000-05-12 Sony Corp Device and method for processing picture, and providing medium
JP2002288670A (en) * 2001-03-22 2002-10-04 Honda Motor Co Ltd Personal authentication device using facial image
US7577297B2 (en) 2002-12-16 2009-08-18 Canon Kabushiki Kaisha Pattern identification method, device thereof, and program thereof
US8942436B2 (en) 2003-07-18 2015-01-27 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method
WO2005008593A1 (en) 2003-07-18 2005-01-27 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method
EP3358501A1 (en) 2003-07-18 2018-08-08 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method
US8515136B2 (en) 2003-07-18 2013-08-20 Canon Kabushiki Kaisha Image processing device, image device, image processing method
EP2955662A1 (en) 2003-07-18 2015-12-16 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method
JP2007158829A (en) * 2005-12-06 2007-06-21 Hitachi Kokusai Electric Inc Image processor, and image processing method and program
US7999846B2 (en) 2005-12-06 2011-08-16 Hitachi Kokusai Electric Inc. Image processing apparatus, image processing system, and recording medium for programs therefor
US8594430B2 (en) 2006-04-08 2013-11-26 The University Of Manchester Method of locating features of an object
JP2008003749A (en) * 2006-06-21 2008-01-10 Fujifilm Corp Feature point detection device, method, and program
JP2014191361A (en) * 2013-03-26 2014-10-06 Fujifilm Corp Authenticity determine system, feature point registration device, operation control method thereof, collation determine apparatus and operation control method thereof
WO2014156430A1 (en) * 2013-03-26 2014-10-02 富士フイルム株式会社 Authenticity determination system, feature point registration device, operation control method therefor, matching determination device, and operation control method therefor
US10083371B2 (en) 2013-03-26 2018-09-25 Fujifilm Corporation Authenticity determination system, feature point registration apparatus and method of controlling operation of same, and matching determination apparatus and method of controlling operation of same
CN106710145A (en) * 2016-12-29 2017-05-24 清华大学苏州汽车研究院(吴江) Guided driver tiredness prevention method

Similar Documents

Publication Publication Date Title
JP3987264B2 (en) License plate reader and method
US6591005B1 (en) Method of estimating image format and orientation based upon vanishing point location
US6968094B1 (en) Method of estimating and correcting camera rotation with vanishing point location
US20090010544A1 (en) Method, apparatus, and program for detecting facial characteristic points
JP2002516440A (en) Image recognition and correlation system
JP2006252473A (en) Obstacle detector, calibration device, calibration method and calibration program
CN111340701A (en) Circuit board image splicing method for screening matching points based on clustering method
JP2002063567A (en) Device and method for estimating body position and attitude, method for feature point extraction method using the same, and image collating method
JPH0944676A (en) Face detector
JP2021033510A (en) Driving assistance device
JP2011165170A (en) Object detection device and program
JP2007253699A (en) Optical axis deviation sensing device
JP4825473B2 (en) Face orientation discrimination device
US6650362B1 (en) Movement detecting apparatus with feature point extractor based on luminance gradient in current frame
JP2981382B2 (en) Pattern matching method
JPH11190611A (en) Three-dimensional measuring method and three-dimensional measuring processor using this method
JP6278757B2 (en) Feature value generation device, feature value generation method, and program
JP2004199200A (en) Pattern recognition device, imaging apparatus, information processing system, pattern recognition method, recording medium and program
JP2002008028A (en) Pattern matching method and device therefor
JP4719605B2 (en) Object detection data generation device, method and program, and object detection device, method and program
JP2516844B2 (en) Parts detection method and device
CN115578729B (en) AI intelligent process arrangement method for digital staff
JPH09198505A (en) Line position detector
JP2001012945A (en) Distance detecting device
JP3265171B2 (en) Stereo image correction condition detection device