JP2785697B2 - Three-dimensional object region estimating apparatus and method - Google Patents
Info
- Publication number
- JP2785697B2 (application JP6203541A)
- Authority
- JP
- Japan
- Prior art keywords
- image
- dimensional object
- estimating
- dimensional
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Image Input (AREA)
- Image Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
Description
[0001] [Industrial Field of Application] The present invention relates to a three-dimensional object region estimating apparatus and method, and more particularly to a three-dimensional object region estimating apparatus and method for estimating, in a man-machine interface that photographs a person's face, the region of the person's face within an image from the information of a position sensor attached to the person's head.
[0002] [Prior Art] Conventionally, in order to find the region of an image in which a target three-dimensional object appears, the image is divided into connected regions based on the color and luminance of each pixel constituting the image, and the region in which the object appears is selected from among them. As an example of such prior art, Tomitaka et al., "Object Position Detection Method in Natural Moving Images" (published in IPSJ SIG Technical Report, Vol. 93, No. 62, Information Processing Society of Japan), is known. In that example, a luminance histogram of the input natural moving image is created and, based on it, the object image is separated by a local histogram growing method and its position is detected.
[0003] [Problems to Be Solved by the Invention] Because this conventional example requires image processing, a dedicated image processing device is needed in order to detect the region continuously from a moving image. The above "Object Position Detection Method in Natural Moving Images" uses a dedicated image processing device built around an i860 processor from Intel Corporation of the United States. In addition, in order to separate the region in which the object appears from the background, the image must be one in which the background and the object are clearly different. For this reason it is not possible, for example, to reliably extract the face of a target person from an image taken at a street corner in which many faces appear.
[0004] In view of the above points, an object of the present invention is to extract a face image region with a simple device, without performing image processing. A further object is to enable reliable extraction even when the background and the image of the object are similar.
[0005] [Means for Solving the Problems] A three-dimensional object region estimating apparatus according to the present invention comprises: position measuring means for measuring three-dimensional position information of a target three-dimensional object; image input means for photographing the three-dimensional object and generating an image of it; and region estimating means for estimating, using the position information measured by the position measuring means and the known size of the three-dimensional object, the position and size within the image of the three-dimensional object shown in the image generated by the image input means.
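Purely as an illustration of this claimed structure (none of the following code is in the patent, and all class and function names are hypothetical), the three means and the data flowing between them could be sketched as follows; the actual projection arithmetic is shown in the later sketches.

```python
from dataclasses import dataclass
from typing import Any, Tuple


@dataclass
class Point3D:
    """A 3-D position, e.g. the measured sensor position P."""
    x: float
    y: float
    z: float


@dataclass
class RegionEstimate:
    """Estimated region of the object in the image: anchor point and size."""
    u: float       # horizontal image coordinate of the projected position
    v: float       # vertical image coordinate of the projected position
    width: float   # estimated width of the object in the image
    height: float  # estimated height of the object in the image


class PositionMeasuringMeans:
    """Wraps the 3-D position sensor and returns the measured position P."""
    def measure(self) -> Point3D:
        raise NotImplementedError


class ImageInputMeans:
    """Wraps the camera and returns the latest camera image."""
    def capture(self) -> Any:
        raise NotImplementedError


class RegionEstimatingMeans:
    """Maps a 3-D position and a known object size to an image region."""
    def estimate(self, position: Point3D, known_size: Tuple[float, float]) -> RegionEstimate:
        raise NotImplementedError


def estimate_object_region(pos_means: PositionMeasuringMeans,
                           img_means: ImageInputMeans,
                           reg_means: RegionEstimatingMeans,
                           known_size: Tuple[float, float]):
    """One update cycle: measure the position, grab an image, estimate the region."""
    p = pos_means.measure()              # 3-D position of the target object
    image = img_means.capture()          # camera image containing the object
    region = reg_means.estimate(p, known_size)
    return image, region
```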
[0006] [Operation] According to the above means, when the position of the object shown in the image is determined, it is estimated from the three-dimensional position information of the object rather than by conventional image processing, so that the position can be estimated at high speed with simple processing and without requiring a dedicated image processing device.
[0007] [Embodiment] Next, the present invention will be described with reference to the drawings. Referring to FIG. 1, which shows an embodiment of the present invention in block form, the three-dimensional object region estimating apparatus of this embodiment comprises: position measuring means 1 for measuring the position of a head 41, which is the target three-dimensional object; image input means 2 for photographing the head 41 with a camera 21 and generating an image of it; and region estimating means 3 for estimating, using the position information measured by the position measuring means 1, at which position in the image generated by the image input means 2 the head 41 appears.
[0008] FIG. 2 is a diagram for explaining the operation of this embodiment. The operation of this embodiment will be described with reference to FIG. 2 together with FIG. 1. First, a sensor 11 is attached in advance to the top of the target head 41. As the sensor 11, the magnetic three-dimensional position sensor 3SPACE manufactured by Polhemus Inc. of the United States is used. The position measuring means 1 measures the position P of the sensor 11 and outputs it to the region estimating means 3. The image input means 2 outputs a camera image 23, obtained by photographing with the camera 21, to the region estimating means 3. The region estimating means 3 perspectively projects the position P of the sensor 11 obtained from the position measuring means 1 onto the virtual screen 22 of the camera 21, which has been measured and determined in advance, and obtains the projection position Q on the virtual screen 22. This projection position Q is equal to the position R of the sensor image 12 in the camera image 23. Normally, a person's face appears in an image with the head at the top. Therefore, if the position of the top of the head in the camera image 23 can be calculated, the face image 42 can be estimated to lie below that position. Since the sensor 11 is attached to the top of the head 41, the position of the top of the head in the camera image 23 is equal to the position R of the sensor image 12. Accordingly, the position of the face image 42 within the camera image 23 can be estimated to be below the position R of the sensor image 12.
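As a rough sketch of this projection step (not taken from the patent text; the pinhole camera model, the coordinate convention, and all names are assumptions made for the illustration), the projection position Q of the sensor position P on the virtual screen can be computed as follows.

```python
import numpy as np


def project_to_virtual_screen(p_world: np.ndarray,
                              cam_pos: np.ndarray,
                              cam_rot: np.ndarray,
                              d: float) -> tuple:
    """Perspective projection of a 3-D point onto the camera's virtual screen.

    p_world : sensor position P in world coordinates, shape (3,)
    cam_pos : camera position in world coordinates, shape (3,)
    cam_rot : 3x3 rotation matrix from world to camera coordinates
    d       : pre-measured distance from the camera to the virtual screen

    Returns (u, v), the projection position Q on the virtual screen, which
    corresponds to the position R of the sensor image in the camera image.
    """
    # Express P in camera coordinates: x to the right, y up, z along the optical axis.
    p_cam = cam_rot @ (np.asarray(p_world, dtype=float) - np.asarray(cam_pos, dtype=float))
    if p_cam[2] <= 0.0:
        raise ValueError("The point lies behind the camera and cannot be projected.")
    # Similar triangles: a point at depth z maps to (d / z) * (x, y) on the screen.
    u = d * p_cam[0] / p_cam[2]
    v = d * p_cam[1] / p_cam[2]
    return float(u), float(v)
```

In terms of the embodiment, (u, v) would correspond to Q on the virtual screen 22, and hence to R in the camera image 23, below which the face image 42 is assumed to lie.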
[0009] Further, since the distance between the head 41 and the camera 21 can be obtained by calculating the distance between the position P of the sensor 11 measured by the position measuring means 1 and the camera 21, how large the face image 42 appears in the camera image 23 can also be estimated if the approximate size of the head 41 is known in advance. Specifically, the estimation is made as follows. Let D be the distance between the head 41 and the camera 21, let d be the distance, determined in advance, between the virtual screen 22 and the camera 21, let W and H be the width and height of the head 41, and let w and h be the width and height of the face image 42; then
[0010]
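The equation of paragraph [0010] appears only as an image in the original publication and is not reproduced in this text. Assuming the ordinary similar-triangles geometry of the pinhole setup described in paragraph [0009], the intended relation is presumably:

```latex
\frac{w}{W} \;=\; \frac{h}{H} \;=\; \frac{d}{D},
\qquad\text{equivalently}\qquad
w = \frac{d}{D}\,W, \qquad h = \frac{d}{D}\,H .
```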
[0011] the relation above holds. From the foregoing, it is possible to estimate the position and size of the face image 42 within the camera image 23.
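A minimal sketch of this size estimate, assuming the relation above and assuming the face region is anchored directly below the projected sensor position R with v increasing downward in the image (the function name, the box-anchoring convention, and the numbers in the example are illustrative, not from the patent):

```python
def estimate_face_region(r_u: float, r_v: float,
                         D: float, d: float,
                         W: float, H: float) -> tuple:
    """Return (left, top, width, height) of the estimated face region.

    r_u, r_v : position R of the sensor image (top of the head) in the image
    D        : distance from the head 41 to the camera 21
    d        : distance from the camera 21 to the virtual screen 22
    W, H     : known approximate width and height of the head 41
    """
    w = (d / D) * W        # estimated width of the face image 42
    h = (d / D) * H        # estimated height of the face image 42
    left = r_u - w / 2.0   # center the region horizontally on R
    top = r_v              # the face lies below the top of the head
    return left, top, w, h


# Example (illustrative numbers): head 0.18 m wide and 0.25 m high, 2 m from the
# camera; d is expressed as a focal length of 800 pixel units, so w and h are in pixels.
print(estimate_face_region(320.0, 100.0, D=2.0, d=800.0, W=0.18, H=0.25))
# -> (284.0, 100.0, 72.0, 100.0)
```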
[0012] [Effects of the Invention] As described above, according to the present invention, the position of the target three-dimensional object is measured, the three-dimensional object is photographed to generate an image, and the position information is used to estimate at which position in the image the three-dimensional object appears; consequently, a dedicated image processing apparatus such as is required in the prior art is unnecessary. Moreover, since the only computation performed by the region estimating means is a perspective transformation, processing can be carried out at high speed. Furthermore, because the position information of the object is used, even if several objects resembling the target object appear in the video, the target is not confused with them.
FIG. 1 is a block diagram showing the configuration of an embodiment of the present invention.
FIG. 2 is a diagram for explaining the operation of this embodiment.
Reference Signs List
- 1 position measuring means
- 2 image input means
- 3 region estimating means
- 11 sensor
- 12 sensor image
- 21 camera
- 22 virtual screen
- 23 camera image
- 41 head
- 42 face image
- P position of the sensor 11
- Q projection position
- R position of the sensor image 12
(58) Fields investigated (Int. Cl.6, DB name): G06T 1/00; H04N 5/232; H04N 7/15; H04N 7/18; G08B 13/196; G01S 13/86; JICST file (JOIS)
Claims (2)
1. A three-dimensional object region estimating apparatus comprising: position measuring means for measuring three-dimensional position information of a target three-dimensional object; image input means for photographing the three-dimensional object and generating an image of it; and region estimating means for estimating, using the position information measured by the position measuring means and the known size of the three-dimensional object, the position and size within the image of the three-dimensional object shown in the image generated by the image input means.
2. A three-dimensional object region estimating method comprising: measuring three-dimensional position information of a target three-dimensional object; photographing the three-dimensional object and generating an image of it; and estimating, using the position information and the known size of the three-dimensional object, the position and size within the image of the three-dimensional object shown in the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP6203541A JP2785697B2 (en) | 1994-08-29 | 1994-08-29 | Three-dimensional object region estimating apparatus and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP6203541A JP2785697B2 (en) | 1994-08-29 | 1994-08-29 | Three-dimensional object region estimating apparatus and method |
Publications (2)
Publication Number | Publication Date |
---|---|
JPH0869530A JPH0869530A (en) | 1996-03-12 |
JP2785697B2 true JP2785697B2 (en) | 1998-08-13 |
Family
ID=16475857
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP6203541A Expired - Fee Related JP2785697B2 (en) | 1994-08-29 | 1994-08-29 | Three-dimensional object region estimating apparatus and method |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP2785697B2 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2665035B2 (en) * | 1990-08-22 | 1997-10-22 | 日本電気株式会社 | Video conference system |
JPH04249991A (en) * | 1990-12-20 | 1992-09-04 | Fujitsu Ltd | Video conference equipment |
JPH05196425A (en) * | 1992-01-21 | 1993-08-06 | Ezel Inc | Three-dimensional position detection method for human being |
JPH05244587A (en) * | 1992-02-26 | 1993-09-21 | Mitsubishi Electric Corp | Camera controller for television conference |
JPH06217304A (en) * | 1993-01-18 | 1994-08-05 | Fujitsu Ltd | Three-dimensional coordinate automatic measurement system in voice tracking automatic sighting system |
Also Published As
Publication number | Publication date |
---|---|
JPH0869530A (en) | 1996-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4198054B2 (en) | 3D video conferencing system | |
US6873340B2 (en) | Method and apparatus for an automated reference indicator system for photographic and video images | |
JP6568374B2 (en) | Information processing apparatus, information processing method, and program | |
JP2001175868A (en) | Method and device for human detection | |
US20230394834A1 (en) | Method, system and computer readable media for object detection coverage estimation | |
JP2008226109A (en) | Video image analysis system and method | |
JP4203279B2 (en) | Attention determination device | |
JPH11150676A (en) | Image processor and tracking device | |
JP4198536B2 (en) | Object photographing apparatus, object photographing method and object photographing program | |
US7602943B2 (en) | Image processing apparatus, image processing method, and image processing program | |
JP2002008041A (en) | Action detecting device, action detecting method, and information storage medium | |
JP3263253B2 (en) | Face direction determination device and image display device using the same | |
JP2785697B2 (en) | Three-dimensional object region estimating apparatus and method | |
JP2004046464A (en) | Apparatus and method for estimating three-dimensional position of mobile object, program, and recording medium thereof | |
JP2005141655A (en) | Three-dimensional modeling apparatus and three-dimensional modeling method | |
JPH11248431A (en) | Three-dimensional model forming apparatus and computer readable medium recorded with three-dimensional model generating program | |
JP3912638B2 (en) | 3D image processing device | |
CN111489384A (en) | Occlusion assessment method, device, equipment, system and medium based on mutual view | |
JPH09145368A (en) | Moving and tracing method for object by stereoscopic image | |
JPH0443204B2 (en) | ||
JPH09229648A (en) | Input/output method and device for image information | |
JP2001092978A (en) | Device for estimating attitude of figure image and recording medium stored with attitude estimation program for figure image | |
JPH07220095A (en) | Extracting device for image of object | |
JP5896781B2 (en) | Image processing apparatus and image processing method | |
JP2860992B2 (en) | Moving target extraction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A01 | Written decision to grant a patent or to grant a registration (utility model) |
Free format text: JAPANESE INTERMEDIATE CODE: A01 Effective date: 19980428 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20090529 Year of fee payment: 11 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20100529 Year of fee payment: 12 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20110529 Year of fee payment: 13 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20120529 Year of fee payment: 14 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20130529 Year of fee payment: 15 |
|
LAPS | Cancellation because of no payment of annual fees |