CN100527165C - Real time object identification method taking dynamic projection as background - Google Patents
- Publication number
- CN100527165C (grant) · CNB2007100710782A · CN200710071078A (application)
- Authority
- CN
- China
- Prior art keywords
- brightness
- pixel
- image
- video camera
- view field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a real-time target recognition method that uses dynamic projection as the background, comprising geometric calibration of the system, color calibration of the system, and target extraction. By establishing a geometric transformation matrix and a color lookup table, the method can efficiently use the frame buffer to build, in real time, a background model for target extraction, achieving fast target recognition under a dynamic background. The recognition algorithm requires no extra hardware support: a projector and a camera alone suffice to build an augmented-reality system that performs fast and accurate target recognition.
Description
Technical field
The present invention relates to the processing and generation of image data in general, and in particular to a real-time target recognition method that uses dynamic projection as the background.
Background technology
In recent years, the cost of cameras and projectors has fallen steadily while their manufacturing quality has improved, and related applied research has grown accordingly. Combining a projector and a camera in the same space gives a computer system new input and output capabilities, and has thereby produced a new paradigm of human-computer interaction, with applications in target tracking, self-correcting immersive projector displays, remote collaborative work, and similar systems.
In a projector-camera tracking system, the camera usually serves as the input and the projector as the output. The system uses the camera to obtain the position of the target and the projector to output the corresponding interactive content. Because the view captured by the camera contains dynamically changing projected content, a method of rejecting the projection region is needed in order to locate the captured target.
Richard May built the desktop interactive environment Human Information Workspace (HI-Space) from a projector, a camera, and a microphone; see [May2005] May R, Baddeley B, "Architecture and Performance of the HI-Space Projector-Camera Interface", In: Computer Vision and Pattern Recognition, 2005 IEEE Computer Society Conference, Vol. 3, 2005, pp. 103-103. The system recognizes gestures, objects, and voice input, so the user can interact simply with the display desktop. HI-Space uses optical filtering to keep the camera from capturing the visible light emitted by the projector, and an infrared source to help the camera photograph the target, thereby effectively eliminating the influence of the projection region on target tracking. Simple image segmentation techniques then suffice to locate the target. This is essentially a static-background solution, and the system requires extra hardware such as infrared equipment.
The Magic Board system instead adopts a purely visual tracking method that requires no additional hardware; see [Crowley2000] Crowley JL, Coutaz J, Berard F, "Things that See", Communications of the ACM 2000, 43(3): 54-64. The system recognizes fingertips by template matching, so the recognition result is not disturbed by the projected content. The drawbacks of this approach are that only simple postures can be recognized, and that recognition slows down when the projection region is large, degrading system performance.
Summary of the invention
In view of these deficiencies of the prior art, the object of the present invention is to provide a real-time target recognition method that uses dynamic projection as the background. Built on image segmentation by background subtraction, the method achieves real-time target recognition under a dynamic background and obtains results in real time even when the background changes rapidly.
To realize this purpose, the technical solution adopted by the present invention is as follows:
1) Geometric calibration of the system: first obtain the coordinates of the projector's projection region in the camera image plane by a difference technique, then use a camera self-calibration method to compute the affine transformation matrix from the frame-buffer plane to the camera image plane, thereby establishing the geometric mapping between the pixels of the frame-buffer image and the pixels of the captured image;
2) Color calibration of the system: compute the camera's photometric response function and build, for each pixel of the projection region, a color lookup table from projected brightness in the frame buffer to brightness captured by the camera, thereby obtaining the brightness mapping between each pixel's projected brightness and its captured brightness;
3) Target extraction: using the correspondences established by the geometric and color calibration, transform each frame of the frame buffer in real time, take the difference between the transformed result and the image captured by the camera, and finally extract the target.
Compared with the prior art, the present invention has the following beneficial effects:
By establishing a geometric transformation and a color lookup table, the invention can efficiently use the frame buffer to build, in real time, a background model for target extraction, achieving fast target recognition under a dynamic background. The recognition algorithm needs no extra hardware support: a projector and a camera alone suffice to build an augmented-reality system that performs fast and accurate target recognition.
Description of drawings
Fig. 1 is a diagram of the system structure;
Fig. 2 shows the acquisition of the projection region;
Fig. 3 shows the checkerboard vertex coordinate pairs;
Fig. 4 shows the result of geometric calibration;
Fig. 5 shows the result of target extraction;
Fig. 6 is the flow chart of target tracking.
Embodiment
The invention is further described below with reference to the drawings and embodiments.
The present invention proposes a real-time target recognition method that uses dynamic projection as the background. The method comprises three steps: geometric calibration of the system, color calibration, and target extraction. First, the system composed of a projector and a camera is geometrically calibrated, in two sub-steps: determining the projection region and computing the camera transformation matrix. The system is then color-calibrated, in two sub-steps: computing the camera's photometric response function and building the color lookup table. Finally, using the transformation relations determined in the first two steps, the target can be extracted in real time; continuous real-time recognition of the target amounts to tracking it, and the flow is shown in Fig. 6. The system on which the method runs is shown in Fig. 1: 1 is the projector, which projects an image that changes with the motion of the target; 2 is the camera, which captures the projected image and the target on the projection surface; 3 is the computer, which controls the projector and the camera, tracks the target, and processes and draws the image; 4 is the surface onto which the projector projects.
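As an illustration only, the per-frame loop of Fig. 6 can be sketched in Python with OpenCV as follows; every name here (including the lut.predict helper, which stands in for the color-lookup-table interpolation built in step 4 below) is an assumption of this sketch, not part of the patent.

```python
import cv2
import numpy as np

def track_frame(framebuffer_img, camera_img, affine, lut, T):
    """One iteration of the tracking loop (illustrative sketch).

    framebuffer_img : grayscale image the computer is currently projecting
    camera_img      : grayscale frame just captured by the camera
    affine          : 2x3 matrix mapping frame-buffer points to camera points
    lut             : per-pixel color lookup table (see step 4); lut.predict
                      returns the brightness the camera should see
    T               : maximum color-correction error (threshold)
    """
    h, w = camera_img.shape
    # Geometric calibration result: warp the frame buffer into camera coordinates.
    warped = cv2.warpAffine(framebuffer_img, affine, (w, h))
    # Color calibration result: predict the captured brightness c(i).
    predicted = lut.predict(warped)
    # Background subtraction: pixels far from the prediction are target pixels.
    diff = np.abs(camera_img.astype(np.float32) - predicted.astype(np.float32))
    return diff >= T  # True where the pixel belongs to the target
```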
The steps of the method are now described in detail:
1) Acquisition of the projection region:
a) Use the computer to generate a full-screen black picture and project it onto the projection screen (which may be any projection surface, such as a floor or a wall); then capture the darkest projected picture with the camera, as shown in Fig. 2a.
b) Use the computer to generate a full-screen white picture and project it onto the projection screen; then capture the brightest projected picture with the camera, as shown in Fig. 2b.
c) Subtract the captured images of the darkest and the brightest projected pictures pixel by pixel; this yields the position of the projection region in the camera picture, as shown in Fig. 2c.
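Assuming an OpenCV pipeline, steps a)-c) might look like the following sketch (the file names and the binarization threshold are illustrative assumptions):

```python
import cv2

# a) capture of the projected full-screen black picture (darkest picture)
dark = cv2.imread("captured_black.png", cv2.IMREAD_GRAYSCALE)
# b) capture of the projected full-screen white picture (brightest picture)
bright = cv2.imread("captured_white.png", cv2.IMREAD_GRAYSCALE)

# c) pixel-by-pixel difference: only pixels lit by the projector change strongly
diff = cv2.absdiff(bright, dark)

# Binarize to obtain the projection region in the camera picture
# (the threshold value 40 is an assumed, scene-dependent choice).
_, projection_region = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
```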
2) Geometric calibration of the system:
This step uses a camera self-calibration method to obtain the affine transformation matrix, as follows:
a) Project a 9×7 black-and-white checkerboard pattern into the projection region and capture it.
b) First fix the checkerboard's X and Y axes and their positive directions.
c) The checkerboard is generated uniformly by the computer, so the coordinates of each checkerboard vertex in the computer-generated picture are known. Fig. 3a shows the computer-generated checkerboard image.
d) Run corner detection on the captured image (shown in Fig. 3b) to obtain the coordinates of each checkerboard vertex in the captured image.
e) Use the two sets of vertex coordinates obtained above to solve the affine transformation equations.
After this step we obtain the transformation that maps points of the computer-generated picture into the image captured by the camera (see Fig. 4).
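A minimal OpenCV sketch of steps a)-e) follows; it assumes the 9×7-square board exposes 8×6 inner corners, detects them in both the generated and the captured picture, and solves the affine equations in the least-squares sense with cv2.estimateAffine2D (file names are illustrative).

```python
import cv2

PATTERN = (8, 6)  # inner corners of a 9x7-square checkerboard

generated = cv2.imread("checkerboard_framebuffer.png", cv2.IMREAD_GRAYSCALE)
captured = cv2.imread("checkerboard_camera.png", cv2.IMREAD_GRAYSCALE)

# Vertex coordinates in the generated picture are known by construction;
# detecting them with the same routine keeps both point sets in the same order.
ok_g, src = cv2.findChessboardCorners(generated, PATTERN)
ok_c, dst = cv2.findChessboardCorners(captured, PATTERN)
assert ok_g and ok_c, "checkerboard corners not found"

# Affine transformation from the frame-buffer plane to the camera image plane.
affine, _ = cv2.estimateAffine2D(src.reshape(-1, 2), dst.reshape(-1, 2))
```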
3) Calibration of the camera response function:
If the system uses a camera that changes its aperture and exposure time automatically, the camera's response function must be calibrated. In a typical imaging system the response function has the form M = g(I), where M is the image brightness, I is the scene brightness recorded in the picture, and g is the response function. Assuming the response function is monotonic, g has an inverse f with I = f(M). For constant scene brightness, I is directly proportional to the aperture and shutter values used when the camera captures the image.
Mitsunaga and Nayar model the response function f with an N-th order polynomial; see [Mitsunaga99] Mitsunaga T, Nayar S, "Radiometric Self Calibration", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), 1999, 1:1374. The polynomial has the form

f(M) = Σ_{n=0}^{N} c_n M^n

where N is the polynomial order and c_n is the coefficient of the n-th power; calibration consists of solving for N and each c_n.
The response function is calibrated as follows:
a) Use the computer to generate a full-screen gray picture whose brightness Y steps in 3-4 equal increments from the minimum value 16 to the maximum value 235, and capture 3-4 images.
b) Using the brightness M of the ambient pixels (points outside the projection region) in these images, compute the camera response function f by the method described in [Mitsunaga99].
c) Set the computer-generated picture brightness Y to its maximum and define the image captured at that moment as the benchmark image. Using the response function I = f(M), compute the scene brightness I_q recorded by the ambient pixels (points outside the projection region) of the benchmark image (the subscript q denotes the benchmark image).
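As a simplified illustration of the polynomial model, the sketch below fits the coefficients c_n by linear least squares from pairs of ambient-pixel brightnesses observed in two captures, assuming the exposure ratio R between them is known and the order N is fixed, with f(1) = 1 resolving the scale ambiguity; the full method of [Mitsunaga99] additionally searches over R and N.

```python
import numpy as np

def fit_response(M_pairs, R, N=3):
    """Fit f(M) = sum_{n=0}^{N} c_n M^n from ambient-pixel brightness pairs.

    M_pairs : (P, 2) array; row p holds the normalized brightness (in [0, 1])
              of the same ambient pixel in two captures with exposure ratio R,
              so that f(M_a) = R * f(M_b) for every pair.
    Returns the coefficients c_0 .. c_N, constrained so that f(1) = 1.
    """
    Ma, Mb = M_pairs[:, 0], M_pairs[:, 1]
    powers_a = np.stack([Ma**n for n in range(N + 1)], axis=1)
    powers_b = np.stack([Mb**n for n in range(N + 1)], axis=1)
    A = powers_a - R * powers_b            # rows encode f(Ma) - R f(Mb) = 0
    # Enforce sum(c) = 1 by eliminating the last coefficient.
    A_reduced = A[:, :N] - A[:, N:N + 1]
    c_head, *_ = np.linalg.lstsq(A_reduced, -A[:, N], rcond=None)
    return np.append(c_head, 1.0 - c_head.sum())

# Evaluating the fitted response: I = np.polyval(c[::-1], M)
```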
4) Color calibration of the system:
The color calibration builds a color lookup table pixel by pixel over the projection region, as follows:
a) Use the computer to generate a full-screen gray picture whose brightness Y steps in n equal increments from the minimum to the maximum value, and capture n images in which the brightness of the projection region increases successively; n is typically 10-20.
b) Use the response function f to compute the scene brightness I_{p,q+1} recorded by the pixels outside the projection region in the n captured images (the subscript p indexes pixels and q+1 indexes the captured images).
c) Compute each captured image's exposure-value ratio (aperture size and shutter time) relative to the benchmark image,

R_{q,q+1} = I_{p,q} / I_{p,q+1},

evaluated on the ambient pixels p, whose scene brightness is constant across the captures.
d) Use the formula

I_{p,q} = I_{p,q+1} × R_{q,q+1} = f(M_{p,q+1}) × R_{q,q+1}

to normalize the brightness M_{p,q+1} of the projection-region pixels in the images captured under different exposure values to the brightness record I_{p,q} of an image captured at the benchmark exposure value.
e) Now consider only the projection region: for each of its pixels we obtain n point pairs relating the computer-generated picture brightness to the corresponding captured brightness record I_q.
f) Linear interpolation between these point pairs then yields, for each pixel of the projection region, an estimate of the brightness record I_q that the camera would capture at the benchmark exposure value as the pixel's generated brightness varies from the minimum to the maximum value.
This establishes a difference model, driven directly by the content of the computer-generated picture, that can be used for target extraction.
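A per-pixel lookup table with linear interpolation might be organized as in the following sketch; the ColorLookupTable class and its predict method are assumptions of this sketch (matching the lut.predict helper used in the loop above), with the table holding one normalized brightness record I_q per projected brightness level.

```python
import numpy as np

class ColorLookupTable:
    """Per-pixel map from projected brightness to the brightness record I_q
    the camera captures at the benchmark exposure (illustrative sketch)."""

    def __init__(self, levels, records):
        self.Y = np.asarray(levels, dtype=np.float32)   # (n,) strictly increasing levels
        self.I = np.asarray(records, dtype=np.float32)  # (n, H, W) records I_q

    def predict(self, projected):
        """Linearly interpolate the records, independently at every pixel."""
        Y = np.clip(projected.astype(np.float32), self.Y[0], self.Y[-1])
        idx = np.clip(np.searchsorted(self.Y, Y) - 1, 0, len(self.Y) - 2)
        y0, y1 = self.Y[idx], self.Y[idx + 1]
        w = (Y - y0) / (y1 - y0)               # interpolation weight per pixel
        rows, cols = np.indices(projected.shape)
        return (1 - w) * self.I[idx, rows, cols] + w * self.I[idx + 1, rows, cols]
```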
5) Target extraction:
Using the correspondences established by the geometric and color calibration, each frame of the computer-generated picture can be transformed in real time and differenced against the image captured by the camera, which extracts the target (see Fig. 5). The method is as follows.
Let: i be a pixel inside the projection region;
c(i) be the brightness record obtained by interpolating the color lookup table according to the content of the computer-generated image;
d(i) be the brightness record obtained by normalizing the image captured by the camera to the benchmark exposure value;
T be a threshold: the error of the color correction is measured in advance, and its maximum value is taken as the threshold T for target extraction.
Then a point i satisfying c(i) - T < d(i) < c(i) + T is classified as a background pixel; otherwise it is a target pixel.
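The classification rule maps directly onto a vectorized test; the sketch below assumes c and d are arrays already normalized to the benchmark exposure and T is the measured maximum color-correction error.

```python
import numpy as np

def extract_target(c, d, T):
    """Return True where a projection-region pixel belongs to the target.

    c : predicted brightness c(i) from the color lookup table
    d : captured brightness d(i) normalized to the benchmark exposure
    T : maximum color-correction error, measured in advance
    """
    background = (d > c - T) & (d < c + T)
    return ~background
```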
Claims (1)
1. A real-time target recognition method using dynamic projection as the background, characterized in that it comprises the following three steps:
1) geometric calibration of the system: first obtain the coordinates of the projector's projection region in the camera image plane by a difference technique, then use a camera self-calibration method to compute the affine transformation matrix from the frame-buffer plane to the camera image plane, thereby establishing the geometric mapping between the pixels of the frame-buffer image and the pixels of the captured image;
said difference technique subtracts the captured images of the darkest and the brightest projected pictures pixel by pixel;
said self-calibration method obtains the affine transformation matrix as follows:
a) project a 9×7 black-and-white checkerboard pattern into the projection region and capture it;
b) first fix the checkerboard's X and Y axes and their positive directions;
c) the checkerboard is generated uniformly by the computer, so the coordinates of each checkerboard vertex in the computer-generated checkerboard picture are known;
d) run corner detection on the captured image to obtain the coordinates of each checkerboard vertex in the captured image;
e) use the two sets of vertex coordinates obtained above to solve the affine transformation equations;
2) color calibration of the system: compute the camera's photometric response function and build, for each pixel of the projection region, a color lookup table from projected brightness in the frame buffer to brightness captured by the camera, thereby obtaining the brightness mapping between each pixel's projected brightness and its captured brightness;
said camera photometric response function is computed as follows:
a) use the computer to generate a full-screen gray picture whose brightness Y steps in 3-4 equal increments from the minimum value 16 to the maximum value 235, and capture 3-4 images;
b) using the brightness M of the ambient pixels, i.e. the points outside the projection region, in these images, fit the response function f as an N-th order polynomial;
c) set the computer-generated picture brightness Y to its maximum, define the image captured at that moment as the benchmark image, and use the response function I = f(M) to compute the scene brightness I_q recorded by the ambient pixels, i.e. the points outside the projection region, of the benchmark image, the subscript q denoting the benchmark image;
said color lookup table is built as follows:
a) use the computer to generate a full-screen gray picture whose brightness Y steps in n equal increments from the minimum to the maximum value, and capture n images in which the brightness of the projection region increases successively, n typically being 10-20;
b) use the response function f to compute the scene brightness I_{p,q+1} recorded by the pixels outside the projection region in the n captured images, the subscript p indexing pixels and q+1 indexing the captured images;
c) compute each captured image's exposure-value ratio relative to the benchmark image, R_{q,q+1} = I_{p,q} / I_{p,q+1}, evaluated on the ambient pixels p;
d) use the formula I_{p,q} = I_{p,q+1} × R_{q,q+1} = f(M_{p,q+1}) × R_{q,q+1} to normalize the brightness M_{p,q+1} of the projection-region pixels in the images captured under different exposure values to the brightness record I_{p,q} of an image captured at the benchmark exposure value;
e) now consider only the projection region: for each of its pixels, obtain n point pairs relating the computer-generated picture brightness to the corresponding captured brightness record I_q;
f) by linear interpolation between these point pairs, obtain, for each pixel of the projection region, an estimate of the brightness record I_q that the camera would capture at the benchmark exposure value as the pixel's generated brightness varies from the minimum to the maximum value;
3) target extraction: using the correspondences established by the geometric and color calibration, transform each frame of the frame buffer in real time, take the difference between the transformed result and the image captured by the camera, and finally extract the target;
said difference between the images is taken as follows:
let: i be a pixel inside the projection region;
c(i) be the brightness record obtained by interpolating the color lookup table according to the content of the computer-generated image;
d(i) be the brightness record obtained by normalizing the image captured by the camera to the benchmark exposure value;
T be a threshold: the error of the color correction is measured in advance, and its maximum value is taken as the threshold T for target extraction;
then a point i satisfying c(i) - T < d(i) < c(i) + T is classified as a background pixel; otherwise it is a target pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2007100710782A CN100527165C (en) | 2007-09-04 | 2007-09-04 | Real time object identification method taking dynamic projection as background |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101140661A CN101140661A (en) | 2008-03-12 |
CN100527165C (en) | 2009-08-12
Family
ID=39192605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2007100710782A Expired - Fee Related CN100527165C (en) | 2007-09-04 | 2007-09-04 | Real time object identification method taking dynamic projection as background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100527165C (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2382768A1 (en) * | 2008-12-22 | 2011-11-02 | Koninklijke Philips Electronics N.V. | Method for changing an image data signal, device for changing an image data signal, display device |
US8355601B2 (en) * | 2010-01-15 | 2013-01-15 | Seiko Epson Corporation | Real-time geometry aware projection and fast re-calibration |
CN102109972B (en) * | 2011-02-14 | 2012-09-12 | 深圳雅图数字视频技术有限公司 | Projector television wall display method and system |
CN102801952B (en) * | 2011-05-28 | 2015-01-21 | 华为终端有限公司 | Method and device for adjusting video conference system |
CN103020950B (en) * | 2011-09-27 | 2015-09-09 | 华为终端有限公司 | Luminance function acquisition methods and relevant apparatus |
CN102521829A (en) * | 2011-11-22 | 2012-06-27 | 无锡海森诺科技有限公司 | Optical touch image calibrating method |
CN104766329B (en) * | 2013-01-28 | 2018-04-27 | 海信集团有限公司 | A kind of image processing method and electronic equipment |
JP6217244B2 (en) * | 2013-08-29 | 2017-10-25 | セイコーエプソン株式会社 | Image processing apparatus, head-mounted display apparatus having the same, image processing method, and computer program |
US8736685B1 (en) * | 2013-12-11 | 2014-05-27 | Anritsu Company | Systems and methods for measuring brightness response of a camera operating in automatic exposure mode |
CN104202547B (en) * | 2014-08-27 | 2017-10-10 | 广东威创视讯科技股份有限公司 | Method, projection interactive approach and its system of target object are extracted in projected picture |
CN104503673B (en) * | 2014-12-08 | 2018-01-16 | 昆山国显光电有限公司 | A kind of adjustable touch control method of display screen |
US10066982B2 (en) * | 2015-06-16 | 2018-09-04 | Hand Held Products, Inc. | Calibrating a volume dimensioner |
DE102015216908A1 (en) * | 2015-09-03 | 2017-03-09 | Robert Bosch Gmbh | Method of detecting objects on a shelf |
CN106780616B (en) * | 2016-11-23 | 2019-09-27 | 安徽慧视金瞳科技有限公司 | A kind of projector calibrating method based on the mapping of more matrixes |
CN109685853B (en) * | 2018-11-30 | 2021-02-02 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN110390668B (en) * | 2019-06-26 | 2022-02-01 | 石家庄铁道大学 | Bolt looseness detection method, terminal device and storage medium |
CN111860142B (en) * | 2020-06-10 | 2024-08-02 | 南京翱翔信息物理融合创新研究院有限公司 | Gesture interaction method based on machine vision and oriented to projection enhancement |
- 2007-09-04: CN application CNB2007100710782A granted as patent CN100527165C (not active; Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN101140661A (en) | 2008-03-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20090812; Termination date: 20140904 |
EXPY | Termination of patent right or utility model | |