CN106650665A - Face tracking method and device - Google Patents
Face tracking method and device
- Publication number
- CN106650665A CN106650665A CN201611216574.8A CN201611216574A CN106650665A CN 106650665 A CN106650665 A CN 106650665A CN 201611216574 A CN201611216574 A CN 201611216574A CN 106650665 A CN106650665 A CN 106650665A
- Authority
- CN
- China
- Prior art keywords
- infrared image
- coordinate
- human face
- central point
- visible images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a face tracking method and device. The method comprises: acquiring an infrared image and a visible light image; detecting the face region of a target object in the infrared image and obtaining the coordinates of the center point of the face region in the infrared image; based on those coordinates, obtaining the coordinates of the corresponding point in the visible light image; obtaining the horizontal angle and the vertical angle of that corresponding point relative to the center point of the visible light image; and controlling a pan-tilt head to rotate based on the horizontal angle and the vertical angle so as to track the face of the target object. The method can automatically track and align with the face of the target object, improving alignment efficiency and the user experience. The face tracking device provides similar advantages.
Description
Technical field
The present invention relates to the technical field of image recognition, and more particularly to a face tracking method and device capable of automatically tracking the orientation of a human face.
Background art
With economic development and the advance of information technology, monitoring demands in fields such as access control, security and counter-terrorism have raised higher requirements for the intelligence of monitoring devices, while the continuous development of video imaging, control and communication technologies has provided the conditions for such intelligent monitoring devices. Introducing artificial intelligence into the monitoring field has become a trend. In particular, face recognition monitoring systems have a large market demand in fields such as identity recognition, security and stability maintenance, and are expected to form an industry chain on the order of hundreds of billions of RMB. Designing a set of high-quality, well-engineered face recognition systems as early as possible is therefore undoubtedly of great value to the industry.
Current monitoring devices capable of face recognition generally require the identified object to actively face the camera for frontal alignment, which results in a poor user experience: on the one hand, not every identified object is willing to align with the camera; on the other hand, active alignment by the identified object is slow.
Accordingly, it is desirable to provide a face tracking method and device to solve, at least in part, the above problems.
Summary of the invention
This summary introduces a selection of concepts in simplified form that are described in further detail in the detailed description. This summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to limit the scope of protection of the claimed technical solution.
To solve the above problems at least in part, the present invention provides a face tracking method, comprising: acquiring an infrared image and a visible light image, the infrared image and the visible light image being captured by an infrared image sensor and a visible light image sensor, respectively; detecting the face region of a target object in the infrared image and obtaining the coordinates of the center point of the face region in the infrared image; according to the coordinates of the center point of the face region in the infrared image, obtaining the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face region in the infrared image; obtaining the horizontal angle and the vertical angle of that corresponding point relative to the center point of the visible light image, wherein the horizontal angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical angle is the angle between that line and the vertical direction of the visible light image; and controlling a pan-tilt head to rotate based on the horizontal angle and the vertical angle so as to track the face of the target object.
Exemplarily, detecting the face region of the target object in the infrared image and obtaining the coordinates of the center point of the face region in the infrared image comprises: obtaining the coordinates, in the infrared image, of the heat radiation points sensed by the infrared image sensor; marking the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1, and taking the region whose pixel value is 1 as the target object region in the infrared image; extracting a predetermined area of the target object region as the face region in the infrared image; and obtaining the coordinate value of the center point of the face region in the infrared image as the coordinates of the center point of the face region in the infrared image.
Exemplarily, obtaining the coordinate value of the center point of the face region in the infrared image as the coordinates of the center point of the face region in the infrared image comprises: intercepting the circumscribed rectangle of the face region in the infrared image, and taking the coordinates of the center point of the circumscribed rectangle as the coordinates of the center point of the face region in the infrared image.
Exemplarily, obtaining, according to the coordinates of the center point of the face region in the infrared image, the corresponding coordinates of the corresponding point in the visible light image comprises: transforming, based on a coordinate transformation relation, the coordinates of the center point of the face region in the infrared image into the corresponding coordinates of the corresponding point in the visible light image.
Exemplarily, the method further comprises obtaining the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object, wherein the step of obtaining the coordinate transformation relation comprises: obtaining, by the visible light image sensor and the infrared image sensor respectively, the visible light image including the first reference object and the infrared image including the first reference object, wherein the first reference object is placed within the angles of view of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object; obtaining the coordinates of the at least one reference point in the visible light image including the first reference object and in the infrared image including the first reference object, respectively; and obtaining the coordinate transformation relation from the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object.
According to another aspect of the present invention, a face tracking device is also provided, comprising: an image acquisition module for acquiring an infrared image and a visible light image; an infrared-image face center point coordinate acquisition module for detecting the face region of a target object in the infrared image and obtaining the coordinates of the center point of the face region in the infrared image; a visible-light-image corresponding coordinate acquisition module for obtaining, according to the coordinates of the center point of the face region in the infrared image, the corresponding coordinates of the corresponding point in the visible light image; an angle detection module for obtaining the horizontal angle and the vertical angle, relative to the center point of the visible light image, of the corresponding point in the visible light image of the center point of the face region in the infrared image, wherein the horizontal angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical angle is the angle between that line and the vertical direction of the visible light image; and a pan-tilt rotation control module for controlling a pan-tilt head to rotate based on the horizontal angle and the vertical angle so as to track the face of the target object.
Exemplarily, the face tracking device further comprises: an infrared image sensor for acquiring the infrared image; and a visible light image sensor for acquiring the visible light image.
Exemplarily, the infrared-image face center point coordinate acquisition module comprises: a coordinate acquisition submodule for obtaining the coordinates, in the infrared image, of the heat radiation points sensed by the infrared image sensor; a pixel marking submodule for marking the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1 and taking the region whose pixel value is 1 as the target object region in the infrared image; a face region extraction submodule for extracting a predetermined area of the target object region as the face region in the infrared image; and an infrared-image face center point coordinate calculation submodule for obtaining the coordinate value of the center point of the face region in the infrared image as the coordinates of the center point of the face region in the infrared image.
Exemplarily, the infrared-image face center point coordinate calculation submodule comprises an interception unit for intercepting the circumscribed rectangle of the face region in the infrared image and taking the coordinates of the center point of the circumscribed rectangle as the coordinates of the center point of the face region in the infrared image.
Exemplarily, the visible-light-image corresponding coordinate acquisition module transforms, based on a coordinate transformation relation, the coordinates of the center point of the face region in the infrared image into the corresponding coordinates of the corresponding point in the visible light image.
Exemplarily, the face tracking device further comprises a coordinate transformation relation acquisition module for obtaining the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object, wherein the coordinate transformation relation acquisition module comprises:
a reference object image acquisition submodule for obtaining, by the visible light image sensor and the infrared image sensor respectively, the visible light image including the first reference object and the infrared image including the first reference object, wherein the first reference object is placed within the angles of view of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object; a reference point coordinate acquisition submodule for obtaining the coordinates of the at least one reference point in the visible light image including the first reference object and in the infrared image including the first reference object, respectively; and a coordinate transformation relation acquisition submodule for obtaining the coordinate transformation relation from the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object.
Exemplarily, the image acquisition module, the infrared-image face center point coordinate acquisition module, the visible-light-image corresponding coordinate acquisition module, the angle detection module and the pan-tilt rotation control module are implemented by the same processor.
According to the face tracking method and device of the present invention, an infrared image is formed by sensing the heat radiation of a human body, the coordinates of the center point of the face region are obtained from the infrared image and transformed into the corresponding coordinates of the corresponding point in the visible light image, the horizontal angle and the vertical angle of the corresponding point relative to the center point of the visible light image are then obtained, and the pan-tilt head is controlled to rotate based on the horizontal angle and the vertical angle, thereby tracking the face of the target object. In other words, the face tracking method and device of the present invention can automatically track and align with the face of the identified target object without requiring the target object to actively face the camera, thereby improving the alignment time and the user experience.
Brief description of the drawings
The following drawings of the embodiments of the present invention form part of the present invention and serve to aid understanding of the present invention. The drawings illustrate embodiments of the present invention and their description, and serve to explain the principles of the present invention. In the drawings:
Fig. 1 is a schematic block diagram of an exemplary electronic device for implementing the face tracking method and device according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the face tracking method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of the method for obtaining the coordinates of the center point of the face region in the infrared image according to an embodiment of the present invention;
Fig. 4 is a schematic block diagram of the face tracking device according to an embodiment of the present invention;
Fig. 5 is a schematic block diagram of the infrared-image face center point coordinate acquisition module according to an embodiment of the present invention;
Fig. 6 is a schematic block diagram of the coordinate transformation relation acquisition module according to an embodiment of the present invention;
Fig. 7 is a schematic block diagram of the face tracking apparatus according to an embodiment of the present invention.
Detailed description
In the following description, numerous specific details are given in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without one or more of these details. In other examples, technical features that are well known in the art are not described in order to avoid obscuring the embodiments of the present invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as being limited to the embodiments set forth herein. On the contrary, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the drawings, the sizes and relative sizes of parts and elements may be exaggerated for clarity. The same reference numerals denote the same elements throughout.
In order to make the objects, technical solutions and advantages of the present invention clearer, exemplary embodiments according to the present invention are described in detail below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the exemplary embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention described herein without creative effort shall fall within the scope of protection of the present invention.
First, an exemplary electronic device 100 for implementing the face tracking method and device according to an embodiment of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input/output device 106, a communication interface 108, one or more image signal processors (ISP) 110, a visible light image sensor 114 and an infrared image sensor 116, which are interconnected by a bus system 112 and/or connection mechanisms of other forms (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are exemplary rather than limiting; as needed, the electronic device may have other components and structures, or may omit some of the above components; for example, the communication interface 108 may be included or omitted.
The processor 102 represents any type or form of processing unit capable of processing data or interpreting and executing instructions. In general, the processor may be a central processing unit (CPU) or a processing unit of another form having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions. In particular embodiments, the processor 102 may receive instructions from a software application or module, and these instructions may cause the processor 102 to perform the functions of one or more of the exemplary embodiments described and/or illustrated herein.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functions (implemented by the processor) in the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
The input/output device 106 may be a device used for inputting instructions and outputting various information to the outside. For example, the input device may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like, and the output device may include one or more of a display, a speaker, and the like.
The communication interface 108 broadly represents any type or form of adapter or communication device capable of facilitating communication between the exemplary electronic device 100 and one or more additional devices. For example, the communication interface 108 may facilitate communication between the electronic device 100 and front-end or accessory electronic devices, and between the electronic device 100 and back-end servers or the cloud. Examples of the communication interface 108 include, but are not limited to, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem and any other suitable interface. In one embodiment, the communication interface 108 provides a direct connection to a remote server/remote front-end device through a direct connection to a network such as the Internet. In particular embodiments, the communication interface 108 provides a direct connection to a remote server/remote front-end device through a direct connection to a dedicated network, such as a video surveillance network or a "Skynet" system network. The communication interface 108 may also provide such a connection indirectly through any other suitable connection.
The image signal processor (ISP) 110 is configured to control the visible light image sensor 114 and the infrared image sensor 116 to acquire images, and to adjust parameters such as digital gain, analog gain, black level correction, AE (automatic exposure) and wide dynamic range, as well as image quality parameters such as brightness and contrast, so as to adjust image quality. It generates the corresponding visible light exposure image and infrared image based on the output signals of the visible light image sensor 114 and the infrared image sensor 116, and transmits them to the processor 102 for subsequent processing via a connection unit such as the bus system 112. Meanwhile, the image signal processor (ISP) 110 may receive commands or instructions from the processor 102 in order to adjust and control the visible light image sensor 114 and the infrared image sensor 116. One or more image signal processors (ISP) 110 may be provided as needed according to the number of visible light image sensors 114 and infrared image sensors 116, and the image signal processor (ISP) 110 may either be integrated with the processor 102 and the like as a system on chip (SoC), or be provided separately and connected through a connection unit such as the bus system 112.
The visible light image sensor 114 may capture images desired by the user (such as photos and videos) formed by visible light exposure, and the captured images may be stored in the storage device 104 for use by other components. Examples of the visible light image sensor 114 include an RGB sensor, an RGBW sensor or a black-and-white sensor (which can only capture black-and-white images); it forms color or black-and-white images/videos by sensing visible light and is therefore referred to herein as a visible light image sensor. Accordingly, the images/videos captured by the visible light image sensor are referred to as visible light images/videos or visible light exposure images/videos.
The infrared image sensor 116 is configured to capture infrared images. Since it is based on thermal radiation imaging, it can capture infrared images regardless of the intensity of the ambient light.
Exemplarily, the exemplary electronic device for implementing the face tracking method and device according to the embodiments of the present invention may be implemented as an image acquisition end such as a smart phone or a tablet computer, the image acquisition end of an access control system, a road surveillance system or a security system, or the back-end control and processing end, server end or cloud end of various monitoring and security applications.
Fig. 2 is a schematic flowchart of the face tracking method according to an embodiment of the present invention. The face tracking method according to the embodiment of the present invention is described below with reference to Fig. 2.
As shown in Fig. 2, the face tracking method disclosed in the embodiment of the present invention includes the following steps.
First, in step S201, an infrared image and a visible light image are acquired, the infrared image and the visible light image being captured by the infrared image sensor and the visible light image sensor, respectively.
An infrared image is an image formed based on sensitivity to heat radiation. Compared with visible light imaging, infrared imaging is not affected by the intensity of the ambient light, so a clear infrared image can be obtained even in low light. In the embodiment of the present invention, the image signal processor may control the infrared image sensor to obtain an infrared image of the target object: the infrared image sensor senses the infrared rays formed by heat radiation within its angle of view and converts the infrared signal into an electrical signal, and the infrared image is formed after processing by the image signal processor. In one embodiment, the infrared image includes the target object.
For the visible light image, the image signal processor may control the visible light image sensor to obtain a visible light image of the target object: the visible light image sensor senses the light within its angle of view and converts the optical signal into an electrical signal, and the visible light image is formed after processing by the image signal processor. In one embodiment, the visible light image includes the target object.
In one embodiment, the infrared image and the visible light image are captured by the infrared image sensor and the visible light image sensor, respectively, at the same moment.
Then, in step S202, the face region of the target object in the infrared image is detected and the coordinates of the center point of the face region in the infrared image are obtained.
That is, the acquired infrared image is processed; when the target object is detected in the infrared image, the coordinates of the center point of the face region of the target object in the infrared image are obtained. Obtaining the coordinates of the center point of the face region in the infrared image will be described in detail later.
Then, in step S203, according to the coordinates of the center point of the face region in the infrared image, the corresponding coordinates of the corresponding point in the visible light image are obtained.
Since the positions of the visible light image sensor and the infrared image sensor do not coincide, the coordinates of the center point of the face region in the infrared image are not equal to the coordinates of the center point of the face region in the visible light image. Therefore, in this step a coordinate transformation is applied to the coordinates of the center point of the face region in the infrared image, so as to obtain the corresponding coordinates, in the visible light image, of the corresponding point. In one embodiment, the corresponding point, in the visible light image, of the center point of the face region in the infrared image may be regarded as the center point of the face region in the visible light image. The coordinate transformation transforms, based on a coordinate transformation relation, the coordinates of the center point of the face region in the infrared image into the corresponding coordinates of the corresponding point in the visible light image. For example, in an embodiment of the present invention, the coordinates of the center point of the face region in the infrared image are combined with the transformation offset and/or transformation function given by the coordinate transformation relation, so as to obtain the corresponding coordinates, in the visible light image, of the center point of the face region in the infrared image.
The coordinate transformation relation may give a transformation offset in only one direction, or in multiple directions. In one embodiment, the coordinate transformation relation may likewise give a transformation function in only one direction or in multiple directions. In the embodiment of the present invention, the coordinate transformation relation may be obtained based on an infrared image including a first reference object and a visible light image including the first reference object.
Exemplarily, in an example of the present invention, the coordinate transformation relation may be obtained through the following steps.
First, the visible light image including the first reference object and the infrared image including the first reference object are obtained by the visible light image sensor and the infrared image sensor, respectively, wherein the first reference object is placed within the angles of view of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object. In an example of the present invention, the first reference object is, for example, a sheet of white paper with four black dots marked at its four corners as reference points, and a heat source is applied behind each black dot so that an infrared image can be formed. In an example of the present invention, the infrared image sensor and the visible light image sensor preferably have the same resolution and the same horizontal and vertical angles of view, and are arranged horizontally on the circuit board and close to each other, so as to reduce the error of the coordinate transformation. Furthermore, when a reference point itself does not emit heat radiation, a heat source is applied to the back of each of the at least one reference points to heat it; of course, when the reference point itself emits heat radiation, no heating is needed.
The visible light image including the first reference object and the infrared image including the first reference object can be obtained by the visible light image sensor and the infrared image sensor, respectively. For example, the visible light image including the white paper and the infrared image including the white paper are obtained by the visible light image sensor and the infrared image sensor, respectively. In an example of the present invention, the image signal processor may control the visible light image sensor and the infrared image sensor to capture images, so as to obtain the visible light image including the white paper and the infrared image including the white paper, respectively.
Then, the coordinates of the at least one reference point are obtained in the visible light image including the first reference object and in the infrared image including the first reference object, respectively. For example, the coordinates of the four black dots at the four corners are obtained from the infrared image including the white paper and from the visible light image including the white paper. The coordinates of the black dots can be obtained algorithmically in the coordinate system of each image.
Finally, the coordinate transformation relation is obtained from the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object. For example, after the black dots and their coordinates in the two images have been obtained, the coordinate transformation relation can be derived from the coordinates of the black dots in the two images. In an example of the present invention, the visible light image sensor and the infrared image sensor are placed horizontally in the same plane; therefore the vertical coordinates in the coordinate systems of the visible light image sensor and the infrared image sensor are consistent, while the horizontal coordinates differ. Thus, by comparing the horizontal coordinate difference, i.e. the pixel difference dPix, of corresponding black dots in the two images, the coordinate transformation formula in the horizontal direction can be obtained and used as the coordinate transformation relation. Exemplarily, the coordinate transformation formula in the horizontal direction (denoted as the X direction) is X1 (the horizontal coordinate value in the visible light image sensor) = X2 (the horizontal coordinate value in the infrared image sensor) + dPix. It will be understood that dPix may be positive or negative depending on the relative positions of the visible light image sensor and the infrared image sensor.
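The calibration just described can be illustrated with a minimal sketch. The Python code below assumes the simple model given above (same resolution and angles of view, vertical coordinates aligned, purely horizontal offset); the function and variable names, and the example coordinates, are illustrative assumptions rather than part of the patent.

```python
# Minimal sketch of deriving and applying the coordinate transformation
# relation X1 = X2 + dPix, under the assumptions stated in the text
# (hypothetical helper names, not from the patent).

def estimate_dpix(ir_points, vis_points):
    """Estimate the horizontal pixel offset dPix from matched reference
    points (e.g. the four black dots) in the infrared and visible images."""
    diffs = [vx - ix for (ix, _), (vx, _) in zip(ir_points, vis_points)]
    return sum(diffs) / len(diffs)  # average over all reference points

def ir_to_visible(ir_xy, dpix):
    """Map a point from infrared-image coordinates to visible-image
    coordinates using X1 = X2 + dPix; Y is assumed unchanged."""
    x, y = ir_xy
    return (x + dpix, y)

# Example with made-up reference-point coordinates:
ir_refs  = [(100, 80), (540, 80), (100, 400), (540, 400)]
vis_refs = [(112, 80), (552, 80), (112, 400), (552, 400)]
dpix = estimate_dpix(ir_refs, vis_refs)   # -> 12.0
print(ir_to_visible((320, 240), dpix))    # -> (332.0, 240)
```

Averaging over all reference points simply reduces the effect of detection noise on dPix; with a single reference point the offset would be taken directly.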
It will be appreciated that the above arrangement of the visible light image sensor and the infrared image sensor and the above method of obtaining the coordinate transformation relation are only examples, and those skilled in the art may obtain the coordinate transformation relation by other methods based on similar principles. Furthermore, using a sheet of white paper as the first reference object and marking four black dots at its four corners as reference points is also exemplary; those skilled in the art may use other articles as the reference object and select one or more reference points at arbitrary positions on the reference object.
In step S204, the horizontal angle and the vertical angle, relative to the center point of the visible light image, of the corresponding point in the visible light image of the center point of the face region in the infrared image are obtained, wherein the horizontal angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical angle is the angle between that line and the vertical direction of the visible light image.
That is, based on the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face region in the infrared image and the coordinates of the center point of the visible light image, the horizontal angle and the vertical angle of the corresponding point relative to the center point of the visible light image are obtained, the horizontal angle and the vertical angle being defined as above.
In an example of the present invention, the horizontal angle and the vertical angle of the corresponding point of the target object in the visible light image relative to the center point of the visible light image are calculated by a certain algorithm based on the corresponding coordinates in the visible light image and the coordinates of the center point of the visible light image.
The coordinates of the center point of the visible light image can be obtained based on the coordinate system or the resolution of the visible light image. For example, in an example of the present invention, with coordinates in units of pixels and the resolution of the visible light image sensor fixed at W x H, the coordinates of the center point of the visible light image are (W/2, H/2) when W and H are even, or ((W-1)/2, (H-1)/2) or ((W+1)/2, (H+1)/2) when W and H are odd. Of course, in other embodiments, other suitable methods may be used depending on the particular settings.
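The patent only states that the angles are "calculated by a certain algorithm". One commonly used variant, shown as a minimal sketch below, converts the pixel offset of the corresponding point from the image center into pan/tilt angles using the sensor's horizontal and vertical angles of view under a pinhole-camera assumption; the function name, parameters and example values are hypothetical and this is an interpretation, not the patent's prescribed algorithm.

```python
import math

def offset_angles(point_xy, width, height, hfov_deg, vfov_deg):
    """Approximate horizontal/vertical angles (degrees) of a pixel relative
    to the image center, assuming a pinhole camera with the given horizontal
    and vertical angles of view (hypothetical helper, not from the patent)."""
    cx, cy = width / 2.0, height / 2.0
    dx, dy = point_xy[0] - cx, point_xy[1] - cy
    # Focal lengths in pixels derived from the angles of view.
    fx = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    fy = (height / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    pan_angle = math.degrees(math.atan2(dx, fx))   # horizontal angle
    tilt_angle = math.degrees(math.atan2(dy, fy))  # vertical angle
    return pan_angle, tilt_angle

# Example: corresponding point at (960, 300) in a 1280x720 visible image.
print(offset_angles((960, 300), 1280, 720, hfov_deg=60.0, vfov_deg=34.0))
```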
In step S205, the pan-tilt head is controlled to rotate based on the horizontal angle and the vertical angle, so as to track the face of the target object.
The pan-tilt head is a device that supports the infrared image sensor/acquisition unit and the visible light image sensor/acquisition unit and can adjust their image acquisition regions; it may adopt any suitable structure. Once the horizontal angle and the vertical angle of the corresponding point in the visible light image relative to the center point of the visible light image have been determined, the pan-tilt head can be controlled to rotate based on the horizontal angle and the vertical angle, thereby adjusting the image acquisition region of the visible light image sensor and tracking the face of the target object. For example, the pan-tilt head is controlled to rotate based on the horizontal angle and the vertical angle so that the visible light image sensor is aligned with the face region of the target object and the face region is roughly at the center of the newly captured visible light image, thereby tracking and aligning with the face region of the target object.
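A minimal single-iteration control-loop sketch tying steps S202 to S205 together is shown below. It reuses ir_to_visible() and offset_angles() from the sketches above and detect_ir_face_center() from the sketch given after the Fig. 3 description below; the PanTiltHead class is a hypothetical stand-in for a real pan-tilt driver, not an interface defined by the patent.

```python
# Illustrative tracking iteration (hypothetical interfaces; assumes the
# helper functions from the other sketches in this description are defined).

class PanTiltHead:
    def rotate_by(self, pan_deg, tilt_deg):
        """Rotate the head by relative pan/tilt angles in degrees (stub)."""
        print(f"rotate pan={pan_deg:.1f} deg, tilt={tilt_deg:.1f} deg")

def track_once(ir_frame, vis_width, vis_height, head,
               dpix, hfov_deg, vfov_deg):
    center = detect_ir_face_center(ir_frame)            # step S202 / Fig. 3
    if center is None:
        return False                                     # no target detected
    vis_point = ir_to_visible(center, dpix)              # step S203
    pan, tilt = offset_angles(vis_point, vis_width, vis_height,
                              hfov_deg, vfov_deg)        # step S204
    head.rotate_by(pan, tilt)                            # step S205
    return True
```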
In one embodiment, in the step of obtaining the transformation relation, the relative positions of the visible light image sensor and the infrared image sensor are fixed (for example, arranged horizontally and close to each other). Therefore, during the rotation of the pan-tilt head, the acquisition regions of the infrared image sensor and the visible light image sensor are adjusted together while the relative positions of the visible light image sensor and the infrared image sensor remain unchanged.
Exemplarily, the face tracking method according to this embodiment may be deployed at a face image acquisition end; for example, in security or financial applications, it may be deployed at the image acquisition end of an access control system or an image acquisition system. Alternatively, the face tracking method according to the embodiment of the present invention may be deployed in a distributed manner across a server end (or cloud end) and an image acquisition end. For example, in the field of financial applications, target object tracking and image acquisition may be performed at the image acquisition end (such as the imaging end), while the pan-tilt rotation control and the like are performed at the server end (or cloud end).
According to the face tracking method of this embodiment, an infrared image is formed by sensing the heat radiation of a human body, the coordinates of the center point of the face region in the infrared image are obtained from the infrared image and converted into the corresponding coordinates of the corresponding point in the visible light image, the horizontal angle and the vertical angle of the corresponding point relative to the center point of the visible light image are then obtained, and the pan-tilt head is controlled to rotate based on the horizontal angle and the vertical angle so as to track the face of the target object. In other words, the face tracking method of the present invention can automatically track and align with the face of the identified target object without requiring the target object to actively face the camera, thereby improving the alignment time and the user experience.
Fig. 3 is a schematic flowchart of the method for obtaining the coordinates of the center point of the face region in the infrared image according to an embodiment of the present invention. The method is described below with reference to Fig. 3.
As shown in Fig. 3, the method disclosed in the embodiment of the present invention for detecting the face region of the target object in the infrared image and obtaining the coordinates of the center point of the face region in the infrared image includes the following steps.
First, in step S301, the coordinates, in the infrared image, of the heat radiation points sensed by the infrared image sensor are obtained.
The infrared image is formed by sensing heat radiation infrared rays; therefore, when the target object is a human body, the infrared image can be formed by sensing the heat radiation infrared rays of the human body, and the human body contour can be obtained by obtaining the coordinates of the heat radiation points in the infrared image. In the embodiment of the present invention, the coordinates of the heat radiation points in the infrared image can be obtained by algorithmically determining the positions of the heat radiation points in the coordinate system of the infrared image.
In step S302, the pixel values corresponding to the above coordinates are marked as 1. That is, the pixel values corresponding to the coordinates of the heat radiation points in the infrared image are marked as 1, and the region whose pixel value is 1 is taken as the target object region (human body contour) in the infrared image. Exemplarily, an existing graph cut technique (for example, the Graph Cut algorithm) may further be used to extract the region whose pixel value is 1, so as to obtain an infrared image containing only the target object region.
Exemplarily, in one example, when the pixel values corresponding to the coordinates of all the heat radiation points of the infrared image are marked as 1, the pixel values of the other regions are marked as 0. Alternatively, in another embodiment, the pixel values of all pixels may be set to 0 by default in advance, and after the coordinate values of the heat radiation points in the infrared image have been obtained, the pixel values corresponding to the coordinates of all the heat radiation points of the infrared image are marked as 1; the target object region can likewise be obtained from the infrared image in this way.
In step S303, a predetermined area of the target object region is extracted as the face region in the infrared image.
That is, the predetermined area is set according to the typical relative positional relationship between the face region and the target object region in an infrared image; after the target object region has been obtained in step S302, the predetermined area of the target object region is extracted as the face region in the infrared image. In an example of the present invention, the predetermined area is the region formed by the first 50 rows of pixels, from top to bottom, whose pixel value is 1 in the target object region, i.e. the region formed by the first 50 rows of pixels with value 1 in the target object region is extracted as the face region. Of course, in other embodiments, a pixel region of another number of rows may be chosen as the face region, or other methods may be used to extract the face region from the target object region.
In step S304, the coordinate value of the center point of the face region in the infrared image is obtained as the coordinates of the center point of the face region in the infrared image.
That is, the position of the center point of the face region in the infrared image is obtained. For example, in an example of the present invention, the circumscribed rectangle of the face region in the infrared image may be intercepted by a suitable algorithm, and the coordinates of the center point of the circumscribed rectangle are taken as the coordinates of the center point of the face region in the infrared image. Of course, in other embodiments, other circumscribed or inscribed figures of the face region may be intercepted to obtain the coordinates of the center point, or the coordinates of the center point of the face region may be obtained by other suitable methods.
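A minimal sketch of steps S301 to S304 is given below. It assumes the infrared frame is available as a 2D intensity array and that a simple fixed threshold stands in for the sensor's heat-radiation-point detection; the threshold value, the 50-row rule as implemented here and the helper name are illustrative assumptions rather than normative details of the patent.

```python
import numpy as np

def detect_ir_face_center(ir_frame, heat_threshold=200, face_rows=50):
    """Sketch of S301-S304: binarize heat radiation points, take the target
    object region, keep the top rows as the face region, and return the
    center of its circumscribed rectangle (illustrative, not from the patent)."""
    # S301/S302: mark pixels at heat radiation points as 1, others as 0.
    mask = (ir_frame >= heat_threshold).astype(np.uint8)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                              # no target object detected
    # S303: keep the first `face_rows` rows (top to bottom) that contain 1s.
    rows_with_body = np.unique(ys)               # sorted top to bottom
    face_rows_idx = rows_with_body[:face_rows]
    keep = np.isin(ys, face_rows_idx)
    fy, fx = ys[keep], xs[keep]
    # S304: circumscribed rectangle of the face region and its center point.
    x0, x1 = fx.min(), fx.max()
    y0, y1 = fy.min(), fy.max()
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
```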
Exemplarily, the method for obtaining the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention may be implemented in a unit or system having a visible light image sensor, an infrared image sensor and a processor.
The method for obtaining the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention may be deployed at a face image acquisition end; for example, in security or financial applications, it may be deployed at the image acquisition end of an access control system or an image recognition system. Alternatively, it may also be deployed at an image processing end or control end, or at a server end or cloud end, i.e. the image is captured at the front end and the coordinates of the center point of the face region are obtained at the back end.
According to the method for obtaining the coordinates of the center point of the face region in the infrared image of this embodiment, the infrared image is formed by sensing the heat radiation of a human body, the target object region is then extracted from the infrared image, the face region is extracted from the target object region, and the coordinates of the center point of the face region are obtained. Based on the coordinates of the center point of the face region, the rotation angles of the pan-tilt head can be obtained, so that the pan-tilt head can be controlled to rotate to track the face of the target object.
Fig. 4 is a schematic block diagram of the face tracking device according to an embodiment of the present invention. The face tracking device according to the embodiment of the present invention is described below with reference to Fig. 4.
As shown in Fig. 4, the face tracking device 400 according to the embodiment of the present invention includes an image acquisition module 410, an infrared-image face center point coordinate acquisition module 420, a visible-light-image corresponding coordinate acquisition module 430, an angle detection module 440 and a pan-tilt rotation control module 450. These modules may respectively perform the steps/functions of the face tracking method described above with reference to Fig. 2 and Fig. 3. Only the main functions of the units of the face tracking device 400 are described below, and details already described above are omitted.
The image acquisition module 410 is configured to acquire an infrared image and a visible light image. The image acquisition module 410 acquires the infrared image of the target object captured by the infrared image sensor and the visible light image of the target object captured by the visible light image sensor. The image acquisition module 410 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S201 of the face tracking method according to the embodiment of the present invention.
The infrared-image face center point coordinate acquisition module 420 detects the face region of the target object in the infrared image acquired by the image acquisition module 410 and obtains the coordinates of the center point of the face region in the infrared image. The infrared-image face center point coordinate acquisition module 420 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S202 of the face tracking method according to the embodiment of the present invention as well as steps S301 to S304 of the method for obtaining the coordinates of the face center point in the infrared image according to the embodiment of the present invention.
The visible-light-image corresponding coordinate acquisition module 430 is configured to obtain, according to the coordinates of the center point of the face region in the infrared image obtained by the infrared-image face center point coordinate acquisition module 420, the corresponding coordinates of the corresponding point in the visible light image. In an example of the present invention, the visible-light-image corresponding coordinate acquisition module 430 transforms, based on a coordinate transformation relation, the coordinates of the center point of the face region in the infrared image into the corresponding coordinates of the corresponding point in the visible light image. The visible-light-image corresponding coordinate acquisition module 430 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S203 of the face tracking method according to the embodiment of the present invention.
The angle detection module 440 is configured to obtain the horizontal angle and the vertical angle, relative to the center point of the visible light image, of the corresponding point in the visible light image of the center point of the face region in the infrared image, wherein the horizontal angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical angle is the angle between that line and the vertical direction of the visible light image. The angle detection module 440 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S204 of the face tracking method according to the embodiment of the present invention.
The pan-tilt rotation control module 450 controls the pan-tilt head to rotate based on the horizontal angle and the vertical angle, so as to track the face of the target object. The pan-tilt rotation control module 450 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S205 of the face tracking method according to the embodiment of the present invention.
Furthermore, in an example of the present invention, as shown in Fig. 4, the face tracking device may also include a coordinate transformation relation acquisition module 460 for obtaining the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object. The coordinate transformation relation acquisition module 460 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform the aforementioned method for obtaining the coordinate transformation relation, so that the visible-light-image corresponding coordinate acquisition module 430 can transform, based on the coordinate transformation relation, the coordinates of the center point of the face region in the infrared image into the corresponding coordinates of the corresponding point in the visible light image.
Exemplarily, as shown in Fig. 5, in an embodiment of the present invention, the infrared-image face center point coordinate acquisition module 420 includes a coordinate acquisition submodule 4201, a pixel marking submodule 4202, a face region extraction submodule 4203 and an infrared-image face center point coordinate calculation submodule 4204.
The coordinate acquisition submodule 4201 is configured to obtain the coordinates, in the infrared image, of the heat radiation points sensed by the infrared image sensor. The coordinate acquisition submodule 4201 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S301 of the method for obtaining the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention.
The pixel marking submodule 4202 is configured to mark the pixel values corresponding to the coordinates of the heat radiation points in the infrared image, so as to obtain the target object region. Exemplarily, the pixel marking submodule 4202 marks the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1 and takes the region whose pixel value is 1 as the target object region in the infrared image. The pixel marking submodule 4202 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S302 of the method for obtaining the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention.
The face region extraction submodule 4203 is configured to extract a predetermined area of the target object region as the face region in the infrared image. The face region extraction submodule 4203 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S303 of the method for obtaining the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention.
The infrared-image face center point coordinate calculation submodule 4204 is configured to obtain the coordinate value of the center point of the face region in the infrared image as the coordinates of the center point of the face region. In an example of the present invention, the infrared-image face center point coordinate calculation submodule 4204 includes an interception unit for intercepting the circumscribed rectangle of the face region in the infrared image and taking the coordinates of the center point of the circumscribed rectangle as the coordinates of the center point of the face region in the infrared image. The infrared-image face center point coordinate calculation submodule 4204 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104, and may perform step S304 of the method for obtaining the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention.
Furthermore, in an embodiment of the present invention, as shown in Fig. 6, the coordinate transformation relation acquisition module 460 includes a reference object image acquisition submodule 4601, a reference point coordinate acquisition submodule 4602 and a coordinate transformation relation acquisition submodule 4603.
The reference object image acquisition submodule 4601 is used to obtain, via the visible light image sensor and the infrared image sensor respectively, the visible images including a first object of reference and the infrared image including the first object of reference, wherein the first object of reference is placed within the angular field of view of both the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first object of reference. The reference object image acquisition submodule 4601 may be implemented by the processor 102/image signal processor 110 of the electronic equipment shown in Fig. 1 running the program instructions stored in the storage device 104.
The reference point coordinate acquisition submodule 4602 is used to obtain the coordinates of the at least one reference point in the visible images including the first object of reference and in the infrared image including the first object of reference. The reference point coordinate acquisition submodule 4602 may be implemented by the processor 102/image signal processor 110 of the electronic equipment shown in Fig. 1 running the program instructions stored in the storage device 104.
The coordinate conversion relation acquisition submodule 4603 is used to obtain the coordinate conversion relation from the coordinates of the at least one reference point in the visible images including the first object of reference and the coordinates of the at least one reference point in the infrared image including the first object of reference. The coordinate conversion relation acquisition submodule 4603 may be implemented by the processor 102/image signal processor 110 of the electronic equipment shown in Fig. 1 running the program instructions stored in the storage device 104.
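By way of illustration, the coordinate conversion relation could be modeled as an affine transform estimated from the matched reference-point coordinates, for example with OpenCV; this sketch assumes at least three non-collinear reference points, whereas the embodiment only requires at least one (with a single point, a pure translation would be used instead):

```python
import cv2
import numpy as np

def estimate_conversion(ir_points, vis_points):
    """Estimate a 2x3 affine coordinate conversion relation M such that
    [x_vis, y_vis] ~= M @ [x_ir, y_ir, 1], from matched reference-point
    coordinates measured in the infrared and visible images."""
    ir = np.asarray(ir_points, dtype=np.float32)
    vis = np.asarray(vis_points, dtype=np.float32)
    M, _inliers = cv2.estimateAffine2D(ir, vis)
    return M

def ir_to_visible(M, point):
    """Map the face-region center point from infrared coordinates to the
    respective coordinates of the corresponding point in the visible images."""
    x, y = point
    vx, vy = M @ np.array([x, y, 1.0])
    return (float(vx), float(vy))
```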
The face tracking device of the present invention senses the heat radiation of a human body and forms an infrared image, obtains the coordinate of the central point of the human face region from the infrared image, converts that coordinate into the respective coordinates of the corresponding point in the visible images, then obtains the horizontal included angle and the vertical included angle of the corresponding point in the visible images relative to the central point of the visible images, and controls the cloud platform to rotate based on the horizontal included angle and the vertical included angle, so as to track the face of the target object. In other words, the face tracking device of the present invention can automatically track and align with the face of the identified target object without requiring the target object to actively face the camera, which improves alignment efficiency and user experience.
Fig. 7 is a schematic block diagram of the face tracking equipment according to an embodiment of the present invention. The face tracking equipment according to the embodiment of the present invention is described below with reference to Fig. 7.
As shown in Fig. 7, the face tracking equipment 700 according to the embodiment of the present invention includes an infrared image sensor 710, a visible light image sensor 720, a cloud platform 730, a processor 740 and a storage device 750.
The infrared image sensor 710 is used to collect infrared images, for example the infrared image of the target object. The visible light image sensor 720 is used to collect visible images, for example the visible light image of the target object. The visible light image sensor 720 may be an RGB sensor or a black-and-white sensor. The infrared image sensor 710 and the visible light image sensor 720 may collect still pictures or video. The processing of the data collected by the infrared image sensor 710 and the visible light image sensor 720 may be implemented by the processor 102 or the image signal processor 110 of the electronic equipment shown in Fig. 1 running the program instructions stored in the storage device 104, so as to obtain picture or video data.
Further, in embodiments of the present invention, the infrared image sensor 710 and the visible light image sensor 720 preferably have the same resolution, horizontal view angle and vertical view angle, and are mounted horizontally on the circuit board in close proximity to each other, so as to reduce the error of the coordinate transform. Under this idealized configuration the coordinate transform between the two images approaches a fixed pixel offset, as sketched below.
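The following minimal sketch is an illustration of that degenerate case only, not part of the embodiment; the offset value itself would still come from the reference-object calibration described earlier:

```python
def ir_to_visible_offset(point, offset=(0.0, 0.0)):
    """Map an infrared-image coordinate to the visible images assuming the two
    sensors share resolution and field of view and sit close together, so the
    conversion is approximately a constant pixel shift (offset is a placeholder
    that would be measured during calibration)."""
    x, y = point
    dx, dy = offset
    return (x + dx, y + dy)
```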
The cloud platform 730 is used to support the infrared image sensor 710 and the visible light image sensor 720, and adjusts the image acquisition region of the infrared image sensor 710 and the visible light image sensor 720 by rotating horizontally and vertically, so as to automatically track and align with the face of the target object.
The storage device 750 stores program code for implementing the corresponding steps of the face tracking method according to the embodiment of the present invention.
The processor 740 is used to run the program code stored in the storage device 750, so as to perform the corresponding steps of the face tracking method according to the embodiment of the present invention, and to implement the modules of the face tracking device according to the embodiment of the present invention: the image collection module 410, the infrared image face center coordinate obtaining module 420, the visible images respective coordinates acquisition module 430, the angle detection module 440, the cloud platform rotation control module 450 and the coordinate conversion relation acquisition module 460; the coordinate acquisition submodule 4201, the element marking submodule 4202, the human face region extracting submodule 4203 and the infrared image face center coordinate calculating submodule 4204 in the infrared image face center coordinate obtaining module 420; and the reference object image acquisition submodule 4601, the reference point coordinate acquisition submodule 4602 and the coordinate conversion relation acquisition submodule 4603 in the coordinate conversion relation acquisition module 460.
In one embodiment, the following steps are performed when the program code is run by the processor 740:
obtaining an infrared image and visible images, the infrared image and the visible images being collected by the infrared image sensor and the visible light image sensor respectively;
detecting the human face region of the target object in the infrared image and obtaining the coordinate of the central point of the human face region in the infrared image;
obtaining, according to the coordinate of the central point of the human face region in the infrared image, the respective coordinates of the corresponding point of that central point in the visible images;
obtaining the horizontal included angle and the vertical included angle of the corresponding point of the central point of the human face region in the infrared image, in the visible images, relative to the central point of the visible images, the horizontal included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the horizontal direction of the visible images, and the vertical included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the vertical direction of the visible images; and
controlling the cloud platform to rotate based on the horizontal included angle and the vertical included angle, so as to track the face of the target object.
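A minimal sketch of the included-angle computation defined above; the final call that drives the cloud platform is hypothetical, since the mapping from the two included angles to concrete rotation commands depends on the cloud platform hardware and is not specified here:

```python
import math

def included_angles(corresponding_point, image_size):
    """Compute the horizontal and vertical included angles of the corresponding
    point relative to the central point of the visible images: each angle is
    measured between the line joining the corresponding point to the image
    center and the horizontal (respectively vertical) direction of the image."""
    (x, y), (w, h) = corresponding_point, image_size
    dx, dy = x - w / 2.0, y - h / 2.0
    if dx == 0.0 and dy == 0.0:
        return 0.0, 0.0  # face already centered, no rotation needed
    horizontal = math.degrees(math.atan2(abs(dy), abs(dx)))  # line vs. horizontal axis
    vertical = 90.0 - horizontal                             # line vs. vertical axis
    return horizontal, vertical

# Hypothetical controller call (cloud_platform is not defined here):
# h, v = included_angles((820.0, 410.0), (1280, 720))
# cloud_platform.rotate(horizontal_angle=h, vertical_angle=v)
```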
In one embodiment, the processor 740 may be implemented as a central processing unit (CPU) and/or an image signal processor (ISP).
Exemplarily, the face tracking equipment 700 according to the embodiment of the present invention may be implemented as a unit or system having an infrared image sensor, a visible light image sensor, a cloud platform, a memory, a communication interface/unit and a processor.
The face tracking equipment 700 according to the embodiment of the present invention may be deployed in a face detection and recognition system. For example, in security applications it may be deployed in an access control system or a road monitoring system; in the financial field, it may be deployed in an identity authentication system of a bank, an exchange, or the like.
Additionally, according to an embodiment of the present invention, a storage medium is further provided, on which program instructions are stored. When the program instructions are run by a computer or processor, they are used to perform the corresponding steps of the face tracking method of the embodiment of the present invention and of the method for obtaining the coordinate of the central point of the human face region in the infrared image, and to implement the corresponding modules/submodules of the face tracking device, the infrared image face center coordinate obtaining module and the coordinate conversion relation acquisition module according to the embodiment of the present invention. The storage medium may for example include a memory card, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium may contain computer-readable program code for obtaining the horizontal included angle and the vertical included angle of the corresponding point of the central point of the human face region in the infrared image, in the visible images, relative to the central point of the visible images, and another computer-readable storage medium may contain computer-readable program code for controlling the cloud platform to rotate based on the horizontal included angle and the vertical included angle.
In one embodiment, the computer program instructions, when run by a computer, may implement each functional module of the face tracking device according to the embodiment of the present invention and/or may perform the face tracking method according to the embodiment of the present invention.
In one embodiment, the computer program instructions perform the following steps when run by a computer: obtaining an infrared image and visible images, the infrared image and the visible images being collected by the infrared image sensor and the visible light image sensor respectively; detecting the human face region of the target object in the infrared image and obtaining the coordinate of the central point of the human face region in the infrared image; obtaining, according to the coordinate of the central point of the human face region in the infrared image, the respective coordinates of the corresponding point of that central point in the visible images; obtaining the horizontal included angle and the vertical included angle of the corresponding point of the central point of the human face region in the infrared image, in the visible images, relative to the central point of the visible images, the horizontal included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the horizontal direction of the visible images, and the vertical included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the vertical direction of the visible images; and controlling the cloud platform to rotate based on the horizontal included angle and the vertical included angle, so as to track the face of the target object.
Each module of the face tracking device according to the embodiment of the present invention may be implemented by the processor of the face tracking equipment according to the embodiment of the present invention running the computer program instructions stored in the memory, or may be implemented when the computer instructions stored in the computer-readable storage medium of the computer program product according to the embodiment of the present invention are run by a computer.
With the face tracking method and device, face tracking equipment and storage medium according to the embodiments of the present invention, the heat radiation of a human body is sensed to form an infrared image, the coordinate of the central point of the human face region is then obtained from the infrared image and converted into the respective coordinates of the corresponding point in the visible images, the horizontal included angle and the vertical included angle of the corresponding point in the visible images relative to the central point of the visible images are then obtained, and the cloud platform is controlled to rotate based on the horizontal included angle and the vertical included angle, so as to track the face of the target object. In other words, the face tracking method and device of the present invention can automatically track and align with the face of the identified target object without requiring the target object to actively face the camera, which improves alignment efficiency and user experience.
Although example embodiments have been described here with reference to the accompanying drawings, it should be understood that the above example embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
In the several embodiments provided herein, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the specification provided herein, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be appreciated that, in the description of exemplary embodiments of the present invention, various features of the present invention are sometimes grouped together in a single embodiment, figure, or description thereof in order to streamline the disclosure and aid in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive aspect lies in solving the corresponding technical problem with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where features are mutually exclusive, any combination may be used to combine all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose. Furthermore, although some embodiments described herein include some features that are included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments, as those skilled in the art will appreciate. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules of the face tracking device according to the embodiment of the present invention. The present invention may also be implemented as a program of a device (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
The above is only the specific embodiments of the present invention or descriptions of the specific embodiments, and the protection scope of the present invention is not limited thereto. Any change or replacement that can readily be conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. A face tracking method, characterized by comprising:
obtaining an infrared image and visible images, the infrared image and the visible images being collected by an infrared image sensor and a visible light image sensor respectively;
detecting the human face region of a target object in the infrared image and obtaining the coordinate of the central point of the human face region in the infrared image;
obtaining, according to the coordinate of the central point of the human face region in the infrared image, the respective coordinates of the corresponding point of the central point of the human face region in the infrared image in the visible images;
obtaining the horizontal included angle and the vertical included angle of the corresponding point of the central point of the human face region in the infrared image, in the visible images, relative to the central point of the visible images, the horizontal included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the horizontal direction of the visible images, and the vertical included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the vertical direction of the visible images; and
controlling a cloud platform to rotate based on the horizontal included angle and the vertical included angle, so as to track the face of the target object.
2. The face tracking method according to claim 1, characterized in that detecting the human face region of the target object in the infrared image and obtaining the coordinate of the central point of the human face region in the infrared image comprises:
obtaining the coordinates, in the infrared image, of the heat radiation points sensed by the infrared image sensor;
labeling the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1, and taking the region whose pixel value is 1 as the target object region in the infrared image;
extracting a predetermined area in the target object region as the human face region in the infrared image; and
obtaining the coordinate value of the central point of the human face region in the infrared image as the coordinate of the central point of the human face region in the infrared image.
3. The face tracking method according to claim 2, characterized in that obtaining the coordinate value of the central point of the human face region in the infrared image as the coordinate of the central point of the human face region in the infrared image comprises:
intercepting the circumscribed rectangle of the human face region in the infrared image, and taking the coordinate of the central point of the circumscribed rectangle as the coordinate of the central point of the human face region in the infrared image.
4. The face tracking method according to claim 1, characterized in that obtaining, according to the coordinate of the central point of the human face region in the infrared image, the respective coordinates of the corresponding point of the central point of the human face region in the infrared image in the visible images comprises:
transforming, based on a coordinate conversion relation, the coordinate of the central point of the human face region in the infrared image into the respective coordinates of the corresponding point in the visible images.
5. The face tracking method according to claim 4, characterized in that the method further comprises: obtaining the coordinate conversion relation based on an infrared image including a first object of reference and visible images including the first object of reference,
wherein the step of obtaining the coordinate conversion relation comprises:
obtaining, via the visible light image sensor and the infrared image sensor respectively, the visible images including the first object of reference and the infrared image including the first object of reference, wherein the first object of reference is placed within the angular field of view of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first object of reference;
obtaining the coordinates of the at least one reference point in the visible images including the first object of reference and in the infrared image including the first object of reference respectively; and
obtaining the coordinate conversion relation from the coordinates of the at least one reference point in the visible images including the first object of reference and the coordinates of the at least one reference point in the infrared image including the first object of reference.
6. A face tracking device, characterized by comprising:
an image collection module, the image collection module being used to obtain an infrared image and visible images;
an infrared image face center coordinate obtaining module, the infrared image face center coordinate obtaining module being used to detect the human face region of a target object in the infrared image and to obtain the coordinate of the central point of the human face region in the infrared image;
a visible images respective coordinates acquisition module, the visible images respective coordinates acquisition module being used to obtain, according to the coordinate of the central point of the human face region in the infrared image, the respective coordinates of the corresponding point of the central point of the human face region in the infrared image in the visible images;
an angle detection module, the angle detection module being used to obtain the horizontal included angle and the vertical included angle of the corresponding point of the central point of the human face region in the infrared image, in the visible images, relative to the central point of the visible images, the horizontal included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the horizontal direction of the visible images, and the vertical included angle being the angle between the line connecting the corresponding point and the central point of the visible images and the vertical direction of the visible images; and
a cloud platform rotation control module, the cloud platform rotation control module being used to control a cloud platform to rotate based on the horizontal included angle and the vertical included angle, so as to track the face of the target object.
7. The face tracking device according to claim 6, characterized in that the face tracking device further comprises:
an infrared image sensor, the infrared image sensor being used to obtain the infrared image; and
a visible light image sensor, the visible light image sensor being used to obtain the visible images.
8. The face tracking device according to claim 7, characterized in that the infrared image face center coordinate obtaining module comprises:
a coordinate acquisition submodule, the coordinate acquisition submodule being used to obtain the coordinates, in the infrared image, of the heat radiation points sensed by the infrared image sensor;
an element marking submodule, the element marking submodule being used to label the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1, and to take the region whose pixel value is 1 as the target object region in the infrared image;
a human face region extracting submodule, the human face region extracting submodule being used to extract a predetermined area in the target object region as the human face region in the infrared image; and
an infrared image face center coordinate calculating submodule, the infrared image face center coordinate calculating submodule being used to obtain the coordinate value of the central point of the human face region in the infrared image as the coordinate of the central point of the human face region in the infrared image.
9. The face tracking device according to claim 8, characterized in that the infrared image face center coordinate calculating submodule comprises an interception unit, the interception unit being used to intercept the circumscribed rectangle of the human face region in the infrared image and to take the coordinate of the central point of the circumscribed rectangle as the coordinate of the central point of the human face region in the infrared image.
10. The face tracking device according to claim 7, characterized in that the visible images respective coordinates acquisition module transforms, based on a coordinate conversion relation, the coordinate of the central point of the human face region in the infrared image into the respective coordinates of the corresponding point in the visible images.
11. The face tracking device according to claim 10, characterized in that the face tracking device further comprises: a coordinate conversion relation acquisition module, the coordinate conversion relation acquisition module being used to obtain the coordinate conversion relation based on an infrared image including a first object of reference and visible images including the first object of reference,
wherein the coordinate conversion relation acquisition module comprises:
a reference object image acquisition submodule, the reference object image acquisition submodule being used to obtain, via the visible light image sensor and the infrared image sensor respectively, the visible images including the first object of reference and the infrared image including the first object of reference, wherein the first object of reference is placed within the angular field of view of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first object of reference;
a reference point coordinate acquisition submodule, the reference point coordinate acquisition submodule being used to obtain the coordinates of the at least one reference point in the visible images including the first object of reference and in the infrared image including the first object of reference respectively; and
a coordinate conversion relation acquisition submodule, the coordinate conversion relation acquisition submodule being used to obtain the coordinate conversion relation from the coordinates of the at least one reference point in the visible images including the first object of reference and the coordinates of the at least one reference point in the infrared image including the first object of reference.
12. The face tracking device according to any one of claims 6 to 11, characterized in that the image collection module, the infrared image face center coordinate obtaining module, the visible images respective coordinates acquisition module, the angle detection module and the cloud platform rotation control module are implemented by the same processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611216574.8A CN106650665B (en) | 2016-12-26 | 2016-12-26 | Face tracking method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611216574.8A CN106650665B (en) | 2016-12-26 | 2016-12-26 | Face tracking method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106650665A true CN106650665A (en) | 2017-05-10 |
CN106650665B CN106650665B (en) | 2021-02-12 |
Family
ID=58828324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611216574.8A Active CN106650665B (en) | 2016-12-26 | 2016-12-26 | Face tracking method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106650665B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101520838A (en) * | 2008-02-27 | 2009-09-02 | 中国科学院自动化研究所 | Automatic-tracking and automatic-zooming method for acquiring iris images |
CN102098499A (en) * | 2011-03-24 | 2011-06-15 | 杭州华三通信技术有限公司 | Pan/ tilt/ zoom (PTZ) camera control method, device and system thereof |
CN103024281A (en) * | 2013-01-11 | 2013-04-03 | 重庆大学 | Infrared and visible video integration system |
US20150302576A1 (en) * | 2013-01-23 | 2015-10-22 | Leap Motion, Inc. | Retraction Based Three-Dimensional Tracking of Object Movements |
CN103413395A (en) * | 2013-08-15 | 2013-11-27 | 北京声迅电子股份有限公司 | Intelligent smoke detecting and early warning method and device |
CN103905733A (en) * | 2014-04-02 | 2014-07-02 | 哈尔滨工业大学深圳研究生院 | Method and system for conducting real-time tracking on faces by monocular camera |
Non-Patent Citations (1)
Title |
---|
高玉瑞, 王德兴: "Intelligent tracking technology based on infrared and visible light", 《微计算机信息》 (Microcomputer Information) *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875479A (en) * | 2017-08-15 | 2018-11-23 | 北京旷视科技有限公司 | The acquisition methods and device of facial image |
CN109960401B (en) * | 2017-12-26 | 2020-10-23 | 广景视睿科技(深圳)有限公司 | Dynamic projection method, device and system based on face tracking |
CN109960401A (en) * | 2017-12-26 | 2019-07-02 | 广景视睿科技(深圳)有限公司 | A kind of trend projecting method, device and its system based on face tracking |
CN108596110A (en) * | 2018-04-26 | 2018-09-28 | 北京京东金融科技控股有限公司 | Image-recognizing method and device, electronic equipment, storage medium |
CN111854107A (en) * | 2019-04-24 | 2020-10-30 | 杭州萤石软件有限公司 | Intelligent control method and device of air purifier and air purifier |
CN110244775A (en) * | 2019-04-29 | 2019-09-17 | 广州市景沃电子有限公司 | Automatic tracking method and device based on mobile device clamping holder |
CN110411570B (en) * | 2019-06-28 | 2020-08-28 | 武汉高德智感科技有限公司 | Infrared human body temperature screening method based on human body detection and human body tracking technology |
CN110411570A (en) * | 2019-06-28 | 2019-11-05 | 武汉高德智感科技有限公司 | Infrared human body temperature screening method based on human testing and human body tracking technology |
CN111833382A (en) * | 2020-02-13 | 2020-10-27 | 珠海安联锐视科技股份有限公司 | Target tracking method based on moving head camera |
CN111833382B (en) * | 2020-02-13 | 2021-03-09 | 珠海安联锐视科技股份有限公司 | Target tracking method based on moving head camera |
CN111539295A (en) * | 2020-04-17 | 2020-08-14 | 中移(杭州)信息技术有限公司 | Face recognition method, device, system, network equipment and storage medium |
CN111881712B (en) * | 2020-05-12 | 2021-03-19 | 郑海英 | Medical instrument real-time tracking platform based on signal processing |
CN111881712A (en) * | 2020-05-12 | 2020-11-03 | 无锡北斗星通信息科技有限公司 | Medical instrument real-time tracking platform based on signal processing |
CN111815676A (en) * | 2020-07-06 | 2020-10-23 | 浙江大华技术股份有限公司 | Face tracking method, device, equipment and storage device |
CN111815676B (en) * | 2020-07-06 | 2024-09-27 | 浙江大华技术股份有限公司 | Face tracking method, device, equipment and storage device |
CN111998959A (en) * | 2020-07-20 | 2020-11-27 | 北京百度网讯科技有限公司 | Temperature calibration method and device based on real-time temperature measurement system and storage medium |
CN111998959B (en) * | 2020-07-20 | 2022-07-22 | 北京百度网讯科技有限公司 | Temperature calibration method and device based on real-time temperature measurement system and storage medium |
CN112380965B (en) * | 2020-11-11 | 2024-04-09 | 浙江大华技术股份有限公司 | Face recognition method and multi-camera |
CN112380965A (en) * | 2020-11-11 | 2021-02-19 | 浙江大华技术股份有限公司 | Method for face recognition and multi-view camera |
CN113063500A (en) * | 2021-03-30 | 2021-07-02 | 新疆爱华盈通信息技术有限公司 | Face temperature measurement method, face temperature measurement instrument and storage medium |
CN113112668A (en) * | 2021-04-15 | 2021-07-13 | 新疆爱华盈通信息技术有限公司 | Face recognition-based holder tracking method, holder and entrance guard recognition machine |
CN113516089A (en) * | 2021-07-27 | 2021-10-19 | 中国平安人寿保险股份有限公司 | Face image recognition method, device and equipment and readable storage medium |
CN113516089B (en) * | 2021-07-27 | 2024-04-12 | 中国平安人寿保险股份有限公司 | Face image recognition method, device, equipment and readable storage medium |
CN115937700A (en) * | 2022-11-10 | 2023-04-07 | 哈尔滨工业大学 | Multi-source collaborative moving target online detection and identification method |
Also Published As
Publication number | Publication date |
---|---|
CN106650665B (en) | 2021-02-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106650665A (en) | Human face tracing method and device | |
CN106446873B (en) | Face detection method and device | |
CN111091063B (en) | Living body detection method, device and system | |
JP6629513B2 (en) | Liveness inspection method and apparatus, and video processing method and apparatus | |
CN111723611A (en) | Pedestrian re-identification method and device and storage medium | |
CN106682620A (en) | Human face image acquisition method and device | |
TW202026948A (en) | Methods and devices for biological testing and storage medium thereof | |
CN109815843A (en) | Object detection method and Related product | |
CN108805024A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
TW201606556A (en) | Method, device and system for identity authentication by using card features | |
BR112016004206B1 (en) | METHOD, APPARATUS AND NON-TRANSITORY PROCESSOR-READABLE MEMORY FOR IMAGE PROCESSING | |
CN111626163B (en) | Human face living body detection method and device and computer equipment | |
EP3674973A1 (en) | Method and apparatus with liveness detection and object recognition | |
TWI525555B (en) | Image processing apparatus and processing method thereof | |
Alemán-Flores et al. | Line detection in images showing significant lens distortion and application to distortion correction | |
WO2019061659A1 (en) | Method and device for removing eyeglasses from facial image, and storage medium | |
CN103902953A (en) | Screen detection system and method | |
CN113642639B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
KR101348681B1 (en) | Multi-sensor image alignment method of image detection system and apparatus using the same | |
CN108875497B (en) | Living body detection method, living body detection device and computer storage medium | |
KR102299250B1 (en) | Counting device and method using composite image data | |
CN113159229A (en) | Image fusion method, electronic equipment and related product | |
CN111383255B (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
Huang et al. | An improved RANSAC algorithm of color image stitching | |
US11205064B1 (en) | Measuring quality of depth images in real time |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313 Applicant after: MEGVII INC. Applicant after: Beijing maigewei Technology Co., Ltd. Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313 Applicant before: MEGVII INC. Applicant before: Beijing aperture Science and Technology Ltd. |
|
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |