
CN104091176B - Portrait comparison application technology in video - Google Patents

Portrait comparison application technology in video

Info

Publication number
CN104091176B
CN104091176B · CN201410343153.6A
Authority
CN
China
Prior art keywords
head portrait
target person
face
personage
photo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410343153.6A
Other languages
Chinese (zh)
Other versions
CN104091176A (en)
Inventor
吴建忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Dreamhunt Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201410343153.6A
Publication of CN104091176A
Application granted
Publication of CN104091176B
Expired - Fee Related
Anticipated expiration

Links

Landscapes

  • Studio Devices (AREA)

Abstract

The invention discloses a portrait comparison application technology in video, comprising the following steps. Step 1: obtain real-time video and extract a target-person head portrait photo; at the same time, acquire videos of all persons in a preset area in real time by a camera and extract instant head portrait photos; when the distance between the pupil centers of a person's two eyes in an instant head portrait photo is 10 to 100 pixels and the vertical shooting angle between the camera and that person's head is 0 to 90°, the instant head portrait photo is judged to be a target-person head portrait photo. Step 2: portrait comparison. The invention aims to provide a portrait comparison application technology in video that is simple to operate, fast in discrimination and high in accuracy.

Description

Portrait comparison application technology in video
Technical field
The present invention relates to the field of portrait discrimination, and in particular to a portrait comparison application technology in video.
Background technology
Face recognition is a popular field of computer technology research. It belongs to biometric identification, which distinguishes individual organisms by biological characteristics of the organism itself (generally referring to people). The biological characteristics studied by biometric identification include the face, fingerprints, palm prints, iris, retina, voice, body shape and so on. Among these, facial features are the most intuitive, reliable and accurate, and identity verification using facial features is the most natural and direct means. Compared with other biometric identification methods, facial feature recognition does not require the cooperation of the subject, can verify the subject's identity easily and effectively, and is not easily noticed, so it has excellent anti-counterfeiting, anti-fraud, direct, friendly and convenient characteristics. After decades of research, face recognition technology has been widely applied in fields such as security, access control and attendance.
Face recognition mainly comprises face location, picture preprocessing, identification of facial feature points, extraction of feature values of the facial feature points, comparison of feature values, and so on. Many methods can realize face location, for example knowledge-based methods and feature-invariant methods, among which the adaboost method and template matching are comparatively classical. Picture preprocessing includes adjusting the angle of the face; correction of the tilt angle perpendicular to the photographic plane is usually carried out according to one of the following approaches: 1) methods based on the Gabor wavelet transform (document 1: Y. Li and X. Y. Lin, "Face hallucination with pose variation", in Proc. 6th IEEE Int. Conf. Automatic Face and Gesture Recognition, 2004, pp. 723-728); 2) methods based on tensor decomposition (document 2: K. Jia and S. G. Gong, "Multi-modal tensor face for simultaneous super-resolution and recognition", in Proc. IEEE Int. Conf. Computer Vision, 2005, pp. 1683-1690; document 3: K. Jia and S. G. Gong, "Generalized face super-resolution", IEEE Trans. Image Processing, vol. 17, no. 6, pp. 873-886, Jun. 2008); 3) methods that synthesize a frontal face using an improved point-correspondence algorithm and the construction principle of a linear object class (document 4: Chen Jiada, Lai Jianhuang, Feng Guocan, "A new method for face pose discrimination and frontal face synthesis", Journal of Computer Research and Development, 2006). Adjustment of the tilt angle of a photo can be achieved by any one of the above methods. For the identification and extraction of facial feature points, Chinese patent application CN201310746593.1 proposes a new algorithm for extracting facial feature values, and Chinese patent application CN2008100003131.1 discloses a method of face detection and tracking. The above technical developments have enabled face recognition technology to develop rapidly, with broad application prospects.
According to incomplete statistics, China is entering an era of mobility. In 2011, the national floating population reached 230 million people, accounting for 17% of the total population; in other words, one in every six Chinese belonged to the floating population. In the next 20 years, another 300 million people from the countryside will move into cities and towns. As the floating population grows rapidly, society faces a floating population with a complex structure, opaque personal information, huge numbers and difficult management. In particular, criminal suspects, fugitives and people waiting for an opportunity to commit crimes mingle with the floating population, and some offenders "whiten" their identities after committing crimes and settle down in other places. How to effectively suppress and find the key persons hidden in the huge floating population has become an important current topic; it brings challenges to comprehensive social management and places heavier responsibilities and obligations on departments such as the public security organs, which are the main force of public security management.
With the rapid development of the economy and of urbanization, the urban population is increasingly dense, and both its size and its mobility have greatly increased, so strengthening the management of urban personal information is of great significance to social security, the protection of key areas, the maintenance of stability and other aspects. In addition, the social crime rate has been rising year by year in recent years; high-tech crimes and crimes using forged identity information emerge endlessly, and criminal offences are becoming more sudden and uncertain, which brings great difficulty to prevention and detection. Recently, terrorist activities such as bombings and knife attacks have occurred frequently. Anti-terrorism departments have been studying countermeasures; at present there is not enough police strength for monitoring, and even with real-time human monitoring, events still cannot be recognized in a timely and effective manner. Research on intelligent analysis systems for anti-terrorism video surveillance will effectively prevent terrorist incidents, and such research is extremely urgent.
With the increase in the mobility of people, it is crucial to analyze video in a timely manner and obtain personal information from it. In particular, with the development of cloud computing and data communication technology, the storage and computation of data are no longer limited to local computers, and large-scale computers are constantly being put into operation. Therefore, developing a portrait comparison technology realized through surveillance video, with a small amount of computation and high accuracy, has great market prospects.
Summary of the invention
The object of the invention is to provide a portrait comparison application technology in video that is simple to operate, fast in discrimination and high in accuracy.
Technical scheme provided by the invention is:
A portrait comparison application technology in video comprises the following steps. Step 1: obtain real-time video and extract a target-person head portrait photo; at the same time, acquire videos of all persons in a preset area in real time by a camera and extract instant head portrait photos; when the distance between the pupil centers of a person's two eyes in an instant head portrait photo is 10 to 100 pixels and the vertical shooting angle between the camera and that person's head is 0 to 90°, the instant head portrait photo is judged to be a target-person head portrait photo; preferably, the instant head portrait photo is judged to be a target-person head portrait photo when the distance between the pupil centers is 60 to 90 pixels and the vertical shooting angle is 30 to 70°. Step 2: portrait comparison. The target-person head portrait photo extracted in step 1 is sent to a server, and the server compares the similarity between this target-person head portrait photo and existing head portrait photos; when there is an existing head portrait photo whose similarity meets a preset first threshold, the person in the target-person head portrait photo is judged to be the same person as in that existing head portrait photo and the corresponding person information is output; otherwise, information indicating that there is no corresponding person is output.
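The selection rule in step 1 amounts to a range check on two per-frame measurements. The following is a minimal Python sketch of that check; the constant and type names (PUPIL_DIST_RANGE, VERT_ANGLE_RANGE, Candidate) are hypothetical and not taken from the patent, and the preferred sub-ranges are noted in comments only.

from dataclasses import dataclass

PUPIL_DIST_RANGE = (10.0, 100.0)   # pixels, per step 1 (preferred range: 60-90)
VERT_ANGLE_RANGE = (0.0, 90.0)     # degrees (preferred range: 30-70)

@dataclass
class Candidate:
    pupil_distance_px: float       # distance between the two pupil centers in the photo
    vertical_angle_deg: float      # vertical shooting angle between camera and head

def is_target_headshot(c: Candidate) -> bool:
    """Return True if the instant head portrait photo qualifies as a target-person photo."""
    lo_d, hi_d = PUPIL_DIST_RANGE
    lo_a, hi_a = VERT_ANGLE_RANGE
    return lo_d <= c.pupil_distance_px <= hi_d and lo_a <= c.vertical_angle_deg <= hi_a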
In the above portrait comparison application technology in video, step 1 specifically comprises the following sub-steps. Sub-step 11: detect the ambient light intensity with the camera; when the ambient light intensity is less than a preset second threshold, apply supplementary lighting. Sub-step 12: acquire videos of all persons in the preset area in real time by the camera and judge the number of persons contained in the videos; if there is a single target person, extract a target-person head portrait photo for that person; if there are multiple target persons, extract a target-person head portrait photo for each of them.
In the above portrait comparison application technology in video, the vertical shooting angle in step 1 is determined as follows: locate the eyes of the person in the instant head portrait photo, take the line through the centers of the located eyes as a reference line, and map the eyes to reference angles for the upper and lower parts of the face, so as to judge the vertical shooting angle between the camera and the head of the person in the instant head portrait photo.
In the above portrait comparison application technology in video, a horizontal shooting angle determining step is further included between step 1 and step 2:
Horizontal shooting angle determining step: judge whether the horizontal shooting angle between the camera and the head of the person in the instant head portrait photo is 0 to 90°; if so, proceed to step 2; if not, save the instant head portrait photo of this frame.
In the above portrait comparison application technology in video, a target-person head portrait photo that is judged in the horizontal shooting angle determining step to proceed to step 2 first undergoes a cropping step before entering step 2, wherein the cropping step is: crop the obtained target-person head portrait photo so that the face of the target person occupies at least 3/4 of the area of the cropped target-person head portrait photo.
In the above portrait comparison application technology in video, in step 2, the comparison by the server of the target-person head portrait photo with the existing head portrait photos stored in the server specifically comprises the following sub-steps. Sub-step 201: face location, separating the face region from the background of the target-person head portrait photo. Sub-step 202: eye location, determining the positions of the two pupils in the face region obtained in sub-step 201. Sub-step 203: image adjustment, correcting the tilt angle of the face region, determining the distance between the centers of the two pupils according to the pupil positions determined in sub-step 202, and adjusting the pixels of the face region according to the ratio of the distance between the pupil centers to a preset pupil-center distance. Sub-step 204: face preprocessing, preprocessing the face region corrected in sub-step 203, specifically including processing of face color, face exposure, face light uniformity, face average color, face highlights, blur degree, brightness average, gray-scale dynamic range, unevenness, over-exposure ratio, under-exposure ratio, image sharpness and image blur. Sub-step 205: extracting a face feature cluster, extracting no fewer than a predetermined number of face feature values from the face region obtained in sub-step 204, the set of face feature values of the head portrait photo being the face feature cluster of the head portrait photo. Sub-step 206: comparing the face feature cluster of the person's head portrait photo with the face feature clusters of the existing head portrait photos stored in the server, counting the number of face feature values that meet a preset third threshold, and obtaining the similarity from the number of face feature values that meet the preset third threshold. Sub-step 207: judging whether the similarity meets the first threshold; if so, extracting and outputting the corresponding target-person information.
In the above portrait comparison application technology in video, if multiple existing head portrait photos meeting the first threshold are obtained in sub-step 207, the existing head portrait photo with the greatest similarity is selected according to the similarities of the multiple existing head portrait photos, the person in that existing head portrait photo is judged to be the same person as in the target-person head portrait photo, and the corresponding person information is output.
In the above portrait comparison application technology in video, in sub-step 207, if the network condition deteriorates when the corresponding target-person information is to be output and the information cannot be transmitted, the corresponding target-person information is stored in the server and output after the network condition improves.
In the above portrait comparison application technology in video, in step 2, the corresponding person information is output to a face sharing service platform.
In the above portrait comparison application technology in video, the corresponding person information comprises the target-person head portrait photo, the corresponding existing head portrait photo and the corresponding identity information.
By setting a specific interpupillary distance, shooting angle and shooting light, the present invention obtains instant head portrait photos that meet the requirements and obtains results quickly through portrait comparison technology; the above process effectively reduces the amount of data processing, improves the comparison accuracy and shortens the comparison time.
If the server is busy and cannot perform the comparison, the extracted target-person head portrait photo is sent to the face sharing service platform, which continues the recognition, thereby improving recognition efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of specific embodiment 1 of the invention;
Fig. 2 is a flow chart of the sub-steps of specific embodiment 1 of the invention;
Fig. 3 is a flow chart of the sub-steps of specific embodiment 1 of the invention;
Fig. 4 is a flow chart of specific embodiment 1 of the invention.
Embodiment
The technical scheme of the invention is described in further detail below in conjunction with the embodiments, which do not constitute any limitation of the invention.
One of the core ideas of the invention is that, by setting a specific interpupillary distance, shooting angle and shooting light, instant head portrait photos that meet the requirements are obtained, and results are obtained quickly through portrait comparison technology; the above process effectively reduces the amount of data processing, improves the comparison accuracy and shortens the comparison time.
Specific embodiment 1:
As shown in Fig. 1, a portrait comparison application technology in video comprises the following steps:
Step 1: obtain real-time video and extract a target-person head portrait photo; at the same time, acquire videos of all persons in a preset area in real time by a camera and extract instant head portrait photos; when the distance between the pupil centers of a person's two eyes in an instant head portrait photo is 10 to 100 pixels and the vertical shooting angle between the camera and that person's head is 0 to 90°, the instant head portrait photo is judged to be a target-person head portrait photo. It should be noted that in this embodiment operations such as image extraction are not carried out on the real-time video; the real-time video is shot by the camera and target-person head portrait photos are snapped from it; the target-person head portrait photo may be obtained by the camera or by an automatic snapshot machine, and preferably the same camera is used for both. The camera sensor parameters are: 1/2.5-inch CMOS, 3 million effective pixels, C/CS lens interface, maximum resolution 1920 × 1200, minimum illumination 0.3 Lux at F1.4, frame rate 1920 × 1080 at 29 fps and 1600 × 1200 at 31 fps, which greatly increases the amount of image information. It should also be noted that a target person in this embodiment is defined as any person the camera can photograph. Winter temperatures in the Northeast and Northwest of China (Xinjiang, Inner Mongolia, Heilongjiang) can reach −40 degrees Celsius, outdoor maintenance is very inconvenient and the stability requirements on the system are high, so the camera adopts an embedded design and all devices use industrial-grade components, which can cope with extreme weather conditions such as low and high temperatures and effectively ensure the stability of the system. Built-in hardware circuits such as a watchdog are equivalent to having a person on duty at the front end 24 hours a day without interruption, effectively ensuring long-term, stable and continuous operation of the system.
As shown in Fig. 2, step 1 specifically comprises the following sub-steps:
Sub-step 11: detect the ambient light intensity with the camera; when the ambient light intensity is less than the preset second threshold, apply supplementary lighting. In this embodiment the second threshold is a brightness of 500-1000 lux; below this threshold, supplementary lighting is applied. As is common practice in the industry, supplementary lighting is not limited to LED lights; measures such as adding a softbox can also improve the lighting environment, so as to ensure the quality of the captured video.
Sub-step 12: acquire videos of all persons in the preset area in real time by the camera and judge the number of persons contained in the videos; if there is a single target person, extract a target-person head portrait photo in real time for that person; if there are multiple target persons, extract a target-person head portrait photo in real time for each of them.
In practical applications, face tracking schemes for video are very mature. This embodiment uses moving-target detection: the human body and the face region are determined from the change of the edge image between the current frame and the previous frame. After the face region is determined, the distance between the pupil centers of the two eyes is estimated according to a preset universal rule for the proportion of eye distance on the face. For example, if the preset rule is that the distance between the pupil centers accounts for 50% to 65% of the face width and the measured face width is 100 pixels, then the distance between the pupil centers is 50 to 65 pixels. This method can significantly improve the computing speed. Of course, this embodiment is not limited to the above method; it is also possible, as is known in the art, to use the change between the previous and next frame pictures, detect and compare the feature values of the face in two adjacent frames, and, after confirming that they belong to the same person, determine the eye positions and calculate the eye distance.
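The estimate described above is a simple proportion of the measured face width. A minimal sketch follows, assuming the example ratio bounds from this embodiment (50% to 65% of face width); the function name is hypothetical.

def estimate_pupil_distance(face_width_px, ratio_range=(0.50, 0.65)):
    """Return the (min, max) estimated distance between the two pupil centers in pixels."""
    lo, hi = ratio_range
    return face_width_px * lo, face_width_px * hi

# Example from the text: a 100-pixel-wide face gives an estimate of 50 to 65 pixels.
print(estimate_pupil_distance(100))   # (50.0, 65.0)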
When the distance between the pupil centers of the target person's two eyes is not 10 to 100 pixels, the instant head portrait photo of this frame is saved in the server. It can of course also be saved on the camera's SD card.
When the horizontal or vertical shooting angle is not 0 to 90°, the instant head portrait photo of this frame is saved in the server. It can of course also be saved on the camera's SD card.
This embodiment provides two concrete methods for judging the vertical shooting angle:
Method 1: in sub-step 12, determine the center position of the face region from the face region already determined;
determine the center point of the camera;
determine the horizontal and vertical shooting angles between the camera and the person's head from the center point of the camera, the zoom magnification and the center position of the face region; the zoom magnification at each moment is stored in the camera, and the server can retrieve the zoom magnification at any moment.
Method 2: locate the eyes of the person in the instant head portrait photo, take the line through the centers of the located eyes as a reference line, and map the eyes to reference angles for the upper and lower parts of the face, so as to judge the vertical shooting angle between the camera and the head of the person in the instant head portrait photo.
After sub-step 12, sub-step 13 is further included: a horizontal shooting angle determining step: judge whether the horizontal shooting angle between the camera and the head of the person in the instant head portrait photo is 0 to 90°; if so, proceed to sub-step 14; if not, save the instant head portrait photo of this frame.
Specifically, the judgment is made from the angle between the line connecting the two pupils and the horizontal, as in the sketch below.
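A minimal sketch of that judgment, assuming the horizontal shooting angle is measured as the angle between the pupil-to-pupil line and the horizontal; the function name and the coordinates in the example are illustrative only.

import math

def pupil_line_angle_deg(left_pupil, right_pupil):
    """Angle in degrees between the line through the two pupil centers and the horizontal."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return abs(math.degrees(math.atan2(dy, dx)))

# Example: pupils at (100, 120) and (160, 130) give about 9.5 degrees,
# which falls inside the 0-90 degree acceptance range.
print(pupil_line_angle_deg((100, 120), (160, 130)))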
Sub-step 14: crop the obtained target-person head portrait photo so that the face of the target person occupies at least 3/4 of the area of the cropped target-person head portrait photo.
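The cropping rule in sub-step 14 can be read as choosing a crop window whose area is at most 4/3 of the face area. The sketch below is one possible reading, assuming the face is given as an axis-aligned box (x, y, w, h); the helper name is hypothetical and clamping to the image borders is omitted.

import math

def crop_box_for_face(face_box, min_face_fraction=0.75):
    """Return a crop box (x, y, w, h) centered on the face so that the face
    occupies at least min_face_fraction of the cropped area."""
    x, y, w, h = face_box
    scale = math.sqrt(1.0 / min_face_fraction)   # expand each side by at most ~1.155x
    cw, ch = int(w * scale), int(h * scale)
    cx, cy = x + w // 2, y + h // 2              # keep the face centered in the crop
    return cx - cw // 2, cy - ch // 2, cw, ch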
Step 2: portrait comparison. The target-person head portrait photo extracted in step 1 is sent to the server, and the server compares the similarity between this target-person head portrait photo and the existing head portrait photos; when there is an existing head portrait photo whose similarity meets the preset first threshold, the person in the target-person head portrait photo is judged to be the same person as in that existing head portrait photo and the corresponding person information is output; otherwise, information indicating that there is no corresponding person is output. The existing head portrait photos stored in the server can be configured independently according to the needs of the user. For example, in a banking system, photos of people with current access rights are stored in the server (photo size 3 KB), and an alarm is raised when a person without such rights is detected; for a public security department, photos of wanted criminals are stored in the server; for an education department, photos pre-registered by examinees in an examination hall are stored in the server. In order to ensure timely face recognition and reduce the data processing load of the server, the number of existing head portrait photos stored in the server is preferably less than 10,000. For face photos of different service types, building the library by category is supported, for example classified by gender, region (such as registered residence or native place), age group, ethnicity, nationality and photo quality.
The following items should be noted in the process of building the library:
(1) the template size of a single face photo is <3 KB;
(2) the storage rate of second-generation ID-card photos or photos of similar quality is ≥99%;
(3) with a storage capacity of 1,000,000 photos (second-generation ID-card photos or photos of similar quality), the modeling time is <3 days;
(4) with a storage capacity of 1,000,000 photos (second-generation ID-card photos or photos of similar quality), the 1:N full-library comparison speed is <1.5 seconds;
(5) with a storage capacity of 1,000,000 photos (second-generation ID-card photos or photos of similar quality), the first-hit rate of 1:N full-library comparison is >90% and the top-100 hit rate is >96%.
The library can be built from three kinds of imported sources: front-end camera photos, existing photos and ID-card photos. At present the highest recognition rate is achieved with blacklist face photos collected by the front-end camera, followed by existing photos (photos taken by other video cameras or digital cameras, pictures, etc.), and finally the second-generation ID-card photos in the public security library.
Notes:
Comparison speed = total comparison time / total number of comparisons;
Top-N hit rate = (number of times the returned top N images contain the same person as the test image / total number of comparisons) × 100%.
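The two metrics defined above are plain ratios; a minimal sketch, with hypothetical names:

def comparison_speed(total_time_s, total_comparisons):
    """Average seconds per comparison."""
    return total_time_s / total_comparisons

def top_n_hit_rate(hits_in_top_n, total_comparisons):
    """Percentage of comparisons whose top-N returned images contain the test person."""
    return hits_in_top_n / total_comparisons * 100.0

# Example: 960 hits in 1000 comparisons gives a top-100 hit rate of 96.0%.
print(top_n_hit_rate(960, 1000))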
As shown in Fig. 3, in step 2, the comparison by the server of the target-person head portrait photo with the existing head portrait photos stored in the server specifically comprises the following sub-steps:
Sub-step 201: face location, separating the face region from the background of the target-person head portrait photo;
Sub-step 202: eye location, determining the positions of the two pupils in the face region obtained in sub-step 201. The positions of the two pupils can be determined effectively by the adaboost method based on facial features; several methods for locating facial feature points have been developed in the prior art and are not described in detail here, and the concrete method is not limited to adaboost based on facial features.
In practical applications, because the computing power of existing mobile terminals is limited by their size and the level of chip development, their computing speed lags behind that of fixed computers; to further improve the speed of this technology on mobile terminals, the above face location and eye location adopt the following method (an illustrative sketch follows the list):
1) use a set threshold to automatically separate the eyes and the other parts of the face from the background, or use the gray values of the eyes to locate the eyes;
2) after taking vertical and horizontal projections of the gray-level image, preliminarily locate the face;
3) search the face with a box the size of a pupil; when the number of black pixels falling inside the box reaches a maximum, the position of the box is the eye position.
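The following is an illustrative sketch of how items 1) to 3) might be realized with thresholding, projections and a pupil-sized search box; it is an assumed reading of the text, not the patent's own code, and the helper names are hypothetical (requires numpy).

import numpy as np

def binarize_dark_regions(gray, threshold=60):
    """Item 1): mark dark pixels (eyes, pupils) as 1 and the brighter face/background as 0."""
    return (gray < threshold).astype(np.uint8)

def coarse_face_band(binary):
    """Item 2): rough vertical extent of the face from the horizontal projection (row sums)."""
    rows = binary.sum(axis=1)
    nonzero = np.flatnonzero(rows > rows.mean())
    return int(nonzero[0]), int(nonzero[-1])

def find_eye(binary, pupil_size=9):
    """Item 3): slide a pupil-sized box; the position with the most dark pixels is taken as the eye."""
    best, best_pos = -1, (0, 0)
    h, w = binary.shape
    for y in range(h - pupil_size):
        for x in range(w - pupil_size):
            count = int(binary[y:y + pupil_size, x:x + pupil_size].sum())
            if count > best:
                best, best_pos = count, (x, y)
    return best_pos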
Sub-step 203: image adjustment, correcting the tilt angle of the face region, determining the distance between the centers of the two pupils according to the pupil positions determined in sub-step 202, and adjusting the pixels of the face region according to the ratio of the distance between the pupil centers to a preset pupil-center distance. In a particular application, correction of the tilt angle of the face region has two aspects, namely correction of the tilt angle within the photographic plane and correction of the tilt angle perpendicular to the photographic plane. For the tilt angle within the photographic plane, the angle that needs to be corrected can be known by calculating the tilt angle of the line connecting the two pupil centers. The pixels of the face region are adjusted mainly according to the ratio of the distance between the pupil centers to the preset pupil-center distance; for example, if the distance between the pupils obtained when the photo is taken is 63 pixels and the preset pupil-center distance is 65 pixels, the picture processing module of the mobile terminal enlarges the image appropriately according to the above pixel ratio. The adjustments of angle and pixels are both fine adjustments, so the result required by this sub-step can be obtained with a small amount of computation.
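The in-plane part of sub-step 203 (rotate by the tilt of the pupil line, then scale by the ratio of the preset to the measured pupil distance) can be done in one affine warp. A minimal OpenCV sketch follows; the function name is hypothetical and the preset distance of 65 pixels is taken from the example above, not prescribed by the patent.

import math
import cv2

def normalize_face(img, left_pupil, right_pupil, preset_pupil_distance=65.0):
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    tilt_deg = math.degrees(math.atan2(dy, dx))          # in-plane tilt of the pupil line
    measured = math.hypot(dx, dy)                        # e.g. 63 pixels in the example
    scale = preset_pupil_distance / measured             # 65/63: a slight enlargement
    center = ((left_pupil[0] + right_pupil[0]) / 2.0,
              (left_pupil[1] + right_pupil[1]) / 2.0)
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D(center, tilt_deg, scale) # rotation and scaling in one matrix
    return cv2.warpAffine(img, m, (w, h))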
Sub-step 204: face preprocessing, preprocessing the face region corrected in sub-step 203, specifically including processing of face color, face exposure, face light uniformity, face average color, face highlights, blur degree, brightness average, gray-scale dynamic range, unevenness, over-exposure ratio, under-exposure ratio, image sharpness, image blur, glasses, and the left-right and up-down deflection angles of the face;
Sub-step 205: extracting a face feature cluster, extracting no fewer than a predetermined number of face feature values from the face region obtained in sub-step 204, the set of face feature values of the head portrait photo being the face feature cluster of the head portrait photo.
Sub-step 206: comparing the face feature cluster of the person's head portrait photo with the face feature clusters of the existing head portrait photos stored in the server, counting the number of face feature values that meet the preset third threshold, and obtaining the similarity from the number of face feature values that meet the preset third threshold;
Sub-step 207: judging whether the similarity meets the first threshold; if so, extracting and outputting the corresponding target-person information to the face sharing service platform, while sending an alarm message to designated recipients such as a supervisor or the police. The face sharing service platform is a database server set up by each public security department of Shanxi Province. It should be noted that the target-person information in this embodiment comprises the target-person head portrait photo, the corresponding existing head portrait photo and the corresponding identity information.
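Sub-steps 206 and 207 reduce to counting matching feature values and thresholding the resulting similarity. The sketch below assumes that two feature values "meet the third threshold" when their difference is small enough and that the similarity is the matching fraction; these readings and all names are hypothetical, not fixed by the patent.

def cluster_similarity(probe, gallery, third_threshold=0.1):
    """Fraction of corresponding feature values whose difference is within the third threshold."""
    matches = sum(1 for p, g in zip(probe, gallery) if abs(p - g) <= third_threshold)
    return matches / len(probe)

def is_same_person(probe, gallery, first_threshold=0.8):
    """Sub-step 207: the similarity must meet the first threshold."""
    return cluster_similarity(probe, gallery) >= first_threshold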
If multiple existing head portrait photos meeting the first threshold are obtained in sub-step 207, the existing head portrait photo with the greatest similarity is selected according to the similarities of the multiple existing head portrait photos, and the person in that existing head portrait photo is judged to be the same person as in the target-person head portrait photo.
In sub-step 207, if the network condition deteriorates when the corresponding target-person information is to be output, the corresponding target-person information is stored in the server and output after the network condition improves.
In this embodiment, operators can also carry out consistency operations on the face sharing service platform, performing manual operation or intervention, mainly manual modeling, controlling the camera, manually collecting target-person photos, and the like.
In this embodiment, the serial number of the camera and the shooting time are recorded in the captured target-person head portrait photo; when there are multiple persons in the picture, the server also assigns a different number to each person, so that follow-up staff who want to retrieve the video of a target person can do so smoothly as long as they have the target-person head portrait photo. The video recordings are stored in the camera's storage, regularly transferred to the server or the face sharing service platform, and the data is regularly cleaned up and deleted.
As shown in Fig. 4, in this embodiment, if the server is busy and step 2 cannot be carried out after step 1 ends, the following step is performed:
Step 3: the server sends the target-person head portrait photo to the face sharing service platform, which stores multiple existing head portrait photos; the face sharing service platform carries out the face comparison, finds an existing head portrait photo that meets the first threshold, and judges the person in that existing head portrait photo to be the same person as in the target-person head portrait photo.
For face photos on which face recognition cannot be completed, the server must generate an alarm message to remind the user and provide a manual intervention entrance so that manual face location and modeling can be carried out on the client; both synchronous and asynchronous intervention are available and can be controlled by system parameters; records can be displayed and queried by conditions such as time point and type of service, and manual modeling can be carried out again; an evaluation is given for images that cannot be modeled successfully, together with an improvement prompt.
In step 2, face comparison specifically has the following points for attention: (1) functions such as zooming, clipping and compression of photos are supported; face photo file formats include mainstream formats such as JPEG and BMP.
(2) Face photo quality detection before comparison: before comparison, items including the file compression format, file size, dimensions (pixels), blur degree, color bit depth, number of faces in the photo, distance between the two eyes, glasses, light uniformity, and the left-right and up-down deflection angles of the face must be checked, to ensure that every photo to be compared meets the face recognition requirements.
(3) Face photo preprocessing functions in the form of an OCX or other control are provided before comparison, such as blur-quality optimization and correction.
(4) Single-photo comparison and batch-photo comparison are supported.
(5) Multi-condition classified and sub-library comparison is supported, that is, inputting one or more conditions of the face identity characteristics of a photo enables faster comparison within a category.
(6) Setting up comparison persons by priority is supported, multiple person-scheduling strategies are supported, and they can be dynamically configured by the user according to business needs.
(7) After a comparison photo is modeled and compared, the template is automatically saved to the feature library.
(8) The comparison result includes the photo ID, the similarity (score) and other required information, which can be determined by parameters and output in sorted order; the size of the returned comparison result is determined by parameters.
(9) Output of comparison results in file mode, database mode and other modes is supported, including export to formats such as WORD, EXCEL and TXT.
By setting a specific interpupillary distance, shooting angle and shooting light, instant head portrait photos that meet the requirements are obtained, and results are obtained quickly through portrait comparison technology; the above process effectively reduces the amount of data processing, improves the comparison accuracy and shortens the comparison time.
With this embodiment, when the blacklist contains no more than 50,000 entries, the correct recognition rate is ≥75%; with no more than 100,000 entries, it is ≥70%; with no more than 200,000 entries, it is ≥65%; with more than 200,000 entries, it is ≥60% (excluding faces in profile, bowed heads and partially occluded faces).
When the blacklist contains no more than 50,000 entries, the misidentification rate is ≤5‰; with no more than 100,000 entries, it is ≤1%; with no more than 200,000 entries, it is ≤3%; with more than 200,000 entries, it is ≤5% (excluding faces in profile, bowed heads and partially occluded faces). The alarm time is ≤3 seconds (depending on network transmission speed).
The above are only preferred embodiments of the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (7)

1. A portrait comparison application method in video, characterized in that it comprises the following steps:
Step 1: obtain real-time video and extract a target-person head portrait photo; at the same time, acquire videos of all persons in a preset area in real time by a camera and extract instant head portrait photos; when the distance between the pupil centers of a person's two eyes in an instant head portrait photo is 10 to 100 pixels and the vertical shooting angle between the camera and that person's head is 0 to 90°, the instant head portrait photo is judged to be a target-person head portrait photo; the vertical shooting angle is determined by locating the eyes of the person in the instant head portrait photo, taking the line through the centers of the located eyes as a reference line, and mapping the eyes to reference angles for the upper and lower parts of the face, so as to judge the vertical shooting angle between the camera and the head of the person in the instant head portrait photo;
judge whether the horizontal shooting angle between the camera and the head of the person in the instant head portrait photo is 0 to 90°; if so, proceed to step 2; if not, save the instant head portrait photo of this frame, wherein the horizontal shooting angle is judged from the angle between the line connecting the two pupils and the horizontal;
Step 2: portrait comparison; the target-person head portrait photo extracted in step 1 is sent to a server, and the server compares the similarity between this target-person head portrait photo and existing head portrait photos; when there is an existing head portrait photo whose similarity meets a preset first threshold, the person in the target-person head portrait photo is judged to be the same person as in that existing head portrait photo and the corresponding person information is output; otherwise, information indicating that there is no corresponding person is output; step 2 comprises:
Sub-step 201: face location and eye location, using a set threshold to automatically separate the eyes and the other parts of the face from the background, or using the gray values of the eyes to locate the eyes; after taking vertical and horizontal projections of the gray-level image, preliminarily locating the face; searching the face with a box the size of a pupil, and when the number of black pixels falling inside the box reaches a maximum, the position of the box is the pupil position;
Sub-step 202: image adjustment, correcting the tilt angle of the face region, determining the distance between the centers of the two pupils according to the pupil positions determined in sub-step 201, and adjusting the pixels of the face region according to the ratio of the distance between the pupil centers to a preset pupil-center distance;
Sub-step 203: face preprocessing, preprocessing the face region corrected in sub-step 202, specifically including processing of face color, face exposure, face light uniformity, face average color, face highlights, blur degree, brightness average, gray-scale dynamic range, unevenness, over-exposure ratio, under-exposure ratio, image sharpness and image blur;
Sub-step 204: extracting a face feature cluster, extracting no fewer than a predetermined number of face feature values from the face region obtained in sub-step 203, the set of face feature values of the head portrait photo being the face feature cluster of the head portrait photo;
Sub-step 205: comparing the face feature cluster of the person's head portrait photo with the face feature clusters of the existing head portrait photos stored in the server, counting the number of face feature values that meet a preset third threshold, and obtaining the similarity from the number of face feature values that meet the preset third threshold;
Sub-step 206: judging whether the similarity meets the first threshold; if so, extracting and outputting the corresponding target-person information.
2. The portrait comparison application method in video according to claim 1, characterized in that step 1 specifically comprises the following sub-steps:
Sub-step 11: detect the ambient light intensity with the camera; when the ambient light intensity is less than a preset second threshold, apply supplementary lighting;
Sub-step 12: acquire videos of all persons in the preset area in real time by the camera and judge the number of persons contained in the videos; if there is a single target person, extract a target-person head portrait photo in real time for that person; if there are multiple target persons, extract a target-person head portrait photo in real time for each of them.
3. The portrait comparison application method in video according to claim 2, characterized in that a target-person head portrait photo that is judged in the horizontal shooting angle determining step to proceed to step 2 first undergoes a cropping step before entering step 2, wherein the cropping step is: crop the obtained target-person head portrait photo so that the face of the target person occupies at least 3/4 of the area of the cropped target-person head portrait photo.
4. The portrait comparison application method in video according to claim 3, characterized in that if multiple existing head portrait photos meeting the first threshold are obtained in sub-step 206, the existing head portrait photo with the greatest similarity is selected according to the similarities of the multiple existing head portrait photos, the person in that existing head portrait photo is judged to be the same person as in the target-person head portrait photo, and the corresponding person information is output.
5. The portrait comparison application method in video according to claim 4, characterized in that, in sub-step 206, if the network condition deteriorates when the corresponding target-person information is to be output and the information cannot be transmitted, the corresponding target-person information is stored in the server and output after the network condition improves.
6. The portrait comparison application method in video according to any one of claims 1 to 5, characterized in that, in step 2, the corresponding person information is output to a face sharing service platform.
7. The portrait comparison application method in video according to any one of claims 1 to 5, characterized in that the corresponding person information comprises the target-person head portrait photo, the corresponding existing head portrait photo and the corresponding identity information.
CN201410343153.6A 2014-07-18 2014-07-18 Portrait comparison application technology in video Expired - Fee Related CN104091176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410343153.6A CN104091176B (en) 2014-07-18 2014-07-18 Portrait comparison application technology in video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410343153.6A CN104091176B (en) 2014-07-18 2014-07-18 Portrait comparison application technology in video

Publications (2)

Publication Number Publication Date
CN104091176A CN104091176A (en) 2014-10-08
CN104091176B true CN104091176B (en) 2015-10-14

Family

ID=51638891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410343153.6A Expired - Fee Related CN104091176B (en) 2014-07-18 2014-07-18 Portrait comparison application technology in video

Country Status (1)

Country Link
CN (1) CN104091176B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574876B (en) * 2015-01-16 2017-06-16 移康智能科技(上海)股份有限公司 A kind of method for managing and monitoring and system based in monitoring system
CN106203234A (en) * 2015-04-30 2016-12-07 蔡汉宝 Object identifying and searching system and method
CN104794458A (en) * 2015-05-07 2015-07-22 北京丰华联合科技有限公司 Fuzzy video person identifying method
CN105022999B (en) * 2015-07-12 2019-06-04 上海微桥电子科技有限公司 A kind of adjoint real-time acquisition system of people's code
CN104951773B (en) * 2015-07-12 2018-10-02 上海微桥电子科技有限公司 A kind of real-time face recognition monitoring system
CN105046219B (en) * 2015-07-12 2018-12-18 上海微桥电子科技有限公司 A kind of face identification system
CN105138954B (en) * 2015-07-12 2019-06-04 上海微桥电子科技有限公司 A kind of image automatic screening inquiry identifying system
CN105187721B (en) * 2015-08-31 2018-09-21 广州市幸福网络技术有限公司 A kind of the license camera and method of rapid extraction portrait feature
CN105357473A (en) * 2015-10-23 2016-02-24 苏州佳风网络科技有限公司 Video monitoring method
CN105809415B (en) * 2016-03-04 2020-04-21 腾讯科技(深圳)有限公司 Check-in system, method and device based on face recognition
CN106127106A (en) * 2016-06-13 2016-11-16 东软集团股份有限公司 Target person lookup method and device in video
CN106203393A (en) * 2016-07-22 2016-12-07 广东金杭科技股份有限公司 A kind of face collection and recognition method and the system realizing the method
CN106803941A (en) * 2017-03-06 2017-06-06 深圳市博信诺达经贸咨询有限公司 The big data sort recommendations method and system of monitoring system
CN107134031A (en) * 2017-05-09 2017-09-05 厦门善基通信科技有限公司 The first card second method for early warning of lessee gate inhibition's IC-card
CN107704851B (en) * 2017-10-30 2021-01-15 歌尔股份有限公司 Character identification method, public media display device, server and system
CN107944424A (en) * 2017-12-08 2018-04-20 广东金杭科技有限公司 Front end human image collecting and Multi-angle human are distributed as comparison method
CN108399665A (en) * 2018-01-03 2018-08-14 平安科技(深圳)有限公司 Method for safety monitoring, device based on recognition of face and storage medium
CN108269333A (en) * 2018-01-08 2018-07-10 平安科技(深圳)有限公司 Face identification method, application server and computer readable storage medium
CN109729280B (en) * 2018-12-28 2021-08-06 维沃移动通信有限公司 Image processing method and mobile terminal
TWI783199B (en) * 2019-12-25 2022-11-11 亞旭電腦股份有限公司 Processing method of face recognition and electronic device
CN111489097A (en) * 2020-04-17 2020-08-04 云南电网有限责任公司电力科学研究院 Park people flow management and control system based on edge calculation
CN111985348B (en) * 2020-07-29 2024-05-10 深思考人工智能科技(上海)有限公司 Face recognition method and system
CN115512276B (en) * 2022-10-25 2023-07-25 湖南三湘银行股份有限公司 Video anti-counterfeiting identification method and system based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751551A (en) * 2008-12-05 2010-06-23 比亚迪股份有限公司 Method, device, system and device for identifying face based on image
CN102201061A (en) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 Intelligent safety monitoring system and method based on multilevel filtering face recognition
CN102298702A (en) * 2010-06-28 2011-12-28 北京中星微电子有限公司 Method and device for detecting body postures

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751551A (en) * 2008-12-05 2010-06-23 比亚迪股份有限公司 Method, device, system and device for identifying face based on image
CN102298702A (en) * 2010-06-28 2011-12-28 北京中星微电子有限公司 Method and device for detecting body postures
CN102201061A (en) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 Intelligent safety monitoring system and method based on multilevel filtering face recognition

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171219A (en) * 2018-01-30 2018-06-15 广州市君望机器人自动化有限公司 Face method is tracked by a kind of robot

Also Published As

Publication number Publication date
CN104091176A (en) 2014-10-08

Similar Documents

Publication Publication Date Title
CN104091176B (en) Portrait comparison application technology in video
CN109583342B (en) Human face living body detection method based on transfer learning
CN104504408A (en) Human face identification comparing method and system for realizing the method
CN105868689B (en) A kind of face occlusion detection method based on concatenated convolutional neural network
CN105138954B (en) A kind of image automatic screening inquiry identifying system
CN106203393A (en) A kind of face collection and recognition method and the system realizing the method
CN104978567B (en) Vehicle checking method based on scene classification
CN109711370A (en) A kind of data anastomosing algorithm based on WIFI detection and face cluster
CN105389562B (en) A kind of double optimization method of the monitor video pedestrian weight recognition result of space-time restriction
CN108009482A (en) One kind improves recognition of face efficiency method
CN104036236B (en) A kind of face gender identification method based on multiparameter exponential weighting
CN105426820B (en) More people&#39;s anomaly detection methods based on safety monitoring video data
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN107967458A (en) A kind of face identification method
CN106156688A (en) A kind of dynamic human face recognition methods and system
CN104751136A (en) Face recognition based multi-camera video event retrospective trace method
CN109190475A (en) A kind of recognition of face network and pedestrian identify network cooperating training method again
CN105574509B (en) A kind of face identification system replay attack detection method and application based on illumination
CN106169071A (en) A kind of Work attendance method based on dynamic human face and chest card recognition and system
CN112287827A (en) Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
CN110414381A (en) Tracing type face identification system
CN105022999A (en) Man code company real-time acquisition system
WO2021217764A1 (en) Human face liveness detection method based on polarization imaging
CN104463232A (en) Density crowd counting method based on HOG characteristic and color histogram characteristic
CN102880864A (en) Method for snap-shooting human face from streaming media file

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160411

Address after: 510000 Guangdong city of Guangzhou province Panyu District Dashi street, Village Stone Road No. 13 403

Patentee after: Guangzhou Jia Qi Electronics Co.,Ltd.

Address before: 353000 Nanping city of Fujian Province Huang Dun Yanping District Jiangbin Road No. 51 Room 204

Patentee before: Wu Jianzhong

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160720

Address after: 510000 Guangdong city of Guangzhou province Panyu District Dashi street, Village Stone Road No. 13 Room 401

Patentee after: GUANGDONG DREAMHUNT TECHNOLOGY CO.,LTD.

Address before: 510000 Guangdong city of Guangzhou province Panyu District Dashi street, Village Stone Road No. 13 403

Patentee before: Guangzhou Jia Qi Electronics Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151014

CF01 Termination of patent right due to non-payment of annual fee