
CN111539311A - Living body distinguishing method, device and system based on IR and RGB double photographing - Google Patents

Living body distinguishing method, device and system based on IR and RGB double photographing

Info

Publication number
CN111539311A
Authority
CN
China
Prior art keywords
living body
face image
rgb
face
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010316897.4A
Other languages
Chinese (zh)
Other versions
CN111539311B (en)
Inventor
陈辉 (Chen Hui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kaike Intelligent Technology Co ltd
Original Assignee
Shanghai Kaike Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Kaike Intelligent Technology Co ltd
Priority to CN202010316897.4A
Publication of CN111539311A
Application granted
Publication of CN111539311B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a living body discrimination method, device and system based on IR and RGB dual cameras. The method comprises the following steps: acquiring a video stream to be detected through a binocular module and performing face detection on the video stream to obtain an RGB (red, green, blue) face image and a first IR (infrared) face image; inputting the first IR face image into a convolutional neural network for living body judgment to obtain a first living body judgment result and a second IR face image; and inputting the RGB face image and the second IR face image into a twin network for living body judgment to obtain a second living body judgment result. The embodiment makes full use of the different imaging of materials under IR and of binocular parallax information, performs the face living body judgment in a progressive manner, and improves the accuracy of living body judgment.

Description

Living body distinguishing method, device and system based on IR and RGB double photographing
Technical Field
The invention relates to the technical field of computer vision, and in particular to a living body discrimination method, device and system based on IR and RGB dual cameras.
Background
The existing living body discrimination methods mainly fall into the following three categories:
(1) Methods that use only infrared imaging, judging liveness from the infrared reflections of different materials such as played-back screens, photographic paper and high-definition color prints. The judgment is simple but prone to false detections and missed detections.
(2) Methods that exploit the imaging difference of the eye region in infrared face images and train a living body discrimination network on eye-region images. These methods place high demands on infrared imaging quality, which is unfavorable for low-cost use.
(3) Methods that judge whether the intensity of infrared light reflected from facial feature points lies within a preset range. These methods depend heavily on the light source brightness, the distance and other factors, are easily disturbed, and therefore give unstable results.
Disclosure of Invention
In view of the above technical defects, embodiments of the present invention provide a living body identification method, device and system based on IR and RGB double-shot.
To achieve the above object, in a first aspect, an embodiment of the present invention provides a living body identification method based on IR and RGB double shots, including:
acquiring a video stream to be detected through a binocular module, wherein the binocular module comprises an RGB camera and an IR camera;
performing face detection on the video stream to be detected to obtain an RGB face image and a first IR face image;
inputting the first IR face image into a convolutional neural network for living body judgment to obtain a first living body judgment result and a second IR face image;
and inputting the RGB face image and the second IR face image into a twin network for living body judgment to obtain a second living body judgment result.
As a specific embodiment of the present application, obtaining the first living body discrimination result and the second IR face image specifically includes:
inputting the first IR face image into the convolutional neural network for living body judgment to obtain living body probability;
if the living body probability is larger than a preset value, determining the first IR face image corresponding to the living body probability as a first living body judgment result;
and if the living body probability is smaller than a preset value, determining the first IR face image corresponding to the living body probability as the second IR face image.
As an embodiment of the present application, the obtaining of the second living body discrimination result specifically includes:
inputting the RGB face image and the second IR face image into the twin network, extracting features of the RGB face image and the second IR face image with the twin network, taking the difference of the extracted features and performing fully-connected classification to obtain the second living body judgment result, and raising a spoof alarm for any non-living body obtained after the fully-connected classification.
As a preferred embodiment of the present application, after obtaining the RGB facial image and the first IR facial image, the method further includes:
calculating the face size through the binocular epipolar relationship formed by the RGB camera and the IR camera and comparing it with a preset reasonable face size range; if the face size does not fall within the reasonable range, discarding the RGB face image and the first IR face image and raising a spoof warning.
As a preferred embodiment of the present application, before the video stream to be detected is acquired through the binocular module, the method further includes:
acquiring, through the IR camera, a plurality of pictures of live humans together with print-attack and mask-attack pictures as first training samples, and obtaining the convolutional neural network based on the first training samples;
and acquiring, through the RGB camera and the IR camera, a plurality of pictures of live humans together with print-attack and mask-attack pictures as second training samples, and obtaining the twin network based on the second training samples.
As a preferred embodiment of the present application, after obtaining the second living body discrimination result, the method further includes:
and sending the first living body judgment result and the second living body judgment result to an external device for displaying.
In a second aspect, an embodiment of the present invention provides a living body distinguishing apparatus based on IR and RGB double-shot, which includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method of the first aspect.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the first aspect.
In a fourth aspect, the embodiment of the present invention further provides an IR and RGB double-shooting based living body identification system, which includes a binocular module, an infrared light supplement lamp, a living body identification device, and an external device, where the binocular module includes an RGB camera and an IR camera, and the infrared light supplement lamp is used for supplementing light to the IR camera. The binocular module is used for collecting a video stream to be detected, the living body distinguishing device is respectively communicated with the binocular module and external equipment, and the living body distinguishing device is as described in the second aspect.
By implementing the embodiments of the invention, face detection is first performed on the video stream to be detected to obtain an RGB face image and a first IR face image; the first IR face image is then input into a convolutional neural network for living body discrimination to obtain a first living body discrimination result and a second IR face image (namely, the faces that the convolutional neural network could not judge to be living bodies); finally, the RGB face image and the second IR face image are input into a twin network for living body judgment to obtain a second living body judgment result. The embodiments make full use of the different imaging of materials under IR and of binocular parallax information, perform the face living body judgment in a progressive manner, and improve the accuracy of living body judgment.
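For orientation, the progressive flow described above can be sketched in Python roughly as follows. This is a minimal illustration only, not the patent's implementation: the helper callables (face detection, disparity-based size estimation, the two liveness networks) and every threshold value are assumptions supplied by the reader.

def liveness_pipeline(rgb_frame, ir_frame,
                      detect_face, face_size_from_disparity,
                      ir_live_prob, twin_live_prob,
                      size_range=(0.10, 0.30),   # plausible face widths in metres (assumed)
                      thd0=0.9, thd1=0.5):       # upper/lower liveness thresholds (assumed)
    """Progressive IR/RGB living body check; all callables are user-supplied stand-ins."""
    rgb_face = detect_face(rgb_frame)            # face detection on the RGB view
    ir_face = detect_face(ir_frame)              # face detection on the IR view

    # Stage 1: the IR camera must also see the face, and the stereo-derived size must be plausible.
    if rgb_face is not None and ir_face is None:
        return "spoof"                           # screens and glossy prints show no face under IR
    if rgb_face is None or ir_face is None:
        return "no_face"
    width_m = face_size_from_disparity(rgb_face, ir_face)
    if not (size_range[0] <= width_m <= size_range[1]):
        return "spoof"                           # physically unreasonable face size

    # Stage 2: CNN living body judgment on the IR face alone.
    p_ir = ir_live_prob(ir_face)
    if p_ir > thd0:
        return "live"
    if p_ir < thd1:
        return "spoof"

    # Stage 3: twin (Siamese) network on the matched RGB/IR pair.
    return "live" if twin_live_prob(rgb_face, ir_face) > thd0 else "spoof"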
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed for describing them are briefly introduced below.
Fig. 1 is a schematic flowchart of a living body discrimination method based on IR and RGB dual cameras according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of a binocular module used in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the first stage of binocular living body discrimination;
Fig. 4 is a schematic diagram of the second stage of binocular living body discrimination;
Fig. 5 is a schematic diagram of the third stage of binocular living body discrimination;
Fig. 6 is a schematic structural diagram of a living body discrimination system based on IR and RGB dual cameras according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the living body discrimination device shown in Fig. 6.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The inventive concept of the invention is as follows: the IR and RGB images captured by the dual cameras are used jointly, progressing from shallow checks to deeper ones. Under IR, images of screen and glossy-photographic-paper attacks differ obviously from real faces; binocular ranging measures the face size and excludes face attacks of unreasonable size; the IR image is used to separate living bodies from faces of other materials by the imaging differences of the face and its surroundings; and the binocular-matched IR and RGB faces are finally subjected to living body judgment by extracting feature differences with a twin convolutional neural network.
Based on the above inventive concept, referring to fig. 1, a first embodiment of the present invention provides a living body discrimination method based on IR and RGB dual cameras, comprising:
s101, collecting a sample image, and training a convolutional neural network and a twin network based on the sample image.
Before living body discrimination is performed, the convolutional neural network and the twin network need to be trained. When the sample images are collected, this embodiment uses the binocular module shown in fig. 2. As shown in fig. 2, the binocular module used in the embodiment of the present invention includes an RGB camera, an IR camera, an infrared fill light, and a display/function area. The RGB camera and the IR camera collect the video streams or sample images, the infrared fill light supplements the illumination for the cameras, and the display/function area displays the recognized living body result.
Specifically, when the convolutional neural network is trained, only the IR camera is used to collect the training samples: a number of pictures of live humans (for example, 5000 or more) and on the order of 5000 pictures of attack faces such as high-definition color prints and black-and-white prints. In this embodiment, the trained convolutional neural network is a MobileNetV2 network with a 224 × 224 input.
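To make the network above concrete, the following is a minimal PyTorch sketch of a 224 × 224 input MobileNetV2 used as a two-class (live versus attack) classifier on IR face crops. The replaced classification head, the grayscale-to-3-channel preprocessing and the optimizer settings are illustrative assumptions, not details taken from the patent.

import torch
import torch.nn as nn
from torchvision import models, transforms

# MobileNetV2 with a 2-class head (live vs. attack); hyperparameters are assumptions.
ir_liveness_net = models.mobilenet_v2(weights=None)
ir_liveness_net.classifier[1] = nn.Linear(ir_liveness_net.last_channel, 2)

# IR crops are single-channel; replicating them to 3 channels keeps the stock architecture.
ir_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(ir_liveness_net.parameters(), lr=1e-4)

def train_step(batch_images, batch_labels):
    """One training step on a batch of IR face crops (labels: 1 = live, 0 = attack)."""
    optimizer.zero_grad()
    logits = ir_liveness_net(batch_images)
    loss = criterion(logits, batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()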
Specifically, when the twin network is trained, pictures of live humans are collected simultaneously by the RGB camera and the IR camera and used as training samples. In this embodiment, the backbone of the trained twin network is a MobileNetV2 network.
And S102, acquiring the video stream to be detected through a binocular module.
S103, performing face detection on the video stream to be detected to obtain an RGB face image and a first IR face image.
And S104, inputting the first IR face image into a convolutional neural network for living body judgment to obtain a first living body judgment result and a second IR face image.
Specifically, step S104 includes:
(1) inputting the first IR face image into a convolutional neural network for living body judgment to obtain living body probability;
(2) if the living body probability is larger than a preset value, determining a first IR face image corresponding to the living body probability as a first living body judgment result;
(3) and if the living body probability is smaller than the preset value, determining the first IR face image corresponding to the living body probability as a second IR face image.
It should be noted that, in step S104, the IR face that meets the size requirement is sent to the convolutional neural network for living body judgment. If the living body probability is greater than the preset value, the face is determined to be a living body; otherwise a spoof alarm is raised or, if the face cannot yet be decided, it is taken as the second IR face image for the next stage of living body judgment (the two-threshold form of this decision is given under the second stage below).
And S105, inputting the RGB face image and the second IR face image into a twin network for living body judgment to obtain a second living body judgment result.
It should be noted that the IR faces that cannot be decided in step S104, together with the RGB faces obtained by the foregoing detection, are input into the twin network; the twin network extracts features from the RGB face image and the second IR face image, the extracted features are differenced and then classified by a fully-connected layer to obtain the second living body judgment result, and a spoof warning is raised for the non-living bodies obtained after the fully-connected classification.
And S106, transmitting the first living body discrimination result and the second living body discrimination result to an external device for displaying.
Specifically, the living body discrimination results obtained in steps S104 and S105 are displayed in the display or functional area shown in fig. 2.
It should be noted that the RGB face and the IR face obtained in step S103 are subjected to a size check in addition to the living body recognition of steps S104 and S105. Specifically: the face size is calculated through the binocular epipolar relationship formed by the RGB camera and the IR camera and compared with a preset reasonable face size range; if it does not fall within the reasonable range, the RGB face image and the first IR face image are discarded and a spoof warning is raised, as shown in fig. 3.
The cameras are calibrated with Zhang Zhengyou's checkerboard method, and the face size is calculated as follows:
Let the disparity be d, the focal length be f and the baseline length be b; the depth is then
z = f * b / d
The measurement accuracy delta_z is
delta_z = f * b / d0 - f * b / d1
For a single camera, the value of d is determined by the pixel disparity and the pixel size:
d = d_pixel * pixel_size
After z has been calculated, the face size can be obtained from the number of pixels the face occupies, by similar triangles.
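These relations translate directly into code. A small sketch follows, assuming the focal length f (in pixels) and baseline b (in metres) come from the Zhang checkerboard stereo calibration; the numeric values at the end are placeholders, not calibration results from the patent.

def depth_from_disparity(d_pixel, pixel_size, f, b):
    """Depth z = f * b / d with disparity d = d_pixel * pixel_size.

    Units must be consistent: with f in pixels, b in metres and d in pixels
    (pixel_size = 1.0), z comes out in metres.
    """
    d = d_pixel * pixel_size
    return f * b / d

def face_width_from_pixels(face_width_px, z, f):
    """Physical face width by similar triangles: width = z * width_px / f."""
    return z * face_width_px / f

# Example with placeholder calibration values (not from the patent):
f = 800.0      # focal length in pixels
b = 0.05       # baseline between the RGB and IR cameras, in metres
z = depth_from_disparity(d_pixel=40.0, pixel_size=1.0, f=f, b=b)     # -> 1.0 m
w = face_width_from_pixels(face_width_px=120.0, z=z, f=f)            # -> 0.15 m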
Further, referring to fig. 3 to 5, the living body discrimination in the embodiment of the present invention proceeds in a progressive manner and mainly comprises three stages:
the first stage is as follows: as shown in fig. 3, after correcting the RGB camera and the IR, face detection is performed, the RGB camera performs face detection, scoring and other operations to obtain an RGB face, and the IR camera performs face detection, luminance contrast detection and other operations to obtain an IR face; if the RGB camera detects a face and the IR camera does not detect the face, performing spooff warning; for the detected RGB face and IR face, matching operation is also carried out in the figure 3, and if the faces are not matched, spooff warning is carried out; in addition, the face size is calculated through a binocular epipolar relationship formed by the RGB camera and the IR camera, the face size is compared with a preset reasonable face size range, rejection of an unsatisfied range is carried out, and spoofs warning is carried out. .
It should be noted that the first stage mainly uses near-infrared imaging to eliminate screen attacks, exploiting the fact that the displays of mobile phones, tablets, computers and the like emit only visible light. It likewise uses the fact that glossy photographic paper shows no content under near-infrared light to eliminate attacks with glossy photo prints of faces. For face images printed on paper that diffusely reflects near-infrared light well, the RGB face image is combined with the IR image to calculate the parallax, which yields distance and face size information and eliminates attacks whose face size is unreasonable.
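The patent does not spell out how the RGB and IR detections are matched, so the gate below uses a purely illustrative criterion (comparing box centres along the rectified epipolar direction) together with the size check described above; every threshold is an assumption.

def stage_one_gate(rgb_box, ir_box, face_width_m,
                   size_range=(0.10, 0.30), max_row_offset=10.0):
    """First-stage gating on rectified RGB/IR views.

    rgb_box / ir_box: (x, y, w, h) face boxes, or None if no face was detected.
    face_width_m: physical face width estimated from the binocular disparity.
    Returns "pass", "spoof" or "no_face"; the matching rule and values are illustrative.
    """
    if rgb_box is not None and ir_box is None:
        return "spoof"        # visible in RGB but not in IR: screen or glossy-photo attack
    if rgb_box is None or ir_box is None:
        return "no_face"

    # After stereo rectification, matched faces should lie on (almost) the same image rows.
    rgb_cy = rgb_box[1] + rgb_box[3] / 2.0
    ir_cy = ir_box[1] + ir_box[3] / 2.0
    if abs(rgb_cy - ir_cy) > max_row_offset:
        return "spoof"        # the two detections do not correspond to the same face

    if not (size_range[0] <= face_width_m <= size_range[1]):
        return "spoof"        # face size outside the preset reasonable range
    return "pass"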
The second stage: as shown in fig. 4, the IR faces that were not excluded in fig. 3 are sent to the CNN to determine whether they are living bodies.
It should be noted that the IR living body decision network here is a MobileNetV2 network with a 224 × 224 input. The IR faces are first aligned according to facial landmarks and then sent to the IR living body decision network; if the living body probability is greater than thd0, the IR face is determined to be a living body, and if the living body probability is less than thd1, a spoof alarm is raised. Faces whose probability lies between thd1 and thd0 cannot be decided and are passed to the third stage.
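The two-threshold routing can be written compactly. A sketch, assuming ir_liveness_net is a trained 224 × 224 MobileNetV2 classifier of the kind sketched earlier, the IR crop has already been landmark-aligned and preprocessed, and thd0/thd1 are placeholder values rather than values from the patent:

import torch
import torch.nn.functional as F

def ir_stage_decision(ir_liveness_net, ir_tensor, thd0=0.9, thd1=0.5):
    """Second-stage decision on a landmark-aligned IR face tensor of shape (1, 3, 224, 224).

    Returns "live", "spoof" or "undecided"; undecided faces go on to the twin network.
    """
    ir_liveness_net.eval()
    with torch.no_grad():
        logits = ir_liveness_net(ir_tensor)
        p_live = F.softmax(logits, dim=1)[0, 1].item()   # class index 1 = live (assumed)
    if p_live > thd0:
        return "live"
    if p_live < thd1:
        return "spoof"        # raise the spoof alarm
    return "undecided"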
The third stage: as shown in fig. 5, for the faces that still cannot be decided, the RGB face is aligned according to facial landmarks and sent to branch A of the CNN twin network, and the IR face is aligned according to facial landmarks and sent to branch B; the extracted features are differenced and the living body probability is obtained through fully-connected classification. If the living body probability is greater than thd0, the face is judged to be a living body, and if it is less than thd1, a spoof alarm is raised.
In the RGB/IR face living body judgment twin network, a MobileNetV2 network serves as the backbone with weights shared between the two branches; the features extracted by the twin network are differenced and then classified into living body and non-living body through a fully-connected layer.
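A minimal PyTorch sketch of the twin structure just described: a weight-shared MobileNetV2 backbone feeding both branches, an absolute feature difference, and a fully-connected live/non-live classifier. The patent fixes only the MobileNetV2 backbone and the weight sharing; the pooling, hidden size and the use of an absolute difference are assumptions.

import torch
import torch.nn as nn
from torchvision import models

class RGBIRTwinNet(nn.Module):
    """Twin (Siamese) liveness network: shared MobileNetV2 backbone, feature difference,
    fully-connected classification into live / non-live."""

    def __init__(self):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)
        self.features = backbone.features          # one set of weights used by both branches
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(backbone.last_channel, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),                      # live vs. non-live
        )

    def embed(self, x):
        x = self.features(x)
        return self.pool(x).flatten(1)              # (N, 1280) feature vector

    def forward(self, rgb_face, ir_face):
        diff = torch.abs(self.embed(rgb_face) - self.embed(ir_face))
        return self.classifier(diff)

# Usage: logits = RGBIRTwinNet()(rgb_batch, ir_batch), with inputs of shape (N, 3, 224, 224).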
Compared with the prior art, the embodiments of the invention mainly have the following advantages:
(1) The method makes full use of the different imaging of materials under IR and of binocular parallax information, performs the face living body judgment in a progressive manner, and improves the accuracy of living body judgment.
(2) The invention has low requirements on infrared imaging quality and no special requirements on the infrared fill light, which facilitates mass production and low-cost deployment.
(3) The different reflection intensities of the IR fill light at different distances are exploited, so that the network can learn the difference between the IR reflection of a paper-printed face and that of a real living face, as well as the difference between the face-to-background depth of a living face and that of a paper-printed face, which improves the living body discrimination rate.
(4) The twin network is trained end-to-end for feature extraction on the IR and RGB images, which helps it learn the different parallax information of a living face versus a planar face, and the different IR and RGB reflection characteristics of a real face versus faces of other materials and their surroundings.
Furthermore, the embodiments of the invention address the insufficient robustness of living body discrimination based on a single IR image, the high demands of the prior art on the IR light source or IR imaging quality that hinder low-cost deployment, and the prior art's failure to fully exploit the 3D imaging difference and the material reflection difference between RGB and IR images of the same person, thereby improving the robustness of the living body algorithm.
Based on the same inventive concept, an embodiment of the invention provides a living body discrimination system based on IR and RGB dual cameras. As shown in fig. 6, the system includes a living body discrimination device 100, a binocular module 200, an infrared light supplement lamp 300 and an external device 400. The binocular module 200 includes an RGB camera and an IR camera, the infrared light supplement lamp 300 is used to supplement light for the IR camera, the binocular module 200 is used to collect the video stream to be detected, the living body discrimination device 100 communicates with the binocular module 200 and the external device 400 respectively, and the external device 400 is used to display the living body discrimination result obtained by the living body discrimination device 100.
As shown in fig. 7, the living body discrimination device 100 may include one or more processors 101, one or more input devices 102, one or more output devices 103 and a memory 104, the processors 101, input devices 102, output devices 103 and memory 104 being interconnected via a bus 105. The memory 104 is used to store a computer program comprising program instructions, and the processor 101 is configured to call the program instructions to execute the living body discrimination method based on IR and RGB dual cameras described in the above embodiment.
It should be understood that, in the embodiment of the present invention, the processor 101 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The input device 102 may include a keyboard or the like, and the output device 103 may include a display (LCD or the like), a speaker, or the like.
The memory 104 may include read-only memory and random access memory, and provides instructions and data to the processor 101. A portion of the memory 104 may also include non-volatile random access memory. For example, the memory 104 may also store device type information.
In a specific implementation, the processor 101, the input device 102, and the output device 103 described in the embodiment of the present invention may execute the implementation manner described in the embodiment of the living body distinguishing method based on IR and RGB double shot provided in the first embodiment of the present invention, and are not described herein again.
Further, in correspondence with the IR and RGB dual-shot based living body discrimination method and living body discrimination apparatus of the first embodiment, an embodiment of the present invention also provides a readable storage medium storing a computer program including program instructions that, when executed by a processor, implement: the living body discrimination method based on IR and RGB double shots of the first embodiment described above.
The computer-readable storage medium may be an internal storage unit of the living body discriminating device described in the foregoing embodiment, such as a hard disk or a memory of a system. The computer readable storage medium may also be an external storage device of the system, such as a plug-in hard drive, Smart Media Card (SMC), Secure Digital (SD) Card, Flash memory Card (Flash Card), etc. provided on the system. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the system. The computer-readable storage medium is used for storing the computer program and other programs and data required by the system. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A living body discrimination method based on IR and RGB double photographing is characterized by comprising the following steps:
acquiring a video stream to be detected through a binocular module, wherein the binocular module comprises an RGB camera and an IR camera;
performing face detection on the video stream to be detected to obtain an RGB face image and a first IR face image;
inputting the first IR face image into a convolutional neural network for living body judgment to obtain a first living body judgment result and a second IR face image;
and inputting the RGB face image and the second IR face image into a twin network for living body judgment to obtain a second living body judgment result.
2. The method of claim 1, wherein obtaining the first living body judgment result and the second IR face image comprises:
inputting the first IR face image into the convolutional neural network for living body judgment to obtain living body probability;
if the living body probability is larger than a preset value, determining the first IR face image corresponding to the living body probability as a first living body judgment result;
and if the living body probability is smaller than a preset value, determining the first IR face image corresponding to the living body probability as the second IR face image.
3. The method of claim 1, wherein obtaining the second living body judgment result comprises:
inputting the RGB face image and the second IR face image into the twin network, extracting features of the RGB face image and the second IR face image with the twin network, taking the difference of the extracted features and performing fully-connected classification to obtain the second living body judgment result, and raising a spoof alarm for any non-living body obtained after the fully-connected classification.
4. The method of claim 1, wherein after obtaining the RGB face image and the first IR face image, the method further comprises:
calculating the face size through the binocular epipolar relationship formed by the RGB camera and the IR camera and comparing it with a preset reasonable face size range; if the face size does not fall within the reasonable range, discarding the RGB face image and the first IR face image and raising a spoof warning.
5. The method according to any of claims 1-4, wherein before the video stream to be detected is acquired by the binocular module, the method further comprises:
acquiring a plurality of live human pictures and printing attack and mask attack pictures as first training samples through the IR camera, and obtaining the convolutional neural network based on the first training samples;
and acquiring a plurality of live human pictures and printing attack and mask attack pictures as second training samples through the RGB camera and the IR camera, and obtaining the twin network based on the second training samples.
6. The method of claim 5, wherein after obtaining a second living body discrimination result, the method further comprises:
and sending the first living body judgment result and the second living body judgment result to an external device for displaying.
7. An IR and RGB bi-shot based living body discriminating apparatus comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is used for storing a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method of claim 6.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method of claim 6.
9. An IR and RGB double-shooting-based living body distinguishing system comprises a binocular module, an infrared light supplementing lamp, a living body distinguishing device and external equipment, wherein the binocular module comprises an RGB camera and an IR camera, the infrared light supplementing lamp is used for supplementing light to the IR camera, the living body distinguishing system is characterized in that the binocular module is used for collecting a video stream to be detected, the living body distinguishing device is respectively communicated with the binocular module and the external equipment, and the living body distinguishing device is as claimed in claim 6.
CN202010316897.4A 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting Active CN111539311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010316897.4A CN111539311B (en) 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010316897.4A CN111539311B (en) 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting

Publications (2)

Publication Number Publication Date
CN111539311A (en) 2020-08-14
CN111539311B (en) 2024-03-01

Family

ID=71975221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010316897.4A Active CN111539311B (en) 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting

Country Status (1)

Country Link
CN (1) CN111539311B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347904A (en) * 2020-11-04 2021-02-09 杭州锐颖科技有限公司 Living body detection method, device and medium based on binocular depth and picture structure
CN112464741A (en) * 2020-11-05 2021-03-09 马上消费金融股份有限公司 Face classification method, model training method, electronic device and storage medium
CN113221830A (en) * 2021-05-31 2021-08-06 平安科技(深圳)有限公司 Super-resolution living body identification method, system, terminal and storage medium
CN113255586A (en) * 2021-06-23 2021-08-13 中国平安人寿保险股份有限公司 Human face anti-cheating method based on alignment of RGB (red, green and blue) image and IR (infrared) image and related equipment
CN115623291A (en) * 2022-08-12 2023-01-17 深圳市新良田科技股份有限公司 Binocular camera module, service system and face verification method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN108764071A (en) * 2018-05-11 2018-11-06 四川大学 It is a kind of based on infrared and visible images real human face detection method and device
US20180349721A1 (en) * 2017-06-06 2018-12-06 Microsoft Technology Licensing, Llc Biometric object spoof detection based on image intensity variations
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN110516616A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set
CN110852134A (en) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180349721A1 (en) * 2017-06-06 2018-12-06 Microsoft Technology Licensing, Llc Biometric object spoof detection based on image intensity variations
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN108764071A (en) * 2018-05-11 2018-11-06 四川大学 It is a kind of based on infrared and visible images real human face detection method and device
CN110852134A (en) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN110516616A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shuhua Liu et al.: "An identity authentication method combining liveness detection and face recognition", MDPI, 31 October 2019 (2019-10-31), pages 1-10 *
Hu Fei; Wen Chang; Xie Kai; He Jianbiao: "Multi-cue fusion face liveness detection based on a fine-tuning strategy" (in Chinese), Computer Engineering, no. 05, 27 June 2018 (2018-06-27), pages 262-266 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347904A (en) * 2020-11-04 2021-02-09 杭州锐颖科技有限公司 Living body detection method, device and medium based on binocular depth and picture structure
CN112464741A (en) * 2020-11-05 2021-03-09 马上消费金融股份有限公司 Face classification method, model training method, electronic device and storage medium
CN112464741B (en) * 2020-11-05 2021-11-26 马上消费金融股份有限公司 Face classification method, model training method, electronic device and storage medium
CN113221830A (en) * 2021-05-31 2021-08-06 平安科技(深圳)有限公司 Super-resolution living body identification method, system, terminal and storage medium
CN113221830B (en) * 2021-05-31 2023-09-01 平安科技(深圳)有限公司 Super-division living body identification method, system, terminal and storage medium
CN113255586A (en) * 2021-06-23 2021-08-13 中国平安人寿保险股份有限公司 Human face anti-cheating method based on alignment of RGB (red, green and blue) image and IR (infrared) image and related equipment
CN113255586B (en) * 2021-06-23 2024-03-15 中国平安人寿保险股份有限公司 Face anti-cheating method based on RGB image and IR image alignment and related equipment
CN115623291A (en) * 2022-08-12 2023-01-17 深圳市新良田科技股份有限公司 Binocular camera module, service system and face verification method

Also Published As

Publication number Publication date
CN111539311B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN111539311B (en) Living body judging method, device and system based on IR and RGB double shooting
CN108764071B (en) Real face detection method and device based on infrared and visible light images
CN105279372B (en) A kind of method and apparatus of determining depth of building
WO2018166525A1 (en) Human face anti-counterfeit detection method and system, electronic device, program and medium
CN103761529B (en) A kind of naked light detection method and system based on multicolour model and rectangular characteristic
CN111079576A (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN111368601B (en) Living body detection method and apparatus, electronic device, and computer-readable storage medium
JP2011188496A (en) Backlight detection device and backlight detection method
KR20040089670A (en) Method and system for classifying object in scene
CN113205057B (en) Face living body detection method, device, equipment and storage medium
CN110363087B (en) Long-baseline binocular face in-vivo detection method and system
CN113673584A (en) Image detection method and related device
CN107018407B (en) Information processing device, evaluation chart, evaluation system, and performance evaluation method
JP2017504017A (en) Measuring instrument, system, and program
US11315360B2 (en) Live facial recognition system and method
CN112434546A (en) Face living body detection method and device, equipment and storage medium
CN112818722A (en) Modular dynamically configurable living body face recognition system
CN115035147A (en) Matting method, device and system based on virtual shooting and image fusion method
CN112507767B (en) Face recognition method and related computer system
CN105787429A (en) Method and apparatus for inspecting an object employing machine vision
US9536162B2 (en) Method for detecting an invisible mark on a card
CN111885371A (en) Image occlusion detection method and device, electronic equipment and computer readable medium
KR100827133B1 (en) Method and apparatus for distinguishment of 3d image in mobile communication terminal
CN106402717B (en) A kind of AR control method for playing back and intelligent desk lamp
KR102501461B1 (en) Method and Apparatus for distinguishing forgery of identification card

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant