
CN110163078B - Living body detection method, living body detection device and service system applying living body detection method - Google Patents


Info

Publication number
CN110163078B
CN110163078B
Authority
CN
China
Prior art keywords
image
living body
visible light
biological feature
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910217452.8A
Other languages
Chinese (zh)
Other versions
CN110163078A (en)
Inventor
王智慧
吴迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910217452.8A
Publication of CN110163078A
Application granted
Publication of CN110163078B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 40/161 - Human faces: detection; localisation; normalisation
    • G06V 40/168 - Human faces: feature extraction; face representation
    • G06V 40/172 - Human faces: classification, e.g. identification
    • G06V 40/174 - Facial expression recognition
    • G06V 40/45 - Spoof detection: detection of the body part being alive
    • G07C 9/37 - Individual registration on entry or exit, in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C 9/38 - Individual registration on entry or exit not involving the use of a pass, with central registration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a living body detection method comprising the following steps: acquiring an image of an object to be detected captured by a binocular camera, the image comprising an infrared image and a visible light image; extracting image physical information from a biological feature area in the image to obtain the image physical information of that area, the biological feature area indicating the position of the biological feature of the object to be detected in the image; if the image physical information of the biological feature area indicates that the object to be detected is a living body, performing deep semantic feature extraction on the biological feature area based on a machine learning model to obtain deep semantic features of that area; and judging whether the object to be detected is a living body according to the deep semantic features of the biological feature area in the image. The invention solves the problem that living body detection in the prior art defends poorly against prosthesis attacks.

Description

Living body detection method, living body detection device and service system applying living body detection method
Technical Field
The present invention relates to the field of computers, and in particular, to a living body detection method, a living body detection device, and a service system applying the living body detection method.
Background
With the development of biometric identification technology, biometric identification is widely used, for example in face-swipe payment, face recognition in video monitoring, fingerprint identification for door access authorization, and iris identification. Biometric identification is therefore also exposed to a variety of threats, such as an attacker passing biometric identification with a fake face, fingerprint, or iris.
For this reason, existing living body detection schemes are mainly based on human-machine interaction: the object to be detected must cooperate by making corresponding actions, such as nodding, blinking, or smiling, and those actions are then analyzed to perform living body detection.
The inventors have found that such a scheme places high demands on the object to be detected, easily leads to a poor user experience, and defends poorly against prosthesis attacks.
Disclosure of Invention
The embodiments of the invention provide a living body detection method, a living body detection device, an access control system applying the living body detection method, a payment system, a service system, an electronic device, and a storage medium, which can solve the problem of poor defense of living body detection against prosthesis attacks.
The technical solution adopted by the invention is as follows:
According to an aspect of an embodiment of the present invention, a living body detection method includes: acquiring an image of an object to be detected captured by a binocular camera, the image comprising an infrared image and a visible light image; extracting image physical information from a biological feature area in the image to obtain the image physical information of that area, the biological feature area indicating the position of the biological feature of the object to be detected in the image; if the image physical information of the biological feature area indicates that the object to be detected is a living body, performing deep semantic feature extraction on the biological feature area based on a machine learning model to obtain deep semantic features of that area; and performing living body judgment of the object to be detected according to the deep semantic features of the biological feature area in the image.
In an exemplary embodiment, the performing region position matching between the biometric region in the infrared image and the biometric region in the visible light image includes: detecting the positions of the biological feature areas in the infrared image and the biological feature areas in the visible light image respectively to obtain a first area position corresponding to the biological feature areas in the infrared image and a second area position corresponding to the biological feature areas in the visible light image; calculating a correlation coefficient between the first region position and the second region position; and if the correlation coefficient exceeds a set correlation threshold, judging that the region positions are matched.
In an exemplary embodiment, the performing region position matching between the biometric region in the infrared image and the biometric region in the visible light image includes: determining a first horizontal distance between the object to be detected and a vertical plane where the binocular camera is located; acquiring a second horizontal distance between the infrared camera and the visible light camera based on the infrared camera and the visible light camera in the binocular camera; acquiring a horizontal distance difference between a biological feature area in the infrared image and a biological feature area in the visible light image according to the first horizontal distance and the second horizontal distance; and if the horizontal distance difference exceeds a set distance threshold, determining that the region positions are not matched.
In an exemplary embodiment, the biometric region is a face region; after the image of the object to be detected, which is shot by the binocular camera, is acquired, the method further comprises: respectively carrying out face detection on the infrared image and the visible light image; and if the infrared image is detected to not contain the human face area, and/or the visible light image does not contain the human face area, judging that the object to be detected is a prosthesis.
According to an aspect of an embodiment of the present invention, a living body detection apparatus includes: the image acquisition module is used for acquiring an image of an object to be detected, which is shot under the binocular camera, wherein the image comprises an infrared image and a visible light image; the image physical information extraction module is used for extracting image physical information of a biological feature area in the image to obtain image physical information of the biological feature area in the image, wherein the biological feature area is used for indicating the position of the biological feature of the object to be detected in the image; the deep semantic feature extraction module is used for extracting deep semantic features of the biological feature area in the image based on a machine learning model if the image physical information of the biological feature area in the image indicates that the object to be detected is a living body, so as to obtain the deep semantic features of the biological feature area in the image; and the object living body judging module is used for judging the living body of the object to be detected according to the deep semantic features of the biological feature area in the image.
According to one aspect of an embodiment of the invention, an access control system applying the living body detection method comprises a reception device, an identification electronic device, and an access control device. The reception device is used to acquire images of an access object with a binocular camera, the images comprising an infrared image and a visible light image. The identification electronic device comprises a living body detection apparatus used to extract image physical information and deep semantic features of the biological feature area in the image of the access object, and to judge whether the access object is a living body according to the extracted image physical information and deep semantic features. When the access object is a living body, the identification electronic device performs identity recognition on the access object, so that the access control device configures access rights for an access object that has successfully completed identity recognition, and the access object controls the access gate of a designated area to perform a release action according to the configured rights.
According to one aspect of an embodiment of the invention, a payment system applying the living body detection method comprises a payment terminal and a payment electronic device. The payment terminal is used to acquire images of a payment user with a binocular camera, the images comprising an infrared image and a visible light image. The payment terminal comprises a living body detection apparatus used to extract image physical information and deep semantic features of the biological feature area in the image of the payment user, and to judge whether the payment user is a living body according to the extracted image physical information and deep semantic features. When the payment user is a living body, the payment terminal performs identity verification on the payment user and initiates a payment request to the payment electronic device when the payment user passes identity verification.
According to one aspect of an embodiment of the invention, a service system applying the living body detection method comprises a service terminal and an authentication electronic device. The service terminal is used to acquire images of a service person with a binocular camera, the images comprising an infrared image and a visible light image. The service terminal comprises a living body detection apparatus used to extract image physical information and deep semantic features of the biological feature area in the image of the service person, and to judge whether the service person is a living body according to the extracted image physical information and deep semantic features. When the service person is a living body, the service terminal requests the authentication electronic device to perform identity authentication on the service person, and distributes service business instructions to service persons who pass identity authentication.
According to an aspect of an embodiment of the present invention, an electronic device includes a processor and a memory, the memory having stored thereon computer readable instructions that, when executed by the processor, implement a living body detection method as described above.
According to an aspect of an embodiment of the present invention, a storage medium has stored thereon a computer program which, when executed by a processor, implements the living body detection method as described above.
In this technical solution, living body detection of the object to be detected is performed based on the infrared image captured by the binocular camera, combined with the image physical information and the deep semantic features, so that various types of prosthesis attacks by an attacker can be effectively filtered out, without relying on the cooperation of the object to be detected.
Specifically, an image of the object to be detected captured by the binocular camera is acquired, and image physical information is extracted from the biological feature area in the image. Only when this image physical information indicates that the object to be detected is a living body is deep semantic feature extraction performed on the biological feature area based on a machine learning model, and living body judgment of the object to be detected is then made according to the extracted deep semantic features. In this way, video-playback prosthesis attacks are filtered out based on the infrared image captured by the binocular camera, black-and-white-photo attacks are filtered out based on the image physical information, and attacks using color photos, hole masks, and the like are filtered out based on the deep semantic features, while the object to be detected is allowed to undergo living body detection in an uncooperative, free state. The problem of poor defense of living body detection against prosthesis attacks in the prior art is thereby solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic diagram of an implementation environment of an application scenario in which a living body detection method is applied.
Fig. 2 is a block diagram of a hardware architecture of an electronic device, according to an example embodiment.
Fig. 3 is a flow chart illustrating a living body detection method, according to an exemplary embodiment.
FIG. 4 is a flow chart of step 330 in one embodiment of the corresponding embodiment of FIG. 3.
Fig. 5 is a flow chart of step 333 in one embodiment of the corresponding embodiment of fig. 4.
Fig. 6 is a schematic diagram of the difference between the color histogram of the color image and the color histogram of the black-and-white image according to the corresponding embodiment of fig. 5.
Fig. 7 is a flowchart of step 333 in another embodiment according to the corresponding embodiment of fig. 4.
Fig. 8 is a flow chart of step 3332 in one embodiment of the corresponding embodiment of fig. 7.
Figure 9 is a schematic diagram of an HSV model according to the corresponding embodiment of figure 8.
Fig. 10 is a flow chart of another embodiment of step 3332 of the corresponding embodiment of fig. 7.
FIG. 11 is a flow chart of step 350 in one embodiment of the corresponding embodiment of FIG. 3.
Fig. 12 is a flow chart of step 370 in one embodiment of the corresponding embodiment of fig. 3.
FIG. 13 is a flow chart illustrating step 320 in one embodiment, according to an exemplary embodiment.
FIG. 14 is a flow chart illustrating step 320 in another embodiment according to an exemplary embodiment.
Figure 15 is a schematic view of the horizontal distance differences according to the embodiment of figure 14.
Fig. 16 is a schematic diagram of a face key point shown according to an example embodiment.
Fig. 17 is a schematic diagram of an implementation of a living body detection method in an application scenario.
Fig. 18 is a block diagram showing a living body detection apparatus according to an exemplary embodiment.
Fig. 19 is a block diagram of an electronic device, according to an example embodiment.
Specific embodiments of the invention have been shown in the drawings and will hereinafter be described, with the understanding that the present disclosure is to be considered in all respects illustrative and not restrictive, the scope of the inventive concepts being indicated by the appended claims.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Fig. 1 is a schematic diagram of an implementation environment of an application scenario in which a living body detection method is applied.
As shown in fig. 1 (a), the implementation environment includes a payment user 510, a smartphone 530, and a payment server 550.
If the payment user 510 is determined to be a living body by the living body detection method, the payment user 510 can perform identity verification via the smart phone 530 and, after passing verification, request the payment server 550 to complete payment of the order to be paid.
As shown in fig. 1 (b), the implementation environment includes a reception apparatus 610, an identification server 630, and an entrance guard control apparatus 650.
If the access object 670 is determined to be a living body by the living body detection method, then after its image is collected by the reception device 610, the access object 670 can be identified by the identification server 630, and once identification of the access object 670 is completed, the access control device 650 controls the access gate of the relevant area to perform a release action.
As shown in fig. 1 (c), the implementation environment includes a service person 710, a service terminal 730, and an authentication server 750.
If the service person 710 is determined to be a living body by the living body detection method, an image of the service person 710 is collected by the service terminal 730, identity authentication is performed by the authentication server 750 based on the image, and after authentication is passed, the service terminal 730 distributes a service business instruction so that the relevant service can be fulfilled.
In the above three application scenarios, only objects to be detected such as the payment user 510, the access object 670, and the service person 710 that pass living body detection can proceed to subsequent authentication or identification, which effectively reduces the workload and traffic pressure of authentication or identification, so that authentication and identification tasks are completed more reliably.
Referring to fig. 2, fig. 2 is a block diagram illustrating a hardware architecture of an electronic device according to an exemplary embodiment. Such an electronic device is suitable for the smartphone 530, the recognition server 630, and the authentication server 750 of the implementation environment shown in fig. 1.
It should be noted that this electronic device is only an example adapted to the present invention, and should not be construed as providing any limitation on the scope of use of the present invention. Nor should such an electronic device be construed as necessarily relying on or necessarily having one or more of the components of the exemplary electronic device 200 shown in fig. 2.
The hardware structure of the electronic device 200 may vary widely depending on configuration or performance. As shown in fig. 2, the electronic device 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Specifically, the power supply 210 is configured to provide an operating voltage for each hardware device on the electronic device 200.
Interface 230 includes at least one wired or wireless network interface for interacting with external devices.
Of course, in other examples of the adaptation of the present invention, the interface 230 may further include at least one serial-parallel conversion interface 233, at least one input-output interface 235, at least one USB interface 237, and the like, as shown in fig. 2, which is not particularly limited herein.
The memory 250 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, where the resources stored include an operating system 251, application programs 253, and data 255, and the storage mode may be transient storage or permanent storage.
The operating system 251 is used to manage and control the hardware devices and the applications 253 on the electronic device 200, enabling the central processing unit 270 to operate on and process the mass data 255 in the memory 250; it may be Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
The application 253 is a computer program that performs at least one specific task based on the operating system 251, and may include at least one module (not shown in fig. 2), each of which may respectively contain a series of computer readable instructions for the electronic device 200. For example, the living body detection apparatus may be regarded as the application 253 deployed on the electronic device.
The data 255 may be a photograph, a picture, etc. stored on a disk, and stored in the memory 250.
The central processing unit 270 may include one or more processors and is configured to communicate with the memory 250 via at least one communication bus to read the computer readable instructions stored in the memory 250, thereby operating on and processing the mass data 255 in the memory 250. For example, the living body detection method is accomplished by the central processing unit 270 reading a series of computer readable instructions stored in the memory 250.
It is to be understood that the configuration shown in fig. 2 is merely illustrative, and that the electronic device 200 may include more or fewer components than shown in fig. 2, or components different from those shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 3, in an exemplary embodiment, a living body detection method is applied to an electronic device, and a hardware structure of the electronic device may be as shown in fig. 2.
The living body detection method may be performed by the electronic device and may include the following steps:
In step 310, an image of the object to be detected captured by the binocular camera is obtained, wherein the image includes an infrared image and a visible light image.
First, the object to be detected may be the payment user of an order to be paid, a bank card user making a deposit or withdrawal, an access object passing through an access control, or a service person about to receive a service assignment; this embodiment does not specifically limit the object to be detected.
Accordingly, different objects to be detected correspond to different application scenarios: a payment user of an order to be paid corresponds to an order payment scenario, a bank card user making a deposit or withdrawal corresponds to a bank card scenario, an access object passing through an access control corresponds to an access control scenario, and a service person receiving a service corresponds to a passenger transport service scenario.
It can be understood that in any of the above application scenarios there may be prosthesis attack behavior by an attacker; for example, a criminal may impersonate a service person and use a prosthesis attack to pass identity authentication and carry passengers. The living body detection method provided by this embodiment is therefore applicable to different application scenarios, depending on the object to be detected.
Second, the image of the object to be detected may be captured by the binocular camera in real time, or it may be an image of the object to be detected stored in the electronic device in advance, that is, an image captured by the binocular camera during a historical period and read from a cache area of the electronic device; this embodiment does not limit how the image is obtained.
The image may be a video or a set of pictures; subsequent living body detection is therefore performed in units of image frames.
Finally, the binocular camera includes an infrared camera for generating an infrared image, and a visible camera for generating a visible image.
The binocular camera may be mounted to a video camera, video recorder, or other electronic device having image capturing capabilities, such as a smart phone or the like.
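Purely for illustration (the patent does not specify an implementation), grabbing one frame from each camera of such a binocular rig could look like the OpenCV sketch below; the device indices are assumptions that depend on the actual hardware, which may instead expose a vendor SDK.

    import cv2

    # Sketch: read one visible-light frame and one infrared frame.
    vis_cam = cv2.VideoCapture(0)   # assumed index of the visible-light camera
    ir_cam = cv2.VideoCapture(1)    # assumed index of the infrared camera

    ok_vis, visible_frame = vis_cam.read()    # visible light image (color)
    ok_ir, infrared_frame = ir_cam.read()     # infrared image (near-grayscale)

    if not (ok_vis and ok_ir):
        raise RuntimeError("failed to read a frame from the binocular camera")

    vis_cam.release()
    ir_cam.release()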
It will be appreciated that a video-playback prosthesis attack generally requires the attacker to play the video on the screen of an electronic device. When the infrared camera projects infrared light onto such a screen, reflection occurs, so the generated infrared image cannot contain a biological feature area.
This makes it possible to perform living body judgment of the object to be detected based on the infrared image captured by the binocular camera, i.e., to filter out video-playback prosthesis attacks.
In step 330, image physical information is extracted from the biological feature area in the image to obtain the image physical information of the biological feature area in the image.
First, the biological features of the object to be detected include, for example, the human face, eyes, mouth, hands, feet, fingerprints, and irises. Accordingly, the position of a biological feature of the object to be detected in the image constitutes the biological feature area in the image; in other words, the biological feature area indicates where the biological feature of the object to be detected is located in the image.
Second, image physical information reflects the texture and color of the image, and includes but is not limited to texture information and color information of the image.
It should be noted that, in terms of image texture, the texture details that the biological feature of a living body presents in the image differ significantly from those of a prosthesis; in terms of image color, a visible light image captured of a living body is usually a color image.
Therefore, once the image physical information of the biological feature area in the image has been extracted, living body detection of the object to be detected can be performed based on these distinctions between a living body and a prosthesis.
If the image physical information of the biological feature area in the image indicates that the object to be detected is a living body, the method proceeds to step 350 to continue living body detection of the object to be detected.
Otherwise, if the image physical information of the biological feature area in the image indicates that the object to be detected is a prosthesis, living body detection of the object is stopped, which improves living body detection efficiency.
In step 350, deep semantic feature extraction is performed on the biological feature area in the image based on a machine learning model, to obtain the deep semantic features of the biological feature area in the image.
Because the texture and color reflected by the image physical information can change greatly with the shooting angle, causing errors in living body judgment, this information alone offers limited defense against prosthesis attacks and is suitable only for simpler photo attacks. Therefore, in this embodiment, deep semantic features of the biological feature area in the image are additionally extracted based on a machine learning model, which improves the defense of living body detection against prosthesis attacks while adapting to the shooting angle.
The machine learning model is trained on a large number of positive and negative samples so that it can perform living body judgment of the object to be detected.
To allow the object to be detected to undergo living body detection in an uncooperative, free state (e.g., nodding, turning, or shaking the head), the positive and negative samples are images of living bodies and prostheses, respectively, captured by binocular cameras at different angles.
Through model training, the positive and negative samples are taken as training inputs, with the living body corresponding to each positive sample and the prosthesis corresponding to each negative sample as training ground truth, to construct a machine learning model that performs living body judgment of the object to be detected.
Specifically, in model training, the parameters of a specified mathematical model are iteratively optimized using the positive and negative samples, until the specified algorithm function constructed from those parameters satisfies a convergence condition.
The specified mathematical model includes, but is not limited to: logistic regression, support vector machines, random forests, neural networks, etc.
The specified algorithm function includes, but is not limited to: a maximum expectation function, a loss function, etc.
For example, the parameters of the specified mathematical model are randomly initialized, and the loss value of the loss function constructed from the randomly initialized parameters is computed for the current sample.
If the loss value of the loss function has not reached its minimum, the parameters of the specified mathematical model are updated, and the loss value of the loss function constructed from the updated parameters is computed for the next sample.
Iteration continues in this way until the loss value of the loss function reaches its minimum, at which point the loss function is regarded as converged; the specified mathematical model has then converged, the preset accuracy requirement is met, and iteration stops.
Otherwise, the parameters of the specified mathematical model are updated iteratively, and the loss value of the loss function constructed from the updated parameters is computed for the remaining samples, until the loss function converges.
It should be noted that if the number of iterations reaches the iteration threshold before the loss function converges, iteration is also stopped, so as to ensure model training efficiency.
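A minimal training-loop sketch of this iterative optimisation, written in PyTorch, is shown below; the model, data loader, learning rate, and thresholds are illustrative assumptions rather than values taken from the patent (labels: 1 for a positive, living body sample, 0 for a negative, prosthesis sample).

    import torch
    import torch.nn as nn

    # Sketch only: iterate until the loss is small enough or the iteration
    # threshold is reached, updating the model parameters at each step.
    def train(model, loader, max_iters=10000, loss_tol=1e-4, lr=1e-3):
        criterion = nn.CrossEntropyLoss()                  # the loss function being minimised
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        iters = 0
        while iters < max_iters:                           # iteration threshold guards efficiency
            for images, labels in loader:
                loss = criterion(model(images), labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()                           # update the parameters
                iters += 1
                if loss.item() < loss_tol or iters >= max_iters:
                    return model                           # treated as converged, or capped
        return model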
When the specified mathematical model converges and meets the preset accuracy requirement, model training is complete, and the machine learning model obtained from training therefore has the ability to extract deep semantic features from the biological feature area in the image.
Then, based on the machine learning model, deep semantic features can be extracted from the biological feature areas in the image, and the living body judgment of the object to be detected can be performed.
Optionally, the machine learning model includes, but is not limited to: convolutional neural network model, deep neural network model, residual neural network model, etc.
The deep semantic features include color features and texture features of the image; compared with the color information and texture information in the image physical information, they distinguish living bodies from prostheses more reliably, and thus improve the defense of living body detection against prosthesis attacks.
In step 370, living body judgment of the object to be detected is performed according to the deep semantic features of the biological feature area in the image.
Through this process, various types of prosthesis attacks, such as video playback, black-and-white photos, color photos, and hole masks, can be effectively resisted, while the object to be detected is allowed to undergo living body detection in an uncooperative, free state. This improves user experience, improves the accuracy of living body detection, and fully ensures the security of living body detection.
Referring to fig. 4, in an exemplary embodiment, step 330 may include the steps of:
In step 331, a visible light image containing the biological feature area is obtained from the acquired image.
It should be understood that, for a living body, the infrared image captured by the infrared camera of the binocular camera is essentially a grayscale image, while the visible light image captured by the visible light camera of the binocular camera is a color image.
Based on this, in order to filter out the black-and-white-photo type of prosthesis attack, the living body determination of the object to be detected is based on the visible light image.
Therefore, after acquiring the infrared image and the visible light image of the object to be detected captured by the binocular camera, it is first necessary to obtain the visible light image containing the biological feature area; living body determination of the object to be detected can then be performed according to whether that visible light image is a color image.
Step 333, extracting image physical information of the biological feature area in the visible light image from the biological feature area in the visible light image.
If the image physical information indicates that the visible light image is a color image, the object to be detected can be judged to be a living body, otherwise, if the image physical information indicates that the visible light image is a black-and-white image, the object to be detected is judged to be a prosthesis.
Optionally, the image physical information includes, but is not limited to: color information defined by a color histogram, and texture information defined by an LBP (Local Binary Patterns) / LPQ (Local Phase Quantization) histogram.
The process of extracting physical information of an image is described below.
Referring to fig. 5, in an exemplary embodiment, the image physical information is color information.
Accordingly, step 333 may include the steps of:
Step 3331, calculating a color histogram of the biometric region in the visible light image based on the biometric region in the visible light image.
In step 3333, the calculated color histogram is used as the color information of the biological feature area in the visible light image.
As shown in fig. 6, the color histogram (a) of a color image and the color histogram (b) of a black-and-white image have clearly different distributions, so living body judgment of the object to be detected can be performed based on the color histogram of the biological feature area in the visible light image, filtering out black-and-white-photo prosthesis attacks.
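As an illustration only (not part of the patent), a per-channel color histogram of the face region can be computed with OpenCV as sketched below; the bin count and the assumption that the region is a BGR crop are illustrative.

    import cv2
    import numpy as np

    # Sketch: normalised per-channel histogram of the biological feature
    # (face) region cropped from the visible-light image.
    def color_histogram(face_region_bgr, bins=32):
        channels = [cv2.calcHist([face_region_bgr], [c], None, [bins], [0, 256])
                    for c in range(3)]                 # B, G, R channel histograms
        hist = np.concatenate(channels).flatten()
        return hist / (hist.sum() + 1e-8)              # normalise for region size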
With this embodiment, extraction of color information is achieved, which in turn enables living body judgment of the object to be detected based on color information.
Referring to fig. 7, in another exemplary embodiment, the image physical information is texture information.
Accordingly, step 333 may include the steps of:
Step 3332 creates a color space corresponding to the biometric region in the visible light image based on the biometric region in the visible light image.
Wherein the color space corresponding to the biometric region in the visible light image is essentially a collection of colors describing the biometric region in the visible light image mathematically.
Alternatively, the color space may be constructed based on HSV parameters, or based on YCbCr parameters. The HSV parameters comprise hue (H), saturation (S), and brightness (V); the YCbCr parameters comprise the brightness (luma) of the color (Y), the blue chroma offset (Cb), and the red chroma offset (Cr).
Step 3334 extracts local binary pattern features in the spatial domain and/or local phase quantization features in the frequency domain for the color space.
The local binary pattern (LBP, local Binary Patterns) features are based on the image pixels themselves, and accurately describe texture details of a biological feature area in a visible light image, so that gray level changes of the visible light image are reflected.
The local phase quantization (LPQ, local Phase Quantization) feature is based on the transform coefficients of the image in the transform domain, and accurately describes the texture details of the biological feature region in the visible light image, so as to reflect the gradient distribution of the visible light image.
In other words, both the local binary pattern feature and the local phase quantization feature essentially analyze the texture details of the biometric region in the visible light image, and thereby define the texture information of the biometric region in the visible light image.
Step 3336, generating an LBP/LPQ histogram as texture information of the biological feature region in the visible light image according to the local binary pattern feature and/or the local phase quantization feature.
The LBP/LPQ histogram is therefore generated from the local binary pattern features and/or the local phase quantization features; by combining and complementing each other, LBP and LPQ describe the texture details of the biological feature area in the visible light image more accurately, fully ensuring the accuracy of living body detection.
With the above embodiments, extraction of texture information is achieved, which in turn enables living body judgment of the object to be detected based on texture information.
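As a hedged illustration of the spatial-domain part of this descriptor, the uniform LBP histogram of a grayscale face region can be computed with scikit-image as sketched below; the neighbourhood parameters are assumptions, and the frequency-domain LPQ histogram (built from local Fourier phase) is omitted.

    import numpy as np
    from skimage.feature import local_binary_pattern

    # Sketch: uniform LBP histogram over the grayscale face region.
    def lbp_histogram(face_region_gray, points=8, radius=1):
        lbp = local_binary_pattern(face_region_gray, points, radius, method="uniform")
        n_bins = points + 2                            # uniform patterns + one mixed bin
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        return hist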
Further, the creation process of the color space is explained as follows.
Referring to fig. 8, in an exemplary embodiment, step 3332 may include the steps of:
Step 3332a, based on the biometric region in the visible light image, acquiring HSV parameters corresponding to the biometric region in the visible light image, the HSV parameters including hue (H), saturation (S) and brightness (V).
In step 3332c, an HSV model is constructed from the acquired HSV parameters and used as the color space corresponding to the biological feature area in the visible light image.
As shown in fig. 9, the HSV model is substantially a hexagonal pyramid, and accordingly, the construction process of the HSV model includes: the boundary of the hexagonal pyramid is constructed by hue (H), the horizontal axis of the hexagonal pyramid is constructed by saturation (S), and the vertical axis of the hexagonal pyramid is constructed by brightness (V).
Thereby, the construction of the color space based on the HSV parameters is completed.
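For illustration (not the patented procedure), the HSV decomposition of the face region can be obtained with OpenCV as sketched below, assuming the region is given as a BGR crop.

    import cv2

    # Sketch: hue, saturation and value planes of the face region, i.e. the
    # quantities the hexagonal-pyramid HSV model is built from.
    def hsv_color_space(face_region_bgr):
        hsv = cv2.cvtColor(face_region_bgr, cv2.COLOR_BGR2HSV)
        hue, saturation, value = cv2.split(hsv)        # H, S, V planes
        return hue, saturation, value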
Referring to fig. 10, in another exemplary embodiment, step 3332 may include the steps of:
In step 3332b, based on the biometric region in the visible light image, YCbCr parameters corresponding to the biometric region in the visible light image are acquired, the YCbCr parameters comprising the brightness (luma) of the color (Y), the blue chroma offset (Cb), and the red chroma offset (Cr).
In step 3332d, a color space corresponding to the biological feature area in the visible light image is constructed from the acquired YCbCr parameters.
Specifically, the acquired YCbCr parameters are converted into RGB parameters, and the RGB color channels represented by those parameters are then used to construct an RGB picture, that is, the color space of the biometric region in the visible light image. The RGB color channels comprise a red channel R, a green channel G, and a blue channel B.
Thereby, the construction of the color space based on YCbCr parameters is completed.
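A small OpenCV sketch of this YCbCr-based construction is shown below for illustration; the assumption is that the face region is available as a BGR crop, and note that OpenCV orders the chroma planes as Cr, Cb.

    import cv2

    # Sketch: face region -> YCrCb parameters -> back to RGB color channels,
    # which form the color space described above.
    def ycbcr_color_space(face_region_bgr):
        ycrcb = cv2.cvtColor(face_region_bgr, cv2.COLOR_BGR2YCrCb)
        bgr_back = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)   # YCbCr parameters -> RGB parameters
        rgb = cv2.cvtColor(bgr_back, cv2.COLOR_BGR2RGB)       # R, G, B color channels
        return rgb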
Through the cooperation of the above embodiments, creation of the color space is achieved, which in turn enables extraction of texture information based on the color space.
In an exemplary embodiment, after step 330, the method as described above may further comprise the steps of:
Living body judgment of the object to be detected is performed according to the image physical information of the biological feature area in the visible light image.
Specifically, the image physical information of the biological feature area in the visible light image is input into a support vector machine classifier, and the color class of the visible light image is obtained by carrying out color class prediction on the visible light image.
First, the support vector machine classifier is trained to predict the color class of a visible light image, based on a large number of learning samples. The learning samples include visible light images that are black-and-white images and visible light images that are color images.
Second, the color classes include a color image class and a black-and-white image class.
If the predicted color class of the visible light image is the black-and-white image class, i.e., the color class indicates that the visible light image is a black-and-white image, the object to be detected is judged to be a prosthesis.
Otherwise, if the predicted color class of the visible light image is the color image class, i.e., the color class indicates that the visible light image is a color image, the object to be detected is judged to be a living body.
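For illustration, a scikit-learn sketch of such a color-class prediction is given below; train_histograms, train_labels, and face_histogram are assumed arrays of color-histogram features and labels (1 for a color image, 0 for a black-and-white image), not names from the patent.

    from sklearn.svm import SVC

    # Sketch: train an SVM on the learning samples, then predict the color
    # class of the visible-light face region.
    classifier = SVC(kernel="rbf")
    classifier.fit(train_histograms, train_labels)

    predicted_class = classifier.predict([face_histogram])[0]
    is_prosthesis = (predicted_class == 0)      # black-and-white image class -> prosthesis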
With this embodiment, living body judgment of the object to be detected based on the visible light image is achieved, i.e., black-and-white-photo prosthesis attacks are filtered out.
In an exemplary embodiment, the machine learning model is a deep neural network model that includes an input layer, a convolution layer, a connection layer, and an output layer. The convolution layer is used for feature extraction, and the connection layer is used for feature fusion.
Optionally, the deep neural network model may further include an activation layer, a pooling layer. The activation layer is used for improving the convergence speed of the deep neural network model, and the pooling layer is used for reducing the complexity of feature connection.
Optionally, the convolution layer is configured with a plurality of channels, each of which receives a different channel of the same image, which improves feature extraction accuracy.
For example, if the convolution layer is configured with three channels A1, A2, and A3, the color image may be fed to these three channels according to its RGB color channels: the portion corresponding to the red channel R is input to channel A1, the portion corresponding to the green channel G is input to channel A2, and the portion corresponding to the blue channel B is input to channel A3.
As shown in fig. 11, step 350 may include the steps of:
In step 351, the image is input from the input layer of the deep neural network model to the convolution layer.
In step 353, feature extraction is performed by the convolution layer to obtain shallow semantic features of the biological feature area in the image, which are input to the connection layer.
In step 355, feature fusion is performed by the connection layer to obtain deep semantic features of the biological feature area in the image, which are input to the output layer.
The shallow semantic features comprise shape features and spatial relationship features of the image, and the deep semantic features comprise color features and texture features of the image.
That is, shallow semantic features are obtained through the feature extraction of the convolution layer, and deep semantic features are obtained through the feature fusion of the connection layer, which means that features of different resolutions and scales in the deep neural network model are associated with one another rather than isolated; this effectively improves the accuracy of living body detection.
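Purely as an illustration, and not the patented architecture, a small PyTorch network with this input / convolution / connection / output structure might look as follows; all layer sizes are assumptions based on a 224x224 RGB face crop.

    import torch.nn as nn

    # Sketch: three-channel convolution stage (feature extraction), fully
    # connected "connection" stage (feature fusion), two-way output layer.
    class LivenessNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(                 # convolution layers
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.fuse = nn.Sequential(                 # connection layers
                nn.Flatten(), nn.Linear(32 * 56 * 56, 128), nn.ReLU(),
            )
            self.out = nn.Linear(128, 2)               # output layer: living body / prosthesis

        def forward(self, x):                          # x: (N, 3, 224, 224) face crop
            return self.out(self.fuse(self.conv(x)))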
Referring to fig. 12, in an exemplary embodiment, step 370 may include the steps of:
In step 371, classification prediction is performed on the image by an activation function classifier in the output layer.
In step 373, whether the object to be detected is a living body is judged according to the class predicted for the image.
The activation function classifier, i.e., the softmax classifier, is used to calculate the probabilities that the image belongs to the different classes.
In this embodiment, the categories of the images include: a living class and a prosthetic class.
For example, for an image, based on an activation function classifier in the output layer, the probability that the image belongs to the living body class is calculated as P1, and the probability that the image belongs to the prosthesis class is calculated as P2.
If P1> P2, that is, the image belongs to the living body category, the object to be detected is judged to be a living body.
Otherwise, if P1 < P2, i.e., the image belongs to the prosthesis class, the object to be detected is judged to be a prosthesis.
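A brief usage sketch of this comparison with PyTorch's softmax is shown below; the trained model and the preprocessed face tensor are assumed to exist and are not defined in the patent.

    import torch
    import torch.nn.functional as F

    # Sketch: turn the output-layer logits into the probabilities P1 (living
    # body) and P2 (prosthesis) and compare them.
    with torch.no_grad():
        probabilities = F.softmax(model(face_tensor), dim=1)
    p1, p2 = probabilities[0, 0].item(), probabilities[0, 1].item()
    is_living_body = p1 > p2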
With this embodiment, living body judgment of the object to be detected based on deep semantic features is achieved, i.e., prosthesis attacks using color photos, hole masks, and the like are filtered out, without relying on the cooperation of the object to be detected.
In an exemplary embodiment, after step 310, the method as described above may further comprise the steps of:
Step 320, performing region position matching between the biological feature region in the infrared image and the biological feature region in the visible light image.
It can be understood that if the infrared camera and the visible light camera of the binocular camera simultaneously photograph the same object to be detected in a free state (e.g., nodding), the region position of the biometric region in the resulting infrared image and the region position of the biometric region in the visible light image are strongly correlated.
Therefore, in this embodiment, region position matching is used to determine whether the two positions are strongly correlated, and thereby whether the object to be detected is a living body.
If the region positions do not match, i.e., the region position of the biological feature area in the infrared image and the region position of the biological feature area in the visible light image are weakly correlated, the infrared camera and the visible light camera are photographing different individuals, and the object to be detected is judged to be a prosthesis.
Otherwise, if the region positions match, i.e., the two region positions are strongly correlated, the infrared camera and the visible light camera are photographing the same individual, and the object to be detected is judged to be a living body.
It should be noted that step 320 may be performed before either of steps 330 and 350; this embodiment does not limit its placement.
The matching process of the region positions is explained below.
Referring to fig. 13, in an exemplary embodiment, step 320 may include the steps of:
Step 321, detecting the region positions of the biological feature region in the infrared image and the biological feature region in the visible light image, so as to obtain a first region position corresponding to the biological feature region in the infrared image and a second region position corresponding to the biological feature region in the visible light image.
The region position detection can be realized based on a projective geometry method of computer vision.
Step 323, calculating a correlation coefficient between the first region position and the second region position.
If the correlation coefficient exceeds the set correlation threshold, the region positions are judged to match, the object to be detected is judged to be a living body, and the subsequent living body detection steps can continue.
Otherwise, if the correlation coefficient is smaller than the set correlation threshold, the region positions do not match, and the object to be detected is judged to be a prosthesis; the subsequent living body detection steps are then stopped, which improves living body detection efficiency.
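One way this correlation check could look in code is sketched below; it is only an assumption that each region position is a face bounding box given as (x, y, width, height), and the threshold value is illustrative.

    import numpy as np

    # Sketch: Pearson correlation between the box from the infrared image and
    # the box from the visible-light image; a low value suggests the two
    # cameras are not seeing the same individual.
    def region_positions_match(first_position, second_position, correlation_threshold=0.9):
        a = np.asarray(first_position, dtype=float)
        b = np.asarray(second_position, dtype=float)
        correlation = np.corrcoef(a, b)[0, 1]
        return correlation > correlation_threshold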
Referring to fig. 14, in another exemplary embodiment, step 320 may include the steps of:
step 322, determining a first horizontal distance between the object to be detected and the vertical plane where the binocular camera is located.
Step 324, based on the infrared camera and the visible light camera in the binocular camera, obtaining a second horizontal distance between the infrared camera and the visible light camera.
In step 326, a horizontal distance difference between the biological feature area in the infrared image and the biological feature area in the visible light image is obtained from the first horizontal distance and the second horizontal distance.
As shown in fig. 15, a denotes an object to be detected, B1 denotes an infrared camera in the binocular camera, and B2 denotes a visible camera in the binocular camera. X1 represents the vertical plane of the object A to be detected, and X2 represents the vertical plane of the binocular camera.
The horizontal distance difference D is then obtained by the following formula:
D = L / Z.
Here D denotes the horizontal distance difference, i.e., the difference in abscissa between the infrared image and the visible light image on the horizontal plane, Z denotes the first horizontal distance, and L denotes the second horizontal distance.
If the horizontal distance difference D is smaller than the set distance threshold, the region positions are judged to match, the object to be detected is judged to be a living body, and the subsequent living body detection steps can continue.
Otherwise, if the horizontal distance difference D exceeds the set distance threshold, the region positions do not match, and the object to be detected is judged to be a prosthesis; the subsequent living body detection steps are then stopped, which improves living body detection efficiency.
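A one-function sketch of this check is shown below; the only assumption is that both distances are expressed in the same unit, with the threshold chosen per deployment.

    # Sketch of D = L / Z: Z is the first horizontal distance (subject to the
    # camera plane), L is the second horizontal distance (camera baseline).
    def regions_consistent(first_horizontal_distance, second_horizontal_distance,
                           distance_threshold):
        d = second_horizontal_distance / first_horizontal_distance
        return d < distance_threshold          # exceeding the threshold -> prosthesis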
Through this process, living body judgment of the object to be detected based on region position is achieved, images of prostheses are effectively filtered out, and the accuracy of living body detection is improved.
In an exemplary embodiment, the biometric region is a face region.
Accordingly, following step 310, the method as described above may further comprise the steps of:
and respectively carrying out face detection on the infrared image and the visible light image.
As shown in fig. 16, the facial features in the image comprise 68 key points, including, for example, six eye key points numbered 43-48 and twenty mouth key points numbered 49-68. Each key point is uniquely represented in the image by its coordinates (x, y).
Based on this, in this embodiment, the face detection is implemented by a face key point model.
The face key point model essentially builds an index relation for the face features in the image, so that the key points of specific face features can be located in the image through the built index relation.
Specifically, after an image of the object to be detected, i.e. the infrared image or the visible light image, is input into the face key point model, the key points of the face features in the image are marked with indexes. As shown in fig. 16, the six eye key points in the image are marked with indexes 43-48, and the twenty mouth key points are marked with indexes 49-68.
At the same time, the coordinates of the indexed key points in the image are stored correspondingly as the face features of the object to be detected, thereby constructing an index relation between the indexes and the corresponding coordinates in the image.
Then, based on this index relation, the coordinates of the key points of the face features of the object to be detected can be obtained through the indexes, and the position of the face features of the object to be detected in the image, i.e. the face feature region in the image, is determined.
If it is detected that the infrared image does not contain a face region and/or the visible light image does not contain a face region, the object to be detected is judged to be a prosthesis, and subsequent living body detection is stopped, thereby improving living body detection efficiency.
Otherwise, if it is detected that the infrared image contains a face region and the visible light image contains a face region, the object to be detected is judged to be a living body, and the process proceeds to step 330.
Based on the above process, living body judgment of the object to be detected based on the infrared image captured by the binocular camera is realized; that is, prosthesis attacks such as video replay are filtered out.
In addition, because face detection is performed with the face key point model, the recognition of face features under different facial expressions has good accuracy and stability, which fully guarantees the accuracy of living body detection.
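For illustration, the face key point detection described above can be sketched as follows using dlib's publicly available 68-landmark predictor; dlib and the model file name are assumptions for this sketch and are not part of this embodiment.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_keypoints(image_path):
    """Return {index: (x, y)} for the 68 facial key points, or None if no face
    region is found (in which case the object is judged to be a prosthesis)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    # Build the index -> coordinate relation; dlib numbers landmarks 0..67,
    # shifted here to 1..68 to match the numbering used in the text.
    return {i + 1: (shape.part(i).x, shape.part(i).y) for i in range(68)}
```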
In an exemplary embodiment, after step 370, the method as described above may further comprise the steps of:
If the object to be detected is a living body, a face recognition model is invoked to perform face recognition on the image of the object to be detected.
The following describes the face recognition process in connection with a specific application scenario.
Fig. 1 (a) is a schematic diagram of an implementation environment of an order payment application scenario. As shown in fig. 1 (a), in this application scenario, the implementation environment includes a payment user 510, a smart phone 530, and a payment server 550.
For a certain order to be paid, the payment user 510 performs face scanning through the binocular camera of the smart phone 530, so that the smart phone 530 obtains an image of the payment user 510 and then performs face recognition on the image by using a face recognition model.
The user features of the image are extracted by the face recognition model to calculate the similarity between the user features and the specified user features, and if the similarity is greater than a similarity threshold, the payment user 510 passes the authentication. Wherein the specified user features are pre-extracted by the smartphone 530 for the payment user 510 through a face recognition model.
After the payment user 510 passes the authentication, the smartphone 530 initiates an order payment request to the payment server 550 for the order to be paid, thereby completing the payment flow of the order to be paid.
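The core of this authentication step is comparing the freshly extracted user features with the pre-enrolled specified user features against a similarity threshold. A minimal sketch follows, assuming cosine similarity as the measure and an example threshold; this embodiment does not fix a particular similarity measure.

```python
import numpy as np

def verify(user_feature, enrolled_feature, threshold=0.8):
    """Return True if the extracted feature is similar enough to the
    pre-enrolled specified feature for the user to pass authentication."""
    a = np.asarray(user_feature, dtype=float)
    b = np.asarray(enrolled_feature, dtype=float)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity > threshold
```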
Fig. 1 (b) is a schematic diagram of an implementation environment of an access control application scenario. As shown in fig. 1 (b), the implementation environment includes a reception apparatus 610, an identification server 630, and an entrance guard control apparatus 650.
The reception device 610 is provided with a binocular camera to take a picture of the face of the incoming and outgoing object 670, and sends the obtained image of the incoming and outgoing object 670 to the recognition server 630 to perform face recognition. In this application scenario, the access object 670 includes a worker and a visitor.
The recognition server 630 extracts the person features of the image through the face recognition model, calculates the similarity between those person features and a plurality of specified person features, obtains the specified person feature with the maximum similarity, and then recognizes the person identity associated with that specified person feature as the identity of the access object 670, thereby completing the identity recognition of the access object 670. The specified person features are extracted in advance for the access object 670 by the recognition server 630 through the face recognition model.
After the identification of the access object 670 is completed, the identification server 630 sends an access authorization instruction to the access control device 650 for the access object 670, so that the access control device 650 configures corresponding access rights for the access object 670 according to the access authorization instruction, and further the access object 670 controls the access gates of the designated working area to execute the release action by virtue of the access rights.
Of course, in different application scenarios, flexible deployment may be performed according to actual application requirements, for example, the identification server 630 and the access control device 650 may be deployed as the same server, or the reception device 610 and the access control device 650 may be deployed on the same server, which is not limited in this application scenario.
Fig. 1 (c) is a schematic diagram of an implementation environment of a passenger service application scenario. As shown in fig. 1 (c), in this application scenario, the implementation environment includes a service person 710, a service terminal 730, and an authentication server 750. In this application scenario, the attendant 710 is a passenger driver.
The service terminal 730 installed in the vehicle is provided with a binocular camera to take a picture of the service personnel 710, and sends the obtained image of the service personnel 710 to the authentication server 750 for face recognition.
The authentication server 750 extracts the person feature of the image through the face recognition model to calculate the similarity between the person feature and the designated person feature, and if the similarity is greater than the similarity threshold, the service person 710 passes the identity authentication. Wherein, the designated person feature is previously extracted for the service person 710 by the service terminal 730 through the face recognition model.
After the service person 710 passes the identity authentication, the service terminal 730 may distribute a service instruction to the service person 710, so that the service person, i.e. the passenger driver, can reach the destination to pick up the passenger as directed by the service instruction.
In the above three application scenarios, the living body detection device can serve as a front-end module for face recognition.
As shown in fig. 17, by performing steps 801 to 804, the living body determination of the object to be detected is performed a plurality of times based on face detection, region matching detection, image physical information detection, and deep semantic feature detection, respectively.
Therefore, the living body detection device can accurately judge whether an object to be detected is a living body, thereby defending against many different types of prosthesis attacks. This not only fully ensures the security of identity verification/identity recognition, but also effectively reduces the workload and traffic pressure of subsequent face recognition, better facilitating various face recognition tasks.
The following are device embodiments of the present invention, which can be used to perform the living body detection method of the present invention. For details not disclosed in the device embodiments, please refer to the method embodiments of the living body detection method of the present invention.
Referring to fig. 18, in an exemplary embodiment, a living body detection device 900 includes, but is not limited to: an image acquisition module 910, an image physical information extraction module 930, a deep semantic feature extraction module 950, and an object living body judgment module 970.
The image acquisition module 910 is configured to acquire an image of an object to be detected captured under the binocular camera, where the image includes an infrared image and a visible light image.
The image physical information extraction module 930 is configured to extract image physical information of a biometric area in the image, to obtain image physical information of the biometric area in the image, where the biometric area is used to indicate a position of a biometric of the object to be detected in the image.
The deep semantic feature extraction module 950 is configured to, if the image physical information of the biometric region in the image indicates that the object to be detected is a living body, perform deep semantic feature extraction on the biometric region in the image based on a machine learning model, to obtain deep semantic features of the biometric region in the image.
An object living body judging module 970 is configured to perform living body judgment of the object to be detected according to deep semantic features of a biological feature region in the image.
In an exemplary embodiment, the image physical information extraction module 930 includes, but is not limited to: a visible light image acquisition unit and an image physical information extraction unit.
And the visible light image acquisition unit is used for acquiring a visible light image containing the biological characteristic area according to the acquired image.
And the image physical information extraction unit is used for extracting the image physical information of the biological characteristic region in the visible light image from the biological characteristic region in the visible light image.
In an exemplary embodiment, the image physical information is color information.
Accordingly, the image physical information extraction unit includes, but is not limited to: the color histogram calculation subunit and the color information definition subunit.
The color histogram calculation subunit is used for calculating the color histogram of the biological characteristic region in the visible light image based on the biological characteristic region in the visible light image.
And the color information definition subunit is used for taking the calculated color histogram as the color information of the biological feature area in the visible light image.
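An illustrative sketch of the color histogram computation described above, assuming OpenCV and a concatenated per-channel B/G/R histogram; the bin count and the normalisation are assumptions of this sketch.

```python
import cv2
import numpy as np

def color_histogram(face_bgr, bins=32):
    """Per-channel color histogram of the biometric (face) region in the
    visible light image, used as its color information."""
    hist = []
    for channel in range(3):                      # B, G, R channels
        h = cv2.calcHist([face_bgr], [channel], None, [bins], [0, 256])
        hist.append(cv2.normalize(h, h).flatten())
    return np.concatenate(hist)                   # concatenated feature vector
```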
In an exemplary embodiment, the image physical information is texture information.
Accordingly, the image physical information extraction unit includes, but is not limited to: a color space creation subunit, a local feature extraction subunit, and a texture information definition subunit.
Wherein the color space creation subunit is configured to create a color space corresponding to a biometric region in the visible light image based on the biometric region in the visible light image.
A local feature extraction subunit, configured to extract local binary pattern features in a spatial domain and/or extract local phase quantization features in a frequency domain for the color space.
And the texture information definition subunit is used for generating an LBP/LPQ histogram as texture information of a biological characteristic area in the visible light image according to the local binary pattern characteristic and/or the local phase quantization characteristic.
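For illustration, the spatial-domain LBP histogram can be computed as in the following sketch, assuming scikit-image's "uniform" LBP variant and example parameters; the frequency-domain LPQ branch mentioned above is omitted here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face_gray, radius=1, n_points=8):
    """Local binary pattern (LBP) histogram of the biometric region,
    used as its texture information."""
    lbp = local_binary_pattern(face_gray, n_points, radius, method="uniform")
    n_bins = n_points + 2                          # uniform patterns + one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```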
In an exemplary embodiment, the color space creation subunit includes, but is not limited to: a first parameter acquisition subunit and a first construction subunit.
The first parameter acquisition subunit is configured to acquire, based on the biometric region in the visible light image, an HSV parameter corresponding to the biometric region in the visible light image, where the HSV parameter includes hue (H), saturation (S), and brightness (V).
And the first construction subunit is used for constructing an HSV model according to the acquired HSV parameters and taking the HSV model as a color space corresponding to the biological feature area in the visible light image.
In an exemplary embodiment, the color space creation subunit includes, but is not limited to: a second parameter acquisition subunit and a second construction subunit.
The second parameter obtaining subunit is configured to obtain, based on the biological feature area in the visible light image, YCbCr parameters corresponding to the biological feature area in the visible light image, where the YCbCr parameters include the luma component (Y), the blue-difference chroma component (Cb), and the red-difference chroma component (Cr).
And the second construction subunit is used for constructing a color space corresponding to the biological feature area in the visible light image according to the acquired YCbCr parameters.
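An illustrative sketch of creating the HSV and YCbCr color spaces for the biometric region, assuming OpenCV's color conversions; note that OpenCV's conversion code is COLOR_BGR2YCrCb (channel order Y, Cr, Cb), and treating it as the YCbCr space of the text is a small assumption of this sketch.

```python
import cv2

def build_color_spaces(face_bgr):
    """Create the HSV and YCbCr representations of the biometric region
    in the visible light image."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)      # hue, saturation, value
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)  # luma and chroma components
    return hsv, ycrcb
```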
In an exemplary embodiment, the apparatus 900 further includes, but is not limited to: a second object living body determination module.
The second object living body judging module is used for judging the living body of the object to be detected according to the image physical information of the biological feature area in the visible light image.
In an exemplary embodiment, the second object living body determination module includes, but is not limited to: a color category prediction unit, a first object living body determination unit, and a second object living body determination unit.
The color type prediction unit is used for inputting the image physical information of the biological feature area in the visible light image into the support vector machine classifier, and performing color type prediction on the visible light image to obtain the color type of the visible light image.
A first object living body determination unit configured to determine that the object to be detected is a prosthesis if a color class of the visible light image indicates that the visible light image is a black-and-white image.
A second object living body determination unit configured to determine that the object to be detected is a living body if the color class of the visible light image indicates that the visible light image is a color image.
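A minimal sketch of the color category prediction, assuming a scikit-learn support vector machine trained on color-information vectors (for example, the histograms above) labelled as black-and-white or color images; the linear kernel and label encoding are assumptions of this sketch.

```python
import numpy as np
from sklearn.svm import SVC

def train_color_classifier(color_features, labels):
    """Train the SVM used for color-class prediction.
    labels: 0 = black-and-white image, 1 = color image."""
    clf = SVC(kernel="linear")
    clf.fit(np.asarray(color_features), np.asarray(labels))
    return clf

def is_living_by_color(clf, color_feature):
    """Color prediction -> candidate living body; black-and-white -> prosthesis."""
    return int(clf.predict([color_feature])[0]) == 1
```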
In an exemplary embodiment, the machine learning model is a deep neural network model that includes an input layer, a convolution layer, a connection layer, and an output layer.
Accordingly, the deep semantic feature extraction module 950 includes, but is not limited to: an image input unit, a feature extraction unit and a feature fusion unit.
The image input unit is used for inputting the image from the input layer in the deep neural network model to the convolution layer.
And the feature extraction unit is used for carrying out feature extraction by utilizing the convolution layer to obtain shallow semantic features of the biological feature area in the image, and inputting the shallow semantic features into the connection layer.
And the feature fusion unit is used for carrying out feature fusion by utilizing the connection layer to obtain deep semantic features of the biological feature area in the image, and inputting the deep semantic features into the output layer.
In an exemplary embodiment, the object living body judgment module 970 includes, but is not limited to: a classification prediction unit and a third object living body determination unit.
And the classification prediction unit is used for performing classification prediction on the image by using an activation function classifier in the output layer.
And a third object living body judging unit for judging whether the object to be detected is a living body according to the type predicted by the image.
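An illustrative sketch of such a deep neural network model, written here in PyTorch with assumed layer sizes and input resolution: the convolution layers yield shallow semantic features, the connection (fully connected) layer fuses them into deep semantic features, and the output layer applies an activation-function classifier.

```python
import torch
import torch.nn as nn

class LivenessNet(nn.Module):
    """Minimal sketch: input layer -> convolution layers -> connection layer
    (feature fusion) -> output layer with activation-function classifier."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(                      # convolution layers
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 28 * 28, 128)          # connection layer (feature fusion)
        self.out = nn.Linear(128, 2)                    # output layer: living body / prosthesis

    def forward(self, x):                               # x: (N, 3, 112, 112) biometric region
        shallow = self.conv(x)                          # shallow semantic features
        deep = torch.relu(self.fc(shallow.flatten(1)))  # deep semantic features
        return torch.softmax(self.out(deep), dim=1)     # activation-function classifier

# Usage: class probabilities for a single (dummy) face crop.
probs = LivenessNet()(torch.zeros(1, 3, 112, 112))
```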
In an exemplary embodiment, the apparatus 900 further includes, but is not limited to: a region position matching module and a fourth object living body determination module.
The region position matching module is used for performing region position matching between the biological characteristic region in the infrared image and the biological characteristic region in the visible light image.
And the fourth object living body judging module is used for judging that the object to be detected is a prosthesis if the area positions are not matched.
In an exemplary embodiment, the region location matching module includes, but is not limited to: an area position detection unit, a correlation coefficient calculation unit, and a fifth object living body determination unit.
The region position detection unit is used for detecting the region positions of the biological feature region in the infrared image and the biological feature region in the visible light image respectively to obtain a first region position corresponding to the biological feature region in the infrared image and a second region position corresponding to the biological feature region in the visible light image.
And the correlation coefficient calculation unit is used for calculating the correlation coefficient of the first area position and the second area position.
And a fifth object living body determination unit configured to determine that the region positions match if the correlation coefficient exceeds a set correlation threshold.
In an exemplary embodiment, the region location matching module includes, but is not limited to: a first horizontal distance determination unit, a second horizontal distance determination unit, a horizontal distance difference acquisition unit, and a sixth object living body determination unit.
The first horizontal distance determining unit is used for determining a first horizontal distance between the object to be detected and the vertical plane where the binocular camera is located.
And the second horizontal distance determining unit is used for acquiring a second horizontal distance between the infrared camera and the visible light camera based on the infrared camera and the visible light camera in the binocular camera.
A horizontal distance difference acquiring unit configured to acquire a horizontal distance difference between a biometric region in the infrared image and a biometric region in the visible light image according to the first horizontal distance and the second horizontal distance.
A sixth object living body determination unit configured to determine that the region positions do not match if the horizontal distance difference exceeds a set distance threshold.
In an exemplary embodiment, the biometric region is a face region.
Accordingly, the apparatus 900 further includes, but is not limited to: a face detection module and a seventh object living body determination module.
The face detection module is used for respectively carrying out face detection on the infrared image and the visible light image.
And the seventh object living body judging module is used for judging that the object to be detected is a prosthesis if the infrared image does not contain a face area and/or the visible light image does not contain a face area.
It should be noted that, in the living body detection device provided in the foregoing embodiments, only the division of the functional modules is used as an example to illustrate the living body detection process; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the living body detection device may be divided into different functional modules to complete all or part of the functions described above.
In addition, the embodiments of the living body detection apparatus and the living body detection method provided in the foregoing embodiments belong to the same concept, and the specific manner in which each module performs the operation has been described in detail in the method embodiment, which is not described herein again.
In an exemplary embodiment, an access control system to which a living body detection method is applied includes a reception apparatus, an identification server, and an access control apparatus.
The reception equipment is used for acquiring images of an in-out object by using the binocular camera, wherein the images comprise infrared images and visible light images.
The identification server comprises a living body detection device which is used for respectively extracting image physical information and deep semantic features of a biological feature area in the image of the access object and judging whether the access object is living body or not according to the extracted image physical information and the deep semantic features.
When the access object is a living body, the identification server performs identity identification on the access object, so that the access control equipment configures access rights for the access object successfully completing the identity identification, and the access object controls the access gate of the designated area to execute release action according to the configured access rights.
In an exemplary embodiment, a payment system to which a living body detection method is applied includes a payment terminal and a payment server.
The payment terminal is used for acquiring images of a payment user by using the binocular camera, wherein the images comprise infrared images and visible light images.
The payment terminal comprises a living body detection device which is used for respectively extracting image physical information and deep semantic features from biological feature areas in the image of the payment user and judging whether the payment user is living body or not according to the extracted image physical information and the deep semantic features.
And when the payment user is living, the payment terminal performs identity verification on the payment user so as to initiate a payment request to the payment server when the payment user passes the identity verification.
In an exemplary embodiment, a service system to which a living body detection method is applied includes a service terminal and an authentication server.
The service terminal is used for acquiring images of service personnel by using the binocular camera, wherein the images comprise infrared images and visible light images.
The service terminal comprises a living body detection device which is used for respectively extracting image physical information and deep semantic features of a biological feature area in an image of the service personnel and judging whether the service personnel is living or not according to the extracted image physical information and the deep semantic features.
When the service personnel is a living body, the service terminal requests the authentication server to carry out identity authentication on the service personnel, and distributes service business instructions to the service personnel passing the identity authentication.
Referring to fig. 19, in an exemplary embodiment, an electronic device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.
Wherein the memory 1002 has stored thereon computer readable instructions, the processor 1001 reads the computer readable instructions stored in the memory 1002 via the communication bus 1003.
The computer readable instructions, when executed by the processor 1001, implement the living body detection method in the above embodiments.
In an exemplary embodiment, a storage medium has stored thereon a computer program which, when executed by a processor, implements the living body detection method in the above embodiments.
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the embodiments of the present invention, and those skilled in the art can easily make corresponding variations or modifications according to the main concept and spirit of the present invention, so that the protection scope of the present invention shall be defined by the claims.

Claims (8)

1. A living body detecting method, characterized by comprising:
acquiring an image of an object to be detected, which is shot under a binocular camera, wherein the image comprises an infrared image and a visible light image;
acquiring a visible light image containing a biological feature area according to the acquired image;
calculating a color histogram of a biological feature region in the visible light image based on the biological feature region in the visible light image;
taking the calculated color histogram as color information of a biological feature area in the visible light image, wherein the biological feature area is used for indicating the position of the biological feature of the object to be detected in the image;
inputting color information of a biological feature area in the visible light image into a support vector machine classifier, and performing color category prediction on the visible light image to obtain a color category of the visible light image;
if the color type of the visible light image indicates that the visible light image is a black-and-white image, judging that the object to be detected is a prosthesis;
if the color type of the visible light image indicates that the visible light image is a color image, judging that the object to be detected is a living body;
if the color information of the biological feature area in the image indicates that the object to be detected is a living body, carrying out deep semantic feature extraction on the biological feature area in the image based on a machine learning model to obtain deep semantic features of the biological feature area in the image;
and carrying out living body judgment of the object to be detected according to the deep semantic features of the biological feature area in the image.
2. The method of claim 1, wherein the machine learning model is a deep neural network model comprising an input layer, a convolutional layer, a connection layer, and an output layer;
the performing deep semantic feature extraction on the biological feature area in the image based on the machine learning model to obtain the deep semantic features of the biological feature area in the image comprises the following steps:
inputting the image from an input layer in the deep neural network model to the convolution layer;
extracting features by using the convolution layer to obtain shallow semantic features of a biological feature area in the image, and inputting the shallow semantic features into the connection layer;
and carrying out feature fusion by utilizing the connecting layer to obtain deep semantic features of the biological feature area in the image, and inputting the deep semantic features into the output layer.
3. The method according to claim 2, wherein the performing the living body determination of the object to be detected based on the deep semantic features of the biometric region in the image comprises:
using an activation function classifier in the output layer to classify and predict the image;
and judging whether the object to be detected is a living body or not according to the type predicted by the image.
4. The method of claim 1, wherein after the capturing the image of the object to be detected taken under the binocular camera, the method further comprises:
performing region position matching between a biological feature region in the infrared image and a biological feature region in the visible light image;
and if the region positions are not matched, judging that the object to be detected is a prosthesis.
5. A living body detecting device, characterized by comprising:
the image acquisition module is used for acquiring an image of an object to be detected, which is shot under the binocular camera, wherein the image comprises an infrared image and a visible light image;
The image physical information extraction module is used for extracting image physical information of a biological feature area in the image to obtain image physical information of the biological feature area in the image, wherein the biological feature area is used for indicating the position of the biological feature of the object to be detected in the image;
The deep semantic feature extraction module is used for extracting deep semantic features of the biological feature area in the image based on a machine learning model if the image physical information of the biological feature area in the image indicates that the object to be detected is a living body, so as to obtain the deep semantic features of the biological feature area in the image;
the object living body judging module is used for judging the living body of the object to be detected according to the deep semantic features of the biological feature area in the image;
wherein, the image physical information is color information, the image physical information extraction module includes:
A visible light image acquisition unit configured to acquire a visible light image including the biometric region from the acquired image;
A color histogram calculation subunit configured to calculate a color histogram of a biological feature region in the visible light image based on the biological feature region in the visible light image;
A color information definition subunit, configured to use the calculated color histogram as color information of a biometric area in the visible light image;
the apparatus further comprises:
The color type prediction unit is used for inputting the color information of the biological feature area in the visible light image into a support vector machine classifier, and performing color type prediction on the visible light image to obtain the color type of the visible light image;
A first object living body determination unit configured to determine that the object to be detected is a prosthesis if a color class of the visible light image indicates that the visible light image is a black-and-white image;
A second object living body determination unit configured to determine that the object to be detected is a living body if the color class of the visible light image indicates that the visible light image is a color image.
6. A service system to which a living body detection method is applied, characterized in that the service system includes a service terminal and an authentication server, wherein,
The service terminal is used for acquiring images of service personnel by using the binocular camera, wherein the images comprise infrared images and visible light images;
The service terminal includes the living body detecting device according to claim 5 for judging whether the service person is living or not based on the image of the service person;
when the service personnel is a living body, the service terminal requests the authentication server to carry out identity authentication on the service personnel, and distributes service business instructions to the service personnel passing the identity authentication.
7. An electronic device, comprising:
A processor; and
A memory having stored thereon computer readable instructions which, when executed by the processor, implement the in-vivo detection method of any one of claims 1 to 4.
8. A storage medium having stored thereon a computer program, which when executed by a processor implements the living body detection method according to any one of claims 1 to 4.
CN201910217452.8A 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method Active CN110163078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910217452.8A CN110163078B (en) 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910217452.8A CN110163078B (en) 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method

Publications (2)

Publication Number Publication Date
CN110163078A CN110163078A (en) 2019-08-23
CN110163078B true CN110163078B (en) 2024-08-02

Family

ID=67638988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910217452.8A Active CN110163078B (en) 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method

Country Status (1)

Country Link
CN (1) CN110163078B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555930B (en) * 2019-08-30 2021-03-26 北京市商汤科技开发有限公司 Door lock control method and device, electronic equipment and storage medium
CN110532957B (en) * 2019-08-30 2021-05-07 北京市商汤科技开发有限公司 Face recognition method and device, electronic equipment and storage medium
CN110519061A (en) * 2019-09-02 2019-11-29 国网电子商务有限公司 A kind of identity identifying method based on biological characteristic, equipment and system
CN110781770B (en) * 2019-10-08 2022-05-06 高新兴科技集团股份有限公司 Living body detection method, device and equipment based on face recognition
CN112651268B (en) * 2019-10-11 2024-05-28 北京眼神智能科技有限公司 Method and device for eliminating black-and-white photo in living body detection and electronic equipment
CN112883762A (en) * 2019-11-29 2021-06-01 广州慧睿思通科技股份有限公司 Living body detection method, device, system and storage medium
CN111191527B (en) * 2019-12-16 2024-03-12 北京迈格威科技有限公司 Attribute identification method, attribute identification device, electronic equipment and readable storage medium
CN111222425A (en) * 2019-12-26 2020-06-02 新绎健康科技有限公司 Method and device for positioning facial features
CN111160299A (en) * 2019-12-31 2020-05-15 上海依图网络科技有限公司 Living body identification method and device
CN111209870A (en) * 2020-01-09 2020-05-29 杭州涂鸦信息技术有限公司 Binocular living body camera rapid registration method, system and device thereof
CN111401258B (en) * 2020-03-18 2024-01-30 腾讯科技(深圳)有限公司 Living body detection method and device based on artificial intelligence
CN111582045B (en) * 2020-04-15 2024-05-10 芯算一体(深圳)科技有限公司 Living body detection method and device and electronic equipment
CN111582155B (en) * 2020-05-07 2024-02-09 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN111666878B (en) * 2020-06-05 2023-08-29 腾讯科技(深圳)有限公司 Object detection method and device
CN111951243A (en) * 2020-08-11 2020-11-17 华北电力科学研究院有限责任公司 Method and device for monitoring linear variable differential transformer
CN112345080A (en) * 2020-10-30 2021-02-09 华北电力科学研究院有限责任公司 Temperature monitoring method and system for linear variable differential transformer of thermal power generating unit
CN114627518A (en) * 2020-12-14 2022-06-14 阿里巴巴集团控股有限公司 Data processing method, data processing device, computer readable storage medium and processor
CN112801057B (en) * 2021-04-02 2021-07-13 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113422982B (en) * 2021-08-23 2021-12-14 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372601A (en) * 2016-08-31 2017-02-01 上海依图网络科技有限公司 In vivo detection method based on infrared visible binocular image and device
CN107862299A (en) * 2017-11-28 2018-03-30 电子科技大学 A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2009107237A1 (en) * 2008-02-29 2011-06-30 グローリー株式会社 Biometric authentication device
CN108985134B (en) * 2017-06-01 2021-04-16 重庆中科云从科技有限公司 Face living body detection and face brushing transaction method and system based on binocular camera
CN108764071B (en) * 2018-05-11 2021-11-12 四川大学 Real face detection method and device based on infrared and visible light images
CN108875618A (en) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 A kind of human face in-vivo detection method, system and device
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109492551B (en) * 2018-10-25 2023-03-24 腾讯科技(深圳)有限公司 Living body detection method and device and related system applying living body detection method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372601A (en) * 2016-08-31 2017-02-01 上海依图网络科技有限公司 In vivo detection method based on infrared visible binocular image and device
CN107862299A (en) * 2017-11-28 2018-03-30 电子科技大学 A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera

Also Published As

Publication number Publication date
CN110163078A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110163078B (en) Living body detection method, living body detection device and service system applying living body detection method
US11288504B2 (en) Iris liveness detection for mobile devices
CN107766786B (en) Activity test method and activity test computing device
CN109446981B (en) Face living body detection and identity authentication method and device
US11941918B2 (en) Extracting information from images
US20210082136A1 (en) Extracting information from images
CN111274916B (en) Face recognition method and face recognition device
WO2019128507A1 (en) Image processing method and apparatus, storage medium and electronic device
US20170262472A1 (en) Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
CN109492550B (en) Living body detection method, living body detection device and related system applying living body detection method
US10885171B2 (en) Authentication verification using soft biometric traits
CN112052830B (en) Method, device and computer storage medium for face detection
CN112052831A (en) Face detection method, device and computer storage medium
WO2022222575A1 (en) Method and system for target recognition
WO2022222569A1 (en) Target discrimation method and system
CN111767879A (en) Living body detection method
CN113205002B (en) Low-definition face recognition method, device, equipment and medium for unlimited video monitoring
CN114387548A (en) Video and liveness detection method, system, device, storage medium and program product
CN116453232A (en) Face living body detection method, training method and device of face living body detection model
Chen et al. A dataset and benchmark towards multi-modal face anti-spoofing under surveillance scenarios
CN112818899B (en) Face image processing method, device, computer equipment and storage medium
Chen Design and simulation of AI remote terminal user identity recognition system based on reinforcement learning
CN115830720A (en) Living body detection method, living body detection device, computer equipment and storage medium
Kakran et al. Identification and Recognition of face and number Plate for Autonomous and Secure Car Parking
Huang et al. Dual fusion paired environmental background and face region for face anti-spoofing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TG01 Patent term adjustment