
CN110163078B - Living body detection method, living body detection device and service system applying living body detection method - Google Patents


Info

Publication number
CN110163078B
Authority
CN
China
Prior art keywords
image
visible light
living body
biological feature
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910217452.8A
Other languages
Chinese (zh)
Other versions
CN110163078A (en)
Inventor
王智慧
吴迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910217452.8A priority Critical patent/CN110163078B/en
Publication of CN110163078A publication Critical patent/CN110163078A/en
Application granted granted Critical
Publication of CN110163078B publication Critical patent/CN110163078B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06V40/168 Human faces: Feature extraction; Face representation
    • G06V40/172 Human faces: Classification, e.g. identification
    • G06V40/174 Facial expression recognition
    • G06V40/45 Spoof detection: Detection of the body part being alive
    • G07C9/37 Individual registration on entry or exit, in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C9/38 Individual registration on entry or exit, with central registration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a liveness detection method comprising: acquiring an image of an object to be detected captured by a binocular camera, the image including an infrared image and a visible light image; extracting image physical information from a biometric region in the image, the biometric region indicating the position of the object's biometric feature in the image; if the image physical information of the biometric region indicates that the object is alive, performing deep semantic feature extraction on the biometric region based on a machine learning model to obtain the deep semantic features of the biometric region; and judging whether the object is alive according to those deep semantic features. The invention addresses the prior-art problem of liveness detection defending poorly against prosthesis attacks.

Description

Liveness detection method, device, and service system using the liveness detection method

Technical Field

The present invention relates to the field of computers, and in particular to a liveness detection method, a liveness detection device, and a service system using the liveness detection method.

Background

With the development of biometric recognition technology, biometrics are widely used, for example in face-scan payment, face recognition in video surveillance, and fingerprint and iris recognition for access-control authorization. Biometric recognition therefore faces a variety of threats, such as attackers presenting forged faces, fingerprints, or irises for recognition.

Existing liveness detection solutions are mainly based on human-computer interaction: the object to be detected must cooperate by performing prescribed actions such as nodding, blinking, or smiling, and liveness is judged by analyzing those actions.

The inventors found that such solutions not only place high demands on the object to be detected, easily leading to a poor user experience, but also defend poorly against prosthesis attacks.

Summary of the Invention

Embodiments of the present invention provide a liveness detection method, a device, and an access control system, payment system, service system, electronic device, and storage medium using the liveness detection method, which can solve the problem that liveness detection defends poorly against prosthesis attacks.

The technical solution adopted by the present invention is as follows:

According to one aspect of an embodiment of the present invention, a liveness detection method includes: acquiring an image of an object to be detected captured by a binocular camera, the image including an infrared image and a visible light image; extracting image physical information from the biometric region in the image to obtain the image physical information of the biometric region, where the biometric region indicates the position of the object's biometric feature in the image; if the image physical information of the biometric region indicates that the object is alive, performing deep semantic feature extraction on the biometric region based on a machine learning model to obtain the deep semantic features of the biometric region; and determining whether the object is alive based on the deep semantic features of the biometric region.
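The method above is a two-stage gate: a cheap image-physical check runs first, and the machine learning stage is invoked only if that check already points to a live subject. A minimal sketch of this control flow; the helper functions (`detect_region`, `physical_check`, `deep_feature_check`) are assumptions supplied by the caller, not part of the patent text:

```python
def liveness_pipeline(ir_image, vis_image, detect_region,
                      physical_check, deep_feature_check):
    """Two-stage liveness gate: run the ML stage only if the cheap
    image-physical check already points to a live subject."""
    ir_region = detect_region(ir_image)
    vis_region = detect_region(vis_image)
    if ir_region is None or vis_region is None:
        # No biometric region in one of the views: judged a prosthesis.
        return False
    if not physical_check(ir_region, vis_region):
        # Physical information (e.g. color distribution) rules out a live body.
        return False
    # Deep semantic features from a machine learning model make the final call.
    return deep_feature_check(ir_region, vis_region)
```

Staging the checks this way means the (relatively expensive) model only runs on candidates that have already survived the physical-information filter.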

In an exemplary embodiment, performing region position matching between the biometric region in the infrared image and the biometric region in the visible light image includes: performing region position detection on the biometric region in the infrared image and on the biometric region in the visible light image, respectively, to obtain a first region position corresponding to the biometric region in the infrared image and a second region position corresponding to the biometric region in the visible light image; calculating a correlation coefficient between the first region position and the second region position; and determining that the region positions match if the correlation coefficient exceeds a set correlation threshold.
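The patent text does not pin down the correlation measure, so the sketch below substitutes intersection-over-union of the two detected boxes as one plausible stand-in; the `(x1, y1, x2, y2)` box format and the 0.5 threshold are assumptions of this sketch:

```python
def box_iou(a, b):
    """Overlap ratio (intersection over union) of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def positions_match(ir_box, vis_box, threshold=0.5):
    """Declare the two region positions matched when their overlap
    exceeds the set threshold (a stand-in for the correlation test)."""
    return box_iou(ir_box, vis_box) > threshold
```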

In an exemplary embodiment, performing region position matching between the biometric region in the infrared image and the biometric region in the visible light image includes: determining a first horizontal distance between the object to be detected and the vertical plane in which the binocular camera lies; obtaining, from the infrared camera and the visible light camera of the binocular camera, a second horizontal distance between the infrared camera and the visible light camera; obtaining, from the first horizontal distance and the second horizontal distance, the horizontal distance difference between the biometric region in the infrared image and the biometric region in the visible light image; and determining that the region positions do not match if the horizontal distance difference exceeds a set distance threshold.
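Under a pinhole camera model, the two distances above predict the horizontal pixel offset (disparity) between the two views as d = f * B / Z, where B is the camera baseline (second horizontal distance) and Z the subject distance (first horizontal distance). A sketch under that assumption; the focal length in pixels and the tolerance are illustrative values, not figures given in the patent:

```python
def expected_disparity(baseline_m, subject_distance_m, focal_px):
    """Pinhole-model pixel offset between the two views: d = f * B / Z."""
    return focal_px * baseline_m / subject_distance_m

def regions_consistent(ir_x, vis_x, baseline_m, subject_distance_m,
                       focal_px, tol_px=20.0):
    """Flag a mismatch when the observed horizontal offset of the two
    biometric regions strays too far from what the geometry predicts."""
    observed = abs(ir_x - vis_x)
    predicted = expected_disparity(baseline_m, subject_distance_m, focal_px)
    return abs(observed - predicted) <= tol_px
```

A region pair whose offset disagrees with the geometry (e.g. a photo held close to only one lens) fails this consistency check.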

In an exemplary embodiment, the biometric region is a face region. After acquiring the image of the object to be detected captured by the binocular camera, the method further includes: performing face detection on the infrared image and on the visible light image, respectively; and if it is detected that the infrared image contains no face region and/or the visible light image contains no face region, determining that the object to be detected is a prosthesis.
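This rule exploits the fact that many flat spoofing media fail to register in one of the two spectra (a screen replay, for instance, typically leaves no face in the infrared view). A literal encoding of the rule, with face detection itself left to the caller:

```python
def prosthesis_by_absence(faces_ir, faces_vis):
    """Judge the subject a prosthesis when either view yields no face box.
    `faces_ir` / `faces_vis` are the detection results for the infrared
    and visible-light images (lists of face boxes, possibly empty)."""
    return len(faces_ir) == 0 or len(faces_vis) == 0
```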

According to one aspect of an embodiment of the present invention, a liveness detection device includes: an image acquisition module, configured to acquire an image of an object to be detected captured by a binocular camera, the image including an infrared image and a visible light image; an image physical information extraction module, configured to extract image physical information from the biometric region in the image, where the biometric region indicates the position of the object's biometric feature in the image; a deep semantic feature extraction module, configured to, if the image physical information of the biometric region indicates that the object is alive, perform deep semantic feature extraction on the biometric region based on a machine learning model to obtain the deep semantic features of the biometric region; and an object liveness determination module, configured to determine whether the object is alive based on the deep semantic features of the biometric region.

According to one aspect of an embodiment of the present invention, an access control system using the liveness detection method includes a reception device, a recognition electronic device, and an access control device. The reception device collects images of entering and exiting subjects with a binocular camera, the images including infrared images and visible light images. The recognition electronic device includes a liveness detection device, which extracts image physical information and deep semantic features from the biometric regions in those images and judges from them whether the subject is alive. When the subject is alive, the recognition electronic device identifies the subject, so that the access control device can configure access rights for successfully identified subjects, allowing each such subject to operate the access gate of the designated area, which performs the release action according to the configured rights.

According to one aspect of an embodiment of the present invention, a payment system using the liveness detection method includes a payment terminal and a payment electronic device. The payment terminal collects images of the paying user with a binocular camera, the images including infrared images and visible light images. The payment terminal includes a liveness detection device, which extracts image physical information and deep semantic features from the biometric regions in those images and judges from them whether the paying user is alive. When the paying user is alive, the payment terminal authenticates the user and, once authentication succeeds, initiates a payment request to the payment electronic device.

According to one aspect of an embodiment of the present invention, a service system using the liveness detection method includes a service terminal and an authentication electronic device. The service terminal collects images of service personnel with a binocular camera, the images including infrared images and visible light images. The service terminal includes a liveness detection device, which extracts image physical information and deep semantic features from the biometric regions in those images and judges from them whether the service person is alive. When the service person is alive, the service terminal requests the authentication electronic device to authenticate them and distributes service instructions to those who pass authentication.

According to one aspect of an embodiment of the present invention, an electronic device includes a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, implement the liveness detection method described above.

According to one aspect of an embodiment of the present invention, a storage medium stores a computer program which, when executed by a processor, implements the liveness detection method described above.

In the above technical solution, liveness detection is performed on the object to be detected based on the infrared image captured by the binocular camera, combined with image physical information and deep semantic features. This effectively filters out the various types of prosthesis attacks an attacker may mount, without relying on the cooperation of the object to be detected.

Specifically, an image of the object captured by the binocular camera is acquired, and image physical information is extracted from its biometric region. When that physical information indicates that the object is alive, deep semantic features are extracted from the biometric region based on a machine learning model, and the final liveness decision is made from those features. In this way, the infrared image from the binocular camera filters out video-replay prosthesis attacks, the image physical information filters out black-and-white photo attacks, and the deep semantic features filter out attacks using color photos, masks with eye holes, and the like, while allowing the object to undergo liveness detection in a free, non-cooperative state. This solves the prior-art problem of liveness detection defending poorly against prosthesis attacks.

It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and do not limit the present invention.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.

FIG. 1 is a schematic diagram of an implementation environment for application scenarios of the liveness detection method.

FIG. 2 is a block diagram of the hardware structure of an electronic device according to an exemplary embodiment.

FIG. 3 is a flowchart of a liveness detection method according to an exemplary embodiment.

FIG. 4 is a flowchart of one embodiment of step 330 in the embodiment corresponding to FIG. 3.

FIG. 5 is a flowchart of one embodiment of step 333 in the embodiment corresponding to FIG. 4.

FIG. 6 is a schematic diagram of the difference between the color histogram of a color image and that of a black-and-white image, as involved in the embodiment corresponding to FIG. 5.

FIG. 7 is a flowchart of another embodiment of step 333 in the embodiment corresponding to FIG. 4.

FIG. 8 is a flowchart of one embodiment of step 3332 in the embodiment corresponding to FIG. 7.

FIG. 9 is a schematic diagram of the HSV model involved in the embodiment corresponding to FIG. 8.

FIG. 10 is a flowchart of another embodiment of step 3332 in the embodiment corresponding to FIG. 7.

FIG. 11 is a flowchart of one embodiment of step 350 in the embodiment corresponding to FIG. 3.

FIG. 12 is a flowchart of one embodiment of step 370 in the embodiment corresponding to FIG. 3.

FIG. 13 is a flowchart of one embodiment of step 320 according to an exemplary embodiment.

FIG. 14 is a flowchart of another embodiment of step 320 according to an exemplary embodiment.

FIG. 15 is a schematic diagram of the horizontal distance difference involved in the embodiment corresponding to FIG. 14.

FIG. 16 is a schematic diagram of facial key points according to an exemplary embodiment.

FIG. 17 is a schematic diagram of a concrete implementation of a liveness detection method in one application scenario.

FIG. 18 is a block diagram of a liveness detection device according to an exemplary embodiment.

FIG. 19 is a block diagram of an electronic device according to an exemplary embodiment.

The above drawings show specific embodiments of the present invention, which are described in more detail below. These drawings and textual descriptions are not intended to limit the scope of the inventive concept in any way, but to illustrate the concept of the invention to those skilled in the art by reference to specific embodiments.

Detailed Description

Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; they are merely examples of apparatuses and methods consistent with some aspects of the invention as detailed in the appended claims.

FIG. 1 is a schematic diagram of an implementation environment for application scenarios of the liveness detection method.

As shown in FIG. 1(a), the implementation environment includes a paying user 510, a smartphone 530, and a payment server 550.

If the paying user 510 is judged alive by the liveness detection method, the user can authenticate via the smartphone 530 and, after passing authentication, request the payment server 550 to complete payment of the pending order.

As shown in FIG. 1(b), the implementation environment includes a reception device 610, a recognition server 630, and an access control device 650.

If the entering or exiting subject 670 is judged alive by the liveness detection method, the subject is identified by the recognition server 630 after the reception device 610 captures the image. Once identification completes, the subject can operate the access gate of the relevant area, which performs the release action according to the access rights configured by the access control device 650.

As shown in FIG. 1(c), the implementation environment includes a service person 710, a service terminal 730, and an authentication server 750.

If the service person 710 is judged alive by the liveness detection method, the service terminal 730 captures an image, the authentication server 750 authenticates the person based on that image, and after authentication succeeds the service terminal 730 distributes service instructions for carrying out the relevant services.

In all three application scenarios, only objects to be detected (e.g. the paying user 510, the subject 670, the service person 710) that pass liveness detection proceed to subsequent identity authentication or identification. This effectively relieves the processing and traffic load of authentication and identification, so that those tasks are completed more reliably.

Please refer to FIG. 2, a block diagram of the hardware structure of an electronic device according to an exemplary embodiment. This electronic device is suitable for the smartphone 530, the recognition server 630, and the authentication server 750 in the implementation environment shown in FIG. 1.

It should be noted that this electronic device is merely an example adapted to the present invention and must not be taken to limit the scope of use of the invention in any way. Nor should it be interpreted as depending on or requiring one or more components of the exemplary electronic device 200 shown in FIG. 2.

The hardware structure of the electronic device 200 may vary greatly with configuration or performance. As shown in FIG. 2, the electronic device 200 includes a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.

Specifically, the power supply 210 provides the operating voltage for the hardware devices on the electronic device 200.

The interface 230 includes at least one wired or wireless network interface for interacting with external devices.

Of course, in other examples adapted to the present invention, the interface 230 may further include at least one serial-to-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237, and so on, as shown in FIG. 2; no specific limitation is imposed here.

存储器250作为资源存储的载体,可以是只读存储器、随机存储器、磁盘或者光盘等,其上所存储的资源包括操作系统251、应用程序253及数据255等,存储方式可以是短暂存储或者永久存储。The memory 250 is a carrier for storing resources, which may be a read-only memory, a random access memory, a disk or an optical disk, etc. The resources stored thereon include an operating system 251, an application 253 and data 255, etc. The storage method may be temporary storage or permanent storage.

其中,操作系统251用于管理与控制电子设备200上的各硬件设备以及应用程序253,以实现中央处理器270对存储器250中海量数据255的运算与处理,其可以是WindowsServerTM、Mac OS XTM、UnixTM、LinuxTM、FreeBSDTM等。Among them, the operating system 251 is used to manage and control various hardware devices and application programs 253 on the electronic device 200 to enable the central processor 270 to calculate and process the massive data 255 in the memory 250. It can be WindowsServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.

应用程序253是基于操作系统251之上完成至少一项特定工作的计算机程序,其可以包括至少一模块(图2中未示出),每个模块都可以分别包含有对电子设备200的一系列计算机可读指令。例如,活体检测装置可视为部署于电子设备的应用程序253。The application 253 is a computer program that performs at least one specific task based on the operating system 251, and may include at least one module (not shown in FIG. 2 ), each of which may include a series of computer-readable instructions for the electronic device 200. For example, a liveness detection device may be regarded as an application 253 deployed on the electronic device.

数据255可以是存储于磁盘中的照片、图片等,存储于存储器250中。The data 255 may be photos, pictures, etc. stored in a disk, stored in the memory 250 .

中央处理器270可以包括一个或多个以上的处理器,并设置为通过至少一通信总线与存储器250通信,以读取存储器250中存储的计算机可读指令,进而实现对存储器250中海量数据255的运算与处理。例如,通过中央处理器270读取存储器250中存储的一系列计算机可读指令的形式来完成活体检测方法。The central processor 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus to read the computer-readable instructions stored in the memory 250, thereby realizing the operation and processing of the mass data 255 in the memory 250. For example, the liveness detection method is completed by the central processor 270 reading a series of computer-readable instructions stored in the memory 250.

可以理解,图2所示的结构仅为示意,电子设备200还可包括比图2中所示更多或更少的组件,或者具有与图2所示不同的组件。图2中所示的各组件可以采用硬件、软件或者其组合来实现。It is understood that the structure shown in Fig. 2 is only for illustration, and the electronic device 200 may also include more or fewer components than those shown in Fig. 2, or have components different from those shown in Fig. 2. Each component shown in Fig. 2 may be implemented by hardware, software, or a combination thereof.

请参阅图3,在一示例性实施例中,一种活体检测方法适用于电子设备,该电子设备的硬件结构可以如图2所示。Please refer to FIG. 3 . In an exemplary embodiment, a living body detection method is applicable to an electronic device. The hardware structure of the electronic device may be as shown in FIG. 2 .

该种活体检测方法可以由电子设备执行,可以包括以下步骤:The living body detection method can be performed by an electronic device and may include the following steps:

步骤310,获取待检测对象在双目摄像头下拍摄到的图像,所述图像包括红外图像和可见光图像。Step 310: Acquire an image of the object to be detected captured by a binocular camera, wherein the image includes an infrared image and a visible light image.

首先,待检测对象可以是某个待支付订单的支付用户,可以是某个待存/取款的银行卡用户,还可以是某个待通过门禁的出入对象,又或者,是某个待接收服务业务的服务人员,本实施例并未对待检测对象作具体限定。First of all, the object to be detected can be a paying user of an order to be paid, a bank card user to deposit/withdraw money, an entry and exit object to pass through a access control, or a service personnel to receive service business. This embodiment does not specifically limit the object to be detected.

相应地,待检测对象的不同可对应于不同的应用场景,例如,某个待支付订单的支付用户对应于订单支付应用场景,某个待存/取款的银行卡用户对应于银行卡应用场景,某个待通过门禁的出入对象则对应于门禁应用场景,某个待接收服务业务的服务人员对应于客运服务应用场景。Correspondingly, different objects to be detected can correspond to different application scenarios. For example, a paying user who is about to pay an order corresponds to the order payment application scenario, a bank card user who is about to deposit/withdraw money corresponds to the bank card application scenario, an entry and exit object who is about to pass through the access control corresponds to the access control application scenario, and a service personnel who is about to receive service business corresponds to the passenger service application scenario.

可以理解,上述任意一种应用场景中,都可能存在攻击者的假体攻击行为,例如,犯罪分子可能替代服务人员,利用假体攻击行为通过身份认证,以搭载乘客,因此,本实施例所提供的活体检测方法可根据待检测对象的不同而适用于不同的应用场景。It can be understood that in any of the above application scenarios, there may be fake attack behaviors of attackers. For example, criminals may replace service personnel and use fake attack behaviors to pass identity authentication in order to carry passengers. Therefore, the liveness detection method provided in this embodiment can be applied to different application scenarios depending on the different objects to be detected.

其次,对于待检测对象的图像获取,可以是双目摄像头实时拍摄并采集到的待检测对象的图像,也可以是电子设备中预先存储的待检测对象的图像,即通过读取电子设备的缓存区域中一历史时间段由双目摄像头拍摄并采集到的图像,本实施例也并未对此进行限定。Secondly, for the acquisition of the image of the object to be detected, it can be the image of the object to be detected captured and collected by the binocular camera in real time, or it can be the image of the object to be detected pre-stored in the electronic device, that is, by reading the image captured and collected by the binocular camera in a historical time period in the cache area of the electronic device. This embodiment does not limit this.

在此补充说明的是,图像,可以是一段视频,还可以是若干张照片,由此,后续活体检测时,是以图像帧为单位进行的。It is to be supplemented here that the image can be a video or a number of photos, and therefore, the subsequent liveness detection is performed in units of image frames.

最后,双目摄像头包括用于生成红外图像的红外摄像头、以及用于生成可见光图像的可见光摄像头。Finally, the binocular camera includes an infrared camera for generating infrared images, and a visible light camera for generating visible light images.

该双目摄像头可以安装于摄像机、录像机、或者其他具有图像采集功能的电子设备,例如,智能手机等。The binocular camera can be installed on a camera, a video recorder, or other electronic devices with image acquisition function, such as a smart phone.

可以理解,基于视频回放这类型的假体攻击行为,往往需要攻击者利用电子设备所配置的屏幕进行视频回放,此时,如果红外摄像头将红外光投射至该屏幕,将出现反光现象,进而导致生成的红外图像中不可能包含生物特征区域。It is understandable that prosthetic attacks based on video playback often require attackers to use the screen configured on the electronic device to play back the video. At this time, if the infrared camera projects infrared light onto the screen, reflections will occur, making it impossible for the generated infrared image to contain the biometric feature area.

由此,便有利于实现基于双目摄像头拍摄到的红外图像的待检测对象的活体判定,即过滤掉关于视频回放的假体攻击行为。This is conducive to achieving liveness determination of the object to be detected based on the infrared image captured by the binocular camera, that is, filtering out fake attack behaviors related to video playback.

步骤330,对所述图像中的生物特征区域,进行图像物理信息提取,得到所述图像中生物特征区域的图像物理信息。Step 330 , extracting image physical information of the biometric feature region in the image to obtain image physical information of the biometric feature region in the image.

首先,待检测对象的生物特征,例如,人脸、眼睛、嘴巴、手、脚、指纹、虹膜等等。相应地,待检测对象的生物特征在图像中的位置即构成了待检测对象的图像中的生物特征区域,也可以理解为,所述生物特征区域用于指示所述待检测对象的生物特征在所述图像中的位置。First, the biometric features of the object to be detected, such as face, eyes, mouth, hands, feet, fingerprints, irises, etc. Correspondingly, the position of the biometric features of the object to be detected in the image constitutes the biometric feature area in the image of the object to be detected, which can also be understood as the biometric feature area is used to indicate the position of the biometric features of the object to be detected in the image.

其次,图像物理信息,反映了图像的纹理、颜色,包括但不限于:图像的纹理信息、颜色信息等。Secondly, the physical information of the image reflects the texture and color of the image, including but not limited to: the texture information and color information of the image.

需要说明的是,就图像的纹理而言,活体的生物特征在图像中呈现的纹理细节与假体的生物特征在图像中呈现的纹理细节之间具有较大的差异,而针对图像的颜色来说,对活体拍摄并采集到的可见光图像通常是彩色图像。It should be noted that, in terms of image texture, there are significant differences between the texture details that the biological features of a living body present in the image and those that the biological features of a prosthesis present, while in terms of image color, the visible light image captured and collected of a living body is usually a color image.

因此,在提取得到图像中生物特征区域的图像物理信息之后,便可针对活体与假体之间存在的上述区别,对待检测对象进行活体检测。Therefore, after the physical information of the biological feature area in the image is extracted, the liveness detection can be performed on the object to be detected according to the above-mentioned difference between the living body and the prosthesis.

如果所述图像中生物特征区域的图像物理信息指示所述待检测对象为活体,则跳转执行步骤350,继续对待检测对象进行活体检测。If the image physical information of the biometric feature area in the image indicates that the object to be detected is a living body, the process jumps to step 350 to continue to perform liveness detection on the object to be detected.

反之,如果所述图像中生物特征区域的图像物理信息指示所述待检测对象为假体,则停止对待检测对象继续进行活体检测,以此提高活体检测的效率。On the contrary, if the image physical information of the biological feature area in the image indicates that the object to be detected is a prosthesis, the liveness detection of the object to be detected is stopped, so as to improve the efficiency of liveness detection.

步骤350,基于机器学习模型,对所述图像中的生物特征区域,进行深层语义特征提取,得到所述图像中生物特征区域的深层语义特征。Step 350: Based on the machine learning model, deep semantic features are extracted for the biometric feature area in the image to obtain deep semantic features of the biometric feature area in the image.

由于图像物理信息反映的图像的纹理、颜色,不仅容易因拍摄角度的变化,而发生较大的变化,进而导致活体判定出现误差,而且防御假体攻击行为的能力有限,仅适用于较为简单的照片攻击,为此,本实施例中,将基于机器学习模型,提取图像中生物特征区域的深层语义特征,以此提高活体检测对假体攻击行为的防御能力,同时实现拍摄角度的自适应。Since the texture and color of the image reflected by the physical information of the image are not only prone to significant changes due to changes in the shooting angle, which in turn leads to errors in liveness determination, but also the ability to defend against prosthetic attack behaviors is limited and is only applicable to relatively simple photo attacks. For this reason, in this embodiment, based on the machine learning model, the deep semantic features of the biometric feature area in the image are extracted to improve the defense capability of liveness detection against prosthetic attack behaviors, while achieving the adaptation of the shooting angle.

机器学习模型,基于大量正样本和负样本进行模型训练,以实现待检测对象的活体判定。The machine learning model is trained based on a large number of positive and negative samples to achieve liveness determination of the object to be detected.

其中,为了允许待检测对象在非配合的自由状态(例如点头、转头、摇头)下进行活体检测,正样本和负样本是分别利用双目摄像头对活体、假体进行不同角度拍摄得到的图像。In order to allow the subject to be detected to be detected in a non-cooperative free state (such as nodding, turning, shaking the head) for liveness detection, the positive samples and negative samples are images obtained by taking pictures of the living body and the prosthesis at different angles using a binocular camera.

通过模型训练,将正样本和负样本作为训练输入,而以正样本对应的活体、以及负样本对应的假体作为训练真值,便构建得到进行待检测对象的活体判定的机器学习模型。Through model training, positive samples and negative samples are used as training inputs, and the living body corresponding to the positive samples and the prosthesis corresponding to the negative samples are used as training true values, so as to construct a machine learning model for liveness determination of the object to be detected.

具体而言,模型训练,通过正样本和负样本对指定数学模型的参数加以迭代优化,使得由此参数构建的指定算法函数满足收敛条件。Specifically, model training iteratively optimizes the parameters of a specified mathematical model through positive and negative samples so that the specified algorithm function constructed by these parameters meets the convergence conditions.

其中,指定数学模型,包括但不限于:逻辑回归、支持向量机、随机森林、神经网络等等。The specified mathematical model includes but is not limited to: logistic regression, support vector machine, random forest, neural network, etc.

指定算法函数,包括但不限于:最大期望函数、损失函数等等。Specify algorithm functions, including but not limited to: maximum expectation function, loss function, etc.

举例来说,随机初始化指定数学模型的参数,根据当前一个样本计算随机初始化的参数所构建的损失函数的损失值。For example, the parameters of the specified mathematical model are randomly initialized, and the loss value of the loss function constructed by the randomly initialized parameters is calculated based on the current sample.

如果损失函数的损失值未达到最小,则更新指定数学模型的参数,并根据后一个样本计算更新的参数所构建的损失函数的损失值。If the loss value of the loss function does not reach the minimum, the parameters of the specified mathematical model are updated, and the loss value of the loss function constructed by the updated parameters is calculated based on the next sample.

如此迭代循环,直至损失函数的损失值达到最小,即视为损失函数收敛,此时,指定数学模型也收敛,并符合预设精度要求,则停止迭代。The iteration cycle is repeated until the loss value of the loss function reaches the minimum, which means that the loss function converges. At this time, the specified mathematical model also converges and meets the preset accuracy requirements, and the iteration is stopped.

否则,迭代更新指定数学模型的参数,并根据其余样本迭代计算所更新参数构建的损失函数的损失值,直至损失函数收敛。Otherwise, the parameters of the specified mathematical model are iteratively updated, and the loss value of the loss function constructed by the updated parameters is iteratively calculated based on the remaining samples until the loss function converges.

值得一提的是,如果在损失函数收敛之前,迭代次数已经达到迭代阈值,也将停止迭代,以此保证模型训练的效率。It is worth mentioning that if the number of iterations reaches the iteration threshold before the loss function converges, the iteration will be stopped to ensure the efficiency of model training.
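The per-sample update loop described above (random initialization, per-sample loss computation, parameter updates until the loss converges or an iteration threshold is hit) can be sketched as follows; the logistic-regression model, learning rate and thresholds here are illustrative assumptions rather than the embodiment's concrete choices:

```python
import numpy as np

def train(samples, labels, lr=0.1, eps=1e-6, max_iters=1000):
    """Minimal sketch of the iterative training loop: logistic regression
    trained with per-sample gradient steps, stopping when the loss change
    falls below eps (convergence) or max_iters is reached."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])   # randomly initialized parameters
    b = 0.0
    prev_loss = np.inf
    for it in range(max_iters):             # iteration threshold guards training efficiency
        i = it % len(samples)               # cycle through the samples
        x, y = samples[i], labels[i]
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))                    # model prediction
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        if abs(prev_loss - loss) < eps:     # loss value stops decreasing -> converged
            break
        prev_loss = loss
        grad = p - y                        # gradient of the per-sample loss
        w -= lr * grad * x                  # update the model parameters
        b -= lr * grad
    return w, b
```

On a toy 1-D dataset separable by sign, the trained parameters classify new points by the sign of `w @ x + b`.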

当指定数学模型收敛并符合预设精度要求时,表示模型训练已完成,由此完成模型训练的机器学习模型,便具有了对图像中生物特征区域进行深层语义特征提取的能力。When the specified mathematical model converges and meets the preset accuracy requirements, it means that the model training has been completed. The machine learning model that has completed the model training will have the ability to extract deep semantic features of the biological feature areas in the image.

那么,基于机器学习模型,便可从图像中的生物特征区域提取得到深层语义特征,并以此进行待检测对象的活体判定。Then, based on the machine learning model, deep semantic features can be extracted from the biological feature areas in the image, and used to determine the liveness of the object to be detected.

可选地,机器学习模型包括但不限于:卷积神经网络模型、深度神经网络模型、残差神经网络模型等。Optionally, the machine learning model includes but is not limited to: a convolutional neural network model, a deep neural network model, a residual neural network model, etc.

其中,深层语义特征包括图像的颜色特征、纹理特征,相较于图像物理信息中的颜色信息和纹理信息,更能够提高活体与假体的分辨能力,以利于提高活体检测对假体攻击行为的防御能力。Among them, deep semantic features include color features and texture features of the image. Compared with the color information and texture information in the physical information of the image, they can better improve the ability to distinguish between living objects and prostheses, so as to improve the ability of living object detection to defend against prosthesis attack behaviors.

步骤370,根据所述图像中生物特征区域的深层语义特征,进行所述待检测对象的活体判定。Step 370: Perform a liveness determination on the object to be detected based on the deep semantic features of the biological feature area in the image.

通过如上所述的过程,可有效地抵御视频回放、黑白照片、彩色照片、孔洞面具等多种不同类型的假体攻击行为,同时允许待检测对象在非配合的自由状态下进行活体检测,在提高用户体验的同时提高了活体检测的准确率,充分保障了活体检测的安全性。Through the process described above, various types of prosthetic attack behaviors such as video playback, black-and-white photos, color photos and hole masks can be effectively resisted, while the object to be detected is allowed to undergo liveness detection in a non-cooperative free state, which improves both the user experience and the accuracy of liveness detection, fully ensuring the security of liveness detection.

请参阅图4,在一示例性实施例中,步骤330可以包括以下步骤:Referring to FIG. 4 , in an exemplary embodiment, step 330 may include the following steps:

步骤331,根据获取到的图像,获取包含所述生物特征区域的可见光图像。Step 331, acquiring a visible light image including the biometric feature area according to the acquired image.

应当理解,基于双目摄像头中的红外摄像头,对活体拍摄并采集到的红外图像实质是灰度图像,而基于双目摄像头中的可见光摄像头,对活体拍摄并采集到的可见光图像则是彩色图像。It should be understood that the infrared image of a living body captured and collected via the infrared camera in the binocular camera is essentially a grayscale image, while the visible light image of a living body captured and collected via the visible light camera is a color image.

基于此,为了能够过滤黑白照片这种类型的假体攻击行为,进行待检测对象的活体判定的基础是可见光图像。Based on this, in order to filter out fake attack behaviors such as black and white photos, the basis for determining the liveness of the object to be detected is the visible light image.

因此,在获取到待检测对象在双目摄像头下拍摄到的红外图像和可见光图像之后,首先需要获取包含生物特征区域的可见光图像,方能够在后续基于该可见光图像是否为彩色图像,进行待检测对象的活体判定。Therefore, after obtaining the infrared image and visible light image of the object to be detected taken by the binocular camera, it is necessary to first obtain the visible light image containing the biological feature area, so that the living body of the object to be detected can be determined based on whether the visible light image is a color image.

步骤333,从所述可见光图像中的生物特征区域,提取得到所述可见光图像中生物特征区域的图像物理信息。Step 333: extracting image physical information of the biometric feature area in the visible light image from the biometric feature area in the visible light image.

如果图像物理信息指示可见光图像为彩色图像,便可判定待检测对象为活体,反之,如果图像物理信息指示可见光图像为黑白图像,则判定待检测对象为假体。If the physical information of the image indicates that the visible light image is a color image, it can be determined that the object to be detected is a living body. Conversely, if the physical information of the image indicates that the visible light image is a black and white image, it can be determined that the object to be detected is a prosthesis.

可选地,图像物理信息包括但不限于:以颜色直方图定义的颜色信息、以LBP(Local Binary Patterns,局部二值模式)/LPQ(Local Phase Quantization,局部相位量化)直方图定义的纹理信息。Optionally, the image physical information includes, but is not limited to: color information defined by a color histogram, and texture information defined by a LBP (Local Binary Patterns)/LPQ (Local Phase Quantization) histogram.

下面对图像物理信息的提取过程加以说明。The process of extracting physical information from images is explained below.

请参阅图5,在一示例性实施例中,图像物理信息为颜色信息。Please refer to FIG. 5 , in an exemplary embodiment, the image physical information is color information.

相应地,步骤333可以包括以下步骤:Accordingly, step 333 may include the following steps:

步骤3331,基于所述可见光图像中的生物特征区域,计算所述可见光图像中生物特征区域的颜色直方图。Step 3331, based on the biometric feature area in the visible light image, calculate the color histogram of the biometric feature area in the visible light image.

步骤3333,将计算得到的颜色直方图,作为所述可见光图像中生物特征区域的颜色信息。Step 3333: Use the calculated color histogram as color information of the biometric feature area in the visible light image.

如图6所示,彩色图像的颜色直方图(a)与黑白图像的颜色直方图(b)存在较为明显的分布差异,由此,便可基于可见光图像中生物特征区域的颜色直方图,进行待检测对象的活体判定,以过滤掉关于黑白照片的假体攻击行为。As shown in Figure 6, there is a relatively obvious distribution difference between the color histogram (a) of the color image and the color histogram (b) of the black and white image. Therefore, based on the color histogram of the biometric feature area in the visible light image, the liveness determination of the object to be detected can be performed to filter out the fake attack behavior on the black and white photo.
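The distribution difference illustrated in Fig. 6 can be made concrete: in a black-and-white frame the R, G and B channels coincide, so their per-channel color histograms are identical, whereas genuine color content makes them diverge. A minimal NumPy sketch, in which the bin count and tolerance are illustrative assumptions:

```python
import numpy as np

def channel_histograms(region, bins=32):
    """Per-channel color histograms of a biometric region (H x W x 3, uint8)."""
    return [np.histogram(region[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]

def looks_black_and_white(region, tol=0.0):
    """In a black-and-white frame R = G = B at every pixel, so all three
    channel histograms coincide; any color content makes them diverge."""
    r, g, b = channel_histograms(region)
    return np.abs(r - g).sum() <= tol and np.abs(g - b).sum() <= tol
```

A pure-red region yields divergent channel histograms and is reported as color, while a uniform gray region is reported as black-and-white.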

在上述实施例的作用下,实现了颜色信息的提取,进而使得基于颜色信息进行待检测对象的活体判定得以实现。With the help of the above-mentioned embodiments, the extraction of color information is realized, thereby making it possible to perform living body determination of the object to be detected based on the color information.

请参阅图7,在另一示例性实施例中,图像物理信息为纹理信息。Please refer to FIG. 7 , in another exemplary embodiment, the image physical information is texture information.

相应地,步骤333可以包括以下步骤:Accordingly, step 333 may include the following steps:

步骤3332,基于所述可见光图像中的生物特征区域,创建对应于所述可见光图像中生物特征区域的颜色空间。Step 3332: Create a color space corresponding to the biometric region in the visible light image based on the biometric region in the visible light image.

其中,对应于所述可见光图像中生物特征区域的颜色空间,实质上是以数学方式来描述可见光图像中生物特征区域的颜色集合。The color space corresponding to the biometric feature region in the visible light image is essentially a color set that describes the biometric feature region in the visible light image in a mathematical way.

可选地,颜色空间,可以基于HSV参数构建,还可以基于YCbCr参数构建。该HSV参数包括色调(H)、饱和度(S)和明亮度(V),该YCbCr参数包括颜色的明亮度和浓度(Y)、蓝色浓度偏移量(Cb)和红色浓度偏移量(Cr)。Optionally, the color space can be constructed based on HSV parameters or YCbCr parameters. The HSV parameters include hue (H), saturation (S) and brightness (V), and the YCbCr parameters include color brightness and concentration (Y), blue concentration offset (Cb) and red concentration offset (Cr).

步骤3334,针对所述颜色空间,在空域提取局部二值模式特征,和/或,在频域提取局部相位量化特征。Step 3334: for the color space, extract local binary pattern features in the spatial domain, and/or extract local phase quantization features in the frequency domain.

其中,局部二值模式(LBP,Local Binary Patterns)特征,是基于图像像素本身,对可见光图像中生物特征区域的纹理细节的准确描述,以此反映可见光图像的灰度变化。Among them, the Local Binary Patterns (LBP) feature is an accurate description of the texture details of the biological feature area in the visible light image based on the image pixels themselves, thereby reflecting the grayscale changes of the visible light image.

局部相位量化(LPQ,Local Phase Quantization)特征,则是基于变换域中图像的变换系数,对可见光图像中生物特征区域的纹理细节的准确描述,以此反映可见光图像的梯度分布。The local phase quantization (LPQ) feature is based on the transformation coefficient of the image in the transform domain, which accurately describes the texture details of the biological feature area in the visible light image, thereby reflecting the gradient distribution of the visible light image.

换而言之,无论是局部二值模式特征,还是局部相位量化特征,实质是对可见光图像中生物特征区域的纹理细节进行分析,以此定义可见光图像中生物特征区域的纹理信息。In other words, whether it is the local binary pattern feature or the local phase quantization feature, the essence is to analyze the texture details of the biological feature area in the visible light image, so as to define the texture information of the biological feature area in the visible light image.

步骤3336,根据所述局部二值模式特征,和/或,所述局部相位量化特征,生成LBP/LPQ直方图,作为所述可见光图像中生物特征区域的纹理信息。Step 3336: Generate an LBP/LPQ histogram based on the local binary pattern feature and/or the local phase quantization feature as texture information of the biological feature area in the visible light image.

由此,LBP/LPQ直方图,是根据所述局部二值模式特征,和/或,所述局部相位量化特征生成的,通过LBP/LPQ的相互结合、相互补充,能够更加准确地描述可见光图像中生物特征区域的纹理细节,进而充分地保障活体检测的准确率。Therefore, the LBP/LPQ histogram is generated based on the local binary pattern features and/or the local phase quantization features. Through the mutual combination and mutual complementation of LBP/LPQ, the texture details of the biological feature area in the visible light image can be more accurately described, thereby fully ensuring the accuracy of liveness detection.
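The LBP half of this histogram can be sketched in plain NumPy: each interior pixel is compared with its eight neighbours, the comparison bits form a code in [0, 255], and the codes are histogrammed as the texture descriptor. The 3x3 neighbourhood and bit order below are one common convention, not necessarily the embodiment's:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram of a grayscale region.
    Each interior pixel gets an 8-bit code, one bit per neighbour that is
    >= the centre pixel; the histogram of codes describes the texture."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                   # centre pixels
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit      # set bit if neighbour >= centre
    return np.histogram(codes, bins=256, range=(0, 256))[0]
```

On a perfectly flat region every neighbour equals its centre, so all nine interior pixels receive code 255.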

在上述各实施例的作用下,实现了纹理信息的提取,进而使得基于纹理信息进行待检测对象的活体判定得以实现。With the effects of the above-mentioned embodiments, the extraction of texture information is realized, thereby making it possible to perform a liveness determination of the object to be detected based on the texture information.

进一步地,对颜色空间的创建过程进行如下说明。Furthermore, the process of creating the color space is described as follows.

请参阅图8,在一示例性实施例中,步骤3332可以包括以下步骤:Please refer to FIG. 8 , in an exemplary embodiment, step 3332 may include the following steps:

步骤3332a,基于所述可见光图像中的生物特征区域,获取对应于所述可见光图像中生物特征区域的HSV参数,所述HSV参数包括色调(H)、饱和度(S)和明亮度(V)。Step 3332a, based on the biometric feature area in the visible light image, obtain HSV parameters corresponding to the biometric feature area in the visible light image, and the HSV parameters include hue (H), saturation (S) and brightness (V).

步骤3332c,根据获取到的HSV参数构建HSV模型,作为对应于所述可见光图像中生物特征区域的颜色空间。Step 3332c, constructing an HSV model according to the acquired HSV parameters as the color space corresponding to the biological feature area in the visible light image.

如图9所示,HSV模型实质是六棱锥,相应地,HSV模型的构建过程包括:通过色调(H)构建六棱锥的边界,通过饱和度(S)构建六棱锥的水平轴,通过明亮度(V)构建六棱锥的垂直轴。As shown in FIG9 , the HSV model is essentially a hexagonal pyramid. Accordingly, the construction process of the HSV model includes: constructing the boundary of the hexagonal pyramid through hue (H), constructing the horizontal axis of the hexagonal pyramid through saturation (S), and constructing the vertical axis of the hexagonal pyramid through brightness (V).

由此,完成基于HSV参数的颜色空间的构建。Thus, the construction of the color space based on HSV parameters is completed.
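Per-pixel HSV parameters for such a color space can be obtained with the standard RGB-to-HSV transform; a sketch using Python's stdlib colorsys (inputs normalized to [0, 1]):

```python
import colorsys

import numpy as np

def to_hsv(region):
    """Map an RGB region (H x W x 3, values in [0, 1]) to per-pixel HSV
    parameters: hue (H), saturation (S) and brightness/value (V)."""
    flat = region.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    return hsv.reshape(region.shape)
```

For example, a pure-red pixel maps to hue 0 with full saturation and brightness, i.e. a point on the boundary of the hexagonal pyramid at maximum V.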

请参阅图10,在另一示例性实施例中,步骤3332可以包括以下步骤:Please refer to FIG. 10 , in another exemplary embodiment, step 3332 may include the following steps:

步骤3332b,基于所述可见光图像中的生物特征区域,获取对应于所述可见光图像中生物特征区域的YCbCr参数,所述YCbCr参数包括颜色的明亮度和浓度(Y)、蓝色浓度偏移量(Cb)和红色浓度偏移量(Cr)。Step 3332b, based on the biometric area in the visible light image, obtain YCbCr parameters corresponding to the biometric area in the visible light image, wherein the YCbCr parameters include color brightness and concentration (Y), blue concentration offset (Cb) and red concentration offset (Cr).

步骤3332d,根据获取到的YCbCr参数,构建对应于所述可见光图像中生物特征区域的颜色空间。Step 3332d: construct a color space corresponding to the biometric feature area in the visible light image based on the acquired YCbCr parameters.

具体而言,将获取到的YCbCr参数转换为RGB参数,进而以RGB参数代表的RGB颜色通道,构建RGB图片,亦即可见光图像中生物特征区域的颜色空间。其中,RGB颜色通道包括表示红色的红色通道R、表示绿色的绿色通道G、以及表示蓝色的蓝色通道B。Specifically, the acquired YCbCr parameters are converted into RGB parameters, and then the RGB color channels represented by the RGB parameters are used to construct an RGB picture, that is, the color space of the biometric region in the visible light image. The RGB color channels include a red channel R representing red, a green channel G representing green, and a blue channel B representing blue.

由此,完成基于YCbCr参数的颜色空间的构建。Thus, the construction of the color space based on the YCbCr parameters is completed.
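The YCbCr-to-RGB conversion above follows standard formulas; the sketch below uses the common ITU-R BT.601 full-range coefficients, which is an assumption since the embodiment does not specify the variant:

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """Convert one YCbCr sample (Y = brightness/luma, Cb = blue concentration
    offset, Cr = red concentration offset, all in [0, 255]) to RGB channel
    values using BT.601 full-range coefficients."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return tuple(float(np.clip(v, 0.0, 255.0)) for v in (r, g, b))
```

Neutral samples (Cb = Cr = 128) map to equal R, G, B values, i.e. a grayscale point in the RGB color space.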

通过上述各实施例的配合,实现了颜色空间的创建,进而使得基于颜色空间提取纹理信息得以实现。Through the cooperation of the above embodiments, the creation of a color space is achieved, thereby enabling the extraction of texture information based on the color space to be realized.

在一示例性实施例中,步骤330之后,如上所述的方法还可以包括以下步骤:In an exemplary embodiment, after step 330, the method described above may further include the following steps:

根据所述可见光图像中生物特征区域的图像物理信息,进行所述待检测对象的活体判定。The liveness determination of the object to be detected is performed based on the image physical information of the biological feature area in the visible light image.

具体而言,将所述可见光图像中生物特征区域的图像物理信息输入支持向量机分类器,对所述可见光图像进行颜色类别预测,得到所述可见光图像的颜色类别。Specifically, the image physical information of the biometric feature area in the visible light image is input into a support vector machine classifier, and the color category of the visible light image is predicted to obtain the color category of the visible light image.

首先,支持向量机分类器,是基于大量学习样本对可见光图像的颜色类别进行训练生成的。其中,学习样本包括属于黑白图像的可见光图像、和属于彩色图像的可见光图像。First, the support vector machine classifier is generated by training the color categories of visible light images based on a large number of learning samples, wherein the learning samples include visible light images belonging to black and white images and visible light images belonging to color images.

其次,颜色类别包括彩色图像类别和黑白图像类别。Secondly, the color category includes the color image category and the black and white image category.

那么,如果预测得到的可见光图像的颜色类别为黑白图像类别,即表示所述可见光图像的颜色类别指示所述可见光图像为黑白图像,则判定所述待检测对象为假体。Then, if the predicted color category of the visible light image is a black and white image category, that is, the color category of the visible light image indicates that the visible light image is a black and white image, then it is determined that the object to be detected is a prosthesis.

反之,如果预测得到的可见光图像的颜色类别为彩色图像类别,即表示所述可见光图像的颜色类别指示所述可见光图像为彩色图像,则判定所述待检测对象为活体。On the contrary, if the predicted color category of the visible light image is a color image category, that is, the color category of the visible light image indicates that the visible light image is a color image, then it is determined that the object to be detected is a living body.
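As a hedged simplification of this decision step (the embodiment uses a support vector machine classifier trained on the image physical information; here a fixed channel-divergence score stands in for its decision function, and the threshold is an illustrative assumption rather than a trained margin):

```python
import numpy as np

def predict_color_category(region, threshold=2.0):
    """Toy color-category predictor: mean absolute divergence between the
    R/G/B channels of the biometric region. Grayscale frames have R = G = B,
    so low divergence -> 'black_and_white' (prosthesis), otherwise 'color'
    (candidate living body)."""
    region = region.astype(np.float64)
    div = (np.abs(region[..., 0] - region[..., 1]).mean()
           + np.abs(region[..., 1] - region[..., 2]).mean())
    return "color" if div > threshold else "black_and_white"
```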

在上述实施例的作用下,实现了基于可见光图像的待检测对象的活体判定,即过滤掉关于黑白照片的假体攻击行为。With the help of the above-mentioned embodiments, the liveness determination of the object to be detected based on the visible light image is realized, that is, the fake attack behavior on the black and white photos is filtered out.

在一示例性实施例中,所述机器学习模型为深度神经网络模型,所述深度神经网络模型包括输入层、卷积层、连接层和输出层。该卷积层用于特征提取,该连接层用于特征融合。In an exemplary embodiment, the machine learning model is a deep neural network model, and the deep neural network model includes an input layer, a convolution layer, a connection layer and an output layer. The convolution layer is used for feature extraction, and the connection layer is used for feature fusion.

可选地,深度神经网络模型还可以包括激活层、池化层。其中,激活层用于提高深度神经网络模型的收敛速度,池化层则用于降低特征连接的复杂度。Optionally, the deep neural network model may further include an activation layer and a pooling layer, wherein the activation layer is used to increase the convergence speed of the deep neural network model, and the pooling layer is used to reduce the complexity of feature connections.

可选地,卷积层配置有多个通道,每一个通道可供同一个图像中具有不同通道信息的图像输入,以此提升特征提取的精度。Optionally, the convolutional layer is configured with multiple channels, each channel being capable of being used for inputting images with different channel information in the same image, thereby improving the accuracy of feature extraction.

举例来说,图像为一彩色图像,假设卷积层配置有A1、A2、A3三个通道,那么,该彩色图像便可以按照RGB颜色通道方式输入至卷积层配置的该三个通道,即红色通道R对应的彩色图像部分输入A1通道,绿色通道G对应的彩色图像部分输入A2通道,蓝色通道B对应的彩色图像部分输入A3通道。For example, the image is a color image. Assuming that the convolution layer is configured with three channels A1, A2 and A3, the color image can be input into these three channels in the RGB color channel manner, that is, the part of the color image corresponding to the red channel R is input into channel A1, the part corresponding to the green channel G into channel A2, and the part corresponding to the blue channel B into channel A3.
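Feeding the color image into the three configured channels amounts to slicing its last axis into the R/G/B planes:

```python
import numpy as np

def split_rgb_channels(image):
    """Split an H x W x 3 RGB image into the three single-channel planes
    fed to the convolution layer's channels (R -> A1, G -> A2, B -> A3)."""
    a1 = image[..., 0]   # red channel R
    a2 = image[..., 1]   # green channel G
    a3 = image[..., 2]   # blue channel B
    return a1, a2, a3
```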

如图11所示,步骤350可以包括以下步骤:As shown in FIG. 11 , step 350 may include the following steps:

步骤351,从所述深度神经网络模型中的输入层,将所述图像输入至所述卷积层。Step 351, input the image into the convolutional layer from the input layer in the deep neural network model.

步骤353,利用所述卷积层进行特征提取,得到所述图像中生物特征区域的浅层语义特征,并输入至所述连接层。Step 353, using the convolutional layer to perform feature extraction, obtain shallow semantic features of the biometric feature area in the image, and input them into the connection layer.

步骤355,利用所述连接层进行特征融合,得到所述图像中生物特征区域的深层语义特征,并输入至所述输出层。Step 355, using the connection layer to perform feature fusion, obtain the deep semantic features of the biological feature area in the image, and input them to the output layer.

其中,浅层语义特征包括图像的形状特征、空间关系特征,而深层语义特征则包括图像的颜色特征、纹理特征。Among them, shallow semantic features include image shape features and spatial relationship features, while deep semantic features include image color features and texture features.

也就是说,经卷积层的特征提取得到浅层语义特征,再经连接层的特征融合得到深层语义特征,意味着深度神经网络模型中不同分辨率不同尺度的特征能够相互关联,而并非孤立的,进而能够有效地提升活体检测的准确率。In other words, shallow semantic features are obtained through feature extraction in the convolutional layer, and deep semantic features are obtained through feature fusion in the connection layer. This means that features of different resolutions and scales in the deep neural network model can be correlated with each other rather than isolated, which can effectively improve the accuracy of liveness detection.

请参阅图12,在一示例性实施例中,步骤370可以包括以下步骤:Referring to FIG. 12 , in an exemplary embodiment, step 370 may include the following steps:

步骤371,利用所述输出层中的激活函数分类器,对所述图像进行分类预测。Step 371, using the activation function classifier in the output layer to perform classification prediction on the image.

步骤373,根据所述图像预测到的类别,判断所述待检测对象是否为活体。Step 373: determine whether the object to be detected is a living body according to the category predicted by the image.

激活函数分类器,也即是,softmax分类器,用于计算图像属于不同类别的概率。The activation function classifier, that is, the softmax classifier, is used to calculate the probability that an image belongs to different categories.

本实施例中,图像的类别包括:活体类别和假体类别。In this embodiment, the categories of images include: living body category and prosthesis category.

举例来说,针对图像而言,基于输出层中的激活函数分类器,计算得到该图像属于活体类别的概率为P1,而该图像属于假体类别的概率为P2。For example, for an image, based on the activation function classifier in the output layer, it is calculated that the probability that the image belongs to the living body category is P1, and the probability that the image belongs to the prosthesis category is P2.

如果P1>P2,即表示该图像属于活体类别,则判定待检测对象为活体。If P1>P2, it means that the image belongs to the living body category, and the object to be detected is determined to be a living body.

反之,如果P1<P2,即表示该图像属于假体类别,则判定待检测对象为假体。On the contrary, if P1<P2, it means that the image belongs to the prosthesis category, and the object to be detected is determined to be a prosthesis.
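The output-layer decision reduces to comparing the two softmax probabilities P1 and P2; a sketch of that computation, with illustrative logit values:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the output-layer logits."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def is_live(logits):
    """logits[0]: living-body class score, logits[1]: prosthesis class score.
    The subject is judged to be a living body exactly when P1 > P2."""
    p1, p2 = softmax(np.asarray(logits, dtype=np.float64))
    return p1 > p2
```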

在上述实施例的作用下,实现了基于深层语义特征的待检测对象的活体判定,即过滤掉关于彩色照片和孔洞面具的假体攻击行为,且不依赖于待检测对象对活体检测的配合。With the help of the above-mentioned embodiments, liveness determination of the object to be detected based on deep semantic features is realized, that is, prosthetic attack behaviors regarding color photos and hole masks are filtered out, and it does not rely on the cooperation of the object to be detected with liveness detection.

在一示例性实施例中,步骤310之后,如上所述的方法还可以包括以下步骤:In an exemplary embodiment, after step 310, the method described above may further include the following steps:

步骤320,在所述红外图像中的生物特征区域与所述可见光图像中的生物特征区域之间,进行区域位置匹配。Step 320 , performing region position matching between the biometric region in the infrared image and the biometric region in the visible light image.

可以理解,对于双目摄像头中的红外摄像头和可见光摄像头而言,如果红外摄像头和可见光摄像头是在同一时刻针对同一个待检测对象的自由状态(例如点头)进行拍摄,则由此拍摄到的红外图像中的生物特征区域的区域位置与可见光图像中的生物特征区域的区域位置之间具有较大的相关性。It can be understood that for the infrared camera and the visible light camera in the binocular camera, if the infrared camera and the visible light camera are shooting the free state (such as nodding) of the same object to be detected at the same time, then the regional position of the biometric area in the infrared image captured thereby and the regional position of the biometric area in the visible light image have a large correlation.

因此,本实施例中,通过区域位置匹配来判断上述二者之间是否具有较大的相关性,进而判断待检测对象是否为活体。Therefore, in this embodiment, whether there is a large correlation between the two is determined by regional position matching, and then whether the object to be detected is a living body is determined.

如果区域位置不匹配,即表示红外图像中的生物特征区域的区域位置与可见光图像中的生物特征区域的区域位置相关性较小,亦即表示红外摄像头和可见光摄像头拍摄的并非同一个体,则判定所述待检测对象为假体。If the area positions do not match, that is, the area position of the biometric area in the infrared image has little correlation with the area position of the biometric area in the visible light image, which means that the infrared camera and the visible light camera do not capture the same individual, then the object to be detected is determined to be a prosthesis.

反之,如果区域位置匹配,即表示红外图像中的生物特征区域的区域位置与可见光图像中的生物特征区域的区域位置相关性较大,亦即表示红外摄像头和可见光摄像头拍摄的属于同一个体,则判定所述待检测对象为活体。On the contrary, if the area positions match, that is, the area position of the biometric area in the infrared image is highly correlated with the area position of the biometric area in the visible light image, that is, the infrared camera and the visible light camera take pictures of the same individual, then the object to be detected is determined to be a living body.

值得一提的是,步骤320可以设置在步骤330、步骤350任意一个步骤之前,本实施例并未对此加以限定。It is worth mentioning that step 320 can be set before any one of step 330 and step 350, and this embodiment does not limit this.

下面对区域位置的匹配过程加以说明。The matching process of regional locations is described below.

请参阅图13,在一示例性实施例中,步骤320可以包括以下步骤:Referring to FIG. 13 , in an exemplary embodiment, step 320 may include the following steps:

步骤321,分别对所述红外图像中的生物特征区域和所述可见光图像中的生物特征区域,进行区域位置检测,得到对应于所述红外图像中生物特征区域的第一区域位置、和对应于所述可见光图像中生物特征区域的第二区域位置。Step 321, performing area position detection on the biometric area in the infrared image and the biometric area in the visible light image, respectively, to obtain a first area position corresponding to the biometric area in the infrared image and a second area position corresponding to the biometric area in the visible light image.

其中,区域位置检测,可以基于计算机视觉的射影几何方法实现。Among them, area position detection can be achieved based on the projective geometry method of computer vision.

步骤323,计算所述第一区域位置与所述第二区域位置的相关系数。Step 323: Calculate the correlation coefficient between the first area position and the second area position.

如果所述相关系数超过设定相关阈值,则判定所述区域位置匹配,进而判定待检测对象为活体,方可继续执行后续活体检测的步骤。If the correlation coefficient exceeds the set correlation threshold, the region positions are determined to be matched, and then the object to be detected is determined to be a living body, and the subsequent steps of living body detection can be continued.

反之,如果所述相关系数小于设定相关阈值,则判定所述区域位置不匹配,进而判定待检测对象为假体,此时停止执行后续活体检测的步骤,以此提高活体检测的效率。On the contrary, if the correlation coefficient is less than the set correlation threshold, it is determined that the area positions do not match, and then it is determined that the object to be detected is a prosthesis. At this time, the subsequent steps of liveness detection are stopped, thereby improving the efficiency of liveness detection.
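步骤323的相关系数判定可示意如下(纯属示意代码:区域位置以假设的边界框坐标向量表示,相关阈值亦为假设值):The correlation-coefficient decision of step 323 can be sketched as follows (illustrative only; region positions are represented as hypothetical bounding-box coordinate vectors, and the correlation threshold is likewise an assumed value):

```python
import math

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length
    sequences of region-position coordinates."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# First/second region positions, e.g. bounding boxes flattened as
# (x, y, width, height) -- hypothetical values for illustration.
ir_region = [120, 80, 64, 64]   # biometric region in the infrared image
vis_region = [118, 82, 63, 65]  # biometric region in the visible light image
coef = pearson_correlation(ir_region, vis_region)
THRESHOLD = 0.9  # the "set correlation threshold" of step 323 (assumed)
print("match" if coef > THRESHOLD else "mismatch")  # match
```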

请参阅图14,在另一示例性实施例中,步骤320可以包括以下步骤:Please refer to FIG. 14 , in another exemplary embodiment, step 320 may include the following steps:

步骤322,确定所述待检测对象与所述双目摄像头所在竖直平面之间的第一水平距离。Step 322: determine a first horizontal distance between the object to be detected and the vertical plane where the binocular camera is located.

步骤324,基于所述双目摄像头中的红外摄像头和可见光摄像头,获取所述红外摄像头与所述可见光摄像头之间的第二水平距离。Step 324: Based on the infrared camera and the visible light camera in the binocular camera, a second horizontal distance between the infrared camera and the visible light camera is obtained.

步骤326,根据所述第一水平距离和所述第二水平距离,获取所述红外图像中的生物特征区域与所述可见光图像中的生物特征区域之间的水平距离差。Step 326: Obtain a horizontal distance difference between the biometric feature region in the infrared image and the biometric feature region in the visible light image according to the first horizontal distance and the second horizontal distance.

如图15所示,A表示待检测对象,B1表示双目摄像头中的红外摄像头,B2表示双目摄像头中的可见光摄像头。X1表示待检测对象A所在竖直平面,X2表示双目摄像头所在竖直平面。As shown in Figure 15, A represents the object to be detected, B1 represents the infrared camera in the binocular camera, and B2 represents the visible light camera in the binocular camera. X1 represents the vertical plane where the object to be detected A is located, and X2 represents the vertical plane where the binocular camera is located.

那么,水平距离差D的获取公式如下:Then, the formula for obtaining the horizontal distance difference D is as follows:

D=L/Z。D=L/Z.

其中,D表示水平距离差,也即是红外图像与可见光图像在水平面上横坐标之差。Z表示第一水平距离,L表示第二水平距离。Wherein, D represents the horizontal distance difference, that is, the difference between the horizontal coordinates of the infrared image and the visible light image on the horizontal plane. Z represents the first horizontal distance, and L represents the second horizontal distance.

如果所述水平距离差D小于设定距离阈值,则判定所述区域位置匹配,进而判定待检测对象为活体,方可继续执行后续活体检测的步骤。If the horizontal distance difference D is less than the set distance threshold, it is determined that the area positions match, and then it is determined that the object to be detected is a living body, and the subsequent living body detection steps can be continued.

反之,如果所述水平距离差D超过设定距离阈值,则判定所述区域位置不匹配,进而判定待检测对象为假体,此时停止执行后续活体检测的步骤,以此提高活体检测的效率。On the contrary, if the horizontal distance difference D exceeds the set distance threshold, it is determined that the area positions do not match, and then it is determined that the object to be detected is a prosthesis. At this time, the subsequent steps of liveness detection are stopped, thereby improving the efficiency of liveness detection.
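步骤322至步骤326的判定流程可按上述公式D=L/Z示意如下(纯属示意代码,基线长度、对象距离与距离阈值均为假设值):The decision flow of steps 322 to 326 can be sketched per the formula D=L/Z above (illustrative only; the baseline length, object distance, and distance threshold are all assumed values):

```python
def horizontal_distance_difference(l_baseline, z_object):
    """D = L / Z: the horizontal distance difference between the
    biometric regions in the infrared and visible light images."""
    return l_baseline / z_object

def regions_match(l_baseline, z_object, distance_threshold):
    """Step 326 decision: the positions match when D stays below the
    set distance threshold."""
    return horizontal_distance_difference(l_baseline, z_object) < distance_threshold

# Hypothetical values: baseline L = 0.06 m, object distance Z = 0.5 m.
d = horizontal_distance_difference(0.06, 0.5)  # 0.12
print(regions_match(0.06, 0.5, distance_threshold=0.2))  # True -> living body
print(regions_match(0.06, 0.1, distance_threshold=0.2))  # False -> prosthesis
```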

通过上述过程,实现了基于区域位置的待检测对象的活体判定,有效地过滤属于假体的图像,有利于提高活体检测的准确率。Through the above process, the liveness determination of the object to be detected based on the regional position is realized, and the images belonging to the prosthesis are effectively filtered, which is conducive to improving the accuracy of liveness detection.

在一示例性实施例中,所述生物特征区域为人脸区域。In an exemplary embodiment, the biometric region is a face region.

相应地,步骤310之后,如上所述的方法还可以包括以下步骤:Accordingly, after step 310, the method described above may further include the following steps:

分别对所述红外图像和所述可见光图像进行人脸检测。Face detection is performed on the infrared image and the visible light image respectively.

如图16所示,人脸特征在图像中存在68个关键点,具体包括:眼睛在图像中的6个关键点43~48,嘴巴在图像中的20个关键点49~68等等。其中,上述关键点,在图像中由不同的坐标(x,y)进行唯一的表示。As shown in FIG16 , there are 68 key points of facial features in the image, including: 6 key points 43 to 48 of the eyes in the image, 20 key points 49 to 68 of the mouth in the image, etc. The above key points are uniquely represented by different coordinates (x, y) in the image.

基于此,本实施例中,人脸检测,通过人脸关键点模型实现。Based on this, in this embodiment, face detection is achieved through a face key point model.

人脸关键点模型,实质上为图像中的人脸特征构建了索引关系,以便于通过构建的索引关系能够从图像中定位得到特定人脸特征的关键点。The facial key point model essentially constructs an index relationship for the facial features in the image, so that the key points of specific facial features can be located from the image through the constructed index relationship.

具体地,对于待检测对象的图像,即红外图像或者可见光图像,输入至人脸关键点模型之后,人脸特征在图像中的关键点即进行了索引标记,如图16所示,眼睛在图像中的六个关键点所标记的索引为43~48,嘴巴在图像中的二十个关键点所标记的索引为49~68。Specifically, for the image of the object to be detected, that is, the infrared image or the visible light image, after it is input into the facial key point model, the key points of the facial features in the image are indexed and marked. As shown in Figure 16, the indexes marked by the six key points of the eyes in the image are 43 to 48, and the indexes marked by the twenty key points of the mouth in the image are 49 to 68.

同时,相应地存储进行了索引标记的关键点在图像中的坐标,以此为待检测对象的人脸特征构建了对应于图像的索引与坐标之间的索引关系。At the same time, the coordinates of the key points marked with indexes in the image are stored accordingly, thereby constructing an index relationship between the index corresponding to the image and the coordinates for the facial features of the object to be detected.

那么,基于索引关系,由索引便可得到待检测对象的人脸特征在图像中关键点的坐标,进而确定待检测对象的人脸特征在图像中的位置,即图像中的人脸特征区域。Then, based on the index relationship, the coordinates of the key points of the facial features of the object to be detected in the image can be obtained from the index, and then the position of the facial features of the object to be detected in the image, that is, the facial feature area in the image, can be determined.
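上述“索引-坐标”的索引关系可示意如下(纯属示意代码,关键点坐标为假设值;由眼睛的关键点索引43~48定位其在图像中的区域):The index-to-coordinate index relationship described above can be sketched as follows (illustrative only; the keypoint coordinates are hypothetical, and the eye's region in the image is located from its keypoint indices 43-48):

```python
# Hypothetical coordinates for a few of the 68 face keypoints; the
# index -> (x, y) mapping is the "index relationship" described above.
keypoints = {
    43: (210, 180), 44: (225, 175), 45: (240, 176),
    46: (252, 182), 47: (238, 190), 48: (223, 191),  # eye, indices 43-48
}

def feature_region(index_map, indices):
    """Locate a facial feature's region in the image as the bounding
    box of its keypoints, looked up through the index relationship."""
    xs = [index_map[i][0] for i in indices]
    ys = [index_map[i][1] for i in indices]
    return (min(xs), min(ys), max(xs), max(ys))

eye_box = feature_region(keypoints, range(43, 49))
print(eye_box)  # (210, 175, 252, 191)
```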

如果检测到所述红外图像中不包含人脸区域,和/或,所述可见光图像中不包含人脸区域,则判定所述待检测对象为假体,从而停止后续的活体检测,以提高活体检测的效率。If it is detected that the infrared image does not contain a human face region, and/or the visible light image does not contain a human face region, the object to be detected is determined to be a prosthesis, thereby stopping subsequent liveness detection to improve the efficiency of liveness detection.

反之,如果检测到所述红外图像中包含人脸区域、以及所述可见光图像中包含人脸区域,则判定所述待检测对象为活体,便可跳转执行步骤330。On the contrary, if it is detected that the infrared image contains a human face region and the visible light image contains a human face region, it is determined that the object to be detected is a living body, and the process jumps to step 330 .

由此,基于上述过程,实现了基于双目摄像头拍摄到的红外图像的待检测对象的活体判定,即过滤掉关于视频回放的假体攻击行为。Therefore, based on the above process, the liveness determination of the object to be detected based on the infrared image captured by the binocular camera is achieved, that is, the prosthetic attack behavior on the video playback is filtered out.


此外,基于人脸关键点模型进行人脸检测,对于不同面部表情的人脸特征识别,都有较好的准确性和稳定性,充分保证了活体检测的准确性。In addition, face detection based on the face key point model has good accuracy and stability for facial feature recognition of different facial expressions, which fully guarantees the accuracy of liveness detection.

在一示例性实施例中,步骤370之后,如上所述的方法还可以包括以下步骤:In an exemplary embodiment, after step 370, the method described above may further include the following steps:

如果所述待检测对象为活体,则调用人脸识别模型对所述待检测对象的图像进行人脸识别。If the object to be detected is a living body, a face recognition model is called to perform face recognition on the image of the object to be detected.

下面结合具体应用场景对人脸识别过程加以说明。The face recognition process is explained below in conjunction with specific application scenarios.

图1(a)是订单支付应用场景的实施环境的示意图。如图1(a)所示,该应用场景中,实施环境包括支付用户510、智能手机530和支付服务器550。Fig. 1(a) is a schematic diagram of an implementation environment of an order payment application scenario. As shown in Fig. 1(a), in this application scenario, the implementation environment includes a payment user 510, a smart phone 530 and a payment server 550.

针对某个待支付订单,支付用户510通过智能手机530所配置的双目摄像头进行刷脸,使得智能手机530获得支付用户510的图像,进而利用人脸识别模型对该图像进行人脸识别。For a certain order to be paid, the paying user 510 scans his face through the binocular camera configured by the smart phone 530, so that the smart phone 530 obtains the image of the paying user 510, and then uses the face recognition model to perform face recognition on the image.

待人脸识别模型提取出该图像的用户特征,以计算该用户特征与指定用户特征的相似度,若相似度大于相似阈值,则支付用户510通过身份验证。其中,指定用户特征是智能手机530通过人脸识别模型预先为支付用户510提取的。The face recognition model extracts the user features of the image to calculate the similarity between the user features and the specified user features. If the similarity is greater than the similarity threshold, the paying user 510 passes the identity authentication. The specified user features are pre-extracted for the paying user 510 by the smart phone 530 through the face recognition model.
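上述特征相似度比对可示意如下(纯属示意代码:此处以余弦相似度为例,特征向量与相似阈值均为假设值,实际人脸识别特征维度通常远高于此):The feature-similarity comparison above can be sketched as follows (illustrative only: cosine similarity is used here as an example, the feature vectors and similarity threshold are hypothetical, and real face-recognition features are typically of far higher dimension):

```python
import math

def cosine_similarity(u, v):
    """Similarity between an extracted user feature and the stored
    specified user feature."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 4-dimensional feature vectors for illustration only.
extracted = [0.12, 0.93, 0.30, 0.17]  # feature of the captured image
enrolled  = [0.10, 0.95, 0.28, 0.20]  # pre-extracted specified feature
SIMILARITY_THRESHOLD = 0.9
passed = cosine_similarity(extracted, enrolled) > SIMILARITY_THRESHOLD
print("identity verified" if passed else "verification failed")
```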

在支付用户510通过身份验证之后,智能手机530为待支付订单向支付服务器550发起订单支付请求,以此完成待支付订单的支付流程。After the paying user 510 passes the identity authentication, the smart phone 530 initiates an order payment request to the payment server 550 for the order to be paid, thereby completing the payment process of the order to be paid.

图1(b)是门禁应用场景的实施环境的示意图。如图1(b)所示,该实施环境包括接待设备610、识别服务器630和门禁控制设备650。FIG1( b ) is a schematic diagram of an implementation environment of a door access control application scenario. As shown in FIG1( b ), the implementation environment includes a reception device 610 , an identification server 630 , and a door access control device 650 .

其中,接待设备610上安装有双目摄像头,以对出入对象670进行人脸拍照,并将获得的出入对象670的图像发送至识别服务器630进行人脸识别。本应用场景中,出入对象670包括工作人员和来访人员。The reception device 610 is equipped with a binocular camera to take a face photo of the in-and-out object 670, and send the obtained image of the in-and-out object 670 to the recognition server 630 for face recognition. In this application scenario, the in-and-out object 670 includes staff and visitors.

识别服务器630通过人脸识别模型提取该图像的人员特征,以计算该人员特征与多个指定人员特征的相似度,得到相似度最大的指定人员特征,进而将相似度最大的指定人员特征所关联的人员身份识别为出入对象670的身份,以此完成出入对象670的身份识别。其中,指定人员特征是识别服务器630通过人脸识别模型预先为出入对象670提取的。The recognition server 630 extracts the person feature of the image through the face recognition model to calculate the similarity between the person feature and multiple designated person features, obtains the designated person feature with the greatest similarity, and then identifies the person identity associated with the designated person feature with the greatest similarity as the identity of the entry and exit object 670, thereby completing the identity recognition of the entry and exit object 670. The designated person feature is pre-extracted by the recognition server 630 for the entry and exit object 670 through the face recognition model.

待出入对象670的身份识别完成,识别服务器630为出入对象670向门禁控制设备650发送门禁授权指令,使得门禁控制设备650根据该门禁授权指令为出入对象670配置相应的门禁权限,进而使得出入对象670凭借该门禁权限控制指定工作区域的门禁道闸执行放行动作。After the identity recognition of the object 670 is completed, the identification server 630 sends a access authorization instruction to the access control device 650 for the object 670, so that the access control device 650 configures the corresponding access authority for the object 670 according to the access authorization instruction, and then the object 670 controls the access gate of the designated work area to perform the release action with the access authority.

当然,在不同的应用场景,可以根据实际应用需求进行灵活部署,例如,识别服务器630与门禁控制设备650可部署为同一个服务器,或者,接待设备610与门禁控制设备650部署于同一个服务器,本应用场景并未对此加以限定。Of course, in different application scenarios, flexible deployment can be performed according to actual application requirements. For example, the identification server 630 and the access control device 650 can be deployed as the same server, or the reception device 610 and the access control device 650 can be deployed on the same server. This application scenario does not limit this.

图1(c)是客运服务应用场景的实施环境的示意图。如图1(c)所示,该应用场景中,实施环境包括服务人员710、服务终端730和认证服务器750。本应用场景中,服务人员710为客运司机。Fig. 1(c) is a schematic diagram of the implementation environment of the passenger service application scenario. As shown in Fig. 1(c), in this application scenario, the implementation environment includes a service personnel 710, a service terminal 730 and an authentication server 750. In this application scenario, the service personnel 710 is a passenger driver.

其中,设置于车辆的服务终端730,安装有双目摄像头,以对服务人员710进行拍照,并将获得的服务人员710的图像发送至认证服务器750进行人脸识别。The service terminal 730 disposed in the vehicle is equipped with a binocular camera to take a photo of the service personnel 710 and send the obtained image of the service personnel 710 to the authentication server 750 for face recognition.

认证服务器750通过人脸识别模型提取出该图像的人员特征,以计算该人员特征与指定人员特征的相似度,若相似度大于相似阈值,则服务人员710通过身份认证。其中,指定人员特征是服务终端730通过人脸识别模型预先为服务人员710提取的。The authentication server 750 extracts the person features of the image through the face recognition model to calculate the similarity between the person features and the specified person features. If the similarity is greater than the similarity threshold, the service personnel 710 passes the identity authentication. The specified person features are pre-extracted for the service personnel 710 by the service terminal 730 through the face recognition model.

在服务人员710通过身份验证之后,服务终端730便可以向该服务人员710分发服务业务指令,使得该服务人员——客运司机便能够根据服务业务指令的指示到达目的地搭载乘客。After the service personnel 710 passes the identity authentication, the service terminal 730 can distribute service business instructions to the service personnel 710, so that the service personnel - the passenger driver can reach the destination and pick up passengers according to the instructions of the service business instructions.

在上述三种应用场景中,活体检测装置可作为人脸识别的前驱模块。In the above three application scenarios, the liveness detection device can be used as a precursor module for face recognition.

如图17所示,通过执行步骤801至步骤804,分别基于人脸检测、区域匹配检测、图像物理信息检测和深层语义特征检测,多次进行待检测对象的活体判定。As shown in FIG. 17 , by executing steps 801 to 804 , liveness determination of the object to be detected is performed multiple times based on face detection, region matching detection, image physical information detection, and deep semantic feature detection, respectively.

由此,活体检测装置便能够准确地判断待检测对象是否为活体,进而实现对各种不同类型的假体攻击的防御,不仅能够充分地保证身份验证/身份识别的安全性,而且还能够有效地减轻后期人脸识别的工作压力和流量压力,从而更好地为各种人脸识别任务提供便利。Therefore, the liveness detection device can accurately determine whether the object to be detected is alive, and then achieve defense against various types of prosthetic attacks. It can not only fully guarantee the security of identity authentication/identity recognition, but also effectively reduce the workload and traffic pressure of subsequent face recognition, thereby better facilitating various face recognition tasks.

下述为本发明装置实施例,可以用于执行本发明所涉及的活体检测方法。对于本发明装置实施例中未披露的细节,请参照本发明所涉及的活体检测方法的方法实施例。The following is an embodiment of the device of the present invention, which can be used to perform the liveness detection method involved in the present invention. For details not disclosed in the embodiment of the device of the present invention, please refer to the method embodiment of the liveness detection method involved in the present invention.

请参阅图18,在一示例性实施例中,一种活体检测装置900包括但不限于:图像获取模块910、图像物理信息提取模块930、深层语义特征提取模块950和对象活体判定模块970。Please refer to FIG. 18 . In an exemplary embodiment, a living body detection device 900 includes but is not limited to: an image acquisition module 910 , an image physical information extraction module 930 , a deep semantic feature extraction module 950 , and an object living body determination module 970 .

其中,图像获取模块910,用于获取待检测对象在双目摄像头下拍摄到的图像,所述图像包括红外图像和可见光图像。The image acquisition module 910 is used to acquire an image of the object to be detected captured by a binocular camera, wherein the image includes an infrared image and a visible light image.

图像物理信息提取模块930,用于对所述图像中的生物特征区域,进行图像物理信息提取,得到所述图像中生物特征区域的图像物理信息,所述生物特征区域用于指示所述待检测对象的生物特征在所述图像中的位置。The image physical information extraction module 930 is used to extract image physical information from the biometric feature area in the image to obtain image physical information of the biometric feature area in the image, where the biometric feature area is used to indicate the position of the biometric feature of the object to be detected in the image.

深层语义特征提取模块950,用于如果所述图像中生物特征区域的图像物理信息指示所述待检测对象为活体,则基于机器学习模型,对所述图像中的生物特征区域,进行深层语义特征提取,得到所述图像中生物特征区域的深层语义特征。The deep semantic feature extraction module 950 is used to extract deep semantic features of the biological feature area in the image based on a machine learning model if the image physical information of the biological feature area in the image indicates that the object to be detected is a living body, so as to obtain the deep semantic features of the biological feature area in the image.

对象活体判定模块970,用于根据所述图像中生物特征区域的深层语义特征,进行所述待检测对象的活体判定。The object liveness determination module 970 is used to perform liveness determination of the object to be detected based on the deep semantic features of the biological feature area in the image.

在一示例性实施例中,所述图像物理信息提取模块930包括但不限于:可见光图像获取单元和图像物理信息提取单元。In an exemplary embodiment, the image physical information extraction module 930 includes but is not limited to: a visible light image acquisition unit and an image physical information extraction unit.

其中,可见光图像获取单元,用于根据获取到的图像,获取包含所述生物特征区域的可见光图像。The visible light image acquisition unit is used to acquire a visible light image containing the biometric feature area based on the acquired image.

图像物理信息提取单元,用于从所述可见光图像中的生物特征区域,提取得到所述可见光图像中生物特征区域的图像物理信息。The image physical information extraction unit is used to extract the image physical information of the biometric feature area in the visible light image from the biometric feature area in the visible light image.

在一示例性实施例中,所述图像物理信息为颜色信息。In an exemplary embodiment, the physical image information is color information.

相应地,所述图像物理信息提取单元包括但不限于:颜色直方图计算子单元和颜色信息定义子单元。Accordingly, the image physical information extraction unit includes but is not limited to: a color histogram calculation subunit and a color information definition subunit.

其中,颜色直方图计算子单元,用于基于所述可见光图像中的生物特征区域,计算所述可见光图像中生物特征区域的颜色直方图。Wherein, the color histogram calculation subunit is used to calculate the color histogram of the biometric feature area in the visible light image based on the biometric feature area in the visible light image.

颜色信息定义子单元,用于将计算得到的颜色直方图,作为所述可见光图像中生物特征区域的颜色信息。The color information definition subunit is used to use the calculated color histogram as the color information of the biological feature area in the visible light image.
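颜色直方图的计算可示意如下(纯属示意代码:采用假设的每通道4级量化,像素数据亦为假设值):The computation of the color histogram can be sketched as follows (illustrative only: a hypothetical quantization of 4 levels per channel is used, and the pixel data are likewise assumed):

```python
def color_histogram(pixels, bins_per_channel=4):
    """Quantized RGB color histogram of a biometric region.

    pixels: iterable of (r, g, b) tuples with values in 0..255.
    Returns a flat list of bins_per_channel**3 normalized counts.
    """
    step = 256 // bins_per_channel
    hist = [0] * (bins_per_channel ** 3)
    n = 0
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) \
              * bins_per_channel + (b // step)
        hist[idx] += 1
        n += 1
    return [c / n for c in hist]

# Tiny hypothetical 2x2 region: two red pixels, two near-gray pixels.
region = [(250, 10, 10), (245, 5, 8), (128, 128, 128), (130, 126, 129)]
hist = color_histogram(region)
print(sum(hist))  # 1.0 -- the histogram is normalized
```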

在一示例性实施例中,所述图像物理信息为纹理信息。In an exemplary embodiment, the image physical information is texture information.

相应地,所述图像物理信息提取单元包括但不限于:颜色空间创建子单元、局部特征提取子单元和纹理信息定义子单元。Accordingly, the image physical information extraction unit includes but is not limited to: a color space creation subunit, a local feature extraction subunit and a texture information definition subunit.

其中,颜色空间创建子单元,用于基于所述可见光图像中的生物特征区域,创建对应于所述可见光图像中生物特征区域的颜色空间。The color space creation subunit is used to create a color space corresponding to the biometric feature area in the visible light image based on the biometric feature area in the visible light image.

局部特征提取子单元,用于针对所述颜色空间,在空域提取局部二值模式特征,和/或,在频域提取局部相位量化特征。The local feature extraction subunit is used to extract local binary pattern features in the spatial domain and/or extract local phase quantization features in the frequency domain for the color space.

纹理信息定义子单元,用于根据所述局部二值模式特征,和/或,所述局部相位量化特征,生成LBP/LPQ直方图,作为所述可见光图像中生物特征区域的纹理信息。The texture information definition subunit is used to generate an LBP/LPQ histogram as texture information of the biological feature area in the visible light image according to the local binary pattern feature and/or the local phase quantization feature.
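空域局部二值模式(LBP)特征及其直方图的提取可示意如下(纯属示意代码:仅实现基本的3×3八邻域LBP,图像数据为假设值):The extraction of the spatial-domain local binary pattern (LBP) feature and its histogram can be sketched as follows (illustrative only: only the basic 3x3 eight-neighbour LBP is implemented, and the image data are assumed):

```python
def lbp_image(gray, width, height):
    """8-neighbour local binary pattern codes for interior pixels of a
    grayscale image given as a row-major list of intensities."""
    def px(x, y):
        return gray[y * width + x]
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    codes = []
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            center = px(x, y)
            code = 0
            for bit, (dx, dy) in enumerate(offsets):
                if px(x + dx, y + dy) >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_histogram(codes):
    """Normalized 256-bin histogram of LBP codes: the texture feature."""
    hist = [0] * 256
    for c in codes:
        hist[c] += 1
    n = len(codes)
    return [v / n for v in hist]

# Hypothetical 3x3 patch: one interior pixel, all neighbours brighter.
patch = [200, 200, 200,
         200,  50, 200,
         200, 200, 200]
print(lbp_image(patch, 3, 3))  # [255] -- every neighbour sets its bit
```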

在一示例性实施例中,所述颜色空间创建子单元包括但不限于:第一参数获取子单元和第一构建子单元。In an exemplary embodiment, the color space creation subunit includes but is not limited to: a first parameter acquisition subunit and a first construction subunit.

其中,第一参数获取子单元,用于基于所述可见光图像中的生物特征区域,获取对应于所述可见光图像中生物特征区域的HSV参数,所述HSV参数包括色调(H)、饱和度(S)和明亮度(V)。Among them, the first parameter acquisition subunit is used to acquire HSV parameters corresponding to the biometric feature area in the visible light image based on the biometric feature area in the visible light image, and the HSV parameters include hue (H), saturation (S) and brightness (V).

第一构建子单元,用于根据获取到的HSV参数构建HSV模型,作为对应于所述可见光图像中生物特征区域的颜色空间。The first construction subunit is used to construct an HSV model according to the acquired HSV parameters as a color space corresponding to the biological feature area in the visible light image.
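HSV参数的获取可示意如下(纯属示意代码,借助Python标准库colorsys将RGB像素转换为色调、饱和度、明亮度三参数,像素值为假设值):The acquisition of the HSV parameters can be sketched as follows (illustrative only; the Python standard-library module colorsys converts RGB pixels into the hue, saturation, and value parameters, and the pixel values are assumed):

```python
import colorsys

def region_to_hsv(pixels):
    """Convert a biometric region's RGB pixels (0..255) into HSV
    triples (h, s, v each in 0..1), i.e. the hue (H), saturation (S)
    and brightness (V) parameters of the HSV color space."""
    return [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            for r, g, b in pixels]

# Pure red: hue 0.0, full saturation, full value.
h, s, v = region_to_hsv([(255, 0, 0)])[0]
print(h, s, v)  # 0.0 1.0 1.0
```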

在一示例性实施例中,所述颜色空间创建子单元包括但不限于:第二参数获取子单元和第二构建子单元。In an exemplary embodiment, the color space creation subunit includes but is not limited to: a second parameter acquisition subunit and a second construction subunit.

其中,第二参数获取子单元,用于基于所述可见光图像中的生物特征区域,获取对应于所述可见光图像中生物特征区域的YCbCr参数,所述YCbCr参数包括颜色的明亮度和浓度(Y)、蓝色浓度偏移量(Cb)和红色浓度偏移量(Cr)。Among them, the second parameter acquisition subunit is used to obtain YCbCr parameters corresponding to the biometric feature area in the visible light image based on the biometric feature area in the visible light image, and the YCbCr parameters include color brightness and concentration (Y), blue concentration offset (Cb) and red concentration offset (Cr).

第二构建子单元,用于根据获取到的YCbCr参数,构建对应于所述可见光图像中生物特征区域的颜色空间。The second construction subunit is used to construct a color space corresponding to the biological feature area in the visible light image according to the acquired YCbCr parameters.
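YCbCr参数的获取可示意如下(纯属示意代码,采用ITU-R BT.601全范围转换公式,像素值为假设值):The acquisition of the YCbCr parameters can be sketched as follows (illustrative only; the ITU-R BT.601 full-range conversion formula is used, and the pixel values are assumed):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range conversion from RGB (0..255) to the
    YCbCr parameters: luma Y plus the blue and red concentration
    offsets Cb and Cr."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# A mid-gray pixel carries zero chroma: Cb and Cr sit at the neutral
# offset 128, while Y equals the gray level itself.
print(rgb_to_ycbcr(128, 128, 128))
```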

在一示例性实施例中,所述装置900还包括但不限于:第二对象活体判定模块。In an exemplary embodiment, the device 900 further includes but is not limited to: a second object living body determination module.

其中,第二对象活体判定模块,用于根据所述可见光图像中生物特征区域的图像物理信息,进行所述待检测对象的活体判定。The second object liveness determination module is used to perform liveness determination on the object to be detected based on the image physical information of the biological feature area in the visible light image.

在一示例性实施例中,所述第二对象活体判定模块包括但不限于:颜色类别预测单元、第一对象活体判定单元和第二对象活体判定单元。In an exemplary embodiment, the second object living body determination module includes, but is not limited to: a color category prediction unit, a first object living body determination unit, and a second object living body determination unit.

其中,颜色类别预测单元,用于将所述可见光图像中生物特征区域的图像物理信息输入支持向量机分类器,对所述可见光图像进行颜色类别预测,得到所述可见光图像的颜色类别。The color category prediction unit is used to input the image physical information of the biometric feature area in the visible light image into the support vector machine classifier, perform color category prediction on the visible light image, and obtain the color category of the visible light image.

第一对象活体判定单元,用于如果所述可见光图像的颜色类别指示所述可见光图像为黑白图像,则判定所述待检测对象为假体。The first object living body determination unit is configured to determine that the object to be detected is a prosthesis if the color category of the visible light image indicates that the visible light image is a black and white image.

第二对象活体判定单元,用于如果所述可见光图像的颜色类别指示所述可见光图像为彩色图像,则判定所述待检测对象为活体。The second object living body determination unit is configured to determine that the object to be detected is a living body if the color category of the visible light image indicates that the visible light image is a color image.
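黑白/彩色类别的判定思路可示意如下(纯属示意代码:以“三通道近乎相等”的启发式规则代替文中的支持向量机分类器,阈值与像素数据均为假设值):The black-and-white versus color category decision can be sketched as follows (illustrative only: a "three channels nearly equal" heuristic stands in for the support vector machine classifier described above, and the thresholds and pixel data are assumed):

```python
def looks_black_and_white(pixels, tolerance=8, fraction=0.95):
    """Simplified stand-in for the SVM color-category prediction: a
    region is treated as black-and-white when almost all pixels have
    nearly equal R, G and B channels."""
    near_gray = sum(1 for r, g, b in pixels
                    if max(r, g, b) - min(r, g, b) <= tolerance)
    return near_gray / len(pixels) >= fraction

gray_region  = [(120, 122, 119), (60, 60, 58), (200, 201, 200)]
color_region = [(210, 40, 30), (30, 190, 60), (20, 40, 200)]
print(looks_black_and_white(gray_region))   # True  -> judged prosthesis
print(looks_black_and_white(color_region))  # False -> judged living body
```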

在一示例性实施例中,所述机器学习模型为深度神经网络模型,所述深度神经网络模型包括输入层、卷积层、连接层和输出层。In an exemplary embodiment, the machine learning model is a deep neural network model, which includes an input layer, a convolutional layer, a connection layer and an output layer.

相应地,所述深层语义特征提取模块950包括但不限于:图像输入单元、特征提取单元和特征融合单元。Accordingly, the deep semantic feature extraction module 950 includes but is not limited to: an image input unit, a feature extraction unit and a feature fusion unit.

其中,图像输入单元,用于从所述深度神经网络模型中的输入层,将所述图像输入至所述卷积层。Among them, the image input unit is used to input the image into the convolution layer from the input layer in the deep neural network model.

特征提取单元,用于利用所述卷积层进行特征提取,得到所述图像中生物特征区域的浅层语义特征,并输入至所述连接层。The feature extraction unit is used to perform feature extraction using the convolution layer to obtain shallow semantic features of the biological feature area in the image and input the shallow semantic features into the connection layer.

特征融合单元,用于利用所述连接层进行特征融合,得到所述图像中生物特征区域的深层语义特征,并输入至所述输出层。The feature fusion unit is used to perform feature fusion using the connection layer to obtain deep semantic features of the biological feature area in the image and input the deep semantic features into the output layer.

在一示例性实施例中,所述对象活体判定模块970包括但不限于:分类预测单元和第三对象活体判定单元。In an exemplary embodiment, the object living body determination module 970 includes but is not limited to: a classification prediction unit and a third object living body determination unit.

其中,分类预测单元,用于利用所述输出层中的激活函数分类器,对所述图像进行分类预测。The classification prediction unit is used to perform classification prediction on the image using the activation function classifier in the output layer.

第三对象活体判定单元,用于根据所述图像预测到的类别,判断所述待检测对象是否为活体。The third object living body determination unit is used to determine whether the object to be detected is a living body according to the category predicted by the image.

在一示例性实施例中,所述装置900还包括但不限于:区域位置匹配模块和第四对象活体判定模块。In an exemplary embodiment, the device 900 further includes but is not limited to: a region position matching module and a fourth object living body determination module.

其中,区域位置匹配模块,用于在所述红外图像中的生物特征区域与所述可见光图像中的生物特征区域之间,进行区域位置匹配。The region position matching module is used to perform region position matching between the biometric feature region in the infrared image and the biometric feature region in the visible light image.

第四对象活体判定模块,用于如果区域位置不匹配,则判定所述待检测对象为假体。The fourth object living body determination module is used to determine that the object to be detected is a prosthesis if the area positions do not match.

在一示例性实施例中,所述区域位置匹配模块包括但不限于:区域位置检测单元、相关系数计算单元和第五对象活体判定单元。In an exemplary embodiment, the area position matching module includes but is not limited to: an area position detection unit, a correlation coefficient calculation unit and a fifth object living body determination unit.

其中,区域位置检测单元,用于分别对所述红外图像中的生物特征区域和所述可见光图像中的生物特征区域,进行区域位置检测,得到对应于所述红外图像中生物特征区域的第一区域位置、和对应于所述可见光图像中生物特征区域的第二区域位置。Among them, the area position detection unit is used to perform area position detection on the biometric area in the infrared image and the biometric area in the visible light image, respectively, to obtain a first area position corresponding to the biometric area in the infrared image and a second area position corresponding to the biometric area in the visible light image.

相关系数计算单元,用于计算所述第一区域位置与所述第二区域位置的相关系数。A correlation coefficient calculation unit is used to calculate the correlation coefficient between the first area position and the second area position.

第五对象活体判定单元,用于如果所述相关系数超过设定相关阈值,则判定所述区域位置匹配。The fifth object living body determination unit is configured to determine that the area positions match if the correlation coefficient exceeds a set correlation threshold.

在一示例性实施例中,所述区域位置匹配模块包括但不限于:第一水平距离确定单元、第二水平距离确定单元、水平距离差获取单元和第六对象活体判定单元。In an exemplary embodiment, the area position matching module includes but is not limited to: a first horizontal distance determining unit, a second horizontal distance determining unit, a horizontal distance difference acquiring unit and a sixth object living body determining unit.

其中,第一水平距离确定单元,用于确定所述待检测对象与所述双目摄像头所在竖直平面之间的第一水平距离。Wherein, the first horizontal distance determination unit is used to determine a first horizontal distance between the object to be detected and the vertical plane where the binocular camera is located.

第二水平距离确定单元,用于基于所述双目摄像头中的红外摄像头和可见光摄像头,获取所述红外摄像头与所述可见光摄像头之间的第二水平距离。The second horizontal distance determining unit is used to obtain a second horizontal distance between the infrared camera and the visible light camera based on the infrared camera and the visible light camera in the binocular camera.

水平距离差获取单元,用于根据所述第一水平距离和所述第二水平距离,获取所述红外图像中的生物特征区域与所述可见光图像中的生物特征区域之间的水平距离差。A horizontal distance difference acquisition unit is used to acquire a horizontal distance difference between the biometric feature area in the infrared image and the biometric feature area in the visible light image according to the first horizontal distance and the second horizontal distance.

第六对象活体判定单元,用于如果所述水平距离差超过设定距离阈值,则判定所述区域位置不匹配。The sixth object living body determination unit is configured to determine that the area positions do not match if the horizontal distance difference exceeds a set distance threshold.

在一示例性实施例中,所述生物特征区域为人脸区域。In an exemplary embodiment, the biometric region is a face region.

相应地,所述装置900还包括但不限于:人脸检测模块和第七对象活体判定模块。Accordingly, the device 900 further includes but is not limited to: a face detection module and a seventh object living body determination module.

其中,人脸检测模块,用于分别对所述红外图像和所述可见光图像进行人脸检测。Wherein, the face detection module is used to perform face detection on the infrared image and the visible light image respectively.

第七对象活体判定模块,用于如果检测到所述红外图像中不包含人脸区域,和/或,所述可见光图像中不包含人脸区域,则判定所述待检测对象为假体。The seventh object living body determination module is used to determine that the object to be detected is a prosthesis if it is detected that the infrared image does not contain a human face area and/or the visible light image does not contain a human face area.

需要说明的是,上述实施例所提供的活体检测装置在进行活体检测处理时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即活体检测装置的内部结构将划分为不同的功能模块,以完成以上描述的全部或者部分功能。It should be noted that, when the liveness detection device provided in the above embodiment performs liveness detection processing, only the division of the above-mentioned functional modules is used as an example. In actual applications, the above-mentioned functions can be assigned to different functional modules as needed, that is, the internal structure of the liveness detection device will be divided into different functional modules to complete all or part of the functions described above.

另外,上述实施例所提供的活体检测装置与活体检测方法的实施例属于同一构思,其中各个模块执行操作的具体方式已经在方法实施例中进行了详细描述,此处不再赘述。In addition, the embodiments of the liveness detection device and the liveness detection method provided in the above embodiments belong to the same concept, and the specific manner in which each module performs the operation has been described in detail in the method embodiment, which will not be repeated here.

In an exemplary embodiment, an access control system applying the liveness detection method includes a reception device, an identification server, and an access control device.

The reception device is configured to capture images of entering and exiting subjects using a binocular camera, the images including an infrared image and a visible light image.

The identification server includes a liveness detection device configured to perform image physical information extraction and deep semantic feature extraction on the biometric region in the image of the entering or exiting subject, and to determine whether the subject is a living body based on the extracted image physical information and deep semantic features.

When the subject is a living body, the identification server performs identity recognition on the subject, so that the access control device configures access permissions for subjects whose identity has been successfully recognized. The subject can then, according to the configured permissions, cause the access barrier of the designated area to perform a release action.
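The access-control flow just described can be sketched as a short pipeline. All four callables here are hypothetical stand-ins for the components named in the text (the liveness detection device, the identification server, and the access control device):

```python
def handle_entry(image_pair, liveness_detector, recognizer, access_controller):
    """Sketch of the door-access flow: the reception device captures an
    infrared/visible image pair, the identification server runs liveness
    detection and then identity recognition, and the access controller
    configures the permission so the barrier can perform a release action.
    """
    if not liveness_detector(image_pair):
        return "rejected: not a living body"
    identity = recognizer(image_pair)
    if identity is None:
        return "rejected: identification failed"
    access_controller(identity)  # configure the access permission
    return f"released for {identity}"

# Toy components standing in for the real devices:
result = handle_entry(
    image_pair=("ir.png", "vis.png"),
    liveness_detector=lambda pair: True,
    recognizer=lambda pair: "subject-42",
    access_controller=lambda ident: None,
)
print(result)  # released for subject-42
```

The same skeleton applies to the payment and service systems below; only the post-liveness step (payment request, identity authentication) changes.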

In an exemplary embodiment, a payment system applying the liveness detection method includes a payment terminal and a payment server.

The payment terminal is configured to capture images of the paying user using a binocular camera, the images including an infrared image and a visible light image.

The payment terminal includes a liveness detection device configured to perform image physical information extraction and deep semantic feature extraction on the biometric region in the image of the paying user, and to determine whether the paying user is a living body based on the extracted image physical information and deep semantic features.

When the paying user is a living body, the payment terminal performs identity verification on the paying user and, when the user passes identity verification, initiates a payment request to the payment server.

In an exemplary embodiment, a service system applying the liveness detection method includes a service terminal and an authentication server.

The service terminal is configured to capture images of service personnel using a binocular camera, the images including an infrared image and a visible light image.

The service terminal includes a liveness detection device configured to perform image physical information extraction and deep semantic feature extraction on the biometric region in the image of the service personnel, and to determine whether the service personnel are living bodies based on the extracted image physical information and deep semantic features.

When the service personnel are living bodies, the service terminal requests the authentication server to authenticate their identity, and distributes service instructions to service personnel who pass identity authentication.

Referring to FIG. 19, in an exemplary embodiment, an electronic device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication bus 1003.

The memory 1002 stores computer-readable instructions, and the processor 1001 reads the computer-readable instructions stored in the memory 1002 through the communication bus 1003.

When executed by the processor 1001, the computer-readable instructions implement the liveness detection method of the above embodiments.

In an exemplary embodiment, a storage medium stores a computer program which, when executed by a processor, implements the liveness detection method of the above embodiments.

The above content describes only preferred exemplary embodiments of the present invention and is not intended to limit its implementation. A person of ordinary skill in the art can readily make corresponding variations or modifications based on the main concept and spirit of the present invention; therefore, the protection scope of the present invention shall be that defined by the claims.

Claims (8)

1. A living body detection method, characterized by comprising:
acquiring an image of an object to be detected captured by a binocular camera, wherein the image comprises an infrared image and a visible light image;
acquiring a visible light image containing a biological feature region from the acquired image;
calculating a color histogram of the biological feature region in the visible light image based on the biological feature region in the visible light image;
taking the calculated color histogram as color information of the biological feature region in the visible light image, wherein the biological feature region is used for indicating the position of the biological feature of the object to be detected in the image;
inputting the color information of the biological feature region in the visible light image into a support vector machine classifier and performing color category prediction on the visible light image to obtain the color category of the visible light image;
if the color category of the visible light image indicates that the visible light image is a black-and-white image, determining that the object to be detected is a prosthesis;
if the color category of the visible light image indicates that the visible light image is a color image, determining that the object to be detected is a living body;
if the color information of the biological feature region in the image indicates that the object to be detected is a living body, performing deep semantic feature extraction on the biological feature region in the image based on a machine learning model to obtain deep semantic features of the biological feature region in the image;
and performing living body determination of the object to be detected according to the deep semantic features of the biological feature region in the image.
2. The method of claim 1, wherein the machine learning model is a deep neural network model comprising an input layer, a convolutional layer, a connection layer, and an output layer;
the performing deep semantic feature extraction on the biological feature region in the image based on the machine learning model to obtain deep semantic features of the biological feature region in the image comprises:
inputting the image from the input layer of the deep neural network model to the convolutional layer;
performing feature extraction with the convolutional layer to obtain shallow semantic features of the biological feature region in the image, and inputting the shallow semantic features into the connection layer;
and performing feature fusion with the connection layer to obtain deep semantic features of the biological feature region in the image, and inputting the deep semantic features into the output layer.
3. The method according to claim 2, wherein the performing living body determination of the object to be detected according to the deep semantic features of the biological feature region in the image comprises:
performing classification prediction on the image using an activation function classifier in the output layer;
and determining whether the object to be detected is a living body according to the category predicted for the image.
4. The method of claim 1, wherein after acquiring the image of the object to be detected captured by the binocular camera, the method further comprises:
performing region position matching between the biological feature region in the infrared image and the biological feature region in the visible light image;
and if the region positions do not match, determining that the object to be detected is a prosthesis.
5. A living body detection device, characterized by comprising:
an image acquisition module, configured to acquire an image of an object to be detected captured by a binocular camera, wherein the image comprises an infrared image and a visible light image;
an image physical information extraction module, configured to perform image physical information extraction on the biological feature region in the image to obtain image physical information of the biological feature region in the image, wherein the biological feature region is used for indicating the position of the biological feature of the object to be detected in the image;
a deep semantic feature extraction module, configured to, if the image physical information of the biological feature region in the image indicates that the object to be detected is a living body, perform deep semantic feature extraction on the biological feature region in the image based on a machine learning model to obtain deep semantic features of the biological feature region in the image;
an object living body determination module, configured to perform living body determination of the object to be detected according to the deep semantic features of the biological feature region in the image;
wherein the image physical information is color information, and the image physical information extraction module comprises:
a visible light image acquisition unit, configured to acquire a visible light image containing the biometric region from the acquired image;
a color histogram calculation subunit, configured to calculate a color histogram of the biological feature region in the visible light image based on the biological feature region in the visible light image;
a color information definition subunit, configured to take the calculated color histogram as color information of the biometric region in the visible light image;
the device further comprising:
a color category prediction unit, configured to input the color information of the biometric region in the visible light image into a support vector machine classifier and perform color category prediction on the visible light image to obtain the color category of the visible light image;
a first object living body determination unit, configured to determine that the object to be detected is a prosthesis if the color category of the visible light image indicates that the visible light image is a black-and-white image;
a second object living body determination unit, configured to determine that the object to be detected is a living body if the color category of the visible light image indicates that the visible light image is a color image.
6. A service system applying a living body detection method, characterized in that the service system comprises a service terminal and an authentication server, wherein
the service terminal is configured to capture an image of service personnel using a binocular camera, the image comprising an infrared image and a visible light image;
the service terminal comprises the living body detection device according to claim 5, for determining whether the service personnel are living bodies based on the image of the service personnel;
and when the service personnel are living bodies, the service terminal requests the authentication server to authenticate the identity of the service personnel, and distributes service instructions to service personnel who pass identity authentication.
7. An electronic device, comprising:
a processor; and
a memory having computer-readable instructions stored thereon which, when executed by the processor, implement the living body detection method of any one of claims 1 to 4.
8. A storage medium having a computer program stored thereon which, when executed by a processor, implements the living body detection method according to any one of claims 1 to 4.
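The color pre-check described in claims 1 and 5 (compute a color histogram of the biometric region as its color information, predict the image's color category, and reject black-and-white images as prostheses) can be sketched in Python. The claims specify a support vector machine classifier for the category prediction; to keep this sketch self-contained it substitutes a simple channel-equality heuristic, and all names here are illustrative, not from the patent:

```python
def color_histogram(pixels, bins=4):
    """Per-channel color histogram of an iterable of (r, g, b) pixels,
    with channel values in 0..255 -- the 'color information' of claim 1."""
    hist = [[0] * bins for _ in range(3)]
    for pixel in pixels:
        for channel, value in enumerate(pixel):
            hist[channel][min(value * bins // 256, bins - 1)] += 1
    return hist

def looks_black_and_white(pixels, tolerance=8):
    """Stand-in for the SVM color-category prediction: in a black-and-white
    recapture every pixel has (nearly) equal R, G and B values."""
    return all(max(p) - min(p) <= tolerance for p in pixels)

def color_precheck(pixels):
    """Claim 1 decision: a black-and-white image means prosthesis; a color
    image means provisionally a living body, after which deep semantic
    features are extracted for the final judgement."""
    if looks_black_and_white(pixels):
        return "prosthesis"
    return "living body (continue to deep features)"
```

The design point is that the cheap color check runs first, so a classic attack with a black-and-white photo never reaches the more expensive deep-network stage.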
CN201910217452.8A 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method Active CN110163078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910217452.8A CN110163078B (en) 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method


Publications (2)

Publication Number Publication Date
CN110163078A CN110163078A (en) 2019-08-23
CN110163078B true CN110163078B (en) 2024-08-02

Family

ID=67638988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910217452.8A Active CN110163078B (en) 2019-03-21 2019-03-21 Living body detection method, living body detection device and service system applying living body detection method

Country Status (1)

Country Link
CN (1) CN110163078B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532957B (en) * 2019-08-30 2021-05-07 北京市商汤科技开发有限公司 Face recognition method and device, electronic equipment and storage medium
CN110555930B (en) * 2019-08-30 2021-03-26 北京市商汤科技开发有限公司 Door lock control method and device, electronic equipment and storage medium
CN110519061A (en) * 2019-09-02 2019-11-29 国网电子商务有限公司 A kind of identity identifying method based on biological characteristic, equipment and system
CN110781770B (en) * 2019-10-08 2022-05-06 高新兴科技集团股份有限公司 Living body detection method, device and equipment based on face recognition
CN112651268B (en) * 2019-10-11 2024-05-28 北京眼神智能科技有限公司 Method, device and electronic device for excluding black and white photos in liveness detection
CN112883762A (en) * 2019-11-29 2021-06-01 广州慧睿思通科技股份有限公司 Living body detection method, device, system and storage medium
CN111191527B (en) * 2019-12-16 2024-03-12 北京迈格威科技有限公司 Attribute identification method, attribute identification device, electronic equipment and readable storage medium
CN111222425A (en) * 2019-12-26 2020-06-02 新绎健康科技有限公司 Method and device for positioning facial features
CN111160299A (en) * 2019-12-31 2020-05-15 上海依图网络科技有限公司 Living body identification method and device
CN111209870A (en) * 2020-01-09 2020-05-29 杭州涂鸦信息技术有限公司 Binocular living body camera rapid registration method, system and device thereof
CN111401258B (en) * 2020-03-18 2024-01-30 腾讯科技(深圳)有限公司 Living body detection method and device based on artificial intelligence
CN111582045B (en) * 2020-04-15 2024-05-10 芯算一体(深圳)科技有限公司 Living body detection method and device and electronic equipment
CN111582155B (en) * 2020-05-07 2024-02-09 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN111666878B (en) * 2020-06-05 2023-08-29 腾讯科技(深圳)有限公司 Object detection method and device
CN111951243A (en) * 2020-08-11 2020-11-17 华北电力科学研究院有限责任公司 Monitoring method and device for linear variable differential transformer
CN112345080A (en) * 2020-10-30 2021-02-09 华北电力科学研究院有限责任公司 Temperature monitoring method and system for linear variable differential transformer of thermal power generating unit
CN114627518A (en) * 2020-12-14 2022-06-14 阿里巴巴集团控股有限公司 Data processing method, data processing device, computer readable storage medium and processor
CN112801057B (en) * 2021-04-02 2021-07-13 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113422982B (en) * 2021-08-23 2021-12-14 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372601A (en) * 2016-08-31 2017-02-01 上海依图网络科技有限公司 In vivo detection method based on infrared visible binocular image and device
CN107862299A (en) * 2017-11-28 2018-03-30 电子科技大学 A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2009107237A1 (en) * 2008-02-29 2011-06-30 グローリー株式会社 Biometric authentication device
CN108985134B (en) * 2017-06-01 2021-04-16 重庆中科云从科技有限公司 Face living body detection and face brushing transaction method and system based on binocular camera
CN108764071B (en) * 2018-05-11 2021-11-12 四川大学 Real face detection method and device based on infrared and visible light images
CN108875618A (en) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 A kind of human face in-vivo detection method, system and device
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109492551B (en) * 2018-10-25 2023-03-24 腾讯科技(深圳)有限公司 Living body detection method and device and related system applying living body detection method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TG01 Patent term adjustment