CN111144365A - Living body detection method, living body detection device, computer equipment and storage medium - Google Patents
- Publication number
- CN111144365A CN111144365A CN201911412081.5A CN201911412081A CN111144365A CN 111144365 A CN111144365 A CN 111144365A CN 201911412081 A CN201911412081 A CN 201911412081A CN 111144365 A CN111144365 A CN 111144365A
- Authority
- CN
- China
- Legal status: Withdrawn (the status is an assumption and is not a legal conclusion)
Classifications
- G06V 40/45 — Detection of the body part being alive (under G06V 40/40, spoof detection, e.g. liveness detection)
- G06V 10/40 — Extraction of image or video features
- G06V 10/50 — Feature extraction within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]
- G06V 10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
Abstract
The application discloses a living body detection method and apparatus, a computer device, and a storage medium, belonging to the technical field of face recognition. The method comprises: receiving a living body detection instruction; performing living body detection on a plurality of face images of a target face to obtain a detection result for each face image; and determining the living body detection result of the target face according to the obtained detection results. During detection, the user is not required to perform any facial actions: the result is determined from the captured face images alone, which saves the time spent interacting with the user and improves detection efficiency. In addition, because the face images are captured with different shooting parameters, the accuracy of the living body detection is improved.
Description
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a method and an apparatus for detecting a living body, a computer device, and a storage medium.
Background
With the development of computer technology, face recognition is applied ever more widely. To improve its security, the user's face is often subjected to living body detection during face recognition, verifying that it is a real, live face, so as to defeat attacks such as presented photos and videos and to screen out fraudulent behavior.
At present, living body detection requires the user to perform multiple facial actions, such as blinking or opening the mouth, and whether the face is a living face is determined from a video of the user performing these actions.
Disclosure of Invention
The embodiments of the application provide a living body detection method and apparatus, a computer device, and a storage medium, which address the problem that requiring the user to perform multiple facial actions makes the operation cumbersome. The technical scheme is as follows:
In one aspect, a living body detection method is provided, the method comprising:
receiving a living body detection instruction;
performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images, wherein the face images are captured with different shooting parameters, and the shooting parameters comprise at least one of a focal length value of a camera, an illumination intensity of light emitted by the camera, and an illumination color of the light emitted by the camera;
and determining the living body detection result of the target face according to the obtained detection result.
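The three claimed steps can be sketched as follows; every function name here is a hypothetical stand-in for illustration, since the patent does not name any concrete interfaces.

```python
def living_body_detection(receive_instruction, capture_images, detect, decide):
    """Top-level sketch of the claimed method. `receive_instruction`,
    `capture_images`, `detect`, and `decide` are illustrative callables,
    not APIs from the patent."""
    instruction = receive_instruction()
    # Face images are shot with different parameters: focal length,
    # illumination intensity, or illumination color of the emitted light.
    images = capture_images(instruction)
    results = [detect(img) for img in images]
    return decide(results)
```

With stub callables, `living_body_detection(lambda: "start", lambda ins: ["a", "b"], lambda img: True, all)` collects one per-image result per shot and passes the list to the decision function.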
In a possible implementation manner, the performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images includes:
when the focal length values used for the plurality of face images differ, performing living body detection on the plurality of face images based on a first living body detection model to obtain the detection results corresponding to the plurality of face images, wherein the first living body detection model is used for performing living body detection according to edge features in the face images.
In another possible implementation manner, the performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images includes:
when the illumination intensities used for the plurality of face images differ, or when the illumination colors of the light used for the plurality of face images differ, performing living body detection on the plurality of face images based on a second living body detection model to obtain the detection results corresponding to the plurality of face images, wherein the second living body detection model is used for performing living body detection according to light reflection features in the face images.
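The two implementations above pair each varied shooting parameter with a different model. A minimal selection sketch, assuming per-image parameters are available as dicts (the key name `focal_length` is an assumption, not from the patent):

```python
def select_model(shot_params, edge_model, reflection_model):
    """Choose a detection model as the implementations above describe:
    images differing in focal length go to the edge-feature model;
    images differing in illumination intensity or color go to the
    light-reflection model. `shot_params` is an assumed list of
    per-image parameter dicts."""
    if len({p["focal_length"] for p in shot_params}) > 1:
        return edge_model
    return reflection_model
```

For example, shots taken at 35 mm and 50 mm would select the edge-feature model, while shots at a fixed focal length under varied lighting would select the reflection model.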
In another possible implementation manner, the performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images includes:
shooting the target face according to a first shooting parameter to obtain a face image corresponding to the first shooting parameter;
performing living body detection on the face image to obtain a detection result corresponding to the face image;
and adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter, to obtain a detection result corresponding to the face image corresponding to the second shooting parameter.
In another possible implementation manner, the adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter includes:
if the detection result corresponding to the face image corresponding to the first shooting parameter is a first detection result, adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter;
wherein the first detection result indicates that the target face is a living body face.
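The shoot-detect-adjust loop above can be sketched as follows; shooting continues through the adjusted parameters only while each image is judged live (the "first detection result"), and a non-live result ends the loop early. All argument names are illustrative assumptions.

```python
def shoot_and_detect(shoot, detect, parameter_sequence):
    """Sketch of the iterative shoot/detect/adjust process described
    above. `shoot`, `detect`, and `parameter_sequence` are hypothetical
    stand-ins for the camera, the detection model, and the sequence of
    adjusted shooting parameters."""
    results = []
    for params in parameter_sequence:
        image = shoot(params)
        is_live = detect(image)
        results.append(is_live)
        if not is_live:  # second detection result: stop adjusting
            break
    return results
```

A design note: stopping at the first non-live result matches the aggregation rule later in the claims that any single non-live result can decide the outcome, so further shots would be wasted work.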
In another possible implementation manner, the shooting parameter includes a focal length value of the camera, and the adjusting the first shooting parameter includes:
reducing the first focal length value of the camera to obtain a reduced second focal length value;
after the step of continuously shooting the target face and performing living body detection according to the adjusted second shooting parameter and obtaining a detection result corresponding to the face image corresponding to the second shooting parameter, the method further includes:
and when the reduced focal length value is the minimum focal length value of the camera, stopping reducing the focal length value of the camera.
In another possible implementation manner, the shooting parameter includes an illumination intensity of light emitted by the camera, and the adjusting the first shooting parameter includes:
increasing a first illumination intensity of the light emitted by the camera to obtain an increased second illumination intensity;
after the step of continuously shooting the target face and performing living body detection according to the adjusted second shooting parameter and obtaining a detection result corresponding to the face image corresponding to the second shooting parameter, the method further includes:
and when the increased illumination intensity is the maximum illumination intensity of the light emitted by the camera, stopping increasing the illumination intensity of the light emitted by the camera.
In another possible implementation manner, the shooting parameter includes an illumination color of light emitted by the camera, and the adjusting the first shooting parameter includes:
adjusting a first illumination color to a second illumination color subsequent to the first illumination color according to the arrangement sequence of the plurality of illumination colors;
after the step of continuously shooting the target face and performing living body detection according to the adjusted second shooting parameter and obtaining a detection result corresponding to the face image corresponding to the second shooting parameter, the method further includes:
and when the second illumination color is the last illumination color of the arrangement sequence, stopping adjusting the illumination color of the light emitted by the camera.
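The three adjustment strategies above (decreasing focal length until the camera's minimum, increasing illumination intensity until its maximum, walking the arranged illumination colors in order) can be sketched together; the `camera` dict layout and step sizes are assumptions for illustration, not values from the patent.

```python
def adjustment_schedules(camera):
    """Build the three parameter schedules described in the claims.
    Each schedule stops at the limit the corresponding claim names:
    the minimum focal length, the maximum intensity, or the last
    illumination color in the arrangement order."""
    focal, f = [], camera["max_focal"]
    while f >= camera["min_focal"]:
        focal.append(f)
        f -= camera["focal_step"]
    intensity, i = [], camera["min_intensity"]
    while i <= camera["max_intensity"]:
        intensity.append(i)
        i += camera["intensity_step"]
    colors = list(camera["colors"])  # taken in arrangement order, stop after the last
    return focal, intensity, colors
```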
In another possible implementation manner, the determining, according to the obtained detection result, a living body detection result of the target face includes:
determining that the target face is not a living body face when at least one second detection result exists in the obtained plurality of detection results; or,
when the proportion of a first detection result in the plurality of detection results is larger than a first preset threshold value, determining that the target face is a living face; or,
when the proportion of a second detection result in the plurality of detection results is larger than a second preset threshold value, determining that the target face is not a living face;
wherein the first detection result indicates that the target face is a living body face, and the second detection result indicates that the target face is not a living body face.
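The three alternative decision rules above can be sketched in one function; the threshold values are illustrative, since the patent only calls them "preset".

```python
def aggregate(results, live_threshold=0.9, spoof_threshold=0.5, strict=True):
    """Aggregation rules from the claims. Strict mode implements the
    first rule: any non-live result means the target face is not a
    living face. Otherwise the share of live or non-live results is
    compared with a preset threshold (values here are assumptions)."""
    live = sum(1 for r in results if r)
    total = len(results)
    if strict:  # rule 1: at least one second detection result fails
        return live == total
    if live / total > live_threshold:  # rule 2: mostly first results
        return True
    if (total - live) / total > spoof_threshold:  # rule 3: mostly second results
        return False
    return False  # undecided cases treated conservatively as non-live
```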
In one aspect, there is provided a living body detection apparatus, the apparatus comprising:
the instruction receiving module is used for receiving a living body detection instruction;
the living body detection module is used for carrying out living body detection on a plurality of face images of a target face to obtain detection results corresponding to the face images, wherein the face images adopt different shooting parameters, and the shooting parameters comprise at least one of a focal length value of a camera, the illumination intensity of light emitted by the camera and the illumination color of the light emitted by the camera;
and the detection result determining module is used for determining the living body detection result of the target face according to the obtained detection result.
In one possible implementation, the living body detection module includes:
the first living body detection unit is used for carrying out living body detection on the plurality of face images based on a first living body detection model when the focal length values adopted by the plurality of face images are different, so as to obtain detection results corresponding to the plurality of face images, and the first living body detection model is used for carrying out living body detection according to edge features in the face images.
In another possible implementation manner, the living body detection module includes:
and the second living body detection unit is used for performing living body detection on the plurality of face images based on a second living body detection model when the illumination intensities used for the plurality of face images differ, or when the illumination colors of the light used for the plurality of face images differ, to obtain the detection results corresponding to the plurality of face images, and the second living body detection model is used for performing living body detection according to light reflection features in the face images.
In one possible implementation, the living body detection module includes:
the face image shooting unit is used for shooting the target face according to a first shooting parameter to obtain a face image corresponding to the first shooting parameter;
the third living body detection unit is used for carrying out living body detection on the face image to obtain a detection result corresponding to the face image;
and the shooting parameter adjusting unit is used for adjusting the first shooting parameter, continuously executing the steps of shooting the target face and carrying out living body detection according to the adjusted second shooting parameter, and obtaining a detection result corresponding to the face image corresponding to the second shooting parameter.
In another possible implementation manner, the shooting parameter adjusting unit is further configured to adjust the first shooting parameter if the detection result corresponding to the face image corresponding to the first shooting parameter is a first detection result, and continue to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter;
wherein the first detection result indicates that the target face is a living body face.
In another possible implementation manner, the shooting parameter includes a focal length value of the camera, and the shooting parameter adjusting unit is further configured to reduce a first focal length value of the camera to obtain a reduced second focal length value;
and the shooting parameter adjusting unit is further used for stopping reducing the focal length value of the camera when the reduced focal length value is the minimum focal length value of the camera.
In another possible implementation manner, the shooting parameter includes an illumination intensity of light emitted by the camera, and the shooting parameter adjusting unit is further configured to increase a first illumination intensity of the light emitted by the camera to obtain an increased second illumination intensity;
the shooting parameter adjusting unit is further configured to stop increasing the illumination intensity of the light emitted by the camera when the increased illumination intensity is the maximum illumination intensity of the light emitted by the camera.
In another possible implementation manner, the shooting parameter includes an illumination color of light emitted by the camera, and the shooting parameter adjusting unit is further configured to adjust a first illumination color to a second illumination color subsequent to the first illumination color according to an arrangement order of the plurality of illumination colors;
the shooting parameter adjusting unit is further configured to stop adjusting the illumination color of the light emitted by the camera when the second illumination color is the last illumination color of the arrangement sequence.
In another possible implementation manner, the detection result determining module includes:
a first determination unit configured to determine that the target face is not a living body face when at least one second detection result exists among the obtained plurality of detection results; or,
the second determining unit is used for determining that the target face is a living body face when the proportion of the first detection result in the plurality of detection results is larger than a first preset threshold value; or,
the second determining unit is used for determining that the target face is not the living body face when the proportion of a second detection result in the plurality of detection results is larger than a second preset threshold;
wherein the first detection result indicates that the target face is a living body face, and the second detection result indicates that the target face is not a living body face.
In one aspect, a computer device is provided, including one or more processors and one or more memories, the one or more memories storing at least one instruction that is loaded and executed by the one or more processors to perform the operations performed by the living body detection method according to any one of the possible implementations above.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the living body detecting method according to any one of the above possible implementations.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the method, the device, the computer equipment and the storage medium receive the living body detection instruction, carry out living body detection on a plurality of face images of a target face to obtain detection results corresponding to the face images, and determine the living body detection result of the target face according to the obtained detection results. According to the method and the device, in the living body detection process, the user face is not required to execute actions, the living body detection result of the target face can be determined according to the shot face image, the time of interaction with the user is saved, and the living body detection efficiency is improved. In addition, in the process of in vivo detection, different shooting parameters are adopted to carry out in vivo detection on a plurality of different face images, so that the accuracy of in vivo detection is improved.
In addition, in the living body detection process, the human face images with different characteristics can be acquired by automatically adjusting the focal length value of the camera, or the illumination intensity or the illumination color of the light emitted by the camera, so that the accuracy of the living body detection is improved.
Moreover, the in-vivo detection result of the target face can be determined through a plurality of face images acquired by the 2D (2-dimensional) camera, a 3D (3-dimensional) camera is not needed, the cost is saved, and the application range is wide.
In addition, in the related art, the directional gradient histogram and the local binary pattern are adopted to perform feature extraction on the face image, so that whether the face image is a living face or not is determined, but the method cannot be applied to the situations of a large number of users, and the application range is narrow. By the method, whether the face of the user is a living body face can be determined in the living body detection model of the acquired face image input values, and the application range is wide.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of a living body detection method provided by an embodiment of the present application;
FIG. 3 is a flowchart of a living body detection method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a face image provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a living body detection model provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a face image provided by an embodiment of the present application;
FIG. 7 is a flowchart of a living body detection method provided by an embodiment of the present application;
FIG. 8 is a flowchart of a living body detection method provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a living body detection apparatus provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a living body detection apparatus provided by an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a terminal provided by an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a server provided by an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The living body detection method provided by the embodiments of the application can be applied to a computer device. In one possible implementation, the computer device is a terminal, which may be a mobile phone, a computer, a tablet computer, or another device. The terminal receives the living body detection instruction, performs living body detection on the plurality of face images, and determines whether the target face is a living face according to the obtained detection results.
In another possible implementation, the computer devices may also be the terminal 101 and the server 102. Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the embodiment includes: a terminal 101 and a server 102.
The terminal 101 may be a mobile phone, a computer, a tablet computer, or another device, and the server 102 may be a single server, a cluster formed by a plurality of servers, or a cloud computing center.
When the terminal 101 receives a living body detection instruction, a target face is shot to obtain a plurality of face images, the terminal 101 sends the shot face images to the server 102, the server 102 performs living body detection on the face images, and whether the target face is a living body face is determined according to an obtained detection result.
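The division of work in the Fig. 1 deployment can be sketched as follows; the function names and the simple aggregation rule are illustrative assumptions, not interfaces from the patent.

```python
def terminal_side(capture, upload, n_shots=3):
    """The terminal 101: capture several face images (each shot would use
    different shooting parameters) and upload them to the server."""
    images = [capture(i) for i in range(n_shots)]
    return upload(images)

def server_side(images, detect):
    """The server 102: run living body detection on each received image
    and decide from the per-image results (simplest rule: all must be live)."""
    results = [detect(img) for img in images]
    return all(results)
```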
Fig. 2 is a flowchart of a method for detecting a living body according to an embodiment of the present disclosure. Referring to fig. 2, the embodiment includes:
201. A living body detection instruction is received.
202. Living body detection is performed on a plurality of face images of the target face to obtain detection results corresponding to the face images, wherein the face images are captured with different shooting parameters, and the shooting parameters include at least one of a focal length value of the camera, an illumination intensity of light emitted by the camera, and an illumination color of the light emitted by the camera.
203. The living body detection result of the target face is determined according to the obtained detection results.
The method provided by the embodiments of the application receives a living body detection instruction, performs living body detection on a plurality of face images of a target face to obtain a detection result for each face image, and determines the living body detection result of the target face from the obtained results. During detection, the user is not required to perform any facial actions; the result is determined from the captured face images alone, which saves the time spent interacting with the user and improves detection efficiency. In addition, because different shooting parameters are used for the plurality of face images, the accuracy of the living body detection is improved.
In one possible implementation manner, performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images includes:
and when the focal length values adopted by the plurality of face images are different, performing living body detection on the plurality of face images based on a first living body detection model to obtain detection results corresponding to the plurality of face images, wherein the first living body detection model is used for performing living body detection according to edge features in the face images.
In another possible implementation manner, performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images includes:
and when the illumination intensities adopted by the plurality of face images are different or the illumination colors of the light adopted by the plurality of face images are different, performing living body detection on the plurality of face images based on a second living body detection model to obtain detection results corresponding to the plurality of face images, wherein the second living body detection model is used for performing living body detection according to the light reflection characteristics in the face images.
In another possible implementation manner, performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images includes:
shooting a target face according to the first shooting parameters to obtain a face image corresponding to the first shooting parameters;
performing living body detection on the face image to obtain a detection result corresponding to the face image;
and adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter.
In another possible implementation manner, the adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing the living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter includes:
if the detection result corresponding to the face image corresponding to the first shooting parameter is the first detection result, adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter;
wherein the first detection result indicates that the target face is a living body face.
In another possible implementation manner, the shooting parameter includes a focal length value of the camera, and the adjusting the first shooting parameter includes:
reducing the first focal length value of the camera to obtain a reduced second focal length value;
and continuing to execute the steps of shooting the target face and carrying out living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter, wherein the method further comprises the following steps of:
and when the reduced focal length value is the minimum focal length value of the camera, stopping reducing the focal length value of the camera.
In another possible implementation manner, the shooting parameter includes an illumination intensity of light emitted by the camera, and the adjusting the first shooting parameter includes:
increasing a first illumination intensity of light emitted by the camera to obtain an increased second illumination intensity;
and continuing to execute the steps of shooting the target face and carrying out living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter, wherein the method further comprises the following steps of:
and when the increased illumination intensity is the maximum illumination intensity of the light emitted by the camera, stopping increasing the illumination intensity of the light emitted by the camera.
In another possible implementation manner, the shooting parameter includes an illumination color of light emitted by the camera, and the adjusting the first shooting parameter includes:
adjusting the first illumination color to a second illumination color after the first illumination color according to the arrangement sequence of the plurality of illumination colors;
and continuing to execute the steps of shooting the target face and carrying out living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter, wherein the method further comprises the following steps of:
and when the second illumination color is the last illumination color in the arrangement sequence, stopping adjusting the illumination color of the light emitted by the camera.
In another possible implementation manner, determining a living body detection result of the target human face according to the obtained detection result includes:
determining that the target face is not a living body face when at least one second detection result exists in the obtained plurality of detection results; or,
when the proportion of a first detection result in the plurality of detection results is larger than a first preset threshold value, determining that the target face is a living face; or,
when the proportion of a second detection result in the plurality of detection results is larger than a second preset threshold value, determining that the target face is not the living face;
wherein the first detection result indicates that the target face is a living body face, and the second detection result indicates that the target face is not a living body face.
Fig. 3 is a flowchart of a method for detecting a living body according to an embodiment of the present disclosure. Referring to fig. 3, the embodiment includes:
301. The terminal receives a living body detection instruction.
The living body detection instruction instructs the terminal to detect whether the target face is a living face, and can be triggered when the terminal detects a preset operation. For example, when the terminal detects a trigger operation on a detection button, the terminal triggers the living body detection instruction; the detection button may be a button with high security requirements, such as a payment button or an authentication button. Alternatively, the living body detection instruction is triggered when the terminal detects a sliding operation at any position, or when the terminal detects operations such as the terminal screen lighting up or the terminal being lifted.
302. And the terminal shoots the target face according to the first shooting parameters to obtain a face image corresponding to the first shooting parameters.
The shooting parameters are the parameters used when the camera captures an image, and the terminal can shoot different face images according to different shooting parameters. The shooting parameters comprise at least one of the focal length value of the camera, the illumination intensity of the light emitted by the camera, and the illumination color of the light emitted by the camera, where the illumination color may include one or more colors.
The focal length value of the camera determines the shooting angle of the camera, that is, the shooting range when the camera captures an image. The smaller the focal length value, the larger the shooting range and the smaller the proportion of the shot target face in the face image; the larger the focal length value, the smaller the shooting range and the larger the proportion of the shot target face in the face image. The illumination intensity of the light emitted by the camera may be the illumination intensity of the light emitted by the flash, and the illumination color of the light emitted by the camera may be the illumination color of the light emitted by the flash, such as white light, red light or blue light.
In one possible implementation, before step 302, the method further comprises: and when the terminal receives the living body detection instruction, acquiring the current shooting parameters of the terminal. The current shooting parameters of the terminal can be shooting parameters adopted by the terminal in the last shooting process, and can also be default initial shooting parameters of the terminal.
303. And the terminal performs living body detection on the face image to obtain a detection result corresponding to the face image.
The living body detection is used for detecting whether a target face in the face image is a living body face, and the detection result may be that the target face is a living body face or that the target face is not a living body face.
In a possible implementation manner, the terminal performs living body detection on the face image to obtain the probability that the face image is a living body face, determines that the target face is the living body face when the probability is greater than a preset threshold, and determines that the target face is not the living body face when the probability is less than the preset threshold. For example, the preset threshold is 0.6, and when the probability that the obtained face image is a living face is 0.8, the target face is determined to be the living face; when the probability that the obtained face image is a living body face is 0.4, it is determined that the target face is not a living body face.
In addition, the terminal may perform the living body detection on the face image based on the living body detection model, and then step 303 may include the following two ways:
the first mode is as follows: and performing living body detection on a face image obtained by shooting a target face based on the first living body detection model to obtain a detection result corresponding to the face image. The first living body detection model is used for carrying out living body detection according to edge features in the face image.
Because the target face may be a photo or an image played by a device, and both a photo and a device playing an image have obvious edges, such as the four edges of the photo or the frame of the device, which differ clearly from the contour edge of a real face, the first living body detection model can be trained on a large number of samples so that the trained model recognizes edge features in the face image that do not belong to the face contour, and the detection result corresponding to the face image can thus be determined. Accordingly, when the terminal performs living body detection on the target face and the target face is a photo or a device playing an image, the face image shot by the terminal may include an edge of the photo or the device, and the trained first living body detection model detects whether the face image includes such edge features, thereby determining the detection results corresponding to a plurality of face images. When the face image does not include edge features, the detection result corresponding to the face image is that the target face contained in the face image is a living face; when the face image includes edge features, the detection result is that the target face contained in the face image is not a living face. As shown in fig. 4, the two face images on the left do not include edge features, so the target faces in them are living faces; the two face images on the right include edge features, so the target faces in them are not living faces.
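As a rough illustration of the idea above (and not the patent's trained model), a photo or device border tends to show up as a long, straight edge spanning most of the frame, unlike a face contour. The following NumPy sketch flags such frame-like edges; the function name, thresholds and gradient heuristic are all illustrative assumptions:

```python
import numpy as np

def has_frame_edges(gray, strength=50, coverage=0.8):
    """Heuristic stand-in for edge-feature detection: a replayed photo or
    screen leaves long straight edges (its border) spanning most of the
    image, which a real face contour does not. Thresholds are illustrative."""
    gray = np.asarray(gray, dtype=np.float64)
    gx = np.abs(np.diff(gray, axis=1))  # vertical edge strength
    gy = np.abs(np.diff(gray, axis=0))  # horizontal edge strength
    # a frame border appears as a column (or row) whose gradient is strong
    # over most of the image height (or width)
    col_cover = (gx > strength).mean(axis=0)
    row_cover = (gy > strength).mean(axis=1)
    return bool(col_cover.max() >= coverage or row_cover.max() >= coverage)
```

The trained first living body detection model in the patent learns such features from samples rather than using a fixed rule; this sketch only shows what "edge features not belonging to the face contour" look like numerically.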
In addition, to train the first living body detection model, a large number of positive sample face images and negative sample face images can be acquired. The positive sample face images do not include edge features that do not belong to the face contour and can be obtained by shooting a plurality of sample living faces; the negative sample face images include such edge features and can be obtained by shooting photos or devices playing images. The detection result corresponding to a positive sample face image is that the target face is a living face, and the detection result corresponding to a negative sample face image is that the target face is not a living face. The first living body detection model is trained with the large number of positive sample face images, negative sample face images and corresponding detection results, so that the trained first living body detection model can obtain the detection result corresponding to any face image.
In addition, when the face image is subjected to living body detection based on the first living body detection model, as shown in fig. 5, the face image is input into the input layer of the first living body detection model. The input layer acquires the image features of the face image and inputs them into the convolution layer; the convolution layer performs convolution operations on the features, and the features obtained after convolution are input into the fully connected layer, which splices the features. The spliced features are input into the Softmax (normalization) layer, which classifies the input features to obtain the probability for the face image, and the detection result of the face image is determined according to the obtained probability.
The process of classifying the input features to obtain the probability of the face image can adopt the following formula:

S_i = e^{V_i} / Σ_j e^{V_j}

wherein V_i represents the i-th element in the feature V, V_j represents the j-th element in the feature V, S_i represents the probability corresponding to the i-th element in the feature V, and e represents the base of the natural logarithm function.
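The Softmax formula above can be sketched directly in code. Subtracting the maximum element first is a standard numerical-stability step that does not change the result mathematically:

```python
import math

def softmax(v):
    """Softmax (normalization) layer from the formula above:
    S_i = e^{V_i} / sum_j e^{V_j}."""
    m = max(v)                               # for numerical stability
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]
```

For a two-class output (living face / not a living face), one of the resulting probabilities is compared against the preset threshold (0.6 in the example above) to determine the detection result.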
The second mode is as follows: and performing living body detection on the face image obtained by shooting the target face based on the second living body detection model to obtain a detection result corresponding to the face image. And the second living body detection model is used for carrying out living body detection according to the light reflection characteristics in the face image.
When the terminal performs living body detection on the target face, the target face may be a photo or an image played by a device. When the light emitted by the terminal while shooting the target face is projected onto the photo or the device screen, mirror reflection occurs, so the shot face image may include light reflection features such as illumination patches; when the emitted light is projected onto a living face, no such light reflection features are produced. The trained second living body detection model detects whether the face image includes light reflection features, thereby determining the detection results corresponding to a plurality of face images. When the face image does not include light reflection features, the detection result corresponding to the face image is that the target face included in the face image is a living face; when the face image includes light reflection features, the detection result is that the target face included in the face image is not a living face. As shown in fig. 6, the face image on the left does not include light reflection features, so the target face in it is a living face; the face image on the right includes light reflection features, so the target face in it is not a living face.
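Again as a rough illustration only (not the patent's second detection model), a mirror reflection from a glossy photo or screen shows up as a patch of near-saturated pixels, which diffuse skin does not produce. The function name and thresholds below are illustrative assumptions:

```python
import numpy as np

def has_specular_patch(gray, saturation=240, min_fraction=0.01):
    """Heuristic stand-in for light-reflection detection: the flash
    mirror-reflected off a glossy photo or screen leaves an illumination
    patch of near-saturated pixels. Thresholds are illustrative."""
    frac = (np.asarray(gray, dtype=np.float64) >= saturation).mean()
    return bool(frac >= min_fraction)
```

The patent's second living body detection model learns these reflection features from training samples; this sketch only shows the kind of signal it exploits.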
In addition, the training mode of the second living body detection model is similar to the training mode of the first living body detection model, and is not repeated here.
In addition, the process of the second living body detection model for carrying out living body detection on the face image is similar to that of the first living body detection model, and is not described herein again.
304. And the terminal adjusts the first shooting parameter, and continues to execute the steps of shooting the target face and carrying out in-vivo detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter.
When adjusting the first shooting parameter, the terminal can adjust it by a preset adjustment step, or adjust it according to the gears of the shooting parameter set in the terminal. For example, when the shooting parameters include the focal length value of the camera, the focal length range is 18-55 mm, the current focal length value of the terminal is 50 mm and the adjustment step is 5 mm, the adjusted focal length value of the terminal is 45 mm. Alternatively, the focal length value of the terminal includes three gears, such as 18 mm, 25 mm and 50 mm; if the current focal length value is 50 mm, the adjusted focal length value is 25 mm. When the shooting parameters include the illumination color of the light emitted by the camera, and the terminal can emit light of three colors, such as white light, red light and blue light, with the illumination color of the currently emitted light being white, the illumination color of the light emitted after adjustment can be red or blue.
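The two adjustment modes in the focal-length example above (fixed step versus preset gears) can be sketched as follows; the function names are hypothetical, and the numeric values are the ones from the example:

```python
def next_focal_by_step(current_mm, step_mm=5, min_mm=18):
    """Reduce the focal length by a preset adjustment step, clamped at the
    camera's minimum (example values: 18-55 mm range, 5 mm step)."""
    return max(current_mm - step_mm, min_mm)

def next_focal_by_gear(current_mm, gears=(18, 25, 50)):
    """Move to the next lower gear among the camera's preset focal lengths;
    returns None when the minimum gear is reached (stop adjusting)."""
    lower = [g for g in gears if g < current_mm]
    return max(lower) if lower else None
```

Either function reproduces the example: starting at 50 mm, the step mode yields 45 mm, while the gear mode yields 25 mm.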
The terminal performs the steps of shooting the target face and performing living body detection according to the first shooting parameter to obtain a detection result corresponding to the face image; it can then obtain a different face image with the adjusted second shooting parameter and perform living body detection on the newly obtained face image to obtain its corresponding detection result. By adjusting the shooting parameters multiple times and executing the steps of shooting the target face and performing living body detection multiple times, the terminal can obtain a plurality of face images and the detection result corresponding to each face image.
In one possible implementation, this step 304 includes: if the detection result corresponding to the face image corresponding to the first shooting parameter is the first detection result, the first shooting parameter is adjusted, and the steps of shooting the target face and performing the living body detection are continuously executed according to the adjusted second shooting parameter, namely the step 302 and the step 303, so as to obtain the detection result corresponding to the face image corresponding to the second shooting parameter. Wherein the first detection result indicates that the target face is a living body face.
The detection result corresponding to the face image corresponding to the first shooting parameter being the first detection result, that is, the currently obtained detection result being the first detection result, indicates that living body detection has been performed on the face image obtained with the current shooting parameter and the target face has been determined to be a living face. The terminal can then adjust the shooting parameter to reacquire a different face image and perform living body detection on the newly obtained face image. In this way a plurality of face images can be obtained and living body detection performed on all of them, ensuring the accuracy of the verification result for the target face.
In addition, the step 304 further includes: and if the currently obtained detection result is a second detection result which indicates that the target face is not the living body face, stopping adjusting the first shooting parameter.
As for the way of adjusting the first shooting parameter by the terminal, the following three ways may be included:
the first mode is as follows: the shooting parameters comprise a focal length value of the camera; the manner of adjusting the first shooting parameter is as follows: and reducing the first focal length value of the camera to obtain a reduced second focal length value.
The smaller the focal length value, the larger the shooting range of the acquired face image. When the target face is a photo, or a device playing an image, the focal length value can be reduced so that the edge features of the photo or the device are captured, and it can subsequently be determined that the target face is not a living face because the shot face image includes edge features. Therefore, when the shooting parameters include the focal length value of the camera, the focal length value is reduced during adjustment to obtain the reduced focal length value, so that the shooting range of the face image shot with the reduced focal length value is enlarged, and it can be determined whether the shot face image includes edge features.
In one possible implementation, after step 304, the method further includes: and when the reduced focal length value is the minimum focal length value of the camera, stopping reducing the focal length value of the camera. For example, when the focal length value decreases once or more times and the current focal length value is the minimum value, the steps of shooting the target face and performing living body detection are performed according to the minimum value of the focal length value, and after the detection result corresponding to the face image corresponding to the second shooting parameter is obtained, the adjustment of the focal length value is stopped.
The second mode is as follows: the shooting parameters comprise the illumination intensity of light emitted by the camera; the manner of adjusting the first shooting parameter is as follows: and increasing the first illumination intensity of the light emitted by the camera to obtain the increased second illumination intensity.
When the target face is a photo or a device playing an image, and the light emitted when the terminal shoots the target face is projected onto the photo or the device screen, mirror reflection occurs, so the shot face image may include light reflection features such as illumination patches; the stronger the illumination intensity, the more obvious the light reflection features in the face image, and it can subsequently be determined that the target face is not a living face because the shot face image includes light reflection features. Therefore, when the shooting parameters include the illumination intensity of the light emitted by the camera, the illumination intensity is increased during adjustment to obtain the increased illumination intensity, so that it can subsequently be determined whether the shot face image includes light reflection features.
In one possible implementation, after step 304, the method further includes: and when the increased illumination intensity is the maximum illumination intensity of the light emitted by the camera, stopping increasing the illumination intensity of the light emitted by the camera. For example, the shooting parameters include the illumination intensity of light emitted by the camera, and when the illumination intensity increases once or more times and the current illumination intensity reaches the maximum value, the steps of shooting the target face and performing living body detection are performed according to the maximum value of the illumination intensity, and after the detection result corresponding to the face image corresponding to the second shooting parameter is obtained, the adjustment of the illumination intensity is stopped.
The third mode is as follows: the shooting parameters comprise the illumination color of light emitted by the camera; the manner of adjusting the first shooting parameter is as follows: and adjusting the first illumination color to a second illumination color subsequent to the first illumination color according to the arrangement sequence of the plurality of illumination colors. The order of arrangement of the plurality of illumination colors may be set in advance, and for example, when the order of arrangement of the plurality of illumination colors is white light, red light, and blue light, and the first illumination color is white light, the second illumination color after adjustment is red light.
When the target face is a photo or a device playing an image, the light emitted when the terminal shoots the target face is projected onto the photo or the device screen and mirror reflection occurs, so the shot face image may include light reflection features such as illumination patches. Lights of different illumination colors generate different light reflection features, and it can subsequently be determined that the target face is not a living face because the shot face image includes light reflection features.
In one possible implementation, after step 304, the method further includes: when the illumination color of the adjusted light is the last illumination color, stopping adjusting the illumination color of the light emitted by the camera. For example, when the color of the light has been adjusted one or more times and the current illumination color of the light is the last illumination color, the steps of shooting the target face and performing living body detection are performed according to the current illumination color, and the adjustment of the illumination color is stopped after the detection result corresponding to the face image corresponding to the second shooting parameter is obtained.
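The third adjustment mode, stepping through the preset arrangement order of illumination colors and stopping at the last one, can be sketched as follows; the function name is hypothetical, and the color order is the white/red/blue example given above:

```python
def next_illumination_color(current, order=("white", "red", "blue")):
    """Step to the next color in the preset arrangement order; returns None
    once the last color is reached, signalling the terminal to stop
    adjusting the illumination color."""
    i = order.index(current)
    return order[i + 1] if i + 1 < len(order) else None
```

The stop conditions of the other two modes work analogously: the focal length stops at the camera's minimum value, and the illumination intensity stops at the maximum intensity the camera can emit.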
It should be noted that, in the embodiment of the present application, only the first shooting parameter is adjusted on the basis of the first shooting parameter to obtain the second shooting parameter, but in another embodiment, the shooting parameter may be adjusted multiple times, and then the subsequent steps are performed according to the adjusted shooting parameter, where each time the shooting parameter is adjusted, the process of obtaining the second shooting parameter on the basis of the first shooting parameter is similar to that described above.
305. And the terminal determines the living body detection result of the target face according to the obtained detection result.
The living body detection result is used for indicating whether the target face is a living face. Since the plurality of face images all contain the target face, after the terminal acquires the detection result of each face image, the living body detection result of the target face can be finally determined according to the detection results of the plurality of face images.
For a specific way of determining the living body detection result of the target face, in one possible implementation, the step 305 includes: and if the detection result corresponding to the face image corresponding to the first shooting parameter is the second detection result, determining that the living body detection result of the target face is the second detection result. Wherein the second detection result indicates that the target face is not a living body face.
When the terminal acquires face images multiple times, it only executes the next step of acquiring a face image after determining that the detection result of the current face image indicates that the target face is a living face. Therefore, upon determining that the detection result corresponding to the face image corresponding to the first shooting parameter is the second detection result, the terminal does not execute the next step of acquiring a face image, and determines that the target face is not a living face.
In addition, if the last detection result of the plurality of detection results is a first detection result indicating that the target face is a living body face, it is determined that the target face is a living body face.
In another possible implementation, the step 305 includes: determining that the target face is not a living body face when at least one second detection result exists in the obtained plurality of detection results; or, when the proportion of the first detection result in the plurality of detection results is greater than a first preset threshold value, determining that the target face is a living body face; or, when the proportion of the second detection result in the plurality of detection results is greater than a second preset threshold value, determining that the target face is not a living body face. Wherein the first detection result indicates that the target face is a living body face, and the second detection result indicates that the target face is not a living body face.
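A minimal sketch of this aggregation logic, combining the strictest rule (any second detection result fails immediately) with the first-proportion rule; the function name and the threshold value are illustrative assumptions, not values from the patent:

```python
def decide_liveness(results, live_ratio=0.8):
    """Aggregate per-image detection results ('live'/'spoof') into a final
    living body detection result. Any 'spoof' (second detection result)
    fails immediately; otherwise the proportion of 'live' (first detection
    result) must exceed the preset threshold. Threshold is illustrative."""
    if any(r == "spoof" for r in results):
        return False  # at least one second detection result -> not a living face
    if results and results.count("live") / len(results) > live_ratio:
        return True
    return False
```

The second-proportion rule is an alternative formulation: instead of failing on a single negative result, the decision fails only when the proportion of second detection results exceeds its own threshold.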
The terminal obtains different face images according to different shooting parameters: each time, it obtains a face image and the corresponding detection result according to the current shooting parameter, and then continues to adjust the shooting parameter to obtain the detection result of the next face image. When the terminal no longer executes the step of obtaining face images, it determines the living body detection result of the target face according to the obtained plurality of detection results.
It should be noted that, in the embodiment of the present application, after a terminal acquires a detection result of each face image, a next face image and a corresponding detection result are acquired as an example. In another embodiment, the terminal may acquire a plurality of face images according to different shooting parameters, then perform live body detection on the plurality of face images respectively to obtain a detection result of each face image, and determine a live body detection result of the target face according to the obtained plurality of detection results.
The terminal can detect a plurality of face images based on the living body detection model, and the method comprises the following three modes:
the first mode is as follows: and when the focal length values adopted by the plurality of face images are different, performing living body detection on the plurality of face images of the target face based on the first living body detection model to obtain detection results corresponding to the plurality of face images. The first living body detection model is used for performing living body detection according to edge features in the face image, and the plurality of adopted focal length values may be a plurality of discontinuous focal length values or a plurality of continuous focal length values, for example, the focal length value may be 18 mm, 25 mm, 50 mm, or 18-55 mm.
The method is similar to the first way of performing the live body detection on the face image through the live body detection model in step 303, and is not described herein again.
The second mode is as follows: and when the illumination intensities adopted by the plurality of face images are different, performing living body detection on the plurality of face images of the target face based on a second living body detection model to obtain detection results corresponding to the plurality of face images, wherein the second living body detection model is used for performing living body detection according to the light reflection characteristics in the face images.
The method is similar to the second way of performing the live body detection on the face image through the live body detection model in step 303, and is not described herein again.
The third mode is as follows: and when the light colors adopted by the plurality of face images are different, performing living body detection on the plurality of face images of the target face based on a second living body detection model to obtain detection results corresponding to the plurality of face images, wherein the second living body detection model is used for performing living body detection according to the light reflection characteristics in the face images.
Because different light reflection features are generated when light of different colors is projected onto a photo or onto the screen of a device playing a video, the terminal emits light of different colors so that the obtained face images include different light reflection features, making it easy to determine whether the target face is a living face.
It should be noted that, in the embodiment of the present application, the terminal is taken as the execution subject for explanation. In another embodiment, when the terminal receives the living body detection instruction, the terminal shoots the target face according to the current shooting parameters and sends the obtained face image to a server; the server performs living body detection on the received face image to obtain the corresponding detection result and sends a shooting parameter adjustment instruction to the terminal; the terminal adjusts the shooting parameters based on the adjustment instruction, shoots the target face with the adjusted shooting parameters, and sends the obtained face image to the server; and the server performs living body detection on the face image to obtain its detection result. In this way, the server can determine the living body detection result of the target face according to the plurality of obtained detection results.
The method provided by the embodiment of the application receives the living body detection instruction, performs living body detection on a plurality of face images of the target face to obtain the detection results corresponding to the plurality of face images, and determines the living body detection result of the target face according to the obtained detection results. In the living body detection process, the user is not required to perform any action with the face: the living body detection result of the target face can be determined from the shot face images, which saves interaction time with the user and improves living body detection efficiency. In addition, during living body detection, different shooting parameters are used to perform living body detection on a plurality of different face images, which improves the accuracy of living body detection.
In addition, in the living body detection process, the human face images with different characteristics can be acquired by automatically adjusting the focal length value of the camera, or the illumination intensity or the illumination color of the light emitted by the camera, so that the accuracy of the living body detection is improved.
Moreover, the living body detection result of the target face can be determined through a plurality of face images acquired by a 2D (2-dimensional) camera, without needing a 3D (3-dimensional) camera, which saves cost and gives the method a wide application range.
In addition, in the related art, a histogram of oriented gradients and a local binary pattern are used to extract features from the face image in order to determine whether it shows a living body face; this approach cannot be applied to scenarios with a large number of users, so its range of application is narrow. With the method of the present application, whether the user's face is a living body face can be determined by inputting the acquired face image into the living body detection model, so the range of application is wide.
As shown in fig. 7, when the terminal performs living body detection on the target face, the terminal turns on the 2D camera, shoots the target face to obtain a face image, and performs edge detection on the face image: the face image is input into the first living body detection model to determine whether the target face in the face image is a living body face. If not, the living body detection fails and the process ends. If yes, the focal length value of the camera is reduced, a face image is re-acquired, and living body detection is performed on the re-acquired face image. If the focal length value of the camera cannot be reduced any further and the current detection result shows that the target face is a living body face, the edge detection is passed, and light reflection detection is performed on the current face image: the current face image is input into the second living body detection model to determine whether the target face in the face image is a living body face. If not, the living body detection fails and the process ends. If yes, the illumination intensity of the light emitted by the camera is increased, or the illumination color of the emitted light is changed, a face image is re-acquired, and living body detection is performed on the re-acquired face image. If the illumination intensity cannot be increased any further or the illumination color of the emitted light cannot be changed any further, and the current detection result shows that the target face is a living body face, the light reflection detection is passed; at this point, the target face can be determined to be a living body face, and the process ends.
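The two-stage flow of fig. 7 can be sketched as follows. This is only an illustrative sketch under assumed interfaces: the `shoot` callable, the two model callables, and the focal-length/illumination limits are stand-ins introduced for illustration, not functions or values defined in this application.

```python
# Assumed limits for illustration; the application does not specify values.
MIN_FOCAL = 1.0
MAX_INTENSITY = 3.0

def liveness_check(shoot, edge_model, reflect_model,
                   focal=3.0, intensity=1.0, focal_step=1.0, light_step=1.0):
    """Return True only if every shot image passes both detection stages.

    shoot(focal, intensity) -> face image (hypothetical camera helper)
    edge_model(image) / reflect_model(image) -> True for the first
    detection result (living body face), False for the second.
    """
    # Stage 1: edge detection, reducing the focal length until the minimum.
    while True:
        image = shoot(focal, intensity)
        if not edge_model(image):          # second detection result: reject
            return False
        if focal <= MIN_FOCAL:             # cannot reduce further: stage passed
            break
        focal = max(MIN_FOCAL, focal - focal_step)
    # Stage 2: reflection detection, increasing the illumination intensity.
    while True:
        image = shoot(focal, intensity)
        if not reflect_model(image):
            return False
        if intensity >= MAX_INTENSITY:     # cannot increase further: live face
            return True
        intensity = min(MAX_INTENSITY, intensity + light_step)
```

The early return on any failed check mirrors the "if not, the living body detection fails" branches in the figure.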
Fig. 8 is a flowchart of a method for detecting a living body according to an embodiment of the present disclosure. Referring to fig. 8, the embodiment includes:
801. the terminal receives the living body detection instruction.
802. The terminal shoots the target face according to the first focal length value and the first illumination intensity to obtain a face image corresponding to the first focal length value.
803. The terminal performs living body detection on the face image based on the first living body detection model to obtain a detection result corresponding to the face image.
804. If the detection result corresponding to the face image corresponding to the first focal length value is the first detection result, the terminal adjusts the first focal length value while keeping the first illumination intensity unchanged, and continues to perform the steps of shooting the target face and performing living body detection according to the adjusted second focal length value, to obtain a detection result corresponding to the face image corresponding to the second focal length value.
Wherein the first detection result indicates that the target face is a living body face.
It should be noted that the embodiment of the present application takes the case where the currently obtained detection result is the first detection result. In another embodiment, if the currently obtained detection result is the second detection result, it is determined that the target face is not a living body face, and the subsequent steps 804 to 807 do not need to be performed. The second detection result indicates that the target face is not a living body face.
805. When the adjusted second focal length value is the minimum focal length value, the terminal performs living body detection, based on the first living body detection model, on the face image acquired according to the minimum focal length value, to obtain the detection result corresponding to that face image.
806. If the detection result corresponding to the face image corresponding to the minimum focal length value is the first detection result, the terminal performs living body detection on the face image corresponding to the first illumination intensity based on the second living body detection model, to obtain the detection result corresponding to that face image.
The face image corresponding to the first illumination intensity may be the face image corresponding to the minimum focal length value, or a face image corresponding to another focal length value. When the terminal shoots the target face with the camera, the camera not only shoots according to its current focal length value but can also emit light while shooting; therefore, a face image acquired by the terminal according to an adjusted focal length value can also serve as the face image corresponding to the first illumination intensity.
It should be noted that the embodiment of the present application takes the case where the detection result corresponding to the face image corresponding to the minimum focal length value is the first detection result. In another embodiment, if that detection result is the second detection result, it is determined that the target face is not a living body face, and the subsequent steps 807 and 808 do not need to be performed.
807. If the detection result corresponding to the face image corresponding to the first illumination intensity is the first detection result, the terminal adjusts the first illumination intensity and continues to perform the steps of shooting the target face and performing living body detection according to the adjusted second illumination intensity, to obtain the detection result corresponding to the face image corresponding to the second illumination intensity.
It should be noted that, in the process of shooting the face image by adjusting the illumination intensity in the embodiment of the present application, the focal length value of the camera remains unchanged, and the focal length value of the camera may be the minimum focal length value, or may be other focal length values, which is not limited herein.
808. When the adjusted second illumination intensity is the maximum illumination intensity and the detection result of the face image acquired according to the maximum illumination intensity is the first detection result, it is determined that the living body detection result is that the target face is a living body face.
It should be noted that in the embodiment of the present application, a face image is acquired according to the current illumination intensity, and the detection result of the face image is acquired based on the second living body detection model, while in another embodiment, a face image is acquired according to the current light color, and the detection result of the face image is acquired based on the second living body detection model.
According to the method provided by the embodiment of the application, a living body detection instruction is received, living body detection is performed on a plurality of face images of the target face to obtain detection results corresponding to the plurality of face images, and the living body detection result of the target face is determined according to the obtained detection results. In the living body detection process, the user is not required to perform any action; the living body detection result of the target face can be determined from the shot face images alone, which saves the time spent interacting with the user and improves the efficiency of living body detection. In addition, because different shooting parameters are adopted for the plurality of different face images, performing living body detection on these images improves the accuracy of living body detection.
In addition, in the living body detection process, face images with different characteristics can be acquired by automatically adjusting the focal length value of the camera, or the illumination intensity or illumination color of the light emitted by the camera, which further improves the accuracy of living body detection.
Fig. 9 is a schematic structural diagram of a living body detection apparatus according to an embodiment of the present application. Referring to fig. 9, the embodiment includes:
an instruction receiving module 901, configured to receive a living body detection instruction;
a living body detection module 902, configured to perform living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images, where the plurality of face images are shot with different shooting parameters, and the shooting parameters include at least one of a focal length value of a camera, an illumination intensity of light emitted by the camera, and an illumination color of the light emitted by the camera;
and a detection result determining module 903, configured to determine a living body detection result of the target human face according to the obtained detection result.
According to the device provided by the embodiment of the application, a living body detection instruction is received, living body detection is performed on a plurality of face images of the target face to obtain detection results corresponding to the plurality of face images, and the living body detection result of the target face is determined according to the obtained detection results. In the living body detection process, the user is not required to perform any action; the living body detection result of the target face can be determined from the shot face images alone, which saves the time spent interacting with the user and improves the efficiency of living body detection. In addition, because different shooting parameters are adopted for the plurality of different face images, performing living body detection on these images improves the accuracy of living body detection.
In one possible implementation, as shown in fig. 10, the liveness detection module 902 includes:
a first living body detection unit 9201, configured to, when the focal length values adopted by the multiple face images are different, perform living body detection on the multiple face images based on a first living body detection model, so as to obtain detection results corresponding to the multiple face images, where the first living body detection model is used to perform living body detection according to edge features in the face images.
In one possible implementation, as shown in fig. 10, the liveness detection module 902 includes:
a second living body detection unit 9202, configured to, when the illumination intensities adopted by the multiple face images are different, or when the illumination colors of the light adopted by the multiple face images are different, perform living body detection on the multiple face images based on a second living body detection model, so as to obtain detection results corresponding to the multiple face images, where the second living body detection model is used for performing living body detection according to the light reflection characteristics in the face images.
In one possible implementation, as shown in fig. 10, the liveness detection module 902 includes:
the face image shooting unit 9203 is configured to shoot a target face according to the first shooting parameter to obtain a face image corresponding to the first shooting parameter;
a third living body detection unit 9204, configured to perform living body detection on the face image to obtain a detection result corresponding to the face image;
and a shooting parameter adjusting unit 9205, configured to adjust the first shooting parameter, and continue to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter, so as to obtain a detection result corresponding to the face image corresponding to the second shooting parameter.
In a possible implementation manner, as shown in fig. 10, the shooting parameter adjusting unit 9205 is further configured to adjust the first shooting parameter if the detection result corresponding to the face image corresponding to the first shooting parameter is the first detection result, and continue to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter;
wherein the first detection result indicates that the target face is a living body face.
In a possible implementation manner, as shown in fig. 10, the shooting parameters include a focal length value of the camera, and the shooting parameter adjusting unit 9205 is further configured to reduce the first focal length value of the camera to obtain a reduced second focal length value;
the shooting parameter adjusting unit 9205 is further configured to stop reducing the focal length value of the camera when the reduced focal length value is the minimum focal length value of the camera.
In a possible implementation manner, the shooting parameters include the illumination intensity of the light emitted by the camera, and the shooting parameter adjusting unit 9205 is further configured to increase the first illumination intensity of the light emitted by the camera to obtain a second increased illumination intensity;
the shooting parameter adjusting unit 9205 is further configured to stop increasing the illumination intensity of the light emitted by the camera when the increased illumination intensity is the maximum illumination intensity of the light emitted by the camera.
In one possible implementation manner, the shooting parameters include an illumination color of light emitted by the camera, and the shooting parameter adjusting unit 9205 is further configured to adjust the first illumination color to a second illumination color subsequent to the first illumination color according to an arrangement order of the plurality of illumination colors;
the shooting parameter adjusting unit 9205 is further configured to stop adjusting the illumination color of the light emitted by the camera when the second illumination color is the last illumination color in the arrangement order.
In one possible implementation manner, as shown in fig. 10, the detection result determining module 903 includes:
a first determining unit 9301 for determining that the target face is not a living body face when at least one second detection result exists among the obtained plurality of detection results; or,
a second determining unit 9302 for determining that the target face is a living body face when a proportion of the first detection result in the plurality of detection results is greater than a first preset threshold; or,
a third determining unit 9303 configured to determine that the target face is not a living body face when a ratio of the second detection result to the plurality of detection results is greater than a second preset threshold;
wherein the first detection result indicates that the target face is a living body face, and the second detection result indicates that the target face is not a living body face.
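The three determination strategies of units 9301 to 9303 can be sketched as follows. The threshold values are illustrative assumptions; the application only refers to "preset thresholds" without giving numbers.

```python
def strict_decision(results):
    """First strategy: if at least one second detection result (False)
    exists among the results, the target face is not a living body face."""
    return all(results)

def ratio_decision(results, live_threshold=0.8, fake_threshold=0.2):
    """Second and third strategies: compare the proportions of first and
    second detection results against preset thresholds (values here are
    illustrative assumptions). `results` holds True for each first
    detection result and False for each second detection result."""
    live_ratio = sum(results) / len(results)
    if live_ratio > live_threshold:
        return True               # proportion of first results high enough
    if (1 - live_ratio) > fake_threshold:
        return False              # proportion of second results too high
    return None                   # undecided under these example thresholds
```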
It should be noted that when the living body detection apparatus provided in the above embodiment performs living body detection on a target face, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the living body detection apparatus provided by the above embodiment and the living body detection method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not described herein again.
Fig. 11 is a block diagram of a terminal 1100 according to an embodiment of the present disclosure. The terminal 1100 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1100 includes: a processor 1101 and a memory 1102.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, touch display screen 1105, camera 1106, audio circuitry 1107, positioning component 1108, and power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1104 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1101 as a control signal for processing. At this point, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, provided on the front panel of the terminal 1100; in other embodiments, there may be at least two display screens 1105, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, the display screen 1105 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1100. Furthermore, the display screen 1105 may be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 1105 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The audio circuitry 1107 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electric signals, and input them to the processor 1101 for processing or to the radio frequency circuit 1104 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1100. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electric signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert an electric signal into sound waves audible to humans but also convert an electric signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1107 may also include a headphone jack.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
Acceleration sensor 1111 may detect acceleration levels in three coordinate axes of a coordinate system established with terminal 1100. For example, the acceleration sensor 1111 may be configured to detect components of the gravitational acceleration in three coordinate axes. The processor 1101 may control the touch display screen 1105 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal 1100, and the gyro sensor 1112 may cooperate with the acceleration sensor 1111 to acquire a 3D motion of the user with respect to the terminal 1100. From the data collected by gyroscope sensor 1112, processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1113 may be disposed on a side bezel of terminal 1100 and/or on an underlying layer of touch display screen 1105. When the pressure sensor 1113 is disposed on the side frame of the terminal 1100, the holding signal of the terminal 1100 from the user can be detected, and the processor 1101 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the touch display screen 1105, the processor 1101 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1105. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1114 is configured to collect a fingerprint of the user, and the processor 1101 identifies the user according to the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1101 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1114 may be disposed on the front, back, or side of the terminal 1100. When a physical button or vendor logo is provided on the terminal 1100, the fingerprint sensor 1114 may be integrated with the physical button or vendor logo.
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the touch display screen 1105 based on the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1105 is turned down. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
The proximity sensor 1116, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1100. The proximity sensor 1116 is used to capture the distance between the user and the front face of the terminal 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually decreases, the processor 1101 controls the touch display screen 1105 to switch from the screen-on state to the screen-off state; when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually increases, the processor 1101 controls the touch display screen 1105 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of terminal 1100, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 12 is a schematic structural diagram of a server 1200 according to an embodiment of the present application. The server 1200 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1201 and one or more memories 1202, where the memory 1202 stores at least one instruction that is loaded and executed by the processor 1201 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a server comprising one or more processors and one or more memories, the one or more memories storing therein at least one instruction, the at least one instruction being loaded and executed by the one or more processors to implement the liveness detection method as in the above embodiments.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one instruction stored therein, the at least one instruction being loaded and executed by a processor to implement the liveness detection method in the above embodiments. For example, the computer-readable storage medium may be a memory including instructions executable by a processor in the terminal to perform the liveness detection method in the above embodiments. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program comprising at least one instruction loaded and executed by a processor to implement the liveness detection method as in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (12)
1. A living body detection method, the method comprising:
receiving a living body detection instruction;
performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images, wherein the plurality of face images adopt different shooting parameters, and the shooting parameters comprise at least one of a focal length value of a camera, illumination intensity of light emitted by the camera and illumination color of the light emitted by the camera;
and determining the living body detection result of the target face according to the obtained detection result.
2. The method according to claim 1, wherein the performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images comprises:
when the focal length values adopted by the plurality of face images are different, performing living body detection on the plurality of face images based on a first living body detection model to obtain detection results corresponding to the plurality of face images, wherein the first living body detection model is used for performing living body detection according to edge features in the face images.
3. The method according to claim 1, wherein the performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images comprises:
when the illumination intensities adopted by the plurality of face images are different, or when the illumination colors of the light adopted by the plurality of face images are different, performing living body detection on the plurality of face images based on a second living body detection model to obtain detection results corresponding to the plurality of face images, wherein the second living body detection model is used for performing living body detection according to light reflection features in the face images.
4. The method according to claim 1, wherein the performing living body detection on a plurality of face images of a target face to obtain detection results corresponding to the plurality of face images comprises:
shooting the target face according to a first shooting parameter to obtain a face image corresponding to the first shooting parameter;
performing living body detection on the face image to obtain a detection result corresponding to the face image;
and adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain a detection result corresponding to the face image corresponding to the second shooting parameter.
5. The method according to claim 4, wherein the adjusting the first shooting parameter, and continuing the steps of shooting the target face and performing the living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter comprises:
if the detection result corresponding to the face image corresponding to the first shooting parameter is a first detection result, adjusting the first shooting parameter, and continuing to perform the steps of shooting the target face and performing living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter;
wherein the first detection result indicates that the target face is a living body face.
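Claims 4 and 5 describe a shoot-detect-adjust loop in which the next shooting parameter is tried only while the latest result still indicates a living face. A minimal sketch under those assumptions (the `shoot` and `detect` callables are placeholders for the capture and model-inference steps, which the claims do not implement):

```python
def iterative_liveness_check(shoot, detect, params_sequence):
    """Shoot with each parameter set in turn; per claim 5, continue
    adjusting only while the latest result indicates a living face.

    shoot(params) -> image; detect(image) -> True for a living-face
    result, False otherwise. Returns the per-image detection results.
    """
    results = []
    for params in params_sequence:
        image = shoot(params)
        result = detect(image)
        results.append(result)
        if not result:   # a non-living result ends the loop early
            break
    return results
```

The early exit reflects claim 5: adjustment proceeds only when the first detection result (living face) is obtained, so a spoof detected at any step stops further shooting.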
6. The method of claim 4, wherein the shooting parameters include a focal length value of the camera, and wherein adjusting the first shooting parameter comprises:
reducing the first focal length value of the camera to obtain a reduced second focal length value;
after the step of continuing to shoot the target face and perform living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter, the method further comprises:
and when the reduced second focal length value is the minimum focal length value of the camera, stopping reducing the focal length value of the camera.
7. The method of claim 4, wherein the shooting parameters include an illumination intensity of light emitted by the camera, and wherein adjusting the first shooting parameter comprises:
increasing a first illumination intensity of the light emitted by the camera to obtain an increased second illumination intensity;
after the step of continuing to shoot the target face and perform living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter, the method further comprises:
and when the increased second illumination intensity is the maximum illumination intensity of the light emitted by the camera, stopping increasing the illumination intensity of the light emitted by the camera.
8. The method of claim 4, wherein the shooting parameters include an illumination color emitted by the camera, and wherein adjusting the first shooting parameter comprises:
adjusting a first illumination color to a second illumination color subsequent to the first illumination color according to the arrangement sequence of the plurality of illumination colors;
after the step of continuing to shoot the target face and perform living body detection according to the adjusted second shooting parameter to obtain the detection result corresponding to the face image corresponding to the second shooting parameter, the method further comprises:
and when the second illumination color is the last illumination color of the arrangement sequence, stopping adjusting the illumination color of the light emitted by the camera.
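Claims 6-8 each pair one adjustment direction with a stopping condition: decrease focal length until the camera's minimum, increase illumination intensity until the maximum, and step through an ordered color sequence until its last entry. A combined sketch, assuming unit step sizes and dict-based parameters (both are illustrative choices the claims leave open):

```python
def next_shooting_params(params, min_focal, max_intensity, color_order):
    """Advance each shooting parameter one step per claims 6-8 and
    report whether any parameter could still be adjusted.

    Step sizes of 1 and the field names are assumptions; the claims
    only fix the direction of change and the stopping boundary.
    """
    adjusted = dict(params)
    changed = False
    if adjusted["focal_length"] > min_focal:       # claim 6: reduce until minimum
        adjusted["focal_length"] -= 1
        changed = True
    if adjusted["intensity"] < max_intensity:      # claim 7: increase until maximum
        adjusted["intensity"] += 1
        changed = True
    idx = color_order.index(adjusted["color"])
    if idx < len(color_order) - 1:                 # claim 8: step through sequence
        adjusted["color"] = color_order[idx + 1]
        changed = True
    return adjusted, changed
```

When `changed` comes back `False`, every parameter has hit its boundary, which is the point at which claims 6-8 stop further adjustment.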
9. The method according to claim 1, wherein the determining the living body detection result of the target human face according to the obtained detection result comprises:
determining that the target face is not a living body face when at least one second detection result exists in the obtained plurality of detection results; or,
when the proportion of a first detection result in the plurality of detection results is larger than a first preset threshold value, determining that the target face is a living face; or,
when the proportion of a second detection result in the plurality of detection results is larger than a second preset threshold value, determining that the target face is not a living face;
wherein the first detection result indicates that the target face is a living body face, and the second detection result indicates that the target face is not a living body face.
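Claim 9 lists three alternative rules for aggregating per-image results into a final decision. A minimal sketch, assuming boolean per-image results (`True` = living face); the threshold values and the `strict` flag selecting the any-failure rule are illustrative, since the claim specifies neither:

```python
def aggregate_liveness(results, live_ratio_threshold, fake_ratio_threshold,
                       strict=False):
    """Combine per-image detection results per the alternatives of claim 9."""
    if strict:
        # Rule 1: any single non-living result means not a living face.
        return all(results)
    live_ratio = sum(results) / len(results)
    if live_ratio > live_ratio_threshold:
        return True       # Rule 2: living-face proportion exceeds threshold
    if (1 - live_ratio) > fake_ratio_threshold:
        return False      # Rule 3: non-living proportion exceeds threshold
    return None           # the claim leaves this middle case open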
10. A living body detection apparatus, the apparatus comprising:
the instruction receiving module is used for receiving a living body detection instruction;
the living body detection module is used for carrying out living body detection on a plurality of face images of a target face to obtain detection results corresponding to the face images, wherein the face images adopt different shooting parameters, and the shooting parameters comprise at least one of a focal length value of a camera, the illumination intensity of light emitted by the camera and the illumination color of the light emitted by the camera;
and the detection result determining module is used for determining the living body detection result of the target face according to the obtained detection result.
11. A computer device comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the living body detection method of any one of claims 1 to 9.
12. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by the living body detection method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911412081.5A CN111144365A (en) | 2019-12-31 | 2019-12-31 | Living body detection method, living body detection device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111144365A true CN111144365A (en) | 2020-05-12 |
Family
ID=70522619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911412081.5A Withdrawn CN111144365A (en) | 2019-12-31 | 2019-12-31 | Living body detection method, living body detection device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111144365A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN112149570A (en) * | 2020-09-23 | 2020-12-29 | 平安科技(深圳)有限公司 | Multi-person living body detection method and device, electronic equipment and storage medium
CN112149570B (en) * | 2020-09-23 | 2023-09-15 | 平安科技(深圳)有限公司 | Multi-person living body detection method, device, electronic equipment and storage medium
CN112818900A (en) * | 2020-06-08 | 2021-05-18 | 支付宝实验室(新加坡)有限公司 | Face activity detection system, device and method
CN113435378A (en) * | 2021-07-06 | 2021-09-24 | 中国银行股份有限公司 | Living body detection method, device and system
CN113505756A (en) * | 2021-08-23 | 2021-10-15 | 支付宝(杭州)信息技术有限公司 | Face living body detection method and device
CN113850214A (en) * | 2021-09-29 | 2021-12-28 | 支付宝(杭州)信息技术有限公司 | Injection attack identification method and device for living body detection
CN114463632A (en) * | 2022-01-14 | 2022-05-10 | 摩拜(北京)信息技术有限公司 | Shared vehicle returning processing method and device and shared vehicle
CN115174138A (en) * | 2022-05-25 | 2022-10-11 | 北京旷视科技有限公司 | Camera attack detection method, system, device, storage medium and program product
CN115174138B (en) * | 2022-05-25 | 2024-06-07 | 北京旷视科技有限公司 | Camera attack detection method, system, device, storage medium and program product
WO2022213955A1 (en) * | 2021-04-06 | 2022-10-13 | 京东科技控股股份有限公司 | Method and apparatus for detecting image acquisition device hijacking, and computer device
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010197551A (en) * | 2009-02-24 | 2010-09-09 | Nikon Corp | Imaging apparatus and image synthesis method |
US20100315356A1 (en) * | 2009-06-16 | 2010-12-16 | Bran Ferren | Contoured thumb touch sensor apparatus |
CN105138996A (en) * | 2015-09-01 | 2015-12-09 | 北京上古视觉科技有限公司 | Iris identification system with living body detecting function |
CN105430267A (en) * | 2015-12-01 | 2016-03-23 | 厦门瑞为信息技术有限公司 | Method for adaptively adjusting camera parameters based on face image illumination parameters |
CN107491775A (en) * | 2017-10-13 | 2017-12-19 | 理光图像技术(上海)有限公司 | Human face in-vivo detection method, device, storage medium and equipment |
CN108009480A (en) * | 2017-11-22 | 2018-05-08 | 南京亚兴为信息技术有限公司 | A kind of image human body behavioral value method of feature based identification |
CN108334817A (en) * | 2018-01-16 | 2018-07-27 | 深圳前海华夏智信数据科技有限公司 | Living body faces detection method and system based on three mesh |
CN110414346A (en) * | 2019-06-25 | 2019-11-05 | 北京迈格威科技有限公司 | Biopsy method, device, electronic equipment and storage medium |
CN110516644A (en) * | 2019-08-30 | 2019-11-29 | 深圳前海微众银行股份有限公司 | A kind of biopsy method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110992493B (en) | Image processing method, device, electronic equipment and storage medium | |
CN111079576B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN111144365A (en) | Living body detection method, living body detection device, computer equipment and storage medium | |
CN109815150B (en) | Application testing method and device, electronic equipment and storage medium | |
CN109285178A (en) | Image partition method, device and storage medium | |
CN109522863B (en) | Ear key point detection method and device and storage medium | |
CN111723803B (en) | Image processing method, device, equipment and storage medium | |
CN109302632B (en) | Method, device, terminal and storage medium for acquiring live video picture | |
US11386586B2 (en) | Method and electronic device for adding virtual item | |
CN110839128A (en) | Photographing behavior detection method and device and storage medium | |
CN112770173A (en) | Live broadcast picture processing method and device, computer equipment and storage medium | |
CN112581358A (en) | Training method of image processing model, image processing method and device | |
CN111083526A (en) | Video transition method and device, computer equipment and storage medium | |
CN112396076A (en) | License plate image generation method and device and computer storage medium | |
CN111586279B (en) | Method, device and equipment for determining shooting state and storage medium | |
CN110152309B (en) | Voice communication method, device, electronic equipment and storage medium | |
CN111753606A (en) | Intelligent model upgrading method and device | |
CN112419143B (en) | Image processing method, special effect parameter setting method, device, equipment and medium | |
CN111127541A (en) | Vehicle size determination method and device and storage medium | |
CN108304841B (en) | Method, device and storage medium for nipple positioning | |
CN112967261B (en) | Image fusion method, device, equipment and storage medium | |
CN111294513B (en) | Photographing method and device, electronic equipment and storage medium | |
CN110660031B (en) | Image sharpening method and device and storage medium | |
CN111757146B (en) | Method, system and storage medium for video splicing | |
CN112329909B (en) | Method, apparatus and storage medium for generating neural network model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | |
Application publication date: 20200512 |