CN112997185A - Face liveness detection method, chip and electronic device - Google Patents
Face liveness detection method, chip and electronic device
- Publication number: CN112997185A
- Application number: CN201980001922.5A
- Authority: CN (China)
- Prior art keywords: face, images, three-dimensional, group, depth
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G: PHYSICS
- G06: COMPUTING; CALCULATING OR COUNTING
- G06F: ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00: Pattern recognition
Abstract
A face liveness detection method, a chip, and an electronic device are provided. The face liveness detection method comprises the following steps: acquiring at least two frames of three-dimensional images of a human face (101); determining depth change information of the face surface from the three-dimensional images (102); and determining whether the face is a real face according to the depth change information (103). The method can realize face liveness detection accurately and conveniently.
Description
The present disclosure relates to the field of image recognition technologies, and in particular to a face liveness detection method, a chip, and an electronic device.
Liveness detection is an important function in face recognition engineering and products. It is mainly used to distinguish whether the detected object is the face of a real person or is a face photograph, a mask, or a three-dimensionally printed face model.
Existing face liveness detection is generally performed on captured face photographs and requires the detected object to actively make response actions to cooperate with the detection.
Disclosure of Invention
Some embodiments of the present application provide a face liveness detection method, a chip, and an electronic device, which can realize face liveness detection accurately and conveniently.
An embodiment of the application provides a face liveness detection method comprising the following steps: acquiring at least two frames of three-dimensional images of a human face; determining depth change information of the face surface from the three-dimensional images; and determining whether the face is a real face according to the depth change information.
An embodiment of the application further provides a chip for executing the face liveness detection method.
An embodiment of the application further provides an electronic device comprising the chip.
Compared with the prior art, the depth change information of the face surface is determined from the three-dimensional images of the face, and whether the face is a real face is judged according to that depth change information. On the one hand, the depth information of the face surface recorded in a three-dimensional image is not affected by illumination or posture. On the other hand, even if the detected object keeps an expressionless, motionless face, slight changes in the muscles and tissues of the face still cause depth changes on the face surface, so the detected object does not need to actively make actions to cooperate with the detection. The technical scheme of the application can therefore detect a living face accurately and conveniently, with a good user experience. Moreover, in the embodiments the judgment is made on the basis of the acquired three-dimensional images alone; compared with prior-art approaches that must compare against a preset image template, the dependence on pre-stored information is low.
For example, determining the depth change information of the face surface from the three-dimensional images includes: selecting at least one group of images from the three-dimensional images, wherein each group of images comprises two frames of the three-dimensional images; and determining the depth change information corresponding to each group of images from the two frames of three-dimensional images in that group. In this embodiment, two three-dimensional images are taken as one group, and the depth change information corresponding to each group is determined.
For example, the number of frames of the three-dimensional images is denoted n, where n is an integer greater than or equal to 3. Selecting at least one group of images from the three-dimensional images then specifically means: taking any two of the n frames of three-dimensional images as one group of images, so that the number of selected image groups is C(n, 2) = n(n-1)/2. In this embodiment, grouping over all n frames obtains as many image groups as the currently acquired three-dimensional images allow, so that the depth comparison fully covers every frame, which improves the accuracy of the detection result.
For example, the depth change information corresponding to each group of images includes one or a combination of the following: a feature value obtained from the depth changes of the corresponding pixel points in the two frames of three-dimensional images in the group, and the depth change trend of those corresponding pixel points. This embodiment specifies concrete types of depth change information.
For example, the depth change information corresponding to each group of images includes a feature value obtained from the depth changes of the corresponding pixel points in the two frames of three-dimensional images in the group. Determining whether the face is a real face according to the depth change information then includes: judging whether there is a group of images whose feature value is larger than a preset threshold; if so, determining that the face is a real face; and if not, determining that the face is a non-real face. This embodiment provides a specific way of judging whether the face is a real face according to the depth change information.
For example, the depth change information corresponding to each group of images further includes the depth change trend of the corresponding pixel points in the two frames of three-dimensional images. Before determining that the face is a real face, the method further comprises: judging whether there is a group of images whose depth change trend matches a preset depth change trend characterizing face changes caused by artificial force; if no such group exists, determining that the face is a real face; and if such a group exists, determining that the face is a non-real face. This embodiment provides a way of distinguishing face changes caused by artificial force, so that misjudgment caused by forcing the surface depth of a non-real face to change is avoided as far as possible.
For example, after acquiring the at least two frames of three-dimensional images of the human face, the method further includes: performing face alignment on the three-dimensional images. Determining the depth change information of the face surface from the three-dimensional images then specifically means: determining the depth change information of the face surface from the face-aligned three-dimensional images. Performing face alignment on the three-dimensional images improves the accuracy of the determined depth change information of the face surface, and hence the accuracy of the face liveness detection.
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
Fig. 1 is a flowchart of the face liveness detection method according to the first embodiment of the present application;
Fig. 2 is a detailed flowchart of the step of determining depth change information of the face surface from the three-dimensional images, according to the first embodiment of the present application;
Fig. 3 is a detailed flowchart of the step of determining whether the face is a real face according to the depth change information, according to the first embodiment of the present application;
Fig. 4 is a flowchart of the face liveness detection method according to the second embodiment of the present application;
Fig. 5 is a flowchart of one example of the face liveness detection method in the third embodiment of the present application;
Fig. 6 is a flowchart of another example of the face liveness detection method in the third embodiment of the present application;
Fig. 7 is a block diagram of the electronic device according to the fifth embodiment of the present application.
In order to make the objects, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it. The division into embodiments is for convenience of description only and should not constitute any limitation on the specific implementations of the present application; the embodiments may be combined and cross-referenced where no contradiction arises.
The inventor found that the prior art has at least the following problems: face photographs are easily affected by ambient light and by the posture of the detected object at the time of shooting, which easily leads to misjudgment; and requiring the detected object to actively respond means the detection must be completed with the object's cooperation, which is cumbersome and gives a poor user experience. The technical scheme of the present application is proposed on this basis.
The first embodiment of the application relates to a face liveness detection method, which can be applied to any scenario where face recognition is needed for identity verification, such as access control systems, payment systems, and mobile phone unlocking systems.
Fig. 1 is a flowchart illustrating the face liveness detection method according to the first embodiment of the present application, which is described in detail below.
Step 101: acquire at least two frames of three-dimensional images of a human face.
Step 102: determine depth change information of the face surface from the three-dimensional images.
Step 103: determine whether the face is a real face according to the depth change information.
In step 101, the at least two frames of three-dimensional images may be acquired by a three-dimensional imaging device such as a dual camera, a structured-light camera, or a time-of-flight (TOF) camera. This embodiment does not limit the specific form of the three-dimensional image; it may be, for example, a point cloud, a depth map, or a mesh.
The muscles and tissues of different parts of a face change over time. These changes are very obvious when a person shows an expression such as joy, anger, or sadness, and even when the person is at rest the muscles and tissues of different parts of the face still change slightly. Each pixel point in a three-dimensional image has a three-dimensional coordinate (x, y, z), where z is the depth value of the pixel point; even a slight change at some place on the face correspondingly changes the depth value of the pixel points at that place across the frames. By contrast, a non-real face such as a face photograph, a mask, or a face model is manufactured and molded, so the depth of its surface does not change. It is therefore possible to determine whether a face is a real face according to whether the depth of the face surface changes. A real face in this embodiment means a living face.
In one example, as shown in Fig. 2, step 102 includes the following sub-steps.
Sub-step 1021: select at least one group of images from the three-dimensional images, wherein each group of images comprises two frames of the three-dimensional images.
Sub-step 1022: determine the depth change information corresponding to each group of images from the two frames of three-dimensional images in that group.
Specifically, when two frames of three-dimensional images are acquired, the single group selected in sub-step 1021 comprises those two frames. When the number of frames n is greater than or equal to 3, any two of the n frames may be taken as one group, so the number of selected image groups is C(n, 2) = n(n-1)/2; that is, choosing 2 of the n frames without repetition as one combination (one group of images) yields C(n, 2) combinations in total. For example, suppose three frames photo1, photo2, and photo3 are acquired; taking any two as a group gives C(3, 2) = 3 groups, namely photo1 and photo2, photo2 and photo3, and photo1 and photo3. The grouping is not limited in this embodiment, however. In other examples, two adjacent frames may form a group, in which case the three frames divide into two groups, photo1 and photo2, and photo2 and photo3; or exactly two of the three frames may be selected as a single group.
In this embodiment, grouping over all n frames yields C(n, 2) image groups, obtaining as many image groups as the currently acquired three-dimensional images allow; the depth comparison therefore fully covers every frame, which improves the accuracy of the detection result.
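As a non-authoritative illustration of this grouping step, the following Python sketch enumerates all C(n, 2) unordered pairs of frames; the function name and the use of depth maps as the frame representation are assumptions for the example, not part of the application.

```python
from itertools import combinations

def select_image_groups(frames):
    """Group n three-dimensional frames into all C(n, 2) unordered pairs.

    `frames` is assumed to be a list of per-frame depth maps
    (e.g. numpy arrays of shape (H, W)); any frame type works,
    since the frames are only paired here, not inspected.
    """
    return list(combinations(frames, 2))

# With n = 3 frames this yields C(3, 2) = 3 groups:
# (photo1, photo2), (photo2, photo3), (photo1, photo3).
```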
After the groups of images are determined, the depth change information corresponding to each group is calculated from the two three-dimensional images in that group. The depth change information of a group includes a feature value obtained from the depth changes of the corresponding pixel points in the two frames; the feature value may be, for example, the average of the depth changes of the corresponding pixel points, or the minimum variance of those depth changes. Specifically, the three-dimensional coordinate of each pixel point in each frame is (x, y, z), where z is the depth value of the pixel point. Corresponding pixel points in two three-dimensional images are the two pixel points having the same x and y values, and the depth change of corresponding pixel points is the difference of the z values of those two points.
For example, if pixel point P1(x1, y1, z1_1) on photo1 and pixel point P1'(x1, y1, z1_2) on photo2 are corresponding pixel points, the depth change between P1 and P1' is z1_1 - z1_2.
In this embodiment, the depth change values of all corresponding pixel points in the two three-dimensional images are calculated, and the average or the minimum variance is then taken. In other examples, several reference points may be selected in advance, for example pixel points in the nose, eye, and mouth regions, and the depth change feature value is calculated over those corresponding reference points only.
Each group of images determined in sub-step 1021 thus has one feature value characterizing its depth change.
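The feature-value computation can be sketched as follows, assuming the three-dimensional images are given as depth maps whose pixels already correspond (same x and y); the function name, the optional region mask, and the depth-map representation are illustrative assumptions.

```python
import numpy as np

def depth_change_feature(depth_a, depth_b, mask=None):
    """Feature values for one image group: statistics of the per-pixel
    depth change between two corresponding depth maps.

    `mask` optionally restricts the computation to reference regions
    (e.g. nose and mouth), as in the variant described above.
    """
    diff = np.abs(depth_a.astype(np.float64) - depth_b.astype(np.float64))
    if mask is not None:
        diff = diff[mask]
    # The embodiment names the average of the depth changes and the
    # variance of the depth changes as candidate feature values.
    return diff.mean(), diff.var()
```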
In one example, as shown in Fig. 3, step 103 includes the following sub-steps.
Sub-step 1031: judge whether there is a group of images whose feature value is greater than the preset threshold; if yes, go to sub-step 1032; if not, go to sub-step 1033.
Sub-step 1032: determine that the face is a real face.
Sub-step 1033: determine that the face is a non-real face.
The preset threshold may be obtained from detection on real faces in advance: the feature value of the depth change of a real face surface at rest is measured, and the preset threshold is set according to the measured feature value, for example slightly smaller than it.
If there are multiple groups of images, then in sub-step 1031 the depth-change feature value of each group is compared with the preset threshold in turn. If the feature value of the group currently being compared is greater than the threshold, at least one group exceeds the threshold, and the face is determined to be a real face. If the feature values of all groups are smaller than or equal to the threshold, the face is determined to be a non-real face.
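A minimal sketch of this decision rule, under the same assumptions as the previous snippets (one scalar feature value per image group, and a threshold calibrated beforehand on real faces at rest):

```python
def is_real_face(feature_values, threshold):
    """Sub-steps 1031-1033: the face is declared real if any image
    group's depth-change feature value exceeds the preset threshold."""
    return any(v > threshold for v in feature_values)

# Usage sketch (names from the previous snippets; the threshold
# value is purely illustrative):
# groups = select_image_groups(frames)
# feats = [depth_change_feature(a, b)[0] for a, b in groups]
# live = is_real_face(feats, threshold=0.15)
```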
Step 103 may be implemented, for example, by a feature-extraction algorithm, a machine learning algorithm, a deep learning algorithm, or other algorithms; this embodiment does not limit the choice.
Compared with the prior art, the depth change information of the face surface is determined from the three-dimensional images of the face, and whether the face is a real face is judged according to that depth change information. On the one hand, the depth information of the face surface recorded in a three-dimensional image is not affected by illumination or posture. On the other hand, even if the detected object keeps an expressionless, motionless face, slight changes in the muscles and tissues of the face still cause depth changes on the face surface, so the detected object does not need to actively make actions to cooperate with the detection. The technical scheme of the application can therefore detect a living face accurately and conveniently, with a good user experience. Moreover, in this embodiment the judgment is made on the basis of the acquired three-dimensional images alone; compared with prior-art approaches that must compare against a preset image template, the dependence on pre-stored information is low.
The second embodiment of the present application relates to a face liveness detection method; the specific flow is shown in Fig. 4.
Step 201: acquire at least two frames of three-dimensional images of a human face. This step is similar to step 101 in the first embodiment and is not repeated here.
Step 202: determine depth change information of the face surface from the three-dimensional images. This step is similar to step 102 in the first embodiment and is not repeated here.
Sub-step 2034: determine that the face is a real face. This step is similar to sub-step 1032 in the first embodiment and is not repeated here.
To pass off a non-real face as a real one, a malicious person may apply force to the non-real face to deform it, for example by bending, squeezing, or poking it; the non-real face may be a face photograph, a mask, a face model, and so on. The depth change of the face surface caused by such artificial force, however, has distinct characteristics. For example, when a non-real face is poked, the resulting depth change on the surface diffuses outward from the poked position as its center: the depth change is largest at the center position and becomes smaller the farther from it. When a non-real face is squeezed, the depth change may take a wave-like form.
To prevent, as far as possible, misjudgment caused by a malicious person applying force to a non-real face, the depth change information corresponding to each group of images also includes the depth change trend of the corresponding pixel points in the two frames of three-dimensional images in that group. That is, the depth change information includes both the feature value of the depth changes of the corresponding pixel points and the depth change trend of those pixel points. In other examples, the depth change information may include only the depth change trend of the corresponding pixel points in the two frames.
The designer may simulate various force-application actions on non-real faces in advance and, from the actually measured depth changes, set the depth change trends that characterize face changes caused by artificial force. In the example above, when the action is a poke, the corresponding trend is diffusion outward from the center position, largest at the center and smaller with distance; when the action is a squeeze, the corresponding trend is wave-like. Moreover, the same action applied to different non-real faces may yield different trends: poking a face photograph and poking a face model may differ in the diffusion range of the depth change or in the speed at which it diffuses, which can be determined from actual measurement data. Classification can therefore be performed on the measured data, so that the system may recognize not only that a face is non-real but also which kind of non-real face it is.
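As one hedged illustration of such a trend test, the sketch below checks whether a group's per-pixel depth change resembles the poke pattern (largest at one point and decaying with distance from it). The correlation test and its cutoff are assumptions for the example; the application leaves the concrete matching rule to calibration against measured data.

```python
import numpy as np

def looks_like_poke(diff):
    """Heuristic test for the 'poke' trend: the depth change peaks at
    one position and diffuses outward, shrinking with distance.

    `diff`: (H, W) array of absolute per-pixel depth changes
    for one image group.
    """
    peak = np.unravel_index(np.argmax(diff), diff.shape)
    yy, xx = np.indices(diff.shape)
    dist = np.hypot(yy - peak[0], xx - peak[1]).ravel()
    # A strong negative correlation between distance-from-peak and
    # change magnitude suggests center-outward diffusion.
    corr = np.corrcoef(dist, diff.ravel())[0, 1]
    return corr < -0.5  # cutoff chosen for illustration only
```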
This embodiment thus provides a way of distinguishing face changes caused by artificial force, so that misjudgment caused by forcing the surface depth of a non-real face to change is avoided as far as possible.
A third embodiment of the present application relates to a face liveness detection method. Fig. 5 is a flowchart of one example of this embodiment, as follows.
Step 301: acquire at least two frames of three-dimensional images of a human face. This step is similar to step 101 in the first embodiment and is not repeated here.
Step 302: perform face alignment on the three-dimensional images.
Step 303: determine depth change information of the face surface from the face-aligned three-dimensional images.
Step 304: determine whether the face is a real face according to the depth change information. This step is similar to step 103 in the first embodiment and is not repeated here.
Step 302 is added in this example: after the three-dimensional images of the face are acquired, face alignment is performed on them. During acquisition the posture of the detected object may change, so the angles at which the face is presented may differ between frames; alignment normalizes these angles. Face alignment may be performed, for example, with an iterative closest point (ICP) algorithm. Performing face alignment on the frames and then determining the depth change information of the face surface from the aligned frames improves the accuracy of the determined depth change information, and hence the accuracy of the face liveness detection.
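For concreteness, a minimal iterative-closest-point sketch is given below, operating on (N, 3) point clouds; a production system would add outlier rejection, convergence checks, and a better initialization. The implementation choice (a SciPy k-d tree plus the Kabsch solution for the rigid rotation) is one common approach, not the application's prescribed one.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_align(source, target, iters=30):
    """Rigidly align `source` onto `target` (both (N, 3) face point
    clouds) by iterating closest-point matching and Kabsch fitting."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)       # nearest target point per source point
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        # Kabsch: optimal rotation between the centered point sets.
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    return src
```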
Fig. 6 shows a flowchart of another example of this embodiment, as follows.
Step 301: acquire at least two frames of three-dimensional images of a human face.
Step 301-1: perform image preprocessing on the three-dimensional images.
Step 302: perform face alignment on the preprocessed three-dimensional images.
Step 303: determine depth change information of the face surface from the face-aligned three-dimensional images.
Step 304: determine whether the face is a real face according to the depth change information.
Compared with the example of Fig. 5, step 301-1 is added in this example: after the three-dimensional images of the face are acquired, image preprocessing is performed on each frame, and face alignment is then performed on the preprocessed frames. The image preprocessing includes de-burring, hole filling, smoothing filtering, and the like. Preprocessing the three-dimensional images improves image quality, and therefore the accuracy of the face recognition.
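A sketch of such preprocessing on a depth map is shown below; treating the value 0 as the invalid/hole marker and using nearest-valid filling plus median smoothing are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def preprocess_depth(depth, invalid=0.0):
    """Fill holes in an (H, W) depth map, then smooth spike noise."""
    holes = depth == invalid
    if holes.any():
        # For every pixel, find the index of the nearest valid pixel
        # and copy its value; valid pixels map to themselves.
        iy, ix = ndimage.distance_transform_edt(
            holes, return_indices=True)[1]
        depth = depth[iy, ix]
    # Median filtering removes burrs while preserving edges better
    # than a plain box or Gaussian blur.
    return ndimage.median_filter(depth, size=3)
```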
The fourth embodiment of the present application relates to a chip for executing the face liveness detection method described above.
A fifth embodiment of the present application relates to an electronic device. As shown in Fig. 7, the electronic device includes the chip 10 described above and may further include a three-dimensional imaging device 20 and a memory 30 connected to the chip. The chip 10 acquires three-dimensional images of a face through the three-dimensional imaging device 20. The memory 30 stores instructions executable by the chip; when the instructions are executed by the chip, the chip performs the face liveness detection method described above. The memory 30 may also store the acquired three-dimensional images and the data generated by the chip 10 while performing the method. The electronic device may be, for example, a face recognition device in an access control system, a payment system, or a mobile phone unlocking system.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the present application, and that in practice various changes in form and detail may be made without departing from the spirit and scope of the present application.
Claims (12)
- 1. A face liveness detection method, characterized by comprising: acquiring at least two frames of three-dimensional images of a human face; determining depth change information of the face surface from the three-dimensional images; and determining whether the face is a real face according to the depth change information.
- 2. The method of claim 1, wherein determining the depth change information of the face surface from the three-dimensional images comprises: selecting at least one group of images from the three-dimensional images, wherein each group of images comprises two frames of the three-dimensional images; and determining the depth change information corresponding to each group of images from the two frames of three-dimensional images in that group.
- 3. The method of claim 2, wherein the number of frames of the three-dimensional images is denoted n, n being an integer greater than or equal to 3, and selecting at least one group of images from the three-dimensional images specifically comprises: selecting any two of the n frames of three-dimensional images as one group of images, the number of selected image groups being C(n, 2) = n(n-1)/2.
- 4. The method of claim 2, wherein the depth change information corresponding to each group of images comprises one or a combination of the following: a feature value obtained from the depth changes of the corresponding pixel points in the two frames of three-dimensional images in the group, and the depth change trend of the corresponding pixel points in the two frames of three-dimensional images in the group.
- 5. The method of claim 4, wherein the depth change information corresponding to each group of images comprises the feature value obtained from the depth changes of the corresponding pixel points in the two frames of three-dimensional images in the group, and determining whether the face is a real face according to the depth change information comprises: judging whether there is a group of images whose feature value is larger than a preset threshold; if so, determining that the face is a real face; and if not, determining that the face is a non-real face.
- 6. The method of claim 5, wherein the depth change information corresponding to each group of images further comprises the depth change trend of the corresponding pixel points in the two frames of three-dimensional images in the group, and before determining that the face is a real face, the method further comprises: judging whether there is a group of images whose depth change trend matches a preset depth change trend characterizing face changes caused by artificial force; if no such group exists, determining that the face is a real face; and if such a group exists, determining that the face is a non-real face.
- 7. The method of claim 4, wherein the feature value comprises one or a combination of the following: the average of the depth changes of the corresponding pixel points in the two frames of three-dimensional images, and the minimum variance of the depth changes of the corresponding pixel points in the two frames of three-dimensional images.
- 8. The method of claim 1, further comprising, after acquiring the at least two frames of three-dimensional images of the human face: performing face alignment on the three-dimensional images; wherein determining the depth change information of the face surface from the three-dimensional images specifically comprises: determining the depth change information of the face surface from the face-aligned three-dimensional images.
- 9. The method of claim 8, further comprising, after acquiring the at least two frames of three-dimensional images of the human face: performing image preprocessing on the three-dimensional images; wherein performing face alignment on the three-dimensional images specifically comprises performing face alignment on the preprocessed three-dimensional images.
- 10. The method of claim 1, wherein the three-dimensional image is a point cloud, a depth map, or a mesh.
- 11. A chip configured to perform the face liveness detection method of any one of claims 1 to 10.
- 12. An electronic device comprising the chip of claim 11.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/104730 (WO2021042375A1) | 2019-09-06 | 2019-09-06 | Face spoofing detection method, chip, and electronic device
Publications (1)
Publication Number | Publication Date |
---|---|
CN112997185A | 2021-06-18
Family
ID=74852955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980001922.5A (CN112997185A, pending) | Face liveness detection method, chip and electronic device | 2019-09-06 | 2019-09-06
Country Status (2)
Country | Link |
---|---|
CN | CN112997185A
WO | WO2021042375A1
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105574518A (en) * | 2016-01-25 | 2016-05-11 | 北京天诚盛业科技有限公司 | Method and device for human face living detection |
CN105740781A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Three-dimensional human face in-vivo detection method and device |
US20160277397A1 (en) * | 2015-03-16 | 2016-09-22 | Ricoh Company, Ltd. | Information processing apparatus, information processing method, and information processing system |
CN106598226A (en) * | 2016-11-16 | 2017-04-26 | 天津大学 | UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning |
CN108616688A (en) * | 2018-04-12 | 2018-10-02 | Oppo广东移动通信有限公司 | Image processing method, device and mobile terminal, storage medium |
CN109492585A (en) * | 2018-11-09 | 2019-03-19 | 联想(北京)有限公司 | A kind of biopsy method and electronic equipment |
CN109508706A (en) * | 2019-01-04 | 2019-03-22 | 江苏正赫通信息科技有限公司 | A kind of silent biopsy method based on micro- Expression Recognition and noninductive recognition of face |
CN110199296A (en) * | 2019-04-25 | 2019-09-03 | 深圳市汇顶科技股份有限公司 | Face identification method, processing chip and electronic equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679118B (en) * | 2012-09-07 | 2017-06-16 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and system |
CN107368769A (en) * | 2016-05-11 | 2017-11-21 | 北京市商汤科技开发有限公司 | Human face in-vivo detection method, device and electronic equipment |
CN108875509A (en) * | 2017-11-23 | 2018-11-23 | 北京旷视科技有限公司 | Biopsy method, device and system and storage medium |
CN108124486A (en) * | 2017-12-28 | 2018-06-05 | 深圳前海达闼云端智能科技有限公司 | Face living body detection method based on cloud, electronic device and program product |
- 2019-09-06: CN application CN201980001922.5A filed, published as CN112997185A (status: pending)
- 2019-09-06: WO application PCT/CN2019/104730 filed (WO2021042375A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2021042375A1 (en) | 2021-03-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |