CN112784661B - Real face recognition method and real face recognition device
- Publication number
- CN112784661B (application CN202010140606.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- target
- processor
- characteristic value
- quadratic curve
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
The invention provides a real face recognition method and a real face recognition device. The method includes the following steps: obtaining a face image of a target face; obtaining depth information of a target region in the face image; analyzing the depth information to obtain at least one characteristic value related to a quadratic curve, where the quadratic curve reflects a depth distribution state of the target region; determining whether the at least one characteristic value meets a preset condition; if the at least one characteristic value meets the preset condition, determining that the target face is a face in a photo; and if the at least one characteristic value does not meet the preset condition, determining that the target face is a real face.
Description
Technical Field
The present invention relates to image recognition technology, and more particularly, to a method and apparatus for recognizing a real face.
Background
With the advancement of technology, login authentication of electronic devices using face recognition technology is becoming more and more popular. A user can log in through the face verification mechanism simply by presenting his or her face in front of the lens of the electronic device. However, an unscrupulous person may present a photo downloaded from the network, or a photo of the legitimate user, to the lens in an attempt to log in to another person's electronic device. Therefore, how to improve the efficiency of recognizing a real face during login verification is one of the topics of interest to those skilled in the art.
Disclosure of Invention
The invention provides a method and a device for recognizing a real face, which can effectively improve the efficiency of recognizing whether the face in front of the lens is a real face or a face in a photo.
An embodiment of the invention provides a method for recognizing a real face, which includes the following steps: obtaining a face image of a target face; obtaining depth information of a target region in the face image; analyzing the depth information to obtain at least one characteristic value related to a quadratic curve, where the quadratic curve reflects a depth distribution state of the target region; determining whether the at least one characteristic value meets a preset condition; if the at least one characteristic value meets the preset condition, determining that the target face is a face in a photo; and if the at least one characteristic value does not meet the preset condition, determining that the target face is a real face.
An embodiment of the invention further provides a device for recognizing a real face, which includes a depth camera and a processor. The processor is connected to the depth camera. The processor is configured to obtain a face image of a target face through the depth camera. The processor is further configured to obtain depth information of a target region in the face image. The processor is further configured to analyze the depth information to obtain at least one characteristic value related to a quadratic curve, where the quadratic curve reflects a depth distribution state of the target region. The processor is further configured to determine whether the at least one characteristic value meets a preset condition. If the at least one characteristic value meets the preset condition, the processor is further configured to determine that the target face is a face in a photo. If the at least one characteristic value does not meet the preset condition, the processor is further configured to determine that the target face is a real face.
Based on the above, when a face image of a target face is obtained, depth information of a target region in the face image can be obtained as well. By analyzing the depth information, at least one characteristic value related to a quadratic curve can be obtained, and the quadratic curve reflects the depth distribution state of the target region. Whether the at least one characteristic value meets a preset condition is then determined, so that the target face can be effectively recognized as either a real face or a face in a photo. In this way, the efficiency of recognizing whether the face in front of the lens is a real face or a face in a photo can be effectively improved.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a face image according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a target region according to an embodiment of the invention;
FIG. 4 is a schematic diagram of curves reflecting depth distribution states according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a quadratic curve according to an embodiment of the invention;
FIG. 6 is a flowchart of a method of recognizing a real face according to an embodiment of the invention.
Description of the reference numerals
10: an electronic device;
11: a depth camera;
12: a storage device;
13: a processor;
101: a deep learning model;
21: a face image;
22: a target face;
201 to 205: reference points;
301 to 306: line segments;
401 to 406: curves;
501: a quadratic curve;
S601 to S606: steps.
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. Referring to FIG. 1, the electronic device (also referred to as a real face recognition device) 10 may be a notebook computer, a desktop computer, a tablet computer, a smart phone, a game console, or a kiosk equipped with a depth camera and a processor, and the type of the electronic device 10 is not limited to the above.
The electronic device 10 includes a depth camera 11, a storage device 12, and a processor 13. The depth camera 11 may be used to capture images with depth information. For example, when a face (also referred to as a target face) exists in front of the lens of the depth camera 11, the captured image may be a face image and at least one pixel point in this face image may carry depth information of a corresponding position. For example, the depth camera 11 may include at least one lens, at least one photosensitive element, and/or at least one depth sensor to accomplish the above-described functions.
The storage device 12 is used for storing data. For example, the storage device 12 may include a non-volatile memory module and a volatile memory module. The non-volatile memory module may be used to store data in a non-volatile manner. For example, the non-volatile memory module may include a read-only memory (ROM), a solid state disk (SSD), and/or a conventional hard disk drive (HDD). The volatile memory module may be used to temporarily store data. For example, the volatile memory module may include a dynamic random access memory (DRAM). In addition, the non-volatile memory module and/or the volatile memory module may also include other types of storage media, and the invention is not limited thereto.
In one embodiment, the storage device 12 stores a deep learning model 101. The deep learning model 101 may also be referred to as an artificial intelligence model. The deep learning model 101 may have a neural network architecture and may be used for image recognition. In one embodiment, the deep learning model 101 may be used to recognize faces. In an embodiment, the deep learning model 101 may be used to recognize at least one facial organ (e.g., the eyes (or pupils), nose, mouth, and/or ears) in a face. In addition, the deep learning model 101 can gradually improve its image recognition accuracy through training. In one embodiment, the deep learning model 101 may also be implemented as a hardware circuit (e.g., a chip), and the invention is not limited thereto.
The processor 13 is connected to the depth camera 11 and the storage device 12. The processor 13 may be a central processing unit (CPU), a graphics processing unit (GPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD), another similar device, or a combination of these devices. The processor 13 may control all or part of the operation of the electronic device 10. For example, the processor 13 may run the deep learning model 101 to perform image recognition.
In one embodiment, the electronic device 10 further includes at least one input/output interface to receive signals or output signals. For example, the input/output interface may include a screen, a touch pad, a mouse, a keyboard, physical buttons, a speaker, a microphone, a wired network card, and/or a wireless network card, and the type of input/output interface is not limited thereto.
When there is a face (i.e., a target face) in front of the lens of the depth camera 11, the processor 13 may obtain a face image of the target face through the depth camera 11. The processor 13 may also obtain depth information of a specific region (also referred to as a target region) in the face image through the depth camera 11. It should be noted that the invention does not limit the number and/or shape of target regions in a single face image. The processor 13 may analyze the depth information to obtain at least one characteristic value related to a quadratic curve, where the quadratic curve may reflect a depth distribution state of the target region.
After obtaining the characteristic value, the processor 13 may determine whether the characteristic value meets a preset condition. If the characteristic value meets the preset condition, the processor 13 may determine that the target face is a face in a photo. If the characteristic value does not meet the preset condition, the processor 13 may determine that the target face is a real face (i.e., not a face in a photo). For example, if a user presents, in front of the lens of the depth camera 11, a face in a photo displayed on a mobile phone screen (or a face in a paper photo), the processor 13 may determine, according to the above operation, that the face currently in front of the lens is a face in a photo rather than a real face. In this way, erroneous operations performed because a face in a photo is misjudged as a real face can be reduced.
In one embodiment, the processor 13 may analyze the facial image through the deep learning model 101 to obtain a location of at least one facial organ of the target face. The processor 13 may then determine the target region based on the location of the at least one facial organ.
Fig. 2 is a schematic diagram of a face image according to an embodiment of the present invention. Referring to fig. 1 and 2, a target face 22 is shown in the face image 21. The processor 13 may identify facial organs, such as eyes, nose, mouth, and/or ears, in the target face 22 through the deep learning model 101.
In one embodiment, the processor 13 may set the reference points 201 to 205 based on the identified positions of at least some of the facial organs. For example, the reference point 201 may be set at the position of the left eye in the target face 22, the reference point 202 may be set at the position of the right eye in the target face 22, the reference point 203 may be set at the position of the nose in the target face 22, the reference point 204 may be set at the left end of the mouth in the target face 22, and the reference point 205 may be set at the right end of the mouth in the target face 22. It should be noted that, in other embodiments, the reference points 201 to 205 may be disposed at other positions in the target face 22 and/or the number of reference points may be larger or smaller, and the invention is not limited thereto.
Fig. 3 is a schematic diagram of a target region according to an embodiment of the invention. Referring to FIGS. 1 to 3, in an embodiment, at least one of the line segments 301 to 306 may be determined according to the set reference points 201 to 205. For example, the processor 13 may set the line segment 301 as the line between the midpoint of the reference points 201 and 202 and the midpoint of the reference points 204 and 205, the line segment 302 as the line between the reference points 201 and 205, the line segment 303 as the line between the reference points 202 and 204, the line segment 304 as the line between the midpoint of the reference points 201 and 204 and the midpoint of the reference points 202 and 205, the line segment 305 as the line between the reference points 201 and 204, and the line segment 306 as the line between the reference points 202 and 205. The path of at least one of the line segments 301 to 306 may be determined as the target region. In other words, the target region may include the pixels (or pixel positions) that at least one of the line segments 301 to 306 passes through or covers. Further, at least one pixel point in the target region may be regarded as a sampling point. Each sampling point may have depth information (e.g., a depth value) reflecting the depth at the location of that sampling point.
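As a non-limiting illustration, the construction of the reference points 201 to 205 and the line segments 301 to 306 described above could be sketched as follows. The landmark coordinates, dictionary keys, and variable names below are assumed purely for illustration; the embodiment does not prescribe a particular implementation.

```python
import numpy as np

# Hypothetical facial-organ positions (x, y) in pixels, e.g. output by the deep
# learning model 101; the numeric values are placeholders.
landmarks = {
    "left_eye":    np.array([120.0, 140.0]),   # reference point 201
    "right_eye":   np.array([200.0, 138.0]),   # reference point 202
    "nose":        np.array([160.0, 190.0]),   # reference point 203
    "mouth_left":  np.array([130.0, 235.0]),   # reference point 204
    "mouth_right": np.array([190.0, 233.0]),   # reference point 205
}

p201, p202 = landmarks["left_eye"], landmarks["right_eye"]
p204, p205 = landmarks["mouth_left"], landmarks["mouth_right"]

# Endpoints of the six line segments whose paths form the target region.
segments = {
    301: ((p201 + p202) / 2, (p204 + p205) / 2),  # eye midpoint -> mouth midpoint (crosses the nose)
    302: (p201, p205),                            # left eye -> right end of mouth
    303: (p202, p204),                            # right eye -> left end of mouth
    304: ((p201 + p204) / 2, (p202 + p205) / 2),  # left-side midpoint -> right-side midpoint
    305: (p201, p204),                            # left cheek segment (avoids the nose)
    306: (p202, p205),                            # right cheek segment (avoids the nose)
}
```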
In one embodiment, the target region may be divided into at least one first region and at least one second region. The first region includes the position of the nose of the target face. For example, the paths of the line segments 301 to 304 of FIG. 3 may be regarded as first regions. The second region does not include the position of the nose of the target face. For example, the paths of the line segments 305 and 306 of FIG. 3 may be regarded as second regions. The processor 13 may obtain the at least one characteristic value by analyzing the depth information of the first region and/or the second region.
FIG. 4 is a schematic diagram of curves reflecting depth distribution states according to an embodiment of the invention. Referring to FIGS. 1 to 4, assume that sampling points 1 to 100, 101 to 200, 201 to 300, 301 to 400, 401 to 500, and 501 to 600 are located on the paths of the line segments 301 to 306, respectively. The depth values corresponding to these sampling points can be represented by curves 401 to 406, respectively. In other words, curve 401 reflects the depth distribution of sampling points 1 to 100 on the path of line segment 301, curve 406 reflects the depth distribution of sampling points 501 to 600 on the path of line segment 306, and so on.
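A depth profile such as curve 401 can be obtained by sampling the depth map along the corresponding segment. The following minimal sketch assumes that depth_map is a two-dimensional array of per-pixel depth values aligned with the face image and that segments is the dictionary from the previous sketch; the function name is illustrative only.

```python
import numpy as np

def depth_profile(depth_map, start, end, num_samples=100):
    """Sample depth values at evenly spaced points on the segment start -> end."""
    xs = np.linspace(start[0], end[0], num_samples)
    ys = np.linspace(start[1], end[1], num_samples)
    # Nearest-neighbour lookup; bilinear interpolation would work as well.
    cols = np.clip(np.round(xs).astype(int), 0, depth_map.shape[1] - 1)
    rows = np.clip(np.round(ys).astype(int), 0, depth_map.shape[0] - 1)
    return depth_map[rows, cols]

# e.g. curve_401 = depth_profile(depth_map, *segments[301])
```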
It should be noted that in the embodiment of FIG. 4, it is assumed that the face image 21 in FIG. 2 is obtained by capturing a real face (i.e., the target face 22 is a real face). Because the paths of the line segments 301 to 304 include the position of the nose in the target face 22, where the depth value is relatively small, the curves 401 to 404 take a bent shape similar to a quadratic curve whose opening direction is upward. In addition, because the paths of the line segments 305 and 306 do not include the position of the nose in the target face 22 (i.e., the line segments 305 and 306 cross the cheek portions of the real face, where the depth changes little), the curves 405 and 406 are relatively flat.
However, the curves 401 to 406 in FIG. 4 are merely examples and are not intended to limit the invention. In other embodiments, the depth values corresponding to any of the curves 401 to 406 may be different and/or the number of sampling points corresponding to any of the curves 401 to 406 may be different, and the invention is not limited thereto. Alternatively, in another embodiment of FIG. 4, if the face image 21 in FIG. 2 is obtained by capturing a face in a photo (i.e., the target face 22 is not a real face), the depth distribution states of the curves 401 to 406 will be significantly different.
In an embodiment, the processor 13 may use a quadratic curve to fit or approximate at least one of the curves 401 to 406 to obtain a characteristic value related to that curve. In an embodiment, the characteristic values include a first characteristic value and a second characteristic value. The first characteristic value reflects an opening direction of the quadratic curve and a degree of curvature of the quadratic curve. The second characteristic value reflects the position (or relative position) of the extremum of the quadratic curve within the quadratic curve.
Fig. 5 is a schematic diagram of a quadratic curve according to an embodiment of the invention. Referring to FIGS. 1 to 5, taking the curve 401 as an example, the processor 13 may use the quadratic curve 501 to fit or approximate the curve 401. The quadratic curve 501 can be described by the following equation (1.1).
y = a(x - b)^2 + c (1.1)
In equation (1.1), the parameter y represents the depth value of the quadratic curve 501 along the vertical axis, the parameter x represents the sampling point of the quadratic curve 501 along the horizontal axis, the parameter a reflects the opening direction and the degree of curvature of the quadratic curve 501, the parameter b reflects the position of the extremum of the quadratic curve 501 within the quadratic curve 501, and the parameter c is a constant. In the embodiment of FIG. 5, a positive value of the parameter a indicates that the opening direction of the quadratic curve 501 is upward, the magnitude of the parameter a is positively correlated with the degree of curvature of the quadratic curve 501, and the value of the parameter b indicates that the minimum depth value of the quadratic curve 501 occurs at the b-th sampling point.
In an embodiment, the processor 13 may obtain a first characteristic value related to the curve 401 (or the quadratic curve 501) from the parameter a, and a second characteristic value related to the curve 401 (or the quadratic curve 501) from the parameter b. In an embodiment, the processor 13 may obtain the characteristic values related to any of the curves 402 to 406 in FIG. 4 in the same manner, and the description is not repeated here. The processor 13 may determine whether the target face 22 of FIG. 2 is a real face or a face in a photo according to the first characteristic value and the second characteristic value.
In one embodiment, the processor 13 may determine the parameter a as the first characteristic value. In one embodiment, the processor 13 may divide the parameter b by the total number of sampling points corresponding to the curve 401 (e.g., 100) and determine the result as the second characteristic value. Thus, in an embodiment, the first characteristic value may be the parameter a, and the second characteristic value may be a parameter p, where p = b / (total number of sampling points) = b / 100 in this example. It should be noted that, in other embodiments, the first characteristic value and the second characteristic value may be obtained by performing other logical operations on the parameters a and b, respectively, and the invention is not limited thereto.
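As a non-limiting illustration, fitting equation (1.1) to a depth profile and deriving the first and second characteristic values could be sketched as follows. np.polyfit returns the coefficients of c2*x^2 + c1*x + c0, and converting to the vertex form of equation (1.1) gives a = c2 and b = -c1 / (2*c2); the function name and the assumption that the profile is not perfectly flat (c2 != 0) are illustrative only.

```python
import numpy as np

def quadratic_features(depth_values):
    """Fit y = a(x - b)^2 + c to a 1-D depth profile and return (a, p)."""
    x = np.arange(len(depth_values), dtype=float)
    c2, c1, c0 = np.polyfit(x, depth_values, deg=2)  # y = c2*x^2 + c1*x + c0
    a = c2                     # first characteristic value: opening direction / curvature
    b = -c1 / (2.0 * c2)       # sampling point at which the extremum occurs (assumes c2 != 0)
    p = b / len(depth_values)  # second characteristic value: relative extremum position
    return a, p

# e.g. C1, C2 = quadratic_features(curve_401)
```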
In an embodiment, the processor 13 may determine whether the first characteristic value (represented by the parameter C1) and/or the second characteristic value (represented by the parameter C2) meet the preset condition. In one embodiment, the preset conditions corresponding to the different target regions (i.e., line segments) are shown in Table 1 below.
TABLE 1
Line segment (target region) | First characteristic value C1 | Second characteristic value C2
301 | C1<V1 | |C2-V4|<V5
302 | C1<V1 | |C2-V4|<V5
303 | C1<V1 | |C2-V4|<V5
304 | C1<V2 | |C2-V4|<V6
305 | C1>V3 |
306 | C1>V3 |
In one embodiment, the parameter V1 may be 0.015, the parameter V2 may be 0.03, the parameter V3 may be 0.02, the parameter V4 may be 0.5, the parameter V5 may be 0.3, and/or the parameter V6 may be 0.2. However, in other embodiments, the parameters V1 to V6 may be other values, and the invention is not limited thereto. In one embodiment, the processor 13 may train the deep learning model 101 using a plurality of training face images. Based on the training results, the processor 13 can derive the parameters V1 to V6 that can be used to distinguish faces in photos from real faces.
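One simple, non-limiting way such a threshold could be derived from training data (this particular rule is an assumption for illustration, not something this embodiment prescribes) is to compute a given characteristic value for a set of real-face samples and a set of photo-face samples, and place the threshold between the two distributions:

```python
import numpy as np

def derive_threshold(real_values, photo_values):
    """Midpoint between the class means, used as a simple separating threshold."""
    return (np.mean(real_values) + np.mean(photo_values)) / 2.0

# e.g. V1 = derive_threshold(a_values_from_real_faces, a_values_from_photo_faces)
```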
In one embodiment, the target face 22 of FIG. 2 may be determined to be a face in a photo as long as any one of the conditions listed in Table 1 is met. Alternatively, in one embodiment, the target face 22 of FIG. 2 may be determined to be a face in a photo only if at least two of the conditions listed in Table 1 are met. Alternatively, in one embodiment, the target face 22 of FIG. 2 may be determined to be a face in a photo only if all of the conditions listed in Table 1 are met.
For example, in one embodiment, assuming that the first characteristic value C1 related to the curve 401 meets the condition C1<V1 corresponding to the line segment 301 in Table 1, the processor 13 may determine, in response to the condition being met, that the target face 22 in FIG. 2 is a face in a photo rather than a real face. Alternatively, in one embodiment, assuming that the first characteristic value C1 related to the curve 405 meets the condition C1>V3 corresponding to the line segment 305 in Table 1, the processor 13 may determine, in response to the condition being met, that the target face 22 in FIG. 2 is a face in a photo rather than a real face. Alternatively, in one embodiment, assuming that the second characteristic value C2 related to the curve 402 meets the condition |C2-V4|<V5 corresponding to the line segment 302 in Table 1, the processor 13 may determine, in response to the condition being met, that the target face 22 of FIG. 2 is a face in a photo rather than a real face.
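The following minimal sketch applies the preset conditions of Table 1 under the "any condition met implies a face in a photo" decision rule described above; the per-segment characteristic values are assumed to have been computed with quadratic_features(), and the function and variable names are illustrative only.

```python
# Example threshold values from the embodiment above.
V1, V2, V3, V4, V5, V6 = 0.015, 0.03, 0.02, 0.5, 0.3, 0.2

def is_photo_face(features):
    """features: {segment_id: (C1, C2)} for segments 301 to 306."""
    checks = []
    for seg in (301, 302, 303):
        c1, c2 = features[seg]
        checks += [c1 < V1, abs(c2 - V4) < V5]
    c1, c2 = features[304]
    checks += [c1 < V2, abs(c2 - V4) < V6]
    checks += [features[305][0] > V3, features[306][0] > V3]
    return any(checks)  # photo face if any preset condition is met

# The target face is judged to be a real face only when is_photo_face(...) is False.
```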
In one embodiment, if the target face is determined to be a real face, the processor 13 may allow the subsequent operations related to face verification or face image registration to continue. For example, after determining that the target face 22 of FIG. 2 is a real face, the processor 13 may allow face verification and/or face image registration to be performed using the face image 21. Conversely, if the target face is determined to be a face in a photo (i.e., not a real face), the processor 13 may stop performing the subsequent operations related to face verification or face image registration. In this way, erroneous operations performed because a face in a photo is misjudged as a real face can be reduced.
Fig. 6 is a flowchart illustrating a method of recognizing a real face according to an embodiment of the invention. Referring to FIG. 6, in step S601, a face image of a target face is obtained. In step S602, depth information of a target region in the face image is obtained. In step S603, the depth information is analyzed to obtain at least one characteristic value related to a quadratic curve, where the quadratic curve reflects a depth distribution state of the target region. In step S604, it is determined whether the at least one characteristic value meets a preset condition. If the at least one characteristic value meets the preset condition, in step S605, it is determined that the target face is a face in a photo. If the at least one characteristic value does not meet the preset condition, in step S606, it is determined that the target face is a real face.
The details of the steps in FIG. 6 have been described above and are not repeated here. It should be noted that each step in FIG. 6 may be implemented as a plurality of program codes or as circuits, and the invention is not limited thereto. In addition, the method of FIG. 6 may be used together with the above exemplary embodiments or may be used alone, and the invention is not limited thereto.
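As a non-limiting sketch, steps S601 to S606 can be tied together by composing the helper functions sketched earlier (depth_profile, quadratic_features, and is_photo_face); depth_map and segments are assumed to be available as in the previous examples.

```python
def recognize_real_face(depth_map, segments):
    features = {}
    for seg_id, (start, end) in segments.items():       # S602: depth information of the target region
        profile = depth_profile(depth_map, start, end)   # sampling points along the segment
        features[seg_id] = quadratic_features(profile)   # S603: fit the quadratic, get (C1, C2)
    if is_photo_face(features):                          # S604: check the preset conditions
        return "face in a photo"                         # S605
    return "real face"                                   # S606
```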
In summary, the embodiments of the invention can effectively filter out a face in a photo presented in front of the lens and/or recognize a real face in front of the lens, thereby reducing erroneous operations performed because a face in a photo is misjudged as a real face.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.
Claims (4)
1. A method for recognizing a real face, comprising:
obtaining a face image of a target face;
analyzing the face image through a deep learning model to obtain the position of the nose of the target face in the face image;
determining a target region in the face image according to the position of the nose in the face image, wherein the target region comprises a first region and a second region, wherein the first region comprises the position of the nose in the face image and the second region does not comprise the position of the nose in the face image;
obtaining first depth information of the first area and second depth information of the second area;
analyzing the first depth information and the second depth information to obtain at least one characteristic value related to a quadratic curve, wherein the quadratic curve reflects a depth distribution state of the target region;
determining whether the at least one characteristic value meets a preset condition;
if the at least one characteristic value meets the preset condition, determining that the target face is a face in a photo; and
if the at least one characteristic value does not meet the preset condition, determining that the target face is a real face,
wherein the at least one characteristic value comprises a first characteristic value and a second characteristic value,
the step of analyzing the first depth information and the second depth information to obtain the at least one characteristic value related to the quadratic curve comprises:
describing the quadratic curve by the equation y = a(x - b)^2 + c;
obtaining the first characteristic value according to a parameter a in the equation; and
obtaining the second characteristic value according to a parameter b in the equation.
2. The recognition method of a real face according to claim 1, wherein the first characteristic value reflects an opening direction of the quadratic curve and a degree of curvature of the quadratic curve, and the second characteristic value reflects a position of an extremum of the quadratic curve in the quadratic curve.
3. A device for recognizing a real human face, comprising:
a depth camera; and
a processor connected to the depth camera,
wherein the processor is configured to obtain a facial image of a target face with the depth camera,
the processor is further configured to analyze the facial image by a deep learning model to obtain a position of a nose of the target face in the facial image,
the processor is further configured to determine a target region in the facial image based on the position of the nose in the facial image, wherein the target region comprises a first region and a second region, wherein the first region comprises the position of the nose in the facial image and the second region does not comprise the position of the nose in the facial image,
the processor is also configured to obtain first depth information for the first region and second depth information for the second region,
the processor is further configured to analyze the first depth information and the second depth information to obtain at least one characteristic value related to a quadratic curve, wherein the quadratic curve reflects a depth distribution state of the target region,
the processor is further configured to determine whether the at least one characteristic value meets a preset condition,
if the at least one characteristic value meets the preset condition, the processor is further configured to determine that the target face is a face in a photo, and
if the at least one characteristic value does not meet the preset condition, the processor is further configured to determine that the target face is a real face,
wherein the at least one characteristic value comprises a first characteristic value and a second characteristic value,
the operation of the processor analyzing the first depth information and the second depth information to obtain the at least one characteristic value related to the quadratic curve comprises:
describing the quadratic curve by the equation y = a(x - b)^2 + c;
obtaining the first characteristic value according to a parameter a in the equation; and
obtaining the second characteristic value according to a parameter b in the equation.
4. A real face recognition apparatus according to claim 3, wherein the first characteristic value reflects an opening direction of the quadratic curve and a degree of curvature of the quadratic curve, and the second characteristic value reflects a position of an extremum of the quadratic curve in the quadratic curve.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW108139763A TWI731461B (en) | 2019-11-01 | 2019-11-01 | Identification method of real face and identification device using the same |
TW108139763 | 2019-11-01 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112784661A CN112784661A (en) | 2021-05-11 |
CN112784661B true CN112784661B (en) | 2024-01-19 |
Family
ID=75749984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010140606.0A Active CN112784661B (en) | 2019-11-01 | 2020-03-03 | Real face recognition method and real face recognition device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112784661B (en) |
TW (1) | TWI731461B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI252437B (en) * | 2004-07-20 | 2006-04-01 | Jing-Jing Fang | Feature-based head structure and texturing head |
US8090160B2 (en) * | 2007-10-12 | 2012-01-03 | The University Of Houston System | Automated method for human face modeling and relighting with application to face recognition |
WO2016018488A2 (en) * | 2014-05-09 | 2016-02-04 | Eyefluence, Inc. | Systems and methods for discerning eye signals and continuous biometric identification |
TW201727537A (en) * | 2016-01-22 | 2017-08-01 | 鴻海精密工業股份有限公司 | Face recognition system and face recognition method |
-
2019
- 2019-11-01 TW TW108139763A patent/TWI731461B/en active
-
2020
- 2020-03-03 CN CN202010140606.0A patent/CN112784661B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018086543A1 (en) * | 2016-11-10 | 2018-05-17 | 腾讯科技(深圳)有限公司 | Living body identification method, identity authentication method, terminal, server and storage medium |
WO2019056988A1 (en) * | 2017-09-25 | 2019-03-28 | 杭州海康威视数字技术股份有限公司 | Face recognition method and apparatus, and computer device |
WO2019075840A1 (en) * | 2017-10-17 | 2019-04-25 | 平安科技(深圳)有限公司 | Identity verification method and apparatus, storage medium and computer device |
CN108376239A (en) * | 2018-01-25 | 2018-08-07 | 努比亚技术有限公司 | A kind of face identification method, mobile terminal and storage medium |
CN108416291A (en) * | 2018-03-06 | 2018-08-17 | 广州逗号智能零售有限公司 | Face datection recognition methods, device and system |
CN109117755A (en) * | 2018-07-25 | 2019-01-01 | 北京飞搜科技有限公司 | A kind of human face in-vivo detection method, system and equipment |
Non-Patent Citations (2)
Title |
---|
Deng Qianwen; Feng Ziliang; Qiu Chenpeng. Living face detection method based on near-infrared and visible-light binocular vision. Journal of Computer Applications, (07), full text. *
Ma Boyu; Wei Yinwei. Research and implementation of a face recognition system based on the AdaBoost algorithm. Chinese Journal of Scientific Instrument, 2016, (S1), full text. *
Also Published As
Publication number | Publication date |
---|---|
TWI731461B (en) | 2021-06-21 |
TW202119287A (en) | 2021-05-16 |
CN112784661A (en) | 2021-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110826519B (en) | Face shielding detection method and device, computer equipment and storage medium | |
CN109948408B (en) | Activity test method and apparatus | |
JP6550094B2 (en) | Authentication device and authentication method | |
US20230086552A1 (en) | Image processing method and apparatus, device, storage medium, and computer program product | |
CN106934376B (en) | A kind of image-recognizing method, device and mobile terminal | |
WO2018028546A1 (en) | Key point positioning method, terminal, and computer storage medium | |
CN109345553B (en) | Palm and key point detection method and device thereof, and terminal equipment | |
WO2020199611A1 (en) | Liveness detection method and apparatus, electronic device, and storage medium | |
CN110929805B (en) | Training method, target detection method and device for neural network, circuit and medium | |
CN111476268A (en) | Method, device, equipment and medium for training reproduction recognition model and image recognition | |
CN111767900A (en) | Face living body detection method and device, computer equipment and storage medium | |
CN111539911B (en) | Mouth breathing face recognition method, device and storage medium | |
CN109388926B (en) | Method of processing biometric image and electronic device including the same | |
CN112149615B (en) | Face living body detection method, device, medium and electronic equipment | |
EP2639743A2 (en) | Image processing device, image processing program, and image processing method | |
JP2011210126A (en) | Apparatus and method for recognizing pattern | |
CN114495241B (en) | Image recognition method and device, electronic equipment and storage medium | |
CN112836653A (en) | Face privacy method, device and apparatus and computer storage medium | |
KR101961462B1 (en) | Object recognition method and the device thereof | |
CN111353325A (en) | Key point detection model training method and device | |
CN113642639A (en) | Living body detection method, living body detection device, living body detection apparatus, and storage medium | |
JP2012118927A (en) | Image processing program and image processing device | |
CN112861743A (en) | Palm vein image anti-counterfeiting method, device and equipment | |
CN112784661B (en) | Real face recognition method and real face recognition device | |
CN112200109A (en) | Face attribute recognition method, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |