
CN112001233A - Biological characteristic identification system and identification method - Google Patents

Biological characteristic identification system and identification method Download PDF

Info

Publication number
CN112001233A
CN112001233A (application CN202010666402.0A; granted as CN112001233B)
Authority
CN
China
Prior art keywords
information
identification
distance
extraction unit
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010666402.0A
Other languages
Chinese (zh)
Other versions
CN112001233B (en)
Inventor
郑智元
林威汉
翁振庭
赵芳誉
蔡呈新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elan Microelectronics Corp
Original Assignee
Elan Microelectronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elan Microelectronics Corp filed Critical Elan Microelectronics Corp
Publication of CN112001233A publication Critical patent/CN112001233A/en
Application granted granted Critical
Publication of CN112001233B publication Critical patent/CN112001233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

The invention discloses a biometric identification system and method. The system includes a sensor for sensing a first biometric feature to generate an image; a first feature extraction unit, coupled to the sensor, for generating first information according to the image, the first information describing the uniqueness of the first biometric feature; a second feature extraction unit, coupled to the sensor, for generating second information according to the image, the second information describing the true-or-false characteristics of the first biometric feature; and an identification unit, coupled to the first feature extraction unit and the second feature extraction unit, for generating an identification result according to the first information and/or the second information.

Description

Biological characteristic identification system and identification method
Technical Field
The invention relates to a biological characteristic identification system and a biological characteristic identification method.
Background
More and more electronic devices and systems use biometric features to identify a user's identity. Common biometric identification methods include fingerprint, face, iris, voiceprint, and palm-print identification. However, current biometric identification has shortcomings; for example, fingerprint identification cannot effectively distinguish whether a finger is real. Biometric recognition systems therefore remain vulnerable to spoofing.
Disclosure of Invention
The present invention is directed to a biometric identification system and method that can prevent false biometric features from passing identity recognition.
According to the present invention, a biometric identification system includes a sensor, a first feature extraction unit, a second feature extraction unit, and an identification unit. The sensor senses a first biometric feature to generate an image. The first feature extraction unit is coupled to the sensor and generates first information according to the image, wherein the first information describes the uniqueness of the first biometric feature. The second feature extraction unit is coupled to the sensor and generates second information according to the image, wherein the second information describes the true-or-false characteristics of the first biometric feature. The identification unit is coupled to the first feature extraction unit and the second feature extraction unit, and generates an identification result according to the second information, or the first information and the second information.
According to the present invention, a biometric identification method includes: sensing a first biometric feature to generate an image; generating first information according to the image, wherein the first information describes the uniqueness of the first biometric feature; generating second information according to the image, wherein the second information describes the true-or-false characteristics of the first biometric feature; and generating an identification result according to the second information, or the first information and the second information.
The identification system and method can effectively improve the security of a biometric identification system and prevent false biometric features from passing identity authentication.
Drawings
FIG. 1 is a block diagram of an embodiment of a biometric identification system according to the present invention.
FIG. 2 is a flow chart of a biometric identification method according to the present invention.
FIG. 3 shows a block diagram of an embodiment of the CNN and classifier.
Description of reference numerals: 10-an identification system; 11-a sensor; 12-a first feature extraction unit; 13-a second feature extraction unit; 131-a convolutional neural network; 1311-feature extraction section; 1312-classification section; 14-memory; 15-an identification unit; 151-classifier.
Detailed Description
Fig. 1 shows an embodiment of the biometric identification system of the present invention. The identification system 10 includes a sensor 11, a first feature extraction unit 12, a second feature extraction unit 13, a memory 14, and an identification unit 15. The sensor 11 may be an image sensor for sensing a first biometric feature to generate an image A. The first biometric feature may be, for example, a fingerprint, a face, a palm print, or an iris. The first feature extraction unit 12, the second feature extraction unit 13, and the identification unit 15 may be implemented in software or hardware. The first feature extraction unit 12 is coupled to the sensor 11 and generates first information according to the image A provided by the sensor 11, wherein the first information describes the uniqueness of the first biometric feature. The first information includes a plurality of sets of feature vectors. In one embodiment, the first feature extraction unit 12 may extract features of the image A by a computer vision method to generate the first information. The computer vision method may be an algorithm such as Features from Accelerated Segment Test (FAST), Adaptive and Generic Accelerated Segment Test (AGAST), Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), KAZE, or AKAZE. In another embodiment, the first feature extraction unit 12 is a fully trained deep learning model and may be implemented, for example, with a model architecture based on a Convolutional Neural Network (CNN) 131. The method used by the first feature extraction unit 12 may also be, but is not limited to, Local Binary Patterns (LBP), Local Phase Quantization (LPQ), Histogram of Oriented Gradients (HOG), or other algorithms.
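As an illustration of one of the computer-vision descriptors named above, the following sketch computes raw Local Binary Pattern (LBP) codes over a toy grayscale image. The 3x3 neighbourhood encoding, the bit ordering, and the sample image are assumptions for demonstration and are not part of the patent; real LBP pipelines typically histogram these codes into a feature vector.

```python
def lbp_codes(img):
    # Minimal 3x3 Local Binary Pattern: each interior pixel is encoded
    # by comparing its 8 neighbours against it, clockwise from the
    # top-left neighbour (bit ordering is an arbitrary choice here).
    h, w = len(img), len(img[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_codes(img))  # one code for the single interior pixel -> [120]
```

A feature vector for matching would then be the histogram of these codes over the whole image or over local blocks.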
The second feature extraction unit 13 is coupled to the sensor 11 and generates, according to the image A, second information describing the true-or-false characteristics of the first biometric feature. In one embodiment, the second feature extraction unit 13 may extract features of the image A by a computer vision method to generate the second information. The computer vision method may be an algorithm such as Features from Accelerated Segment Test (FAST), Adaptive and Generic Accelerated Segment Test (AGAST), Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), KAZE, or AKAZE. In another embodiment, the second feature extraction unit 13 is a pre-trained deep learning model and may be implemented, for example, with a model architecture based on a Convolutional Neural Network (CNN) 131 or an improvement thereof, or with a deep network model such as AlexNet or MobileNet. The method used by the second feature extraction unit 13 may also be, but is not limited to, Local Binary Patterns (LBP), Local Phase Quantization (LPQ), Histogram of Oriented Gradients (HOG), or other algorithms. In one embodiment, the second information includes embedded features (embedding features) of the image, which are vector data generated when the CNN 131 transforms the image A; they describe the true-or-false characteristics of the first biometric feature rather than being a single value representing true (1) or false (0).
Both the first feature extraction unit 12 and the second feature extraction unit 13 perform feature extraction, but their purposes differ, and so do the coefficients they use to extract features. They can be understood as viewing the image A from different angles. The first feature extraction unit 12 describes the uniqueness of the first biometric feature shown in the image A, such as the distribution of specific feature points; this uniqueness can be used to distinguish different individuals. The second feature extraction unit 13 describes the true-or-false characteristics of the first biometric feature shown in the image A, and is usually used to determine whether the first biometric feature comes from a living body.
The memory 14 is coupled to the first feature extraction unit 12, the second feature extraction unit 13, and the identification unit 15, and stores first template information and second template information. The first template information and the second template information are generated by the first feature extraction unit 12 and the second feature extraction unit 13, respectively, during a user's registration (enrollment) procedure. In the registration procedure, the sensor 11 senses a second biometric feature of the user to be registered to generate an image B (not shown), wherein the first biometric feature and the second biometric feature are the same type of biometric feature. The first feature extraction unit 12 generates, from the image B, the first template information describing the uniqueness of the second biometric feature. The second feature extraction unit 13 generates, from the image B, the second template information describing the true-or-false characteristics of the second biometric feature.
The identification unit 15 is coupled to the first feature extraction unit 12, the second feature extraction unit 13, and the memory 14. The identification unit 15 generates an identification result representing passed or failed identity authentication according to the first information, the second information, or both. In one embodiment, the identification unit 15 includes a classifier 151. The classifier 151 is based on a model trained by machine learning, which may be implemented with a Support Vector Machine (SVM) or a Neural Network (NN). The classifier 151 may be implemented in software or as a hardware circuit.
The operation of the identification system 10 is described below taking fingerprint identification as an example; however, the present invention is not limited to fingerprint identification and can also be applied to other biometric identification, such as face, iris, and palm-print identification. When the identification system 10 performs the enrollment procedure, the enrollee places a finger on the sensor 11. In this embodiment, the sensor 11 is a fingerprint sensor, which may be an optical fingerprint sensor or a capacitive fingerprint sensor. The sensor 11 senses the fingerprint of the finger (corresponding to the second biometric feature described above) to generate a fingerprint image Fi1. The first feature extraction unit 12 generates first template information En1 from the fingerprint image Fi1; the first template information En1 can be understood as describing the fingerprint characteristics of the fingerprint image Fi1. Each person's fingerprint ridges are unique and different from everyone else's, and the first template information En1 describes this uniqueness of the fingerprint. The second feature extraction unit 13 generates second template information En2 from the fingerprint image Fi1; the second template information En2 describes the true-or-false characteristics of the fingerprint image Fi1. After the first template information En1 and the second template information En2 are stored in the memory 14, the enrollment procedure is completed.
When the identification system 10 performs the authentication procedure, the person to be authenticated places a finger on the sensor 11, and the sensor 11 senses the fingerprint of the finger (corresponding to the first biometric feature) to generate a fingerprint image Fi2, as shown in step S10 of Fig. 2. The first feature extraction unit 12 generates first information Ve1 from the fingerprint image Fi2, as shown in step S12 of Fig. 2. The second feature extraction unit 13 generates second information Ve2 from the fingerprint image Fi2, as shown in step S14 of Fig. 2; the second information Ve2 describes the true-or-false characteristics of the fingerprint image Fi2. Finally, the identification unit 15 generates an identification result Vre indicating authentication success or failure according to the first information Ve1, the second information Ve2, or both, as shown in step S16 of Fig. 2.
In one embodiment, the identification unit 15 generates the identification result Vre according to the difference D1 between the first information Ve1 and the first template information En1 and the difference D2 between the second information Ve2 and the second template information En2. The identification unit 15 may determine the differences D1 and D2 using, but not limited to, one of the Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance, and Hamming distance, and then weight the differences D1 and D2 by weights W1 and W2, respectively, and sum them, wherein the weights W1 and W2 are non-zero values. If the weighted sum is smaller than a predetermined value TH1, the identification result Vre is the value "1", indicating that authentication succeeded. Conversely, if the weighted sum is greater than the predetermined value TH1, the identification result Vre is the value "0", indicating that authentication failed.
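The weighted-distance decision of this embodiment can be sketched as follows. The Euclidean metric is just one of the distances the text lists, and the weights W1, W2 and threshold TH1 used here are hypothetical tuning values, not values specified by the patent.

```python
import math

def euclidean(u, v):
    # One of the listed metrics; Manhattan, Hamming, etc. could be
    # substituted without changing the decision structure.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def authenticate(ve1, en1, ve2, en2, w1=0.6, w2=0.4, th1=1.0):
    d1 = euclidean(ve1, en1)    # difference D1: uniqueness mismatch
    d2 = euclidean(ve2, en2)    # difference D2: true/false-characteristic mismatch
    s = w1 * d1 + w2 * d2       # weighted sum, weights non-zero
    return 1 if s < th1 else 0  # "1" = success, "0" = failure (vs. TH1)

print(authenticate([0.1, 0.2], [0.1, 0.2], [0.5], [0.5]))  # identical -> 1
print(authenticate([0.0, 0.0], [3.0, 4.0], [0.0], [2.0]))  # far apart -> 0
```

In practice W1, W2, and TH1 would be tuned on held-out genuine and spoof samples to balance false accepts against false rejects.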
In another embodiment, the identification unit 15 includes a classifier 151, which may be implemented in hardware or software and performs classification using a machine learning method. The classifier 151 performs an identification step to generate the identification result Vre. The identification step includes, but is not limited to, judging the similarity between the combination of the first information Ve1 and the second information Ve2 and the combination of the first template information En1 and the second template information En2. If the similarity is greater than a predetermined value TH2, the identification result Vre is the value "1", indicating that authentication succeeded; otherwise, the identification result Vre is the value "0", indicating that authentication failed.
In one embodiment, the identification unit 15 also includes the classifier 151, and the identification unit 15 first determines whether the first information Ve1 is similar to the first template information En1 using, but not limited to, one of the Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance, and Hamming distance. If not, the identification unit 15 generates an identification result Vre of "0", indicating that authentication failed. If so, the identification unit 15 performs the identification step with the classifier 151 to generate the identification result Vre. This saves resources: if the first information Ve1 is judged dissimilar to the first template information En1, the person to be authenticated is judged to differ from the registered user, so there is no need to continue with the identification step.
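The two-stage flow of this embodiment, a cheap distance gate followed by the classifier, can be sketched as below. The gate threshold and the stand-in classifier are illustrative assumptions; the patent's classifier 151 would be a trained SVM or neural network.

```python
import math

def is_similar(probe, template, threshold=1.0):
    # Coarse gate using Euclidean distance, one of the listed metrics.
    # The threshold is a hypothetical tuning value.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(probe, template)))
    return d < threshold

def recognize(ve1, en1, ve2, en2, classifier):
    # Early exit: if the uniqueness information already rules the
    # candidate out, the (costlier) classifier is never run.
    if not is_similar(ve1, en1):
        return 0  # authentication failed
    return classifier(ve1 + ve2, en1 + en2)

# Stand-in for classifier 151, for illustration only.
always_pass = lambda probe, template: 1
print(recognize([0.0], [0.1], [0.2], [0.2], always_pass))  # gate passes -> 1
print(recognize([0.0], [5.0], [0.2], [0.2], always_pass))  # gate rejects -> 0
```

The embodiment gating on Ve2/En2 instead is the same structure with the roles of the two information pairs swapped.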
In another embodiment, the identification unit 15 also includes the classifier 151, and the identification unit 15 first determines whether the second information Ve2 is similar to the second template information En2 using, but not limited to, one of the Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance, and Hamming distance. If not, the identification unit 15 generates an identification result Vre of "0", indicating that authentication failed. If so, the identification unit 15 performs the identification step with the classifier 151 to generate the identification result Vre.
If the first feature extraction unit 12 is a trained deep learning model, it must be trained in advance to acquire its feature extraction capability. Taking fingerprint identification as an example: during training, fingerprint images of many different persons are provided to a training program T1 that has the same model architecture as the first feature extraction unit 12. From these fingerprint images and their corresponding owners, the training program T1 learns a set of coefficients for extracting the features of fingerprint images. In essence, this process tells the training program T1 which fingerprint images belong to which person and lets it learn how to classify. In operation, the first feature extraction unit 12 uses this set of coefficients to extract features of a fingerprint image and generate the first information. The first information describes the uniqueness of the fingerprint and can be understood as a mathematical description of what the fingerprint looks like. If the first feature extraction unit 12 instead extracts features with a computer vision method, it uses well-defined features to describe what the biometric feature is, i.e., its uniqueness.
If the second feature extraction unit 13 is a trained deep learning model, it likewise must be trained in advance to acquire its feature extraction capability. Taking fingerprint identification as an example: during training, a large number of true fingerprint images and false fingerprint images are provided to a training program T2 that has the same model architecture as the second feature extraction unit 13. The true fingerprint images are obtained by sensing the fingerprints of many different persons, all taken from living bodies. The false fingerprint images are obtained by forming fingerprints on a material such as silicone and sensing them. By telling the training program T2 which images are true fingerprints and which are false, the training program T2 learns a set of coefficients for identifying a true or false fingerprint; in effect, it learns how to distinguish true from false fingerprints. The second feature extraction unit 13 uses this set of coefficients to extract features of a fingerprint image and generate the second information. The second information could be used to directly judge whether a fingerprint is true or false, but the present invention does not do so. Instead, the second feature extraction unit 13 mainly obtains, as the second information, the set of feature values generated after feature extraction but before the true-or-false decision. The second information therefore describes the authenticity of the fingerprint and can be understood as a mathematical description of that authenticity.
If the second feature extraction unit 13 extracts features with a computer vision method, it uses well-defined features to describe the true-or-false characteristics of the biometric feature.
By providing a large number of images to the first feature extraction unit 12 and the second feature extraction unit 13 in advance, a large number of pieces of first information and second information can be obtained for the machine-learning model of the classifier 151 to learn classification, so that the classifier 151 gains the ability to judge authentication success or failure. For example, for fingerprint identification, a training program T3 having the same model architecture as the classifier 151 is prepared. A large number of combinations of first information and second information are then provided to the training program T3, which is told which combinations pass verification and which fail. For example, the training program T3 is told that first and second information generated from a true fingerprint image represent successful verification (pass), while first and second information generated from a false fingerprint image represent failed verification (fail). From this process, the training program T3 learns how to judge whether a combination of first information and second information represents verification success or failure.
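The labelling step for training program T3 can be sketched as follows; the vector sizes and the helper name are illustrative assumptions, since the patent does not fix a data format.

```python
def make_training_pair(ve1, ve2, from_live_finger):
    # Combine first information (uniqueness) and second information
    # (true/false characteristics) into one training sample, labelled
    # with the verification outcome the classifier should learn.
    x = list(ve1) + list(ve2)
    y = 1 if from_live_finger else 0  # 1 = pass, 0 = fail
    return x, y

pairs = [
    make_training_pair([0.2, 0.9], [0.7, 0.1], True),   # from a real finger
    make_training_pair([0.2, 0.9], [0.1, 0.8], False),  # from a fake finger
]
for x, y in pairs:
    print(x, y)
```

A set of such (x, y) pairs is exactly the supervised-learning input an SVM or neural-network trainer expects.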
As can be understood from the above description of training and recognition, the present invention determines whether authentication passes by using the first information generated by the first feature extraction unit 12 to judge whether the currently sensed fingerprint is similar to the enrolled fingerprint, and by using the second information generated by the second feature extraction unit 13 to judge whether the true-or-false characteristics of the currently sensed fingerprint are close to those of the enrolled fingerprint.
Fig. 3 provides an embodiment of the architecture of the CNN 131, which mainly includes a feature extraction section 1311 and a classification section 1312. The image A is processed by the feature extraction section 1311 and the classification section 1312, and embedded feature information is generated at the classification section 1312; this embedded feature information represents the true-or-false characteristics of the biometric feature (e.g., a fingerprint) shown in the image A. The embedded feature information may be a set of values, such as "1101000". Note that the second feature extraction unit 13 does not use the classification section 1312 to classify the image data into a true-or-false recognition result; instead, it obtains the embedded feature information generated by the classification section 1312. Using a feature extraction unit to directly determine whether a biometric feature to be verified is true or false can cause accuracy problems. For example, if false fingerprints made of 10 kinds of material are provided to train the feature extraction unit, the unit cannot accurately judge false fingerprints made of materials other than those 10 kinds. The second feature extraction unit 13 of the present invention does not directly determine whether the biometric feature is true or false; it obtains the embedded feature information describing the true-or-false characteristics of the biometric feature and compares it with the second template information registered in the memory. Therefore, a new material not yet learned by the second feature extraction unit 13 has little influence on the authentication accuracy of the present invention.
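The point that the second information is the embedded feature vector, not the final true/false score, can be illustrated with a deliberately tiny fully connected stand-in for the classification section 1312. A real CNN 131 would convolve the image in section 1311 first; all layer sizes and weights here are arbitrary assumptions for demonstration.

```python
import random

random.seed(0)  # deterministic demo weights

def dense(x, w, b):
    # Fully connected layer: one output per weight row.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

# Hypothetical sizes: 4 extracted features -> 7-D embedding -> 1 score.
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(7)]
b_hidden = [0.0] * 7
w_out = [[random.uniform(-1, 1) for _ in range(7)]]
b_out = [0.0]

features = [0.3, -0.1, 0.8, 0.5]                       # from section 1311
embedding = relu(dense(features, w_hidden, b_hidden))  # used as second information
score = dense(embedding, w_out, b_out)                 # true/false score: NOT used

print(len(embedding))  # 7-dimensional embedded feature, cf. "1101000"
```

Matching compares `embedding` against the enrolled template by distance, so a spoof material unseen in training still yields a comparable vector even though `score` might misclassify it.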
The foregoing description of the preferred embodiments of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed; modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The embodiments were chosen to illustrate the principles of the invention and to enable those skilled in the art to utilize the invention in various embodiments and with modifications suited to the particular use contemplated. The scope of the invention is intended to be defined by the appended claims and their equivalents.

Claims (28)

1. A biometric identification system, comprising:
the sensor is used for sensing a first biological characteristic to generate an image;
a first feature extraction unit, coupled to the sensor, for generating a first information according to the image, the first information describing a uniqueness of the first biological feature;
a second feature extraction unit coupled to the sensor, the second feature extraction unit generating second information according to the image, the second information describing true and false characteristics of the first biological feature; and
an identification unit, coupled to the first feature extraction unit and the second feature extraction unit, for generating an identification result according to the second information, or the first information and the second information.
2. The identification system of claim 1, wherein the identification result represents passing or failing of identity authentication.
3. The identification system of claim 1 wherein the sensor is a fingerprint sensor and the first biometric characteristic is a fingerprint.
4. The identification system of claim 1 further comprising a memory for storing a first template information and a second template information.
5. The identification system of claim 4 wherein the identification unit generates the identification result based on a difference between the first information and the first template information and a difference between the second information and the second template information.
6. An identification system as claimed in claim 5 wherein the identification unit uses one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance and Hamming distance to determine the difference between the first information and the first template information and the difference between the second information and the second template information.
7. The identification system of claim 4 wherein the identification unit comprises a classifier for performing an identification step to determine similarity between the combination of the first information and the second information and the combination of the first template information and the second template information to generate the identification result.
8. The identification system of claim 7 wherein the classifier comprises a support vector machine or a neural network.
9. The identification system of claim 7 wherein the identification unit further comprises determining whether the first information is similar to the first template information, and if so, the identification unit utilizes the classifier to perform the identifying step.
10. The identification system of claim 7 wherein the identification unit further comprises determining whether the second information is similar to the second template information, and if so, the identification unit utilizes the classifier to perform the identifying step.
11. An identification system as claimed in claim 9 or 10 wherein the identification unit further comprises using one of a euclidean distance, a manhattan distance, a chebyshev distance, a minkowski distance, a normalized euclidean distance, a mahalanobis distance and a hamming distance to determine whether the first information is similar to the first template information or whether the second information is similar to the second template information.
12. The identification system of claim 1 wherein the first feature extraction unit or the second feature extraction unit comprises a deep learning model.
13. The identification system of claim 1 wherein the first feature extraction unit or the second feature extraction unit comprises a convolutional neural network.
14. An identification system as claimed in claim 1, wherein the first feature extraction unit or the second feature extraction unit uses a Features from Accelerated Segment Test (FAST), Adaptive and Generic Accelerated Segment Test (AGAST), Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), KAZE, AKAZE, Local Binary Patterns, Local Phase Quantization, or Histogram of Oriented Gradients algorithm to obtain the first information.
15. A method for identifying a biometric feature, comprising the steps of:
A. sensing a first biological characteristic to generate an image;
B. generating a first information according to the image, wherein the first information describes the uniqueness of the first biological feature;
C. obtaining second information from the image, wherein the second information describes true and false characteristics of the first biological feature; and
D. generating an identification result according to the second information, or the first information and the second information.
16. The identification method of claim 15 further comprising determining whether the identity authentication is passed or not passed according to the identification result.
17. The method of claim 15, wherein the first biometric characteristic is a fingerprint.
18. The identification method of claim 15 further comprising obtaining a first template information and a second template information from a memory.
19. The identification method of claim 18, wherein the step D comprises generating the identification result according to the difference between the first information and the first template information and the difference between the second information and the second template information.
20. An identification method as claimed in claim 19 wherein the step D further includes determining the difference between the first information and the first template information and determining the difference between the second information and the second template information using one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance and Hamming distance.
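The distance measures recited in claims 19 and 20 can each be computed directly from two feature vectors. A pure-Python sketch for illustration (the Mahalanobis distance is omitted, since it additionally requires a covariance matrix estimated from training data):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def minkowski(a, b, p):
    # p = 1 gives the Manhattan distance, p = 2 the Euclidean distance.
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def normalized_euclidean(a, b, std):
    # Each dimension is scaled by its standard deviation before measuring.
    return sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, std)) ** 0.5

def hamming(a, b):
    # For binary descriptors: the count of differing positions.
    return sum(1 for x, y in zip(a, b) if x != y)
```

Whichever measure is chosen, a smaller distance between the extracted information and the stored template information indicates greater similarity.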
21. The identification method of claim 18, wherein the step D comprises performing an identification step in which a classifier determines the similarity between the combination of the first information and the second information and the combination of the first template information and the second template information, to generate the identification result.
22. An identification method as claimed in claim 21 further comprising implementing the classifier with a support vector machine or a neural network.
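Claims 21 and 22 describe a classifier scoring the concatenation of the two kinds of information against the concatenated templates. A minimal stand-in using a linear decision function is sketched below; the weight values used in the usage example are invented for illustration, whereas a real system would learn them by training a support vector machine or neural network:

```python
def linear_classifier(first, second, first_tpl, second_tpl, weights, bias):
    # Concatenate the two kinds of information, take per-dimension absolute
    # differences from the concatenated template, then apply a linear
    # decision function f(x) = w . x + b; f >= 0 is read as a match.
    sample = list(first) + list(second)
    template = list(first_tpl) + list(second_tpl)
    diffs = [abs(s - t) for s, t in zip(sample, template)]
    score = sum(w * d for w, d in zip(weights, diffs)) + bias
    return score >= 0
```

With negative weights and a positive bias, a sample identical to the template scores at the bias and matches, while large per-dimension differences drive the score below zero and reject the sample.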
23. The identification method of claim 21, wherein the step D further comprises:
determining whether the first information is similar to the first template information; and
if so, performing the identification step.
24. The identification method of claim 21, wherein the step D further comprises:
determining whether the second information is similar to the second template information; and
if so, performing the identification step.
25. An identification method as claimed in claim 23 or 24 wherein the step D further includes using one of Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance and Hamming distance to determine whether the first information is similar to the first template information or whether the second information is similar to the second template information.
26. An identification method according to claim 15, wherein the step B includes using a features-from-accelerated-segment-test (FAST), adaptive and generic accelerated segment test (AGAST), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), KAZE, AKAZE, local binary pattern (LBP), local phase quantization (LPQ) or histogram of oriented gradients (HOG) algorithm to obtain the first information.
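Among the algorithms listed in claim 26, the local binary pattern is simple enough to sketch: each pixel is encoded by comparing it with its eight neighbours. A minimal 3x3 implementation for illustration (border pixels are skipped; real LBP variants also differ in radius, sampling, and bit ordering):

```python
def lbp_codes(image):
    """Compute the 8-bit local binary pattern code for each interior pixel.

    `image` is a 2-D list of grey levels; each neighbour that is >= the
    centre contributes a 1 bit, read clockwise from the top-left corner.
    """
    h, w = len(image), len(image[0])
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            centre = image[i][j]
            code = 0
            for di, dj in offsets:
                code = (code << 1) | (1 if image[i + di][j + dj] >= centre else 0)
            row.append(code)
        codes.append(row)
    return codes
```

A histogram of these codes over the image (or over sub-blocks) then serves as the texture descriptor compared in the matching step.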
27. The method of claim 15, wherein the step B includes using a deep learning model to obtain the first information or the second information.
28. An identification method according to claim 15 wherein step B includes using a convolutional neural network to obtain the first information or the second information.
CN202010666402.0A 2020-07-01 2020-07-13 Biological feature identification system and identification method Active CN112001233B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW109122271A TWI792017B (en) 2020-07-01 2020-07-01 Biometric identification system and identification method thereof
TW109122271 2020-07-01

Publications (2)

Publication Number Publication Date
CN112001233A true CN112001233A (en) 2020-11-27
CN112001233B CN112001233B (en) 2024-08-20

Family

ID=73467956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010666402.0A Active CN112001233B (en) 2020-07-01 2020-07-13 Biological feature identification system and identification method

Country Status (2)

Country Link
CN (1) CN112001233B (en)
TW (1) TWI792017B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI794696B (en) * 2020-12-14 2023-03-01 晶元光電股份有限公司 Optical sensing device

Citations (14)

Publication number Priority date Publication date Assignee Title
US20070211926A1 (en) * 2006-03-13 2007-09-13 Fujitsu Limited Fingerprint authenticating apparatus, live-finger determining apparatus, and live-finger determining method
CN101162499A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Method for using human face formwork combination to contrast
CN102708360A (en) * 2012-05-09 2012-10-03 深圳市亚略特生物识别科技有限公司 Method for generating and automatically updating fingerprint template
JP2012252644A (en) * 2011-06-06 2012-12-20 Seiko Epson Corp Biological identification device and biological identification method
CN103136504A (en) * 2011-11-28 2013-06-05 汉王科技股份有限公司 Face recognition method and device
CN103902961A (en) * 2012-12-28 2014-07-02 汉王科技股份有限公司 Face recognition method and device
CN104042220A (en) * 2014-05-28 2014-09-17 上海思立微电子科技有限公司 Device and method for detecting living body fingerprint
CN105740750A (en) * 2014-12-11 2016-07-06 深圳印象认知技术有限公司 Fingerprint in-vivo detection and identification method and apparatus
US20170004352A1 (en) * 2015-07-03 2017-01-05 Fingerprint Cards Ab Apparatus and computer-implemented method for fingerprint based authentication
CN106657056A (en) * 2016-12-20 2017-05-10 深圳芯启航科技有限公司 Biological feature information management method and system
CN107223251A (en) * 2017-05-03 2017-09-29 深圳市汇顶科技股份有限公司 Determination method, identity identifying method and the device of vital sign information
CN107278308A (en) * 2017-05-20 2017-10-20 深圳信炜科技有限公司 Image-recognizing method, pattern recognition device, electronic installation and computer-readable storage medium
CN107358145A (en) * 2017-05-20 2017-11-17 深圳信炜科技有限公司 Imaging sensor and electronic installation
WO2018082011A1 (en) * 2016-11-04 2018-05-11 深圳市汇顶科技股份有限公司 Living fingerprint recognition method and device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US9946930B2 (en) * 2007-06-11 2018-04-17 Jeffrey A. Matos Apparatus and method for verifying the identity of an author and a person receiving information
CN107088071B (en) * 2016-02-17 2021-10-15 松下知识产权经营株式会社 Biological information detection device
US10410045B2 (en) * 2016-03-23 2019-09-10 Intel Corporation Automated facial recognition systems and methods

Patent Citations (15)

Publication number Priority date Publication date Assignee Title
US20070211926A1 (en) * 2006-03-13 2007-09-13 Fujitsu Limited Fingerprint authenticating apparatus, live-finger determining apparatus, and live-finger determining method
CN101162499A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Method for using human face formwork combination to contrast
JP2012252644A (en) * 2011-06-06 2012-12-20 Seiko Epson Corp Biological identification device and biological identification method
CN103136504A (en) * 2011-11-28 2013-06-05 汉王科技股份有限公司 Face recognition method and device
CN102708360A (en) * 2012-05-09 2012-10-03 深圳市亚略特生物识别科技有限公司 Method for generating and automatically updating fingerprint template
CN103902961A (en) * 2012-12-28 2014-07-02 汉王科技股份有限公司 Face recognition method and device
CN104042220A (en) * 2014-05-28 2014-09-17 上海思立微电子科技有限公司 Device and method for detecting living body fingerprint
CN105740750A (en) * 2014-12-11 2016-07-06 深圳印象认知技术有限公司 Fingerprint in-vivo detection and identification method and apparatus
US20170004352A1 (en) * 2015-07-03 2017-01-05 Fingerprint Cards Ab Apparatus and computer-implemented method for fingerprint based authentication
CN106663204A (en) * 2015-07-03 2017-05-10 指纹卡有限公司 Apparatus and computer-implemented method for fingerprint based authentication
WO2018082011A1 (en) * 2016-11-04 2018-05-11 深圳市汇顶科技股份有限公司 Living fingerprint recognition method and device
CN106657056A (en) * 2016-12-20 2017-05-10 深圳芯启航科技有限公司 Biological feature information management method and system
CN107223251A (en) * 2017-05-03 2017-09-29 深圳市汇顶科技股份有限公司 Determination method, identity identifying method and the device of vital sign information
CN107278308A (en) * 2017-05-20 2017-10-20 深圳信炜科技有限公司 Image-recognizing method, pattern recognition device, electronic installation and computer-readable storage medium
CN107358145A (en) * 2017-05-20 2017-11-17 深圳信炜科技有限公司 Imaging sensor and electronic installation

Non-Patent Citations (5)

Title
RODRIGO FRASSETTO NOGUEIRA ET AL.: "Evaluating software-based fingerprint liveness detection using Convolutional Networks and Local Binary Patterns", 2014 IEEE WORKSHOP ON BIOMETRIC MEASUREMENTS AND SYSTEMS FOR SECURITY AND MEDICAL APPLICATIONS (BIOMS) PROCEEDINGS, 17 October 2014 (2014-10-17), pages 129 - 29 *
ZHANG YANG et al.: "A precise face tracking algorithm based on similarity-deviation matching", Journal of Northeastern University (Natural Science), no. 02, 15 February 2011 (2011-02-15), pages 188 - 192 *
LI WEI et al.: "Implementation of an 'Internet+'-based intelligent management device for high-voltage line electroscopes and grounding wires", Science Technology and Engineering, 30 December 2020 (2020-12-30), pages 14085 - 14094 *
LI CHANGYUN et al.: "Intelligent sensing technology and its applications in electrical engineering", University of Electronic Science and Technology of China Press, pages: 129 *
Sensorway (赛斯维传感器网): "From optical sensors to capacitive sensors: the evolution of fingerprint acquisition technology", pages 1 - 5, Retrieved from the Internet <URL:https://www.sensorway.cn/knowledge/2013.html> *

Also Published As

Publication number Publication date
CN112001233B (en) 2024-08-20
TW202203055A (en) 2022-01-16
TWI792017B (en) 2023-02-11

Similar Documents

Publication Publication Date Title
KR102455633B1 (en) Liveness test method and apparatus
JP6838005B2 (en) Device and computer mounting method for fingerprint-based authentication
Ko Multimodal biometric identification for large user population using fingerprint, face and iris recognition
Ribarić et al. Multimodal biometric user-identification system for network-based applications
CN115147874A (en) Method and apparatus for biometric information forgery detection
Heenaye et al. A multimodal hand vein biometric based on score level fusion
Ribaric et al. A biometric verification system based on the fusion of palmprint and face features
Charfi et al. Personal recognition system using hand modality based on local features
Ramachandran et al. Score level based fusion method for multimodal biometric recognition using palmprint and iris
CN112001233B (en) Biological feature identification system and identification method
KR100564766B1 (en) Apparatus for registrating and identifying multiple human features and method thereof
Kaur et al. Minutiae extraction and variation of fast Fourier transform on fingerprint recognition
Binh Tran et al. Multimodal personal verification using likelihood ratio for the match score fusion
Aravinth et al. A novel feature extraction techniques for multimodal score fusion using density based gaussian mixture model approach
Gawande et al. Improving iris recognition accuracy by score based fusion method
Sharma et al. Multimodal biometric system fusion using fingerprint and face with fuzzy logic
Oyeniyi et al. An enhanced iris feature extraction technique using continuous wavelet transform
Nigam et al. Finger knuckle-based multi-biometric authentication systems
Sehgal Palm recognition using LBP and SVM
CN113361314A (en) Anti-spoofing method and apparatus
Sree et al. Dorsal Hand Vein Pattern Authentication by Hough Peaks
Benson-Emenike Mercy et al. Trimodal Biometric Authentication System using Cascaded Link-based Feed forward Neural Network [CLBFFNN]
Sujitha et al. Counter measures for indirect attack for iris based biometric authentication
Tiwari et al. TARC: A novel score fusion scheme for multimodal biometric systems
Meraoumia et al. Person’s recognition using palmprint based on 2D Gabor filter response

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant