The Method of Automatic Knuckle Image Acquisition for Continuous Verification Systems
Figure 1. A finger knuckle with visible furrows.
Figure 2. The rig for the acquisition of knuckle images.
Figure 3. Rig for image acquisition: 1—laptop, 2—video camera on a tripod.
Figure 4. Stages of the image subtraction operation: (a) reference photo of the keyboard $I^{ref}(x,y)$; (b) photo of the keyboard with the hands on it $I(x,y)$; (c) photo $I^{S}(x,y)$ obtained by subtracting $I(x,y)$ from $I^{ref}(x,y)$; (d) photo $I^{S}(x,y)$ after binarization.
Figure 5. (a) Image $I^{S}$; (b) pattern $T$; (c) graphical representation of the matching function in the matrix $\mathbf{R}$.
Figure 6. The patterns searched for in the image: (a–d) patterns of the hand; (e–h) patterns of the finger.
Figure 7. Image $I^{S}$, where squares indicate the location of the right hand, determined with the use of $n = 10$ patterns.
Figure 8. Examples of finger images and their quality scores: (a) $\vartheta(I^{f}) = 1.35$; (b) $\vartheta(I^{f}) = 1.16$; (c) $\vartheta(I^{f}) = 1.00$; (d) $\vartheta(I^{f}) = 0.90$.
Figure 9. All stages of the presented method.
Figure 10. The influence of the parameter $Q$ on the effectiveness of the method.
Abstract
1. Introduction
A biometric feature used for identity verification should satisfy the following requirements:
- universality—each person should possess the feature;
- uniqueness—no two persons should have the same feature;
- durability—the feature should remain invariant over time;
- measurability—the feature can be measured with the use of a practical device;
- storability—the feature can be registered and stored;
- acceptability and convenience of use, along with an adequate size of the device.
The main contributions of this work are:
- developing a method for the automatic acquisition of knuckle images that enables continuous verification of the user without interrupting the user's work;
- developing a method for evaluating the quality of the acquired image;
- demonstrating the high effectiveness and speed of the method;
- proposing the implementation of the method as an element of a biometric or multi-biometric system.
2. A Method of Automatic Acquisition of Finger Knuckle Images
2.1. Taking a Photo of the Hand
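No prose survived extraction for this stage beyond the heading. The following is only a minimal sketch of the acquisition step, assuming the camera from Figure 3 is exposed to OpenCV as device 0 (the device index and the grayscale conversion are assumptions):

```python
import cv2

def grab_frame(device_index: int = 0):
    """Capture one frame from the video camera mounted above the keyboard."""
    cap = cv2.VideoCapture(device_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("camera frame could not be read")
        # Later stages (subtraction, binarization) work on intensity only.
        return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    finally:
        cap.release()
```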
2.2. Exposing the Hand on the Keyboard
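Figure 4 shows the stages of this step: a reference photo of the empty keyboard $I^{ref}(x,y)$, a photo with the hands on it $I(x,y)$, their difference $I^{S}(x,y)$, and its binarization. A minimal sketch under those definitions, assuming grayscale inputs and Otsu thresholding for the binarization:

```python
import cv2
import numpy as np

def expose_hand(i_ref: np.ndarray, i: np.ndarray) -> np.ndarray:
    """Subtract the reference keyboard image from the current frame
    and binarize the difference so that only the hands remain."""
    # The static keyboard cancels out; the hands produce large differences.
    i_s = cv2.absdiff(i_ref, i)
    # Otsu's method selects the binarization threshold automatically.
    _, binary = cv2.threshold(i_s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```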
2.3. Location of Patterns in the Image
The following matching functions are considered (their standard definitions are given below):
- Square Difference (SD)
- Square Difference Normed (SDN)
- Correlation (C)
- Correlation Normed (CN)
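For reference, these four matching functions are standardly defined as follows for a pattern $T$ slid over an image $I$ (the paper's exact conventions are assumed to match these common template-matching definitions):

$$R_{\mathrm{SD}}(x,y)=\sum_{x',y'}\big(T(x',y')-I(x+x',y+y')\big)^{2}$$

$$R_{\mathrm{SDN}}(x,y)=\frac{R_{\mathrm{SD}}(x,y)}{\sqrt{\sum_{x',y'}T(x',y')^{2}\cdot\sum_{x',y'}I(x+x',y+y')^{2}}}$$

$$R_{\mathrm{C}}(x,y)=\sum_{x',y'}T(x',y')\,I(x+x',y+y')$$

$$R_{\mathrm{CN}}(x,y)=\frac{R_{\mathrm{C}}(x,y)}{\sqrt{\sum_{x',y'}T(x',y')^{2}\cdot\sum_{x',y'}I(x+x',y+y')^{2}}}$$

For SD and SDN the best match minimizes $R(x,y)$; for C and CN it maximizes it.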
Algorithm 1: Location of n patterns in the image
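The body of Algorithm 1 did not survive extraction. Below is a minimal sketch of locating $n$ patterns by template matching, assuming OpenCV's `matchTemplate` with the CN metric (for SD/SDN one would take the minimum of $\mathbf{R}$ instead of the maximum):

```python
import cv2
import numpy as np

def locate_patterns(i_s: np.ndarray, patterns: list) -> list:
    """For each of the n patterns, find its best placement in image I^S.
    Returns a list of (top-left corner, match value) pairs, one per pattern,
    from which the hand/finger location is determined (cf. Figure 7)."""
    results = []
    for t in patterns:
        # R(x, y) holds the matching function value for every placement of T.
        r = cv2.matchTemplate(i_s, t, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(r)
        results.append((max_loc, max_val))
    return results
```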
2.4. Assessment of Finger Image Quality
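The definition of the quality measure $\vartheta(I^{f})$ is not reproduced in this extract; Figure 8 only shows its values for sample images. Purely as a stand-in illustration, the sketch below scores furrow visibility with the variance of the Laplacian, a common sharpness proxy. This proxy is an assumption, not the paper's measure:

```python
import cv2
import numpy as np

def quality_score(i_f: np.ndarray) -> float:
    """Sharpness proxy for a finger image: variance of the Laplacian.
    NOTE: a hypothetical stand-in for the paper's measure theta(I^f);
    higher values are taken to mean more visible knuckle furrows."""
    lap = cv2.Laplacian(i_f, cv2.CV_64F)
    return float(lap.var())
```

An image would then be passed to verification only if its score exceeds a quality threshold, which is the role $\vartheta(I^{f})$ plays in the pipeline.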
3. A Method for Continuous Verification Based on the Finger Knuckle Image
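Only the heading of this section survived. The sketch below shows how the stages from Section 2 might be chained into a continuous loop; the sampling period, both thresholds, and the matcher `match_knuckle` are hypothetical, and `grab_frame`, `expose_hand`, `locate_patterns`, and `quality_score` refer to the sketches above:

```python
import time

def verification_loop(i_ref, patterns, enrolled_template,
                      quality_min=1.0, score_min=0.8, period_s=5.0):
    """Periodically acquire a knuckle image while the user types and
    verify it against the enrolled template without interrupting work."""
    while True:
        frame = grab_frame()                              # Section 2.1
        hands = expose_hand(i_ref, frame)                 # Section 2.2
        (x, y), _ = locate_patterns(hands, patterns)[0]   # Section 2.3
        h, w = patterns[0].shape
        knuckle = frame[y:y + h, x:x + w]
        if quality_score(knuckle) >= quality_min:         # Section 2.4
            # match_knuckle is a hypothetical matcher returning a
            # similarity score in [0, 1]; higher means more similar.
            if match_knuckle(knuckle, enrolled_template) < score_min:
                raise PermissionError("continuous verification failed")
        time.sleep(period_s)
```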
4. Experimental Verification
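The tables at the end of this article report FAR, FRR, and ACC for each configuration. As a reminder of how these error rates are conventionally computed from genuine and impostor comparison scores (the repeated trials behind the ± deviations are omitted, and a higher score is assumed to mean a better match):

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor attempts wrongly accepted.
    FRR: fraction of genuine attempts wrongly rejected.
    ACC: overall fraction of correct accept/reject decisions."""
    fa = sum(s >= threshold for s in impostor_scores)   # false accepts
    fr = sum(s < threshold for s in genuine_scores)     # false rejects
    far = fa / len(impostor_scores)
    frr = fr / len(genuine_scores)
    acc = 1.0 - (fa + fr) / (len(genuine_scores) + len(impostor_scores))
    return far, frr, acc
```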
5. Conclusions
- The proposed method does not require the user to interrupt their work.
- The tests demonstrated the high effectiveness of the proposed method. With the optimal parameters, the following verification errors were obtained: FAR = 4.18%, FRR = 7.85%.
- These values are comparable with the results of currently known methods; however, it should be emphasized that the competing methods do not offer automatic image acquisition, which negatively affects their usability.
- The effectiveness of the proposed method was also tested as part of a multi-biometric method in which, apart from the knuckle image, the dynamics of typing on the keyboard is analyzed. In this case as well, the new manner of acquisition did not negatively affect the effectiveness of the method.
Funding
Conflicts of Interest
Table 1. Verification errors (FAR, FRR) and accuracy (ACC) for different image sizes M (in px) and matching metrics.

Image Size M (px) | SD FAR [%] | SD FRR [%] | SD ACC [%] | SDN FAR [%] | SDN FRR [%] | SDN ACC [%] | C FAR [%] | C FRR [%] | C ACC [%] | CN FAR [%] | CN FRR [%] | CN ACC [%]
---|---|---|---|---|---|---|---|---|---|---|---|---
100 | 17.09 ± 0.22 | 29.72 ± 0.25 | 76.05 ± 0.79 | 20.14 ± 0.53 | 32.42 ± 0.42 | 64.10 ± 0.55 | 15.26 ± 0.21 | 24.56 ± 0.36 | 79.15 ± 1.16 | 20.14 ± 0.20 | 32.42 ± 0.42 | 67.38 ± 0.74 |
200 | 9.21 ± 0.14 | 22.09 ± 0.37 | 86.42 ± 0.93 | 10.06 ± 0.25 | 23.47 ± 0.33 | 75.22 ± 0.89 | 8.45 ± 0.09 | 19.72 ± 0.29 | 84.86 ± 1.07 | 10.06 ± 0.15 | 23.47 ± 0.22 | 83.57 ± 0.72 |
300 | 5.18 ± 0.78 | 11.49 ± 0.09 | 90.85 ± 1.35 | 5.67 ± 0.07 | 13.15 ± 0.20 | 90.61 ± 0.99 | 4.89 ± 0.06 | 11.05 ± 0.11 | 92.84 ± 1.05 | 5.67 ± 0.06 | 13.15 ± 0.17 | 90.70 ± 1.32 |
400 | 4.39 ± 0.08 | 8.16 ± 0.12 | 91.66 ± 0.99 | 4.81 ± 0.06 | 9.03 ± 0.08 | 93.19 ± 1.44 | 4.18 ± 0.05 | 7.85 ± 0.10 | 94.81 ± 1.08 | 4.81 ± 0.06 | 9.03 ± 0.15 | 92.69 ± 1.31 |
500 | 4.26 ± 0.06 | 8.15 ± 0.09 | 94.15 ± 1.13 | 4.68 ± 0.06 | 8.86 ± 0.10 | 93.98 ± 1.20 | 4.14 ± 0.07 | 7.84 ± 0.09 | 94.85 ± 0.99 | 4.68 ± 0.05 | 8.86 ± 0.14 | 94.20 ± 1.11 |
600 | 4.56 ± 0.07 | 8.84 ± 0.08 | 92.23 ± 1.15 | 4.97 ± 0.06 | 9.39 ± 0.13 | 89.32 ± 1.01 | 4.18 ± 0.06 | 7.89 ± 0.12 | 94.80 ± 0.98 | 4.97 ± 0.07 | 9.39 ± 0.11 | 89.69 ± 1.08 |
700 | 4.61 ± 0.07 | 9.11 ± 0.16 | 93.01 ± 1.01 | 4.61 ± 0.06 | 8.79 ± 0.11 | 91.23 ± 1.28 | 4.15 ± 0.06 | 7.92 ± 0.09 | 94.85 ± 1.43 | 4.61 ± 0.07 | 8.79 ± 0.12 | 94.29 ± 1.00 |
800 | 4.42 ± 0.05 | 8.43 ± 0.10 | 94.29 ± 1.07 | 4.84 ± 0.06 | 8.98 ± 0.11 | 93.03 ± 1.22 | 4.21 ± 0.06 | 7.81 ± 0.09 | 94.81 ± 1.12 | 4.84 ± 0.08 | 8.98 ± 0.13 | 93.44 ± 1.12 |
900 | 4.53 ± 0.07 | 8.53 ± 0.11 | 93.37 ± 1.16 | 4.99 ± 0.06 | 9.32 ± 0.13 | 89.88 ± 0.97 | 4.19 ± 0.07 | 7.83 ± 0.10 | 94.81 ± 1.20 | 4.99 ± 0.06 | 9.32 ± 0.14 | 91.56 ± 1.13 |
Table 2. Influence of the number n of patterns on verification errors (FAR, FRR) and accuracy (ACC).

Number n of Patterns | FAR [%] | FRR [%] | ACC [%]
---|---|---|---
4 | 6.45 ± 0.09 | 11.26 ± 0.14 | 94.42 ± 1.25 |
5 | 6.18 ± 0.11 | 10.97 ± 0.13 | 92.15 ± 1.22 |
6 | 4.49 ± 0.10 | 8.01 ± 0.13 | 94.71 ± 1.32 |
7 | 4.18 ± 0.11 | 7.85 ± 0.12 | 94.81 ± 1.28 |
8 | 4.17 ± 0.10 | 7.85 ± 0.12 | 94.78 ± 1.22 |
9 | 4.18 ± 0.10 | 7.86 ± 0.12 | 94.79 ± 1.21 |
10 | 4.18 ± 0.11 | 7.85 ± 0.13 | 94.81 ± 1.22 |
Table 3. Time of locating the palm and the finger as a function of the number n of patterns.

Number n of Patterns | Palm Time (ms) | Finger Time (ms)
---|---|---
4 | 422 | 296 |
5 | 545 | 373 |
6 | 650 | 450 |
7 | 776 | 519 |
8 | 892 | 592 |
9 | 985 | 661 |
10 | 1063 | 734 |
Table 4. Execution times of the individual stages of the method.

Stage | Time (s)
---|---
Taking a photo of the hand | 0.048 |
Exposing the hand on the keyboard | 0.032 |
Location of hand on the keyboard | 0.776 |
Location of finger on the keyboard | 0.519 |
Assessment of finger image quality | 0.163 |
Verification | 0.295 |
Sum | 1.833 |
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).