
Learning Frequency-Aware Common Feature for VIS-NIR Heterogeneous Palmprint Recognition

Published: 12 August 2024

Abstract

Palmprint recognition has shown great value for biometric recognition due to its advantages of good hygiene, semi-privacy and low invasiveness. However, most existing palmprint recognition studies focus only on homogeneous palmprint recognition, where the palmprint images being compared are collected under similar conditions with only small domain gaps. To address the problem of matching heterogeneous palmprint images captured under the visible-light (VIS) and near-infrared (NIR) spectra, where the domain gaps are large, in this paper we propose a Fourier-based feature learning network (FFLNet) for VIS-NIR heterogeneous palmprint recognition. First, we extract multi-scale shallow representations of heterogeneous palmprint images via three vanilla convolution layers. Then, we convert the shallow palmprint feature maps into frequency-specific representations via the Fourier transform to separate different layers of palmprint features, and exploit the underlying common and palmprint-specific frequency information of heterogeneous palmprint images. This effectively reduces the modality gap of heterogeneous palmprint images at the feature level. After that, we convert the common frequency-specific feature maps back to the spatial domain to learn identity-invariant discriminative features via residual convolution for heterogeneous palmprint recognition. Extensive experimental results on three challenging heterogeneous palmprint databases clearly demonstrate the effectiveness of the proposed FFLNet for VIS-NIR heterogeneous palmprint recognition.
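The frequency-domain separation described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the fixed centered low-pass mask stands in for FFLNet's learned separation of common and palmprint-specific frequency components, and the random feature tensors are placeholders for the multi-scale shallow representations produced by the convolution layers.

```python
# Hedged sketch of frequency-level common-feature extraction.
# Assumption: a fixed low-pass band approximates the "common" frequency
# information shared by VIS and NIR feature maps; FFLNet instead learns
# this separation.
import numpy as np


def common_frequency(feat: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Keep a centered low-frequency band of each (C, H, W) feature map
    and transform it back to the spatial domain."""
    # Forward FFT over the spatial axes, with the zero frequency centered.
    spec = np.fft.fftshift(np.fft.fft2(feat), axes=(-2, -1))
    h, w = spec.shape[-2:]
    # Binary mask selecting the central (low-frequency) band.
    mask = np.zeros((h, w))
    ch, cw = int(h * keep_ratio) // 2, int(w * keep_ratio) // 2
    mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = 1.0
    # Inverse FFT back to the spatial domain; imaginary residue is
    # numerical noise since the mask is symmetric.
    out = np.fft.ifft2(np.fft.ifftshift(spec * mask, axes=(-2, -1)))
    return out.real


rng = np.random.default_rng(0)
# Placeholder shallow feature maps for a VIS and an NIR palmprint image.
vis_feat = rng.random((32, 64, 64))
nir_feat = rng.random((32, 64, 64))

vis_common = common_frequency(vis_feat)
nir_common = common_frequency(nir_feat)

# Discarding modality-specific high frequencies shrinks the feature gap.
gap_before = np.abs(vis_feat - nir_feat).mean()
gap_after = np.abs(vis_common - nir_common).mean()
```

Because the high-frequency content of the two modalities is largely independent, filtering it out in both feature maps reduces their mean absolute difference, which mirrors (in a simplified, non-learned way) how the paper narrows the VIS-NIR modality gap at the feature level before the residual-convolution stage.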


Published In

IEEE Transactions on Information Forensics and Security, Volume 19, 2024, 9628 pages

Publisher

IEEE Press