
Sensor-Based Behavioral Biometrics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 10 February 2025 | Viewed by 7429

Special Issue Editors


Dr. Kiril Alexiev
Guest Editor
Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
Interests: sensor data processing; identification; data and information fusion; sensor networks; computer networks; biometrics

Dr. Virginio Cantoni
Guest Editor
Computer Engineering, University of Pavia, Pavia, Italy
Interests: computer vision; pattern recognition; image processing

Special Issue Information

Dear Colleagues,

Behavioral biometrics is a subfield of the science of personal identification. Its main goal is to build, for a given type of activity, a pattern of behavior unique to a person, by which that person can be identified. The activities considered are usually physical and cognitive; in a broader sense, biosignals can also be included, as a reflection of the functioning of particular human organs. Of interest are an individual's gait or manner of walking, gesturing, speed and intonation of speech, and the manner of handling various devices and tools, such as smartphones, keyboards, computer mice, etc. Among the cognitive activities, we can count eye movements when perceiving textual information, searching for an object in a scene, searching for mistakes or repetitions, counting certain types of objects, the way of working on the Internet, and so on. In the field of biosignals, there are already developments in biometrics based on eye movement, ECG and EEG signals, human breathing, etc.

It is interesting to note that, in a number of cases, information on individual behavior is already available (usually recorded by the digital device we work with) and only needs additional processing to perform the identification.
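
As a minimal illustration of that additional processing (the feature set, fixed-phrase enrollment scheme, and acceptance threshold below are hypothetical, not taken from any particular system), keystroke timings that a device already records can be turned into a simple behavioral template and checked against new samples:

```python
import numpy as np

def keystroke_features(press_times, release_times):
    """Dwell times (how long each key is held) and flight times (gap between
    releasing one key and pressing the next) from raw keystroke timestamps."""
    press = np.asarray(press_times, dtype=float)
    release = np.asarray(release_times, dtype=float)
    dwell = release - press
    flight = press[1:] - release[:-1]
    return np.concatenate([dwell, flight])

def enroll(samples):
    """Build a template (per-feature mean and spread) from several typing
    samples of the same fixed phrase."""
    feats = np.stack([keystroke_features(p, r) for p, r in samples])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def verify(template, press_times, release_times, threshold=2.0):
    """Accept the sample if its features stay within an average z-score bound."""
    mean, std = template
    z = np.abs((keystroke_features(press_times, release_times) - mean) / std)
    return float(z.mean()) < threshold
```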

Behavioral biometrics can be seen as a powerful additional means of identification. With the development of various behavioral biometric methods, it is expected that, in the near future, they will find a place in almost all digital devices and help prevent different types of fraud.

Dr. Kiril Alexiev
Dr. Virginio Cantoni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors/sensing
  • biometrics
  • biometric recognition
  • biosignal
  • ECG/EEG/EMG/EOG signal sensing
  • biometric systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

18 pages, 1672 KiB  
Article
Pedestrian Re-Identification Based on Fine-Grained Feature Learning and Fusion
by Anming Chen and Weiqiang Liu
Sensors 2024, 24(23), 7536; https://doi.org/10.3390/s24237536 - 26 Nov 2024
Viewed by 499
Abstract
Video-based pedestrian re-identification (Re-ID) is used to re-identify the same person across different camera views. One of the key problems is to learn an effective representation for the pedestrian from video. However, it is difficult to learn an effective representation from a single feature modality due to complicated issues with video, such as background, occlusion, and blurred scenes. Therefore, there are some studies on fusing multimodal features for video-based pedestrian Re-ID. However, most of these works fuse features at the global level, which is not effective in reflecting fine-grained and complementary information. Therefore, the improvement in performance is limited. To obtain a more effective representation, we propose to learn fine-grained features from different modalities of the video and then align and fuse them at the fine-grained level to capture rich semantic information. As a result, a multimodal token-learning and alignment model (MTLA) is proposed to re-identify pedestrians across camera videos. MTLA consists of three modules, i.e., a multimodal feature encoder, token-based cross-modal alignment, and correlation-aware fusion. Firstly, the multimodal feature encoder is used to extract the multimodal features from the visual appearance and gait information views, and then fine-grained tokens are learned and denoised from these features. Then, the token-based cross-modal alignment module is used to align the multimodal features at the token level to capture fine-grained semantic information. Finally, the correlation-aware fusion module is used to fuse the multimodal token features by learning the inter- and intra-modal correlation, in which the features refine each other and a unified representation is obtained for pedestrian Re-ID. To evaluate the performance of fine-grained feature alignment and fusion, we conduct extensive experiments on three benchmark datasets. Compared with the state-of-the-art approaches, all the evaluation metrics of mAP and Rank-K are improved by more than 0.4 percentage points. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
Figures: framework of the MTLA (multimodal feature encoder, token-based cross-modal alignment via contrastive learning, and correlation-aware fusion via cross-attention and an aggregator); the attention block; examples of top-1 results of different approaches, with the correct IDs marked by red boxes.
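
The token-level alignment and correlation-aware fusion described above can be sketched as follows. This is not the authors' implementation, only a minimal PyTorch illustration of the general idea: the token dimension, head count, InfoNCE-style alignment loss, and the assumption of equal numbers of appearance and gait tokens per sample are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrelationAwareFusion(nn.Module):
    """Illustrative token-level fusion of two modalities (appearance and gait):
    each modality attends to the other, then pooled features are merged."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.app_to_gait = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gait_to_app = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, app_tokens, gait_tokens):
        # (batch, n_tokens, dim) -> each modality is refined by the other
        a, _ = self.app_to_gait(app_tokens, gait_tokens, gait_tokens)
        g, _ = self.gait_to_app(gait_tokens, app_tokens, app_tokens)
        fused = torch.cat([a.mean(dim=1), g.mean(dim=1)], dim=-1)
        return self.proj(fused)  # unified pedestrian representation

def token_alignment_loss(app_tokens, gait_tokens, temperature=0.07):
    """InfoNCE-style loss pulling corresponding appearance/gait tokens together."""
    a = F.normalize(app_tokens.flatten(0, 1), dim=-1)
    g = F.normalize(gait_tokens.flatten(0, 1), dim=-1)
    logits = a @ g.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```
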
13 pages, 1585 KiB  
Article
Analyzing Arabic Handwriting Style through Hand Kinematics
by Vahan Babushkin, Haneen Alsuradi, Muhamed Osman Al-Khalil and Mohamad Eid
Sensors 2024, 24(19), 6357; https://doi.org/10.3390/s24196357 - 30 Sep 2024
Cited by 1 | Viewed by 1111
Abstract
Handwriting style is an important aspect affecting the quality of handwriting. Adhering to one style is crucial for languages that follow cursive orthography and possess multiple handwriting styles, such as Arabic. The majority of available studies analyze Arabic handwriting style from static documents, focusing only on pure styles. In this study, we analyze handwriting samples with mixed styles, pure styles (Ruq’ah and Naskh), and samples without a specific style from dynamic features of the stylus and hand kinematics. We propose a model for classifying handwritten samples into four classes based on adherence to style. The stylus and hand kinematics data were collected from 50 participants who were writing an Arabic text containing all 28 letters and covering most Arabic orthography. The parameter search was conducted to find the best hyperparameters for the model, the optimal sliding window length, and the overlap. The proposed model for style classification achieves an accuracy of 88%. The explainability analysis with Shapley values revealed that hand speed, pressure, and pen slant are among the top 12 important features, with other features contributing nearly equally to style classification. Finally, we explore which features are important for Arabic handwriting style detection. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
Figures: experiment setup, dictated sample text, and handwriting interface (shown in the Naskh style); distribution of the expert's style-consistency evaluation by paragraph, before and after retaining paragraphs matching each subject's prevailing style; the proposed architecture with two temporal convolution layers (window length w, overlap s, number of windows n, paragraph length T, and the kernel sizes and channel counts of the two 1D-CNN layers); search for the optimal overlap and window size; average confusion matrix over 5 folds (diagonal: average per-class recall); average normalized Shapley values across 5 folds.
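
A rough sketch of the sliding-window segmentation and two-layer temporal convolution mentioned above is given below; the channel count, kernel sizes, channel widths, and window parameters are placeholders rather than the tuned values from the paper's parameter search.

```python
import torch
import torch.nn as nn

def sliding_windows(recording, window, overlap):
    """Split a (channels, T) kinematics recording into overlapping windows of
    shape (n_windows, channels, window)."""
    step = window - overlap
    return recording.unfold(dimension=1, size=window, step=step).permute(1, 0, 2)

class StyleClassifier(nn.Module):
    """Two temporal 1D-convolution layers with a linear head, classifying each
    window into one of four style classes (Ruq'ah, Naskh, mixed, no clear style)."""
    def __init__(self, channels=10, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, windows):   # windows: (n_windows, channels, window)
        return self.net(windows)  # per-window logits; average them for a paragraph-level decision
```
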
14 pages, 761 KiB  
Article
Online Signature Biometrics for Mobile Devices
by Katarzyna Roszczewska and Ewa Niewiadomska-Szynkiewicz
Sensors 2024, 24(11), 3524; https://doi.org/10.3390/s24113524 - 30 May 2024
Viewed by 837
Abstract
This paper addresses issues concerning biometric authentication based on handwritten signatures. Our research aimed to check whether a handwritten signature acquired with a mobile device can effectively verify a user’s identity. We present a novel online signature verification method using coordinates of points and pressure values at each point collected with a mobile device. Convolutional neural networks are used for signature verification. In this paper, three neural network models are investigated, i.e., two lightweight models of our own design, SigNet and SigNetExt, and the VGG-16 model commonly used in image processing. The convolutional neural networks aim to determine whether the acquired signature sample matches the class declared by the signer. Thus, a closed-set verification scenario is performed. The effectiveness of our method was tested on signatures acquired with mobile phones. We used a subset of the multimodal MobiBits database, captured using a custom-made application and consisting of samples acquired from 53 people of diverse ages. The experimental results demonstrate that the developed deep neural network architectures can be successfully used for online handwritten signature verification. We achieved an equal error rate (EER) of 0.63% for random forgeries and 6.66% for skilled forgeries. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
Figures: SigNet network architecture diagram; an example of a genuine signature and its skilled forgery; number of samples per class for genuine signatures and skilled forgeries; handwritten signatures represented as images (pixels correspond to pressure values), scaled to the SigNet input size; averaged ROC curves over 20 rounds of cross-validation for SigNet, SigNetExt, and VGG-16 on the TrainSet, ValSet, and TestSet datasets.
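
One way to turn an online signature (point coordinates plus per-point pressure) into the image-like input such a CNN expects is sketched below; the rendering resolution and normalization are assumptions, not the authors' exact preprocessing.

```python
import numpy as np

def signature_to_image(x, y, pressure, size=64):
    """Render an online signature (point coordinates and per-point pressure) as a
    grayscale image whose pixel intensities encode normalized pressure."""
    x = np.asarray(x, dtype=np.float32)
    y = np.asarray(y, dtype=np.float32)
    p = np.asarray(pressure, dtype=np.float32)
    # map coordinates onto the image grid (aspect-ratio handling omitted for brevity)
    xi = np.clip(((x - x.min()) / (np.ptp(x) + 1e-6) * (size - 1)).astype(int), 0, size - 1)
    yi = np.clip(((y - y.min()) / (np.ptp(y) + 1e-6) * (size - 1)).astype(int), 0, size - 1)
    p = p / (p.max() + 1e-6)
    img = np.zeros((size, size), dtype=np.float32)
    # keep the strongest pressure value that falls on each pixel
    np.maximum.at(img, (yi, xi), p)
    return img
```
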
35 pages, 6451 KiB  
Article
Efhamni: A Deep Learning-Based Saudi Sign Language Recognition Application
by Lama Al Khuzayem, Suha Shafi, Safia Aljahdali, Rawan Alkhamesie and Ohoud Alzamzami
Sensors 2024, 24(10), 3112; https://doi.org/10.3390/s24103112 - 14 May 2024
Cited by 2 | Viewed by 2020
Abstract
Deaf and hard-of-hearing people mainly communicate using sign language, which is a set of signs made using hand gestures combined with facial expressions to make meaningful and complete sentences. The problem that faces deaf and hard-of-hearing people is the lack of automatic tools that translate sign languages into written or spoken text, which has led to a communication gap between them and their communities. Most state-of-the-art vision-based sign language recognition approaches focus on translating non-Arabic sign languages, with few targeting the Arabic Sign Language (ArSL) and even fewer targeting the Saudi Sign Language (SSL). This paper proposes a mobile application that helps deaf and hard-of-hearing people in Saudi Arabia to communicate efficiently with their communities. The prototype is an Android-based mobile application that applies deep learning techniques to translate isolated SSL to text and audio and includes unique features that are not available in other related applications targeting ArSL. The proposed approach, when evaluated on a comprehensive dataset, has demonstrated its effectiveness by outperforming several state-of-the-art approaches and producing results that are comparable to these approaches. Moreover, testing the prototype on several deaf and hard-of-hearing users, in addition to hearing users, proved its usefulness. In the future, we aim to improve the accuracy of the model and enrich the application with more features. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
Figures: examples from the KSU-SSL dataset [10]; Efhamni's architecture; Efhamni (Understand Me) application use case; deaf and hard-of-hearing chat sequence diagram and flowchart; application screens (sign-in and sign-up, home page and side menu, camera and editing, QR code of the enrolled person, translation and chat); flowchart of the deep learning model; model performance and loss behavior; confusion matrix; example frames and extracted key points for the signs "father" and "five".
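
The figures suggest that per-frame key points are extracted before classification; a heavily simplified sketch of an isolated-sign recognizer over such keypoint sequences is given below. The keypoint count, sign vocabulary size, and recurrent backbone are assumptions for illustration only, not the application's actual model.

```python
import torch
import torch.nn as nn

class IsolatedSignClassifier(nn.Module):
    """Recurrent classifier over per-frame keypoint sequences; one video of an
    isolated sign is mapped to a single sign label."""
    def __init__(self, n_keypoints=75, n_signs=40, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keypoints * 2, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_signs)

    def forward(self, keypoints):                 # (batch, frames, n_keypoints, 2)
        b, t, k, c = keypoints.shape
        out, _ = self.lstm(keypoints.reshape(b, t, k * c))
        return self.head(out[:, -1])              # logits over the sign vocabulary
```
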
19 pages, 1537 KiB  
Article
A Perifacial EMG Acquisition System for Facial-Muscle-Movement Recognition
by Jianhang Zhang, Shucheng Huang, Jingting Li, Yan Wang, Zizhao Dong and Su-Jing Wang
Sensors 2023, 23(21), 8758; https://doi.org/10.3390/s23218758 - 27 Oct 2023
Cited by 2 | Viewed by 2037
Abstract
This paper proposes a portable wireless transmission system for the multi-channel acquisition of surface electromyography (EMG) signals. Because EMG signals have great application value in psychotherapy and human–computer interaction, this system is designed to acquire reliable, real-time facial-muscle-movement signals. Electrodes placed on the surface of a facial-muscle source can inhibit facial-muscle movement due to their weight, size, etc., and we propose to solve this problem by placing the electrodes at the periphery of the face to acquire the signals. The multi-channel approach allows this system to detect muscle activity in 16 regions simultaneously. Wireless transmission (Wi-Fi) technology is employed to increase the flexibility of portable applications. The sampling rate is 1 kHz and the resolution is 24 bits. To verify the reliability and practicality of this system, we carried out a comparison with a commercial device and achieved a correlation coefficient of more than 70% on the comparison metrics. Next, to test the system’s utility, we placed 16 electrodes around the face for the recognition of five facial movements. Three classifiers, random forest, support vector machine (SVM), and backpropagation neural network (BPNN), were used for the recognition of the five facial movements, among which random forest proved practical by achieving a classification accuracy of 91.79%. It is also demonstrated that electrodes placed around the face can still achieve good recognition of facial movements, making the practical deployment of wearable EMG signal-acquisition devices more feasible. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)
Figures: schematic diagram of the measurement-system architecture; program flow of the MCU and Wi-Fi firmware; EMG signals from the commercial device (blue) and the proposed device (green); structure of the EMG signal-processing scheme; data-acquisition experimental procedure; number of labeled facial movements; confusion matrices for recognition of five and ten facial-muscle movements (normal and forceful variants of closing the eyes, pursing the lips, raising the eyebrows, lifting the corners of the mouth, and frowning).
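
A minimal sketch of the kind of windowed feature extraction and random-forest classification described above is given below; the specific features (RMS, mean absolute value, waveform length, zero crossings), window size, and forest size are common choices assumed here, not necessarily the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def emg_window_features(window):
    """Common time-domain EMG features per channel: RMS, mean absolute value,
    waveform length, and zero-crossing count. `window` has shape (channels, samples)."""
    rms = np.sqrt((window ** 2).mean(axis=1))
    mav = np.abs(window).mean(axis=1)
    wl = np.abs(np.diff(window, axis=1)).sum(axis=1)
    zc = (np.diff(np.signbit(window).astype(int), axis=1) != 0).sum(axis=1)
    return np.concatenate([rms, mav, wl, zc])

def train_movement_classifier(windows, labels):
    """`windows` is an iterable of (16, samples) arrays (16 channels at 1 kHz);
    `labels` holds the corresponding facial-movement classes."""
    X = np.stack([emg_window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```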