Search Results (506)

Search Parameters:
Keywords = eye gaze

12 pages, 1814 KiB  
Article
Comparative Analysis of Physiological Vergence Angle Calculations from Objective Measurements of Gaze Position
by Linda Krauze, Karola Panke, Gunta Krumina and Tatjana Pladere
Sensors 2024, 24(24), 8198; https://doi.org/10.3390/s24248198 (registering DOI) - 22 Dec 2024
Abstract
Eccentric photorefractometry is widely used to measure eye refraction, accommodation, gaze position, and pupil size. While the individual calibration of refraction and accommodation data has been extensively studied, gaze measurements have received less attention. PowerRef 3 does not incorporate individual calibration for gaze measurements, resulting in a divergent offset between the measured and expected gaze positions. To address this, we proposed two methods to calculate the physiological vergence angle based on the visual vergence data obtained from PowerRef 3. Twenty-three participants aged 25 ± 4 years viewed Maltese cross stimuli at distances of 25, 30, 50, 70, and 600 cm. The expected vergence angles were calculated considering the individual interpupillary distance at far. Our results demonstrate that the PowerRef 3 gaze data deviated from the expected vergence angles by 9.64 ± 2.73° at 25 cm and 9.25 ± 3.52° at 6 m. The kappa angle calibration method reduced the discrepancy to 3.93 ± 1.19° at 25 cm and 3.70 ± 0.36° at 600 cm, whereas the linear regression method further improved the accuracy to 3.30 ± 0.86° at 25 cm and 0.26 ± 0.01° at 600 cm. Both methods improved the gaze results, with the linear regression calibration method showing greater overall accuracy. Full article
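The expected vergence angle used as the reference in this study follows directly from the geometry of symmetric binocular fixation. Below is a minimal sketch of that calculation; the function name and the example interpupillary distance of 6.4 cm are illustrative assumptions, not values taken from the paper.

```python
import math

def expected_vergence_angle_deg(ipd_cm: float, distance_cm: float) -> float:
    """Expected vergence angle for symmetric binocular fixation:
    each eye rotates atan((IPD/2) / distance), so the total angle is twice that."""
    return math.degrees(2 * math.atan((ipd_cm / 2) / distance_cm))

# Example: an IPD of 6.4 cm at the five viewing distances used in the study
for d in (25, 30, 50, 70, 600):
    print(f"{d:>3} cm -> {expected_vergence_angle_deg(6.4, d):5.2f} deg")
```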
(This article belongs to the Special Issue Advanced Optics and Photonics Technologies for Sensing Applications)
Figure 1: PowerRef 3 setup with custom-built stimulus construction on rails (a), and Maltese cross stimulus (b).
Figure 2: Expected vergence angle (EVA, black dashed line) and total physiological vergence angle obtained using the two proposed calibration methods: (a) kappa angle calibration and (b) linear regression calibration. Gaze data are represented over a duration of 4 s at each of the 5 distances for four participants: P1 (FD = −10, largest exo fixation disparity, blue line), P2 (FD = 0, closest to EVA, light gray line), P3 (FD = 0, largest deviation from EVA, dark gray line), and P4 (FD = 4, largest eso fixation disparity, green line). FD: fixation disparity.
24 pages, 875 KiB  
Systematic Review
Empirical Insights into Eye-Tracking for Design Evaluation: Applications in Visual Communication and New Media Design
by Ruirui Guo, Nayeon Kim and Jisun Lee
Behav. Sci. 2024, 14(12), 1231; https://doi.org/10.3390/bs14121231 (registering DOI) - 21 Dec 2024
Viewed by 404
Abstract
(1) Background: As digital technology continues to reshape visual landscapes, understanding how design elements influence customer experience has become essential. Eye-tracking technology offers a powerful, quantitative approach to assessing visibility, aesthetics, and design components, providing unique insights into visual engagement. (2) Methods: This paper presents a systematic review of eye-tracking methodologies applied in design research. Thirty studies were selected for analysis from recognized academic databases using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. Employing the Population, Intervention, Comparison, and Outcomes (PICO) framework, this review focused on experimental studies in visual communication and new media design that utilized visual symbols for communication and leveraged new media technologies. (3) Results: The findings corroborated that eye-tracking technology offers in-depth insights into gaze patterns, visual perception, and attention, which can inform design strategies. This review shows that assessing visual designs based on eye-tracking data can enhance consumer-centered interfaces, better align with user preferences, and foster more engaged behaviors in both digital and physical environments. (4) Conclusions: This review deepens our understanding of the cognitive and emotional processes underlying visual engagement. It also suggests new avenues for integrating diverse eye-tracking metrics into design evaluation, offering practical applications for improving design strategies and advancing the field of design research. Full article
(This article belongs to the Section Cognition)
Figure 1: PRISMA Flow Diagram.
21 pages, 5864 KiB  
Article
Perceiving Etruscan Art: AI and Visual Perception
by Maurizio Forte
Humans 2024, 4(4), 409-429; https://doi.org/10.3390/humans4040027 - 18 Dec 2024
Viewed by 463
Abstract
This research project is aimed at exploring the cognitive and emotional processes involved in perceiving Etruscan artifacts. The case study is the Sarcophagus of the Spouses at the National Etruscan Museum in Rome, one of the most important masterpieces in pre-Roman art. The study utilized AI and eye-tracking technology to analyze how viewers engaged with the Etruscan Sarcophagus of the Spouses, revealing key patterns of visual attention and engagement. OpenAI, ChatGPT-4 (accessed on 12 October 2024) was used in conjunction with Colab–Python in order to elaborate all the spreadsheets and data arising from the eye-tracking recording. The results showed that viewers primarily focused on the central figures, especially on their faces and hands, indicating a high level of interest in the human elements of the artifact. The longer fixation duration on these features suggests that viewers find them particularly engaging, which is likely due to their detailed craftsmanship and symbolic significance. The eye-tracking data also highlighted specific gaze patterns, such as diagonal scanning across the sarcophagus, which reflects the composition’s ability to guide viewer attention strategically. The results indicate that viewer focus centers on human elements, especially on faces and hands, suggesting that these features hold both esthetic and symbolic significance. Full article
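The per-feature fixation statistics described above can be reproduced from a standard eye-tracking export. The following is a minimal pandas sketch under stated assumptions: the CSV layout and the `participant`, `feature`, and `fixation_duration_ms` column names are hypothetical, not taken from the paper.

```python
import pandas as pd

# Hypothetical export: one row per fixation, labelled with the feature it landed on
fixations = pd.read_csv("fixations.csv")  # columns: participant, feature, fixation_duration_ms

# Average fixation duration by feature (cf. the bar chart of Figure 11)
by_feature = fixations.groupby("feature")["fixation_duration_ms"].mean().sort_values(ascending=False)

# Feature-by-participant matrix (cf. the heatmap of Figure 12)
matrix = fixations.pivot_table(index="participant", columns="feature",
                               values="fixation_duration_ms", aggfunc="mean")
print(by_feature.head())
```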
Figure 1: The Sarcophagus of the Spouses at the National Etruscan Museum of Villa Giulia in Rome.
Figure 2: Gaze direction of the male (pink) and female's eyes (light blue and blue) by OpenAI, ChatGPT-4 (accessed on 12 October 2024) via Python coding. This calculation was made on the basis of the 3D model of the sarcophagus. The arrows indicate the gaze direction of each eye for the male and female figures. Each arrow's orientation (i.e., the direction it points to) shows where each eye is directed to in 3D space relative to the sarcophagus. The length of each vector represents the magnitude of the gaze, showing the relative focus distance.
Figure 3: The "aura" of the sarcophagus is depicted with varying degrees of visual attention. The sarcophagus (darker oval) in the center of the funerary chamber is simulated by this AI visual reconstruction (OpenAI, ChatGPT-4, accessed on 12 October 2024) based on the intensity of visual interaction. The red dots concern the sounds and acoustic engagement during funerary rituals.
Figure 4: Graphic reconstruction of the featured affordances based on AI identification (OpenAI, ChatGPT-4, accessed on 12 October 2024). The schematic graph shows how different elements of the artifact interact with its audience.
Figure 5: Participants distributed by background and education.
Figure 6: Participants distributed by age.
Figure 7: Virtual reconstruction of the museum room with visual simulation of the position of the sarcophagus from the point of view of the visitor (model by Forte, Mencocci).
Figure 8: Eye tracking by time of observation, combining the cumulative observations of 42 individuals in 60 s. In this case, the focus is centered on the central part of the sarcophagus, namely the two faces and the hands (processing by Alaimo Di Loro, Mingione).
Figure 9: Cumulative heat map based on the 42 visitors and calculated by OpenAI, ChatGPT-4 (accessed on 12 October 2024) Python coding using eye-tracking data.
Figure 10: The highest levels of visual concentration on the sarcophagus by ROI (processing by Alaimo Di Loro, Mingione).
Figure 11: Average fixation duration by feature (OpenAI, ChatGPT-4, accessed on 12 October 2024, Python coding).
Figure 12: Heatmap of average fixation duration by feature/participant (OpenAI, ChatGPT-4, accessed on 12 October 2024, Python coding).
Figure 13: Saccade patterns (OpenAI, ChatGPT-4, accessed on 12 October 2024, Python coding). Noise concerns all the visual areas out of the region of interest; background concerns the back side of the museum case. It is clear that noise and background are part of the museum experience.
Figure 14: Time spent on each feature (OpenAI, ChatGPT-4, accessed on 12 October 2024, Python coding). The key features are the male chest, male face, female chest, and female hands. A shorter attention span is observed for male hands. Background and noise are also quite evident in this chart.
19 pages, 6356 KiB  
Article
An Objective Handling Qualities Assessment Framework of Electric Vertical Takeoff and Landing
by Yuhan Li, Shuguang Zhang, Yibing Wu, Sharina Kimura, Michael Zintl and Florian Holzapfel
Aerospace 2024, 11(12), 1020; https://doi.org/10.3390/aerospace11121020 - 11 Dec 2024
Viewed by 452
Abstract
Assessing handling qualities is crucial for ensuring the safety and operational efficiency of aircraft control characteristics. The growing interest in Urban Air Mobility (UAM) has increased the focus on electric Vertical Takeoff and Landing (eVTOL) aircraft; however, a comprehensive assessment of eVTOL handling qualities remains a challenge. This paper proposes a framework for assessing eVTOL handling qualities, integrating pilot compensation, task performance, and qualitative comments. An experiment was conducted in which eye-tracking data and subjective ratings from 16 participants were analyzed as they performed various Mission Task Elements (MTEs) in an eVTOL simulator. The relationship between pilot compensation and task workload was investigated based on eye metrics. Data mining results revealed that pilots’ eye movement patterns and workload perception change when performing MTEs that involve aircraft deficiencies. Additionally, pupil size, pupil diameter, iris diameter, interpupillary distance, iris-to-pupil ratio, and gaze entropy are found to be correlated with both handling qualities and task workload. Furthermore, a handling qualities and pilot workload recognition model is developed based on Long Short-Term Memory (LSTM), which is subsequently trained and evaluated with experimental data, achieving an accuracy of 97%. A case study was conducted to validate the effectiveness of the proposed framework. Overall, the proposed framework addresses the limitations of the existing Handling Qualities Rating Method (HQRM), offering a more comprehensive approach to handling qualities assessment. Full article
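As a rough illustration of the kind of sequence model described here (the paper's exact architecture and feature set are not given in this listing), the sketch below shows a minimal PyTorch LSTM that maps a window of eye-metric time series to handling-quality or workload classes. The layer sizes, the six input features, and the window length are assumptions.

```python
import torch
import torch.nn as nn

class EyeMetricLSTM(nn.Module):
    """Classify handling-quality / workload level from a sequence of eye metrics
    (e.g., pupil diameter, interpupillary distance, gaze entropy per time step)."""
    def __init__(self, n_features: int = 6, hidden: int = 64, n_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarises the sequence
        return self.head(h_n[-1])    # class logits

# Example forward pass on a dummy batch of eight 10 s windows sampled at 10 Hz
logits = EyeMetricLSTM()(torch.randn(8, 100, 6))
```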
(This article belongs to the Section Aeronautics)
Figure 1: The Cooper–Harper Rating scale [3].
Figure 2: The handling quality assessment framework.
Figure 3: The motion-based Mixed Reality simulator [35].
Figure 4: The virtual scene from participants. From left to right: hover turn, pirouette, and sidestep being performed.
Figure 5: The overall experimental procedures.
Figure 6: The structure of the LSTM cell.
Figure 7: The selected input features.
Figure 8: The proposed LSTM model.
Figure 9: The Spearman's rank correlation coefficients. Coefficients reaching the green region represent highly related features.
Figure 10: The results of Random Forest importance analysis.
Figure 11: The heatmaps of participants' gaze directions.
Figure 12: The handling quality assessment process based on the proposed framework.
Figure A1: The indicators output by Varjo XR 3 [42].
Figure A2: The calculated eye indicators [23].
15 pages, 5809 KiB  
Article
The Use of Eye-Tracking to Explore the Relationship Between Consumers’ Gaze Behaviour and Their Choice Process
by Maria-Jesus Agost and Vicente Bayarri-Porcar
Big Data Cogn. Comput. 2024, 8(12), 184; https://doi.org/10.3390/bdcc8120184 - 9 Dec 2024
Viewed by 458
Abstract
Eye-tracking technology can assist researchers in understanding motivational decision-making and choice processes by analysing consumers’ gaze behaviour. Previous studies showed that attention is related to decision, as the preferred stimulus is generally the most observed and the last visited before a decision is made. In this work, the relationship between gaze behaviour and decision-making was explored using eye-tracking technology. Images of six wardrobes incorporating different sustainable design strategies were presented to 57 subjects, who were tasked with selecting the wardrobe they intended to keep the longest. The amount of time spent looking was higher when it was the chosen version. Detailed analyses of gaze plots and heat maps derived from eye-tracking records were employed to identify different patterns of gaze behaviour during the selection process. These patterns included alternating attention between a few versions or comparing them against a reference, allowing the identification of stimuli that initially piqued interest but were ultimately not chosen, as well as potential doubts in the decision-making process. These findings suggest that doubts that arise before making a selection warrant further investigation. By identifying stimuli that attract attention but are not chosen, this study provides valuable insights into consumer behaviour and decision-making processes. Full article
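A total-fixation-duration (TFD) comparison of the kind reported here can be computed from a fixation export labelled with areas of interest. The sketch below is a minimal pandas version; the `participant`, `aoi`, `chosen_aoi`, and `duration_ms` column names are hypothetical, not taken from the paper.

```python
import pandas as pd

fix = pd.read_csv("fixations.csv")  # columns: participant, aoi, chosen_aoi, duration_ms

# Total fixation duration per participant and AOI
tfd = fix.groupby(["participant", "aoi"])["duration_ms"].sum().reset_index(name="tfd_ms")

# Flag whether each AOI is the wardrobe version the participant finally chose
chosen = fix[["participant", "chosen_aoi"]].drop_duplicates()
tfd = tfd.merge(chosen, on="participant")
tfd["is_chosen"] = tfd["aoi"] == tfd["chosen_aoi"]

# Mean TFD for chosen vs. non-chosen versions (cf. the pattern behind Figure 5)
print(tfd.groupby("is_chosen")["tfd_ms"].mean())
```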
Figure 1: Slides with the description of each version of the wardrobe (translated from the original Spanish version).
Figure 2: A participant during the experiment.
Figure 3: Images summarizing the six versions of the wardrobes used for subjects to make the selection (translated from the original Spanish version). From top to bottom and left to right: arrangement 1, arrangement 2 and arrangement 3.
Figure 4: AOIs that were defined (translated from the original Spanish version).
Figure 5: Mean TFD spent looking at each AOI (wardrobe versions), depending on the one finally chosen.
Figure 6: Examples of gaze plots and heat maps showing the gaze behaviour for each of the three patterns observed. In gaze plots, the numbered nodes indicate the sequence of the path. (a) Attention focused on just a few alternative versions; (b) Reference version; (c) Quick decision process.
16 pages, 2423 KiB  
Article
Enhancing Autism Detection Through Gaze Analysis Using Eye Tracking Sensors and Data Attribution with Distillation in Deep Neural Networks
by Federica Colonnese, Francesco Di Luzio, Antonello Rosato and Massimo Panella
Sensors 2024, 24(23), 7792; https://doi.org/10.3390/s24237792 - 5 Dec 2024
Viewed by 609
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by differences in social communication and repetitive behaviors, often associated with atypical visual attention patterns. In this paper, the Gaze-Based Autism Classifier (GBAC) is proposed, which is a Deep Neural Network model that leverages both data distillation and data attribution techniques to enhance ASD classification accuracy and explainability. Using data sampled by eye tracking sensors, the model identifies unique gaze behaviors linked to ASD and applies an explainability technique called TracIn for data attribution by computing self-influence scores to filter out noisy or anomalous training samples. This refinement process significantly improves both accuracy and computational efficiency, achieving a test accuracy of 94.35% while using only 77% of the dataset, showing that the proposed GBAC outperforms the same model trained on the full dataset and random sample reductions, as well as the benchmarks. Additionally, the data attribution analysis provides insights into the most influential training examples, offering a deeper understanding of how gaze patterns correlate with ASD-specific characteristics. These results underscore the potential of integrating explainable artificial intelligence into neurodevelopmental disorder diagnostics, advancing clinical research by providing deeper insights into the visual attention patterns associated with ASD. Full article
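TracIn's self-influence score, used here to filter noisy training samples, is the sum over saved checkpoints of the learning rate times the squared gradient norm of the example's own loss. The following is a minimal PyTorch sketch of that idea; the checkpoint list, learning rates, and any filtering threshold are assumptions, not the paper's settings.

```python
import torch

def self_influence(checkpoints, lrs, loss_fn, x, y):
    """TracIn self-influence of one training example (x, y):
    sum_t lr_t * ||grad_theta loss_t(x, y)||^2 over saved checkpoints."""
    score = 0.0
    for model, lr in zip(checkpoints, lrs):
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
        score += lr * sum(g.pow(2).sum().item() for g in grads)
    return score

# Rank training examples by self-influence and drop the highest-scoring (likely noisy) ones, e.g.:
# scores = [self_influence(ckpts, lrs, loss_fn, x.unsqueeze(0), y.unsqueeze(0)) for x, y in train_set]
```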
Figure 1: Samples from the Saliency4ASD dataset used for training the neural network model, which include images, fixation maps, and scanpaths.
Figure 2: Architecture of the first two sub-networks (CNN_image_FM, CNN_fixmaps_FM), which process the input images and fixation maps to extract spatial features for ASD classification.
Figure 3: Architecture of the LSTM_scanpath_HS network used for scanpath sequences, capturing the temporal dependencies between fixation points.
Figure 4: Overview of the complete GBAC model, illustrating the concatenation of the outputs from the image, fixation point, and scanpath sub-networks for the final classification.
Figure 5: Accuracy and loss trends for training and validation sets across 16 epochs for model GBAC-FT.
Figure 6: Accuracy and loss trends for training and validation sets across 16 epochs for models GBAC-SIST 1 and GBAC-RST 1.
Figure 7: Accuracy and loss trends for training and validation sets across 16 epochs for models GBAC-SIST 2 and GBAC-RST 2.
Figure 8: Accuracy and loss trends for training and validation sets across 16 epochs for models GBAC-SIST 3 and GBAC-RST 3.
Figure 9: Examples of high-influence training images evaluated using Equation (6) from TracIn, particularly those featuring people or groups that played a key role in distinguishing ASD-related gaze patterns from neurotypical behaviors.
24 pages, 9111 KiB  
Review
Bi-Directional Gaze-Based Communication: A Review
by Björn Rene Severitt, Nora Castner and Siegfried Wahl
Multimodal Technol. Interact. 2024, 8(12), 108; https://doi.org/10.3390/mti8120108 - 4 Dec 2024
Viewed by 519
Abstract
Bi-directional gaze-based communication offers an intuitive and natural way for users to interact with systems. This approach utilizes the user’s gaze not only to communicate intent but also to obtain feedback, which promotes mutual understanding and trust between the user and the system. In this review, we explore the state of the art in gaze-based communication, focusing on both directions: From user to system and from system to user. First, we examine how eye-tracking data is processed and utilized for communication from the user to the system. This includes a range of techniques for gaze-based interaction and the critical role of intent prediction, which enhances the system’s ability to anticipate the user’s needs. Next, we analyze the reverse pathway—how systems provide feedback to users via various channels, highlighting their advantages and limitations. Finally, we discuss the potential integration of these two communication streams, paving the way for more intuitive and efficient gaze-based interaction models, especially in the context of Artificial Intelligence. Our overview emphasizes the future prospects for combining these approaches to create seamless, trust-building communication between users and systems. Ensuring that these systems are designed with a focus on usability and accessibility will be critical to making them effective communication tools for a wide range of users. Full article
Graphical abstract
Figure 1: The process of interpreting gaze data involves extracting features such as fixations, saccades, and smooth pursuits, which can then be used for various applications. These applications range from gaming and training simulations to medical diagnostics, showcasing the versatility of gaze-based interaction. The data interpretation can be accomplished through various methods, including scanpath analysis, statistical approaches, and machine learning or deep learning techniques.
Figure 2: Examples of scanpath analyses. One of the first published scanpath analyses was by Yarbus [49]. An example can be seen in (a), which illustrates scanpaths of a Yarbus-like measurement by Geisler et al. [54], where the participants see the same images but the task was different: Task 1, indicate the age of the subjects; Task 2, remember the clothes the people are wearing; Task 3, estimate how long the visitor was away from the family. This showed that the scanpath depends on the task. This knowledge can now be transferred to deep learning architectures, as shown in (b), a visualization of the analysis by Castner et al. [50], where the scanpath is reconstructed based on the fixation data and visualized using a VGG-16 network to compare the similarity of fixations.
Figure 3: The most intuitive way to select a target with the gaze is to use the dwell time [82,83]. However, as this leads to problems (Midas Touch), there are other solutions such as using head gestures [84] or specific eye movements to confirm the selection [85,86].
Figure 4: Possible solutions by Sidenmark et al. for various problems in gaze-based interaction. (a) A gaze-based interaction method by Sidenmark et al. [95] to avoid Midas Touch, since the selection is not only based on the gaze but also uses the head as confirmation: a target (square) is selected using the gaze point (red circle); the circle then expands and the selection can be confirmed by moving the head direction point (green circle) into the gaze circle. (b) A sketch of the idea from Sidenmark et al. [97], an approach for selecting different targets that lie in a very similar direction from the user. The vergence angle can be used to decide which target the user wants to select, even if the combined gaze direction, shown by the orange dashed line, is the same. In A, the object of interest is close, so the vergence angle is large, while in B the target is far away, so the vergence angle is smaller.
Figure 5: Summary of areas of application for gaze-based intention prediction. These applications span various domains, including human–agent interaction, immersive environments, assistive technology, and human–robot interaction.
Figure 6: Summary of feedback modalities and their strengths in gaze-based communication. Combining modalities can optimize user experience by enhancing awareness and improving interaction efficiency.
Figure 7: Diagram of a gaze-based communication system with AI. The process starts with the user providing input through modalities such as gaze or head movements, and scene information captured by sensors. The gaze-based interaction system processes this input to estimate which object the user intends to interact with. The intention prediction component then uses both the estimated interaction object and the raw data (modalities and scene information) to predict the user's broader intention. The AI interprets this intention and provides feedback through various channels (visual, audio, haptic), creating a feedback loop that informs the user and enhances communication and interaction with the system.
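To make the dwell-time selection idea from this review concrete (see Figure 3 above), here is a minimal sketch of a dwell-based trigger: an item is selected once the gaze has stayed inside its region for a threshold time. The 800 ms threshold and the data layout are assumptions, not values from the review.

```python
def dwell_select(gaze_samples, regions, dwell_ms=800):
    """gaze_samples: iterable of (timestamp_ms, x, y); regions: {name: (x0, y0, x1, y1)}.
    Returns the first region whose dwell time exceeds the threshold, else None."""
    current, start = None, None
    for t, x, y in gaze_samples:
        hit = next((name for name, (x0, y0, x1, y1) in regions.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != current:                 # gaze moved to a new region (or off-target)
            current, start = hit, t
        elif hit is not None and t - start >= dwell_ms:
            return hit                     # dwell threshold reached -> select
    return None
```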
17 pages, 1114 KiB  
Article
Integrating Students’ Real-Time Gaze in Teacher–Student Interactions: Case Studies on the Benefits and Challenges of Eye Tracking in Primary Education
by Raimundo da Silva Soares, Eneyse Dayane Pinheiro, Amanda Yumi Ambriola Oku, Marilia Biscaia Rizzo, Carolinne das Neves Vieira and João Ricardo Sato
Appl. Sci. 2024, 14(23), 11007; https://doi.org/10.3390/app142311007 - 27 Nov 2024
Viewed by 885
Abstract
Integrating neuroscience techniques, such as eye tracking, into educational practices has opened new avenues for understanding the cognitive processes underlying learning. This study investigates the feasibility and practicality of using eye tracking as a supportive tool for educators in primary school settings. By taking into account eye-tracking features in lesson plans and instruction, this study explores the benefits and challenges of this technology from teachers’ perspective. The findings reveal that eye tracking can enhance interactivity, maintain student attention, and provide immediate feedback, thereby aiding in identifying student difficulties that may otherwise go unnoticed. However, the study also highlights concerns related to technical complexities, data privacy, and the need for teacher training to utilize and interpret eye-tracking data effectively. These insights contribute to a nuanced understanding of how eye-tracking technology can be implemented in educational settings, offering potential pathways for personalized teaching and improved learning outcomes. Full article
(This article belongs to the Special Issue ICT in Education, 2nd Edition)
Figure 1: Mathematics activity. (A) A challenge in which students needed to reproduce the path without repeating any edges. (B) The activity was designed to be impossible to solve without repeating edges, prompting students to recognize and analyze why the task could not be completed as instructed.
Figure 2: Student gaze estimation. The video displays the real-time eye movements of a student as he works on a problem during class. First, the student realizes he made a mistake. Then, he rereads the problem statement, compares it with the previously created code block, identifies where he could make the change, and checks to see if it is correct. The student's gaze is focused on the area where he can identify the correct solution.
Figure 3: Geography eye-tracking activity. Image of the map "Distribution of Soybean across Different Biomes" (2016) used in the Geography activity. The red cross represents the student's gaze.
Figure 4: Biology eye-tracking activity. Screenshot from video showing a student's gaze path during one of the activities.
23 pages, 5517 KiB  
Article
Research on an Eye Control Method Based on the Fusion of Facial Expression and Gaze Intention Recognition
by Xiangyang Sun and Zihan Cai
Appl. Sci. 2024, 14(22), 10520; https://doi.org/10.3390/app142210520 - 15 Nov 2024
Viewed by 525
Abstract
With the deep integration of psychology, artificial intelligence, and related technologies, eye control technology has achieved certain results at the practical application level. However, the accuracy of current single-modal eye control technology remains limited, mainly because the high randomness of eye movements during human–computer interaction makes eye movement detection inaccurate. This study therefore proposes an intent recognition method that fuses facial expressions and eye movement information, and builds an eye control method on a multimodal intent recognition dataset of facial expressions and eye movement information constructed in this study. Based on a self-attention fusion strategy, the fused features are computed and then classified with a multi-layer perceptron, so that different features attend to one another and the weight of effective features is enhanced in a targeted manner, improving the accuracy of intention recognition. To address inaccurate eye movement detection, an improved YOLOv5 model is proposed, with detection accuracy raised by two additions: a small-target layer and a CA attention mechanism. A corresponding eye movement behavior discrimination algorithm is combined with each eye movement action to output eye behavior instructions. Finally, experimental verification of the eye–computer interaction scheme combining the intention recognition model and the eye movement detection model showed that, under this scheme, the accuracy of the eye-controlled manipulator in performing various tasks exceeded 95 percent. Full article
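The self-attention fusion plus multi-layer perceptron step described here can be sketched in a few lines of PyTorch; the feature dimension, head count, number of intent classes, and class names are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Fuse a facial-expression feature vector and an eye-movement feature vector
    with self-attention, then classify the user's intent with an MLP."""
    def __init__(self, dim: int = 128, n_intents: int = 5):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, n_intents))

    def forward(self, face_feat, eye_feat):                 # each: (batch, dim)
        tokens = torch.stack([face_feat, eye_feat], dim=1)  # (batch, 2, dim)
        fused, _ = self.attn(tokens, tokens, tokens)        # modalities attend to each other
        return self.mlp(fused.flatten(1))                   # intent logits

# Example forward pass on dummy features
logits = FusionClassifier()(torch.randn(4, 128), torch.randn(4, 128))
```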
Figure 1: The technical route of this paper's research.
Figure 2: Face image dataset example.
Figure 3: This eye movement intent detection flow chart describes the conversion of eye movement data to intent classification.
Figure 4: Integration framework based on attention mechanism.
Figure 5: Comparison of performance in single-mode and multimodal prediction.
Figure 6: Line charts of five indicators of different models.
Figure 7: Loss function curve of Anchor method before and after improvement.
Figure 8: Structure diagram of the CA attention mechanism [9].
Figure 9: Improved YOLOv5 model structure.
Figure 10: Improved loss variation diagram for the YOLOv5 model.
Figure 11: The average accuracy (AP) curve of the improved model.
Figure 12: The F1 score curve of the improved model.
Figure 13: Test results before and after improvement.
Figure 14: Human–computer interaction experiment platform.
Figure 15: The overall flow chart of the experiment.
Figure 16: Comparison of calculation efficiency indicators.
Figure 17: Complete human–computer interaction process.
Figure 18: Test results.
Figure 19: Test results for different tasks.
16 pages, 25350 KiB  
Article
Eye Tracking and Human Influence Factors’ Impact on Quality of Experience of Mobile Gaming
by Omer Nawaz, Siamak Khatibi, Muhammad Nauman Sheikh and Markus Fiedler
Future Internet 2024, 16(11), 420; https://doi.org/10.3390/fi16110420 - 13 Nov 2024
Viewed by 522
Abstract
Mobile gaming accounts for more than 50% of global online gaming revenue, surpassing console and browser-based gaming. The success of mobile gaming titles depends on optimizing applications for the specific hardware constraints of mobile devices, such as smaller displays and lower computational power, to maximize battery life. Additionally, these applications must dynamically adapt to the variations in network speed inherent in mobile environments. Ultimately, user engagement and satisfaction are critical, necessitating a favorable comparison to browser and console-based gaming experiences. While Quality of Experience (QoE) subjective evaluations through user surveys are the most reliable method for assessing user perception, various factors, termed influence factors (IFs), can affect user ratings of stimulus quality. This study examines human influence factors in mobile gaming, specifically analyzing the impact of user delight towards displayed content and the effect of gaze tracking. Using Pupil Core eye-tracking hardware, we captured user interactions with mobile devices and measured visual attention. Video stimuli from eight popular games were selected, with resolutions of 720p and 1080p and frame rates of 30 and 60 fps. Our results indicate a statistically significant impact of user delight on the MOS for most video stimuli across all games. Additionally, a trend favoring higher frame rates over screen resolution emerged in user ratings. These findings underscore the significance of optimizing mobile gaming experiences by incorporating models that estimate human influence factors to enhance user satisfaction and engagement. Full article
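Mean Opinion Scores with 95% confidence intervals, as reported per stimulus here, reduce to a per-stimulus mean of the panel's ratings plus a t-based interval. The sketch below uses made-up ratings; the data and any grouping names are illustrative only.

```python
import numpy as np
from scipy import stats

def mos_ci(ratings, confidence=0.95):
    """Mean Opinion Score and t-based confidence-interval half-width."""
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    half = stats.t.ppf((1 + confidence) / 2, df=len(r) - 1) * stats.sem(r)
    return mos, half

# Illustrative 5-point ACR ratings for one stimulus (e.g., a 1080p/60 fps clip)
mos, ci = mos_ci([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```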
Figure 1: Subjective assessment with Pupil Core.
Figure 2: Pupil Core software v3.5.1. (a) Pupil Capture for session recording with fixation detector in the calibrated area; (b) Pupil Player screen for replaying the individual session and exporting data.
Figure 3: Frames of video stimuli. (a) Animal Crossing; (b) Counter-Strike 2; (c) Call of Duty; (d) Code Vein; (e) Fortnite; (f) Minecraft; (g) PUBG; (h) Rocket League.
Figure 4: MOS of subjective assessment based on delight with 95% CI. (a) MOS_Animal Crossing; (b) MOS_Counter-Strike 2; (c) MOS_Code Vein; (d) MOS_PUBG.
Figure 5: Histogram of user gaze.
Figure 6: Relative frequency of gaze based on %GoB and %PoW ratings.
21 pages, 12428 KiB  
Article
Gaze Zone Classification for Driving Studies Using YOLOv8 Image Classification
by Frouke Hermens, Wim Anker and Charmaine Noten
Sensors 2024, 24(22), 7254; https://doi.org/10.3390/s24227254 - 13 Nov 2024
Viewed by 744
Abstract
Gaze zone detection involves estimating where drivers look in terms of broad categories (e.g., left mirror, speedometer, rear mirror). We here specifically focus on the automatic annotation of gaze zones in the context of road safety research, where the system can be tuned to specific drivers and driving conditions, so that an easy to use but accurate system may be obtained. We show with an existing dataset of eye region crops (nine gaze zones) and two newly collected datasets (12 and 10 gaze zones) that image classification with YOLOv8, which has a simple command line interface, achieves near-perfect accuracy without any pre-processing of the images, as long as a model is trained on the driver and conditions for which annotation is required (such as whether the drivers wear glasses or sunglasses). We also present two apps to collect the training images and to train and apply the YOLOv8 models. Future research will need to explore how well the method extends to real driving conditions, which may be more variable and more difficult to annotate for ground truth labels. Full article
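The training step the authors describe (YOLOv8 image classification on per-zone folders of eye-region crops) looks roughly like the sketch below when using the ultralytics package; the dataset path, model size, epoch count, and zone name are placeholders, not the paper's settings.

```python
from ultralytics import YOLO

# Folder layout assumed for YOLO classification:
#   gaze_zones/train/<zone_name>/*.jpg  and  gaze_zones/val/<zone_name>/*.jpg
model = YOLO("yolov8n-cls.pt")                       # small pretrained classification model
model.train(data="gaze_zones", epochs=20, imgsz=224)

# Predict the gaze zone for a new webcam frame
result = model("frame_0001.jpg")[0]
print(result.names[result.probs.top1])               # e.g., "left_mirror"
```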
Figure 1: Four images from one of the five drivers in the Lisa2 dataset [36].
Figure 2: Photograph of the setup. Two webcams were attached to a laptop controlling data collection and placed on the driver seat. Little round stickers in different colours served to help the participant to fixate on different gaze zones. The position of the sticker for the right window is indicated. Other stickers inside this image are for the speedometer, the centre console, and the right mirror.
Figure 3: Examples of images of looking and pointing in a different context. A total of 10 different targets were selected around the screen that the webcam was attached to and other parts of the room. Note that in between recording sessions the actor changed the blue jacket for a red jacket.
Figure 4: Accuracy per model trained on individual drivers for the Lisa2 dataset without glasses. Accuracy is defined as the percentage of predictions that agree with the annotated label (also known as the 'top1' accuracy).
Figure 5: Confusion matrices for each combination of the driver during training and the driver used for the test images, based on the validation sets.
Figure 6: Accuracy per driver on models trained on different numbers of drivers for the Lisa2 dataset without glasses.
Figure 7: Four images from one of the five drivers in the Lisa2 dataset, now with glasses.
Figure 8: (a) Accuracy per driver on images with glasses when trained on images without glasses or images with glasses. (b) Accuracy per driver on images with and without glasses when trained on images with and without glasses. Images are from the Lisa2 dataset.
Figure 9: Examples of images of the male driver, with and without glasses, recorded with our own app.
Figure 10: (a) Zone classification accuracy for the male and female driver for smaller (320 × 240) and larger (640 × 480) images (both without sunglasses). Each model was trained on that particular combination of driver and image size and then applied to the validation set (seen during training) and test set (not seen during training). (b) Accuracy per driver on a model trained with the same driver, a model trained with the other driver, or a model trained on both drivers. Performance is computed across the training, validation, and test sets. (c) Accuracy for the male driver with or without sunglasses on a model trained with or without sunglasses or on images with and without sunglasses ('Both'). Performance is computed across the training, validation, and test sets.
Figure 11: Zone classification accuracy for when an actor was looking or pointing at objects inside a living room. In between recordings, the actor changed from a red to a blue jacket, or vice versa. The change of the jacket reduced accuracy by around 5% (pointing) to 10% (looking) if these images were not included during training ('both' refers to when both red and blue jacket training images were included).
Figure 12: Screenshots from the first app, which can be used to instruct participants to look at particular gaze zones, to collect images from the webcam, to extract frames, and to structure the images into the folders for image classification. Note that a section of the window is shown in both images for better visibility.
Figure 13: Screenshots from the second app, which can be used to train the models and to generate the required file structure and annotations for object detection. Note that we did not use the object detection functionality in the present tests, because it is computationally more expensive and the image classification reached a near-perfect performance. Each image shows a section of the original screen for better visibility.
12 pages, 1574 KiB  
Article
Proprioceptive Training Improves Postural Stability and Reduces Pain in Cervicogenic Headache Patients: A Randomized Clinical Trial
by Mohamed Abdelaziz Emam, Tibor Hortobágyi, András Attila Horváth, Salma Ragab and Magda Ramadan
J. Clin. Med. 2024, 13(22), 6777; https://doi.org/10.3390/jcm13226777 - 11 Nov 2024
Viewed by 978
Abstract
Background: Headache is one of the leading causes of disability in the world. Neck proprioception, pain, and postural control are interconnected in both healthy individuals and those with chronic neck pain. This study examines the effects of proprioceptive training using a gaze direction recognition task on postural stability and pain in cervicogenic headache patients. Methods: Patients with cervicogenic headache (n = 34, age: 35–49 y) were randomized into a control group (CON), receiving only selected physical therapy rehabilitation or to an experimental group (EXP), performing proprioceptive training using a gaze direction recognition task plus selected physical therapy rehabilitation. Both programs consisted of 24, 60 min long sessions over 8 weeks. Postural stability was assessed by the modified clinical test of sensory integration of balance (mCTSIB) and a center of pressure test (COP) using the HUMAC balance system. Neck pain was assessed by a visual analog scale. Results: In all six tests, there was a time main effect (p < 0.001). In three of the six tests, there were group by time interactions so that EXP vs. CON improved more in postural stability measured while standing on foam with eyes closed normalized to population norms, COP velocity, and headache (all p ≤ 0.006). There was an association between the percent changes in standing on foam with eyes closed normalized to population norms and percent changes in COP velocity (r = 0.48, p = 0.004, n = 34) and between percent changes in COP velocity and percent changes in headache (r = 0.44, p = 0.008, n = 34). Conclusions: While we did not examine the underlying mechanisms, proprioceptive training in the form of a gaze direction recognition task can improve selected measures of postural stability, standing balance, and pain in cervicogenic headache patients. Full article
(This article belongs to the Section Clinical Rehabilitation)
Figure 1: Flowchart of patient recruitment and study participation. This flowchart illustrates the number of patients screened, enrolled, and excluded at each stage of the study. It details the progression from initial recruitment through to final analysis, highlighting reasons for exclusion and the final sample sizes for the control (CON) and experimental (EXP) groups.
Figure 2: Individual pre- and post-intervention data for six outcomes. Each symbol represents one patient. Intervention effects are shown for: (A) HSEO (hard surface, eyes open), (B) HSEC (hard surface, eyes closed), (C) SSEO (soft surface, eyes open), (D) SSEC (soft surface, eyes closed), (E) COP (center of pressure velocity, cm·s⁻¹), and (F) VAS (visual analog scale of neck pain, mm). Units for (A–D) are % relative to population data. Pre = before intervention, Post = after intervention.
Figure 3: (A) Percent changes in SSEC (soft surface, eyes closed) versus percent changes in COP velocity. (B) Percent changes in COP velocity versus percent changes in neck pain. In both panels, open symbols (n = 17) represent the control group, and filled symbols (n = 17) represent the experimental group.
19 pages, 3586 KiB  
Article
Effect of Stimulus Regularities on Eye Movement Characteristics
by Bilyana Genova, Nadejda Bocheva and Ivan Hristov
Appl. Sci. 2024, 14(21), 10055; https://doi.org/10.3390/app142110055 - 4 Nov 2024
Viewed by 657
Abstract
Humans have the unique ability to discern spatial and temporal regularities in their surroundings. However, the effect of learning these regularities on eye movement characteristics has not been studied enough. In the present study, we investigated the effect of the frequency of occurrence and the presence of common chunks in visual images on eye movement characteristics like the fixation duration, saccade amplitude and number, and gaze number across sequential experimental epochs. The participants had to discriminate the patterns presented in pairs as the same or different. The order of pairs was repeated six times. Our results show an increase in fixation duration and a decrease in saccade amplitude in the sequential epochs, suggesting a transition from ambient to focal information processing as participants acquire knowledge. This transition indicates deeper cognitive engagement and extended analysis of the stimulus information. Interestingly, contrary to our expectations, the saccade number increased, and the gaze number decreased. These unexpected results might imply a reduction in the memory load and a narrowing of attentional focus when the relevant stimulus characteristics are already determined. Full article
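The per-epoch eye movement summaries analysed here (fixation duration, saccade number and amplitude) can be aggregated from an event-level export; a minimal pandas sketch follows, with hypothetical column names that are not taken from the paper.

```python
import pandas as pd

ev = pd.read_csv("eye_events.csv")  # columns: participant, epoch, event_type, duration_ms, amplitude_deg

fix = ev[ev.event_type == "fixation"]
sac = ev[ev.event_type == "saccade"]

per_epoch = pd.DataFrame({
    "mean_fixation_ms": fix.groupby("epoch")["duration_ms"].mean(),
    "saccade_count": sac.groupby("epoch").size(),
    "mean_saccade_amp_deg": sac.groupby("epoch")["amplitude_deg"].mean(),
})
print(per_epoch)  # expect fixation duration to rise and saccade amplitude to fall across epochs 1-6
```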
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
Figure 1: Pattern set used in the stimuli design. Each column contains patterns A, B, C, and D; each row, the patterns from different groups.
Figure 2: Distribution of the fixation duration in the sequential epochs (1–6) of the experiment.
Figure 3: Predicted fixation duration for the different pattern combinations with 95% credible interval.
Figure 4: Predicted fixation duration for the different stimuli and epochs with 95% credible interval.
Figure 5: Distribution of the saccade number in the sequential epochs (1–6) of the experiment.
Figure 6: Predicted number of saccades for different pattern combinations.
Figure 7: Predicted number of saccades for different stimuli and epochs with 95% credible interval.
Figure 8: Distribution of the gaze number in the sequential epochs (1–6) of the experiment.
Figure 9: Predicted number of gazes for different pattern combinations.
Figure 10: Predicted number of gazes for different pattern combinations and epochs.
Figure 11: Distribution of the saccade amplitude in the sequential epochs (1–6) of the experiment.
Figure 12: Posterior distributions for the saccade amplitudes for all stimulus patterns: (A) for the stimuli from Group 1; (B) for the stimuli from Group 2; (C) for the stimuli from Group 3. The shaded regions correspond to the 90% credible intervals of the median.
13 pages, 878 KiB  
Article
Voluntary Attention Assessing Tests in Children with Neurodevelopmental Disorders Using Eye Tracking
by Anna Rebreikina, Dmitry Zakharchenko, Antonina Shaposhnikova, Nikita Korotkov, Yuri Klimov and Tatyana Batysheva
Children 2024, 11(11), 1333; https://doi.org/10.3390/children11111333 - 31 Oct 2024
Viewed by 573
Abstract
Background/Objectives: The development of techniques for assessing cognitive functions using eye tracking is particularly important for children with developmental disabilities. In this paper, we present pilot results from the validation of two methods for assessing voluntary attention based on eye tracking. Methods: The study involved 80 children aged 3 to 8 years with neurodevelopmental disorders. Children performed two eye-tracking tests in which they had to ‘catch’ a stimulus by looking at it. They also completed the Attention Sustained subtest of the Leiter-3 International Performance Scale. In the first test, the stimuli were presented at different locations on the screen in subtests with stimuli onset asynchrony of 2 s and 1 s. A translucent blue marker marked the position of the gaze on the screen. The number of trials in which the gaze marker approached the stimulus was determined. In the second test, the location of the stimuli on the screen was changed based on gaze fixation in the ROI area. The time taken to complete the task was evaluated. Results: The results of both eye-tracking tests showed significant correlations with scores on the Attention Sustained Leiter-3 subtest and significant test–retest reliability. Conclusions: The results indicate that the present eye-tracking tests can be used for assessing voluntary attention in children with some neurodevelopmental disorders, and further research is warranted to assess the feasibility of these tests for a broader range of developmental disorders. Our findings could have practical implications for the early intervention and ongoing monitoring of attention-related issues. Full article
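The scoring rule in the first test (a trial counts when the gaze marker approaches the stimulus) can be implemented as a simple per-trial distance check; in the sketch below the radius, sampling layout, and variable names are assumptions rather than the authors' parameters.

```python
import math

def trial_caught(gaze_points, stim_x, stim_y, radius_px=80):
    """A trial is 'caught' if any gaze sample falls within radius_px of the stimulus centre."""
    return any(math.hypot(x - stim_x, y - stim_y) <= radius_px for x, y in gaze_points)

# Number of performed trials = count of trials where the gaze marker reached the stimulus, e.g.:
# n_performed = sum(trial_caught(t["gaze"], t["x"], t["y"]) for t in trials)
```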
Figure 1: Design of the White Dots test (a) and the Red Balls test (b).
Figure 2: Correlation plots of the number of performed trials (vertical axis) with the Leiter-3 Attention Sustained subtest score (horizontal axis): (a) subtest 1, automatic analysis (SOA of 2 s); (b) subtest 1, video analysis; (c) subtest 2, automatic analysis (SOA of 1 s); (d) subtest 2, video analysis.
Figure 3: Correlation plots of Leiter-3 Attention Sustained subtest scores (horizontal axis) with the Red Balls test performance time (vertical axis): (a) first subtest, (b) second subtest, (c) third subtest, and (d) averaged over three subtests.
30 pages, 2719 KiB  
Article
Predicting Shot Accuracy in Badminton Using Quiet Eye Metrics and Neural Networks
by Samson Tan and Teik Toe Teoh
Appl. Sci. 2024, 14(21), 9906; https://doi.org/10.3390/app14219906 - 29 Oct 2024
Viewed by 1020
Abstract
This paper presents a novel approach to predicting shot accuracy in badminton by analyzing Quiet Eye (QE) metrics such as QE duration, fixation points, and gaze dynamics. We develop a neural network model that combines visual data from eye-tracking devices with biomechanical data such as body posture and shuttlecock trajectory. Our model is designed to predict shot accuracy, providing insights into the role of QE in performance. The study involved 30 badminton players of varying skill levels from the Chinese Swimming Club in Singapore. Using a combination of eye-tracking technology and motion capture systems, we collected data on QE metrics and biomechanical factors during a series of badminton shots, 750 in total. Key results include: (1) The neural network model achieved 85% accuracy in predicting shot outcomes, demonstrating the potential of integrating QE metrics with biomechanical data. (2) QE duration and onset were identified as the most significant predictors of shot accuracy, followed by racket speed and wrist angle at impact. (3) Elite players exhibited significantly longer QE durations (M = 289.5 ms) compared to intermediate (M = 213.7 ms) and novice players (M = 168.3 ms). (4) A strong positive correlation (r = 0.72) was found between QE duration and shot accuracy across all skill levels. These findings have important implications for badminton training and performance evaluation. The study suggests that QE-based training programs could significantly enhance players’ shot accuracy. Furthermore, the predictive model developed in this study offers a framework for real-time performance analysis and personalized training regimens in badminton. By bridging cognitive neuroscience and sports performance through advanced data analytics, this research paves the way for more sophisticated, individualized training approaches in badminton and potentially other fast-paced sports. Future research directions include exploring the temporal dynamics of QE during matches and developing real-time feedback systems based on QE metrics. Full article
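A model of the general shape described here (QE metrics plus biomechanical features feeding a small neural network that predicts whether a shot lands accurately) can be sketched with scikit-learn. The feature names, network size, and synthetic data below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Illustrative features: QE duration (ms), QE onset (ms), racket speed (m/s), wrist angle (deg)
X = rng.normal([230, -150, 18, 35], [60, 40, 3, 8], size=(750, 4))
y = (X[:, 0] + 5 * X[:, 2] + rng.normal(0, 40, 750) > 300).astype(int)  # 1 = accurate shot (synthetic label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```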
Figure 1: Schematic diagram of the Neural Network Architecture developed in this study.
Figure 2: Training and validation loss.
Figure 3: ROC curve.
Figure 4: SHAP summary plot.
Figure 5: Learning curves.
Figure 6: SHAP dependency plot for QE duration and racket speed.
Figure 7: Scatter plot of QE duration vs. shot accuracy.