Machine Learning for HCI: Cases, Trends and Challenges

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: closed (31 October 2024) | Viewed by 6368

Special Issue Editors


Guest Editor
Department of Management Science and Technology, University of Patras, GR 26500 Patra, Greece
Interests: user modelling; web mining; HCI; interaction design; usability evaluation; digital marketing and programmatic advertising

Guest Editor
Electrical and Computer Engineering Department, University of the Peloponnese, GR 263 34 Patra, Greece
Interests: human computer interaction; interaction design; information systems; databases; data/web mining; knowledge on demand/personalized services

Special Issue Information

Dear Colleagues,

Over the last few years, the field of human–computer interaction (HCI) has undergone significant progress due to contributions of machine learning (ML) techniques. The deployment of ML allows HCI researchers and practitioners to dissect user behavior, forecast user inclinations, streamline interface adjustments, and tailor interactions to personal needs and preferences, thus enabling improved interaction design and usability. ML techniques can leverage various types of HCI data such as user actions (clicks, taps, gestures), usage patterns (time spent on tasks, sequence of actions, etc.), user feedback (surveys, interviews, etc.), biometric data (eye-tracking, facial expressions, physiological signals, etc.), contextual and preference data, error logs or accessibility data (disabilities).

The convergence of ML and HCI has introduced a new era of perceptive, adaptable, and user-centric interactive systems, as designers are no longer required to anticipate user needs and specify static interactions, but are able to analyse user behaviour and dynamically adapt the interaction accordingly, leading to more intuitive, engaging and usable interactions. The goal of this Special Issue is to bring together researchers from the areas of ML and HCI working on the combination of the two domains. The issue will gather best practices, latest findings and current trends and challenges from research and industry, deploying ML techniques for solving HCI-related problems and offering new or improved capabilities to the way humans interact with modern computer systems.

Dr. Maria Rigou
Prof. Dr. Spiros Sirmakessis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • user behaviour analysis
  • gesture and voice interaction
  • attention monitoring
  • affective interaction
  • interface adaptation
  • personality trait recognition
  • intelligent user interfaces
  • recommender systems
  • human-in-the-loop machine learning
  • ethics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

24 pages, 3468 KiB  
Article
Adaptive Real-Time Translation Assistance Through Eye-Tracking
by Dimosthenis Minas, Eleanna Theodosiou, Konstantinos Roumpas and Michalis Xenos
AI 2025, 6(1), 5; https://doi.org/10.3390/ai6010005 - 2 Jan 2025
Viewed by 508
Abstract
This study introduces the Eye-tracking Translation Software (ETS), a system that leverages eye-tracking data and real-time translation to enhance reading flow for non-native language users in complex, technical texts. By measuring fixation duration to detect moments of cognitive load, ETS selectively provides translations, maintaining reading flow and engagement without undermining language learning. The key technological components include a desktop eye-tracker integrated with a custom Python-based application. Through a user-centered design, ETS dynamically adapts to individual reading needs, reducing cognitive strain by offering word-level translations when needed. A study involving 53 participants assessed ETS's impact on reading speed, fixation duration, and user experience, with findings indicating improved comprehension and reading efficiency. Results demonstrated that gaze-based adaptations significantly improved participants' reading experience and reduced cognitive load. Participants rated ETS's usability positively and expressed preferences for customization options such as pop-up placement and sentence-level translations. Future work will integrate AI-driven adaptations, allowing the system to adjust based on user proficiency and reading behavior. The study contributes to the growing evidence of eye-tracking's potential in educational and professional applications, offering a flexible, personalized approach to reading assistance that balances language exposure with real-time support.
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
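The gaze-triggered assistance the abstract describes can be sketched in a few lines. The threshold value and function names below are illustrative assumptions, not details of the actual ETS implementation:

```python
# Minimal sketch of gaze-based triggering: a word whose accumulated fixation
# time exceeds a threshold is taken as a sign of cognitive load and flagged
# for translation. The 600 ms cutoff is an invented example value.

COGNITIVE_LOAD_THRESHOLD_MS = 600  # assumed fixation-duration cutoff

def words_needing_translation(fixations, threshold_ms=COGNITIVE_LOAD_THRESHOLD_MS):
    """Return words whose total fixation duration suggests high cognitive load.

    `fixations` is a list of (word, duration_ms) gaze samples.
    """
    totals = {}
    for word, duration_ms in fixations:
        totals[word] = totals.get(word, 0) + duration_ms
    return [w for w, total in totals.items() if total >= threshold_ms]

# Example gaze stream: the reader dwells long on "anomalous".
stream = [("the", 120), ("anomalous", 450), ("anomalous", 300), ("result", 180)]
print(words_needing_translation(stream))  # ['anomalous']
```

In a real system the flagged words would drive the translation pop-up; keeping the trigger word-level, as ETS does, avoids interrupting the reading flow for text the user already understands.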
Show Figures

Figure 1: Mockup screen of the original translation pop-up, where user frustration was identified.
Figure 2: The final graphical user interface of ETS when the user triggers a translation.
Figure 3: The user's document is displayed as scrollable content, and a connection with the eye-tracker can be initialized.
Figure 4: Experimental laboratory.
Figure 5: Word with and without ETS.
32 pages, 4863 KiB  
Article
From Eye Movements to Personality Traits: A Machine Learning Approach in Blood Donation Advertising
by Stefanos Balaskas, Maria Koutroumani, Maria Rigou and Spiros Sirmakessis
AI 2024, 5(2), 635-666; https://doi.org/10.3390/ai5020034 - 10 May 2024
Cited by 1 | Viewed by 2051
Abstract
Blood donation heavily depends on voluntary involvement, but the problem of motivating and retaining potential blood donors remains. Understanding the personality traits of donors can assist in this case, bridging communication gaps and increasing participation and retention. To this end, an eye-tracking experiment was designed to examine the viewing behavior of 75 participants as they viewed various blood donation-related advertisements. The purpose of these stimuli was to elicit various types of emotions (positive/negative) and message framings (altruistic/egoistic) to investigate cognitive reactions that arise from donating blood using eye-tracking parameters such as the fixation duration, fixation count, saccade duration, and saccade amplitude. The results indicated significant differences among the eye-tracking metrics, suggesting that visual engagement varies considerably in response to different types of advertisements. The fixation duration also revealed substantial differences across emotions, logo types, and levels of emotional arousal, suggesting that the nature of stimuli can affect how viewers disperse their attention. The saccade amplitude and saccade duration were also affected by the message framings, thus indicating their relevance to eye movement behavior. Generalised linear models (GLMs) showed significant effects of personality traits on eye-tracking metrics, including a negative association between honesty–humility and fixation duration and a positive link between openness and both the saccade duration and fixation count. These results indicate that personality traits can significantly impact visual attention processes. The present study broadens the current research frontier by employing machine learning techniques on the collected eye-tracking data to identify personality traits that can influence donation decisions and experiences. Participants' eye movements were analysed to categorize their dominant personality traits using hierarchical clustering, while machine learning algorithms, including Support Vector Machine (SVM), Random Forest, and k-Nearest Neighbours (KNN), were employed to predict personality traits. Among the models, SVM and KNN exhibited high accuracy (86.67%), while Random Forest scored considerably lower (66.67%). This investigation reveals that computational models can infer personality traits from eye movements, which shows great potential for psychological profiling and human–computer interaction. This study integrates psychology research and machine learning, paving the way for further studies on personality assessment by eye tracking.
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
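As a rough illustration of the classification step, the sketch below implements a tiny k-nearest-neighbours classifier in pure Python. The study used SVM, Random Forest, and KNN on real gaze data; the feature values and trait labels here are invented for illustration:

```python
# Schematic k-NN: classify a participant's dominant personality trait from
# eye-tracking features (here, fixation duration and saccade amplitude).
# Training points and labels are toy values, not data from the study.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: a feature vector."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy training set: (fixation_duration_ms, saccade_amplitude_deg) -> trait
train = [
    ((220, 4.1), "openness"),
    ((240, 3.9), "openness"),
    ((480, 1.2), "honesty-humility"),
    ((500, 1.5), "honesty-humility"),
]
print(knn_predict(train, (230, 4.0)))  # openness
```

In practice the feature vectors would be standardized and the model evaluated with cross-validation, which is where the reported accuracy figures (86.67% for SVM and KNN) would come from.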
Show Figures

Figure 1: Spearman correlation map.
Figure 2: Interaction between the emotion type and message type on fixation count: This graph illustrates the difference in fixation counts between messages with altruistic (altr) and egoistic (ego) themes across negative and positive emotions. Notably, fixation counts decrease for ego messages when paired with positive emotions.
Figure 3: Interaction effects of specific emotions and message types on fixation count: This graph demonstrates the fixation counts for altruistic (altr) and egoistic (ego) message types across a range of specific emotions. The varying levels of fixation count highlight the complex influence of discrete emotions on the processing of different message themes.
Figure 4: Interaction effect of emotion and openness level on fixation count: The graph delineates how varying levels of openness (low, medium, high) interact with different emotions to influence fixation counts.
Figure 5: Architectural overview.
Figure 6: Hierarchical clustering.
Figure 7: Relationships among eye movement metrics and personality traits. Emotionality (pred 0), openness (pred 1), and honesty–humility (pred 2).
Figure A1: Variants of the fear ad. On the left we showcase the ad with the altruistic message and on the right with the egoistic message.
Figure A2: Illustrations of negative appeal ads. On the left we showcase the disgust emotional appeal ad with egoistic message framing ("Each blood collection set is new and sterile, and is destroyed immediately after blood collection. This way there is NO chance of getting infected during the blood donation") and on the right the guilt ad with altruistic message framing ("Unfortunately, our country is forced to import blood due to the very low rates of voluntary blood donation").
Figure A3: Illustrations of the positive appeal ads. On the left we showcase the positive emotional arousal inspiration with egoistic message framing ("People who volunteer for humanitarian causes usually live longer") and on the right the interesting positive arousal with altruistic message framing ("There are 8 different blood types that determine whether a donor is compatible with a recipient").
17 pages, 4056 KiB  
Article
Visual Analytics in Explaining Neural Networks with Neuron Clustering
by Gulsum Alicioglu and Bo Sun
AI 2024, 5(2), 465-481; https://doi.org/10.3390/ai5020023 - 5 Apr 2024
Viewed by 1916
Abstract
Deep learning (DL) models have achieved state-of-the-art performance in many domains. The interpretation of their working mechanisms and decision-making process is essential because of their complex structure and black-box nature, especially for sensitive domains such as healthcare. Visual analytics (VA) combined with DL methods have been widely used to discover data insights, but they often encounter visual clutter (VC) issues. This study presents a compact neural network (NN) view design to reduce the visual clutter in explaining the DL model components for domain experts and end users. We utilized clustering algorithms to group hidden neurons based on their activation similarities. This design supports both an overall and a detailed view of the neuron clusters. We used a tabular healthcare dataset as a case study. The design for clustered results reduced visual clutter among neuron representations by 54% and connections by 88.7% and helped to observe similar neuron activations learned during the training process.
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
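The grouping idea can be sketched with a toy one-dimensional k-means over average neuron activations, which is how similarly activated neurons would collapse into one "mega neuron". The activation values and cluster count below are invented for illustration; the study applied k-means and hierarchical clustering to real activation matrices:

```python
# Minimal sketch: cluster hidden neurons by their average activation so each
# cluster can be rendered as a single "mega neuron" in the compact view.

def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: returns a cluster index for each value."""
    # Seed centroids with evenly spaced sorted values (lowest first).
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return assign

# Average activation per hidden neuron (e.g. averaged over one class):
avg_activations = [0.02, 0.05, 0.81, 0.77, 0.03, 0.90]
print(kmeans_1d(avg_activations, k=2))  # [0, 0, 1, 1, 0, 1]
```

Here cluster 0 collects the near-dead neurons and cluster 1 the highly activated ones; drawing one node per cluster instead of one per neuron is what yields the clutter reductions the abstract reports.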
Show Figures

Figure 1: The proposed visual analytics to explain deep neural networks with neuron clustering. (a) A color-coded input feature layer along with feature descriptions. (b) Network-level view: represents a deep neural network to observe feature contributions to the predictions. (c) Feature-level view: a detailed view of activation-based clustered neurons with a circular design, and feature and neuron weights as a vertically stacked bar chart.
Figure 2: Obtaining the average activation values per neuron for each class. The activation matrix (A) is grouped by class per neuron (n_j), and each class (c_k) has an averaged activation value (a_ij).
Figure 3: An illustration of the neuron clustering approach applied to an NN architecture. The clustering algorithm converts a group of clustered neurons into a mega cluster that represents similarly activated neurons.
Figure 4: The selection of the number of clusters for k-means (a) and hierarchical clustering (b).
Figure 5: A compact view of neural networks with clustered mega neurons. (a) The legend of the input layer along with feature definitions and a color legend for activations. (b) A Sankey diagram representing the network-level visualization.
Figure 6: An example of an input layer grouped based on feature contributions to the prediction, with a drag-and-drop option. F3, F9, F2 and F4 have the highest weights, as supported in the literature [55].
Figure 7: A feature-level view of the proposed visualization tool. The feature level shows a stacked bar chart to represent weights, circular mega neurons, and a circular custom design to show each neuron within clusters with activation values. (a) Vertical stacked bar chart for feature weights; colors indicate individual features, as seen in the legend. (b) Mega neurons representing clustered neurons based on averaged activations, colored by their average activation value as shown in the legend.
Figure 8: Mega neuron 1 at the first hidden layer, along with feature weights. The color legend indicates that most of the neurons in cluster 1 are not activated (dead).
Figure 9: Clusters 4 and 5 for both hidden layers, along with the weights. Cluster 4 has highly activated neurons, while cluster 5 at the second hidden layer is almost zero.