Article

AI-Powered Eye Tracking for Bias Detection in Online Course Reviews: A Udemy Case Study

by Hedda Martina Šola 1,2,*, Fayyaz Hussain Qureshi 3 and Sarwar Khawaja 3

1 Oxford Centre For Applied Research and Entrepreneurship (OxCARE), Oxford Business College, 65 George Street, Oxford OX1 2BQ, UK
2 Institute for Neuromarketing & Intellectual Property, Jurja Ves III spur no 4, 10000 Zagreb, Croatia
3 Oxford Business College, 65 George Street, Oxford OX1 2BQ, UK
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2024, 8(11), 144; https://doi.org/10.3390/bdcc8110144
Submission received: 14 September 2024 / Revised: 21 October 2024 / Accepted: 23 October 2024 / Published: 25 October 2024
Figure 1. (a) Research flow chart. (b) A detailed research roadmap derived from project development, setup, and execution.
Figure 2. (a–c) Focus score differences between negative and positive reviews based on the video data analysis, with heat maps selected on the reviews. The heat map illustrates the areas that garnered the most significant attention, while attention itself was evaluated frame by frame throughout the entire video. (d–f) Cognitive Demand score differences between negative and positive reviews based on the video data analysis, with fog maps selected on the reviews. The fog map reveals the areas not discernible to the human eye when recording cognitive demand frame by frame; consequently, the figure appears illegible.
Figure 3. (a) Total Attention-derived focus heat map of the negative (2-star) review category based on the image data analysis. (b) Total Attention-derived heat map of the positive (5-star) review category based on the image data analysis. The ‘both’ figure represents the AOIs selected for each review, which were needed to obtain more insightful findings.

Abstract

The rapid growth of e-learning has increased the use of digital reviews to influence consumer purchases. In a pioneering approach, we employed AI-powered eye tracking to evaluate the accuracy of predictions in forecasting purchasing patterns. This study examined customer perceptions of negative, positive, and neutral reviews by analysing emotional valence, review content, and perceived credibility. We measured ‘Attention’, ‘Engagement’, ‘Clarity’, ‘Cognitive Demand’, ‘Time Spent’, ‘Percentage Seen’, and ‘Focus’, focusing on differences across review categories to understand their effects on customers and the correlation between these metrics and navigation to other screen areas, indicating purchasing intent. Our goal was to assess the predictive power of online reviews on future buying behaviour. We selected Udemy courses, a platform with over 70 million learners. Predict (version 1.0), developed with Stanford University, was used; its algorithm was trained on a consumer neuroscience database (n = 180,000) of Tobii eye-tracking recordings (Tobii X2-30, Tobii Pro AB, Danderyd, Sweden). We utilised R programming, ANOVA, and t-tests for analysis. The study concludes that AI neuromarketing techniques in digital feedback analysis offer valuable insights for educators to tailor strategies based on review susceptibility, thereby sparking interest in the innovative possibilities of AI technology in neuromarketing.

1. Introduction

Consumer decisions regarding various goods and services, including online courses and massive open online courses (MOOCs), are heavily influenced by internet-based reviews. The conventional educational landscape was substantially altered by online courses and MOOCs, particularly in light of the COVID-19 crisis [1,2,3]. The pandemic hastened the adoption of algorithm-driven learning management systems, which were touted as remedies for systemic educational challenges. Nevertheless, these systems also exacerbated disparities and provoked public opposition [4]. These innovations ushered in novel educational approaches that contest traditional in-person teaching methods, presenting both possibilities and hurdles for teachers and academic institutions. Educational establishments embraced hybrid learning strategies utilising platforms such as Coursera MOOCs, which demonstrated favourable effects on learner contentment and involvement [4]. The swift transition to digital learning hastened the integration of technological solutions, which became crucial in contemporary educational frameworks. Educators encounter difficulties upholding the calibre of virtual learning, necessitating innovative quality assurance protocols encompassing specific benchmarks for digital courses [5,6].

Research indicates that favourable reviews and high ratings positively influence purchase intentions through the perceived efficacy of social media platforms and online trust [3]. The causal link between reviews and course uptake is often mediated by perceived course quality, instructor standing, and alignment with learner objectives. The effect of reviews fluctuates based on consumer membership tiers, with lower-level members being more susceptible to review influence [7]. Review consistency and overall ratings are vital for MOOCs’ adoption choices [8]. The stimulus–organism–response (S-O-R) framework has been employed to comprehend how review ratings, content, and security policies affect consumers’ emotions and subsequent purchase motivations [9]. Moreover, influencers play a moderating role in purchase decisions, underscoring the significance of online reviews in decision-making [10]. These findings underscore the pivotal role of online reviews in moulding consumer behaviour across various online purchasing contexts.

The impact of online reviews extends beyond individual buying decisions, potentially shaping market trends and product development strategies. Firms may need to modify their marketing approaches to cater to consumer segments based on their susceptibility to review influence. Furthermore, the interplay between reviews, influencer marketing, and consumer psychology presents a complex landscape for businesses to navigate in the digital marketplace. Reviews also sway consumer emotions and buying behaviour, with factors such as reviewer quality and exposure playing a crucial role. Consumers tend to trust reviews from reputable sources, which can significantly influence their purchasing decisions [11]. Online reviews revolutionised digital marketing by enabling businesses to engage actively with consumers.
They provide a platform for real-time feedback, helping companies maintain a positive online reputation and forge strong consumer relationships [12]. Online feedback tools can further enhance this engagement, allowing businesses to gather valuable insights and adapt their strategies accordingly [12]. While online reviews offer numerous benefits, they also present challenges. Negative reviews can damage brand reputation and consumer trust, necessitating proactive management strategies. Companies often adjust their product strategies based on online reviews. A dual-element dynamic strategy, which involves adjusting quality and price, is financially beneficial. Initially, firms may enhance quality and reduce prices to attract customers, then decrease quality and raise prices as reviews accumulate and influence consumer perceptions [13]. Online reviews are crucial in shaping consumer purchasing decisions in the digital age, significantly affecting purchase intentions by providing valuable insights and opinions from other consumers. The interplay between consumers and reviewers can influence purchasing decisions, as illustrated by the opinion dynamics model, which emphasises the importance of individual characteristics and expertise in moulding consumer perspectives [14]. For businesses to refine their approaches and enhance their products, it is essential to forecast consumer buying behaviour based on online reviews. Utilising AI-driven sentiment analysis and natural language processing methods can extract meaningful insights from extensive review datasets, facilitating more precise predictions of consumer conduct and preferences [15,16,17,18]. Examining review data longitudinally may uncover trends in consumer attitudes, assisting businesses in anticipating future market requirements.

1.1. Literature Review

Examining AI-based assessment of online educational content uncovers several areas requiring attention to boost AI’s efficacy and relevance in educational contexts. These areas encompass discrepancies in data labelling, restricted adoption of cutting-edge AI technologies, and a dearth of all-encompassing frameworks that connect AI applications with educational theories. Tackling these issues is vital for creating robust AI models capable of precisely evaluating and improving online educational content.

1.2. Discrepancies in Data Labelling

Current datasets for gauging student engagement (SE) in virtual learning settings are marred by inconsistent definitions and annotation methods, often misaligned with educational psychology norms. This inconsistency hinders the development of universally applicable AI models for SE measurement [19,20]. A scarcity of datasets employing psychometrically validated scales leads to challenges in comparing AI model performance across various datasets [19].

1.3. Restricted Adoption of Cutting-Edge AI Technologies

There is a noticeable scarcity of studies utilising advanced AI technologies, such as deep learning algorithms, in educational settings. While conventional AI technologies such as natural language processing are more prevalent, the potential of more sophisticated techniques remains largely unexplored [21]. Research is yet to adequately investigate AI applications in physical classroom environments or the use of AI for personalised education through technologies such as generative adversarial networks [21].

1.4. Requirement for All-Encompassing Frameworks

The incorporation of AI in education frequently lacks a robust connection with educational theories, which is essential for creating practical AI-driven educational tools [21]. There is a need for interdisciplinary research to formulate guidelines and frameworks for the application of generative AI in higher education, which can bolster curriculum development and assessment practices [22].

1.5. Ethical and Methodological Considerations

Ethical issues, including data privacy and algorithmic bias, are significant concerns that need addressing to ensure equitable and just AI applications in online learning environments [23]. The transparency of literature reviews on AI in education is often inadequate, potentially limiting the reliability of findings and impeding the development of robust AI applications [24]. While current research highlights these gaps, it also indicates opportunities for future exploration. For instance, there is potential in harnessing AI for adaptive learning and real-time feedback mechanisms, which could significantly enhance teaching effectiveness and student performance in online settings [21,22,23,24]. Moreover, addressing ethical concerns and improving methodological transparency can lead to more trustworthy AI applications in education. By focusing on these areas, researchers can contribute to the development of AI tools that are not only technologically advanced, but also pedagogically sound and ethically responsible [25,26].

1.6. Marketing and Advertising Applications of AI Eye Tracking

AI eye tracking has become an influential instrument in marketing and advertising, providing valuable insights into consumer behaviour and improving the efficacy of promotional campaigns. By examining the duration and focus of consumers’ gaze on various advertisement components, marketers can fine-tune content to attract and maintain attention. Nevertheless, the technology faces certain constraints, including precision issues and environmental limitations, which may affect its implementation. We shall explore the potential uses and restrictions of AI eye tracking in marketing and advertising below [27,28].
Neuromarketing: AI eye tracking forecasts consumer purchasing behaviour by scrutinising visual attention patterns. This method is more cost-effective than conventional neuromarketing tools such as fMRI and EEG, making it more accessible to a broader range of companies and researchers [29].
Dynamic Content Analysis: Eye tracking aids in comprehending how viewers engage with dynamic content, such as video advertisements. It enables marketers to recognise which elements capture attention and how to optimise them for enhanced engagement [30].
Visual Attention Models: AI eye tracking can be incorporated into machine learning models to predict areas of visual focus in communication campaigns, such as those utilised during health crises. This assists in crafting more impactful messages that swiftly capture public attention [31].
Feedback and Personalisation: Eye tracking offers implicit feedback on user interest and comprehension, which can be utilised to customise content to individual preferences, thereby enhancing user experience and engagement [32].

1.7. Constraints of AI Eye-Tracking Technology

Technical Hurdles: Mobile eye tracking encounters challenges, such as diminished accuracy in bright sunlight and equipment slippage, which can compromise data quality and restrict its use in diverse settings [32].
Precision Issues: While combining eye tracking with EEG can enhance accuracy, standalone eye trackers may still struggle with precision, particularly in peripheral screen regions [33].
Sampling Restrictions: Eye tracking may not always be feasible, such as in high-security environments, and alternative methods such as artificial foveation can impact task performance and accuracy [34].
AI eye tracking presents considerable benefits for comprehending consumer behaviour and enhancing marketing tactics [35], yet it faces certain obstacles. Environmental elements and technical limitations can impede the technology’s efficacy, necessitating continuous research and innovation to improve its precision and relevance. Notwithstanding these constraints, combining AI eye tracking with other technologies, such as EEG, shows potential for surmounting some of these hurdles and broadening its applications in marketing and advertising. This study aims to illustrate how AI-based eye-tracking technologies can forecast consumer behaviour and shape customers’ buying decisions by harnessing the predictive power of online reviews [36]. As this pioneering research lacks precedent in measuring AI eye-tracking prediction software (version 1.0) for online reviews and their impact on purchasing choices, our investigation seeks to demonstrate how more economical methods, such as AI eye-tracking prediction software, can be employed in neuromarketing, replacing traditional techniques such as eye tracking, EEG, or other sensors typically used in this field. For this reason, we posed the primary research question: To what extent can AI-powered eye tracking predict future buying behaviour based on analysis of online course reviews?

2. Materials and Methods

This study investigated how customers perceive negative, positive, and neutral reviews in an online educational context. Such perception involves the interplay of emotional valence, review content, and the perceived credibility of the reviews, and is influenced by various factors, including the emotional tone of the review, the presence of ambivalence, and the context in which the review is presented [3,5,6,7,8,9,10]. For our research, we formulated the following primary hypothesis:
H1. 
Negative reviews elicit stronger cognitive and behavioural responses from consumers than positive or neutral reviews.
This hypothesis predicts that review valence, particularly negative sentiment, will substantially impact consumer response and information processing more than positive or neutral evaluations in the context of online course reviews.
We further wanted to understand how this perceived effect of all review categories could potentially drive customers’ purchasing decisions. Review credibility and quality are pivotal in shaping consumer purchase intentions. Credible reviews directly enhance purchase intentions by building trust, while high-quality reviews serve as a partial mediator, indirectly influencing decisions by improving the perceived reliability of the information provided [11]. Negative perceptions of review quality, such as irrelevance or incredibility, can deter consumers from reading reviews, thereby reducing purchase intentions. Conversely, positive perceptions of review usefulness increase engagement with reviews and positively impact purchase decisions [12].
The Udemy platform was selected for this research due to its status as one of the market’s largest providers of online courses. The platform reports over 70 million learners, more than 220,000 courses, and over 75,000 instructors teaching courses in nearly 74 languages, with over 970 million course enrolments [13]. Online language courses on Udemy have become a significant component of digital education, offering many language learning opportunities. Udemy hosts the largest number of online language courses compared to other platforms, with 4270 courses available. These courses encompass 21 languages, with English, Spanish, French, Italian, and Chinese being the most prevalent. The platform’s flexibility and accessibility make it a popular choice for learners worldwide, providing free and paid options, although most courses are paid [14]. Upon examination of the platform, Udemy’s “Learn the Italian Language: Complete Italian Course–Beginners” [15] was identified as having notable review ratings (4349 ratings) from 34,152 students, which was determined to be a sufficient sample size for this neuromarketing research.

Predict, a neuromarketing eye-tracking artificial intelligence consumer behaviour prediction software (version 1.0), was utilised to evaluate the effects of online reviews on purchasing decisions and to test the hypothesis. This software was selected because its algorithm, developed by Stanford University, incorporates one of the world’s largest consumer neuroscience databases, encompassing over 100 billion consumer behaviour data points (n = 180,000) from eye tracking (eye tracker: Tobii X2-30, Tobii Pro AB, Danderyd, Sweden). Predict’s model database includes eye-tracking, EEG, and implicit response data. To date, the algorithm has been constructed solely on the eye-tracking recordings. The eye-tracking recordings, obtained globally in 15 different consumer contexts (n = 180,000), trained an encoder–decoder architecture (ConvNext as a pre-trained encoder). The database expands monthly, is annually upgraded with over 50,000 participants, and is extended in various consumer contexts as the artificial intelligence learns from each research study. The provider reports a predictive accuracy rate of 97–99% for attention, the highest in the industry, attributed to the software’s collaboration with Stanford University [16].

The AI eye-tracking software was developed to achieve three crucial validation types for assessing model performance against eye-tracking data. Firstly, a pixel-by-pixel comparison was conducted, where the software creators evaluated both the correlation (using Spearman’s R, SR) and the error rate (standard error rate, SER) between the predicted and actual eye-tracking data. Secondly, an area of interest (AOI) analysis was performed, which involved aggregating data within predefined relevant AOIs and then analysing the consolidated values between the model’s prediction and the eye-tracking data, again utilising Spearman’s R and SER. Lastly, an interpretation comparison was conducted, wherein a group of individuals experienced in interpreting eye-tracking data were tasked with inferring information from heat maps.

Studying the impact of reviews on purchasing behaviour is crucial, as the tone of feedback, whether favourable or unfavourable, significantly influences consumers’ choices. The volume of reviews also matters; more reviews can increase the perceived credibility and reliability of the product information [13].
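As an illustration of the pixel-level validation described above, the following R sketch computes Spearman’s R between a predicted and an observed attention map. The matrices, the AOI bounds, and the error measure are hypothetical stand-ins, since the actual Predict validation pipeline is proprietary and its SER definition is not given in the paper.

```r
# Minimal sketch of a pixel-by-pixel model-vs-eye-tracking check, assuming
# 'predicted' and 'observed' are same-sized attention maps; all values here
# are simulated placeholders, not Predict's data.

set.seed(42)
predicted <- matrix(runif(100 * 100), nrow = 100)  # placeholder model output
observed  <- matrix(runif(100 * 100), nrow = 100)  # placeholder eye-tracking map

# Spearman's R between predicted and actual attention, pixel by pixel
sr <- cor(as.vector(predicted), as.vector(observed), method = "spearman")

# The paper does not define its "standard error rate" (SER), so mean absolute
# error stands in here purely as an illustration of an error-rate check
ser_proxy <- mean(abs(predicted - observed))

# AOI-level variant: aggregate within a predefined region before comparing
aoi_pred <- mean(predicted[10:40, 20:60])
aoi_obs  <- mean(observed[10:40, 20:60])

cat(sprintf("Pixel-wise Spearman's R: %.3f; error proxy: %.3f\n", sr, ser_proxy))
```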
The trustworthiness of online reviews is a significant factor in their influence on purchasing decisions. Consumers are more likely to be swayed by reviews they perceive as credible and unbiased. The rationality of reviewers and manipulative reviews can affect the perceived trustworthiness of the reviews [14]. Online reviews can enhance consumer trust in a brand or product, which is crucial for purchase decisions [15]. Companies actively monitor online reviews to improve their products and marketing strategies. By responding to reviews and addressing consumer concerns, brands can enhance customer relationship management and positively influence purchasing decisions [16].

2.1. Dynamic Testing: Experimental Design and Methodology

Our research encompassed both static (image) and dynamic (video) testing, because the software provides different metrics for static and dynamic assessments, and we wished to optimise the outcomes (refer to Figure 1a,b). To obtain the most reliable output of the online course and consumer perception of reviews, we acquired video footage from the Udemy website for our investigation. The video data were categorised into ‘negative’ and ‘positive’ segments, each approximately 2 min long, to ensure consistency in comparison for both videos. The negative category represents one- and two-star reviews, while the positive category represents three-to-five-star reviews in video settings. The negative review video depicted a typical user’s viewing pattern of the screen displaying only negative reviews on the course page. The positive review video followed a similar approach. The video data were measured frame by frame, for each second of footage, using the AI eye-tracking software, which predicted consumer behaviour. We divided the data into positive and negative reviews, from which we extracted data for ‘Focus’ (an index of how large a portion of reviews draws attention; higher focus scores are achieved when a single or very few narrow areas draw attention) and ‘Cognitive Demand’ (which indicates the amount of information that the viewer has to process by looking at reviews).

The influence of focus on consumer decision-making during purchases is multifaceted, involving individual differences, attentional control, and external stimuli. Focus, whether promotion- or prevention-oriented, significantly impacts how consumers process information and make purchasing decisions, as various studies show [17,18]. Cognitive demand affects consumer perception by influencing how they process product information. When cognitive resources are limited, consumers rely on heuristics or mental shortcuts, such as brand recognition or visual cues, to make decisions [37]. The anticipation of cognitive demand can also guide decision-making. Consumers may prepare for decisions that require significant mental effort, as evidenced by anticipatory physiological responses observed in high cognitive demand scenarios [38]. Our analysis involved examining statistical disparities (using t-tests) between Focus and Cognitive Demand scores across the two categories. Furthermore, we intended to gauge the correlation between Focus and Cognitive Demand and how it varied between the categories. The interplay between cognitive and affective factors shapes consumer behaviour [39]. Unplanned purchases, influenced by cognitive and affective elements, demonstrate how prefactual thinking (considering potential outcomes) can alter consumer intentions. This is particularly effective when tailored to consumers’ regulatory focus, such as anticipated regret for prevention-focused individuals [36]. The primary advantage of dynamic testing is that we can simulate the website scrolling in real time, replicating the behaviour of website visitors of this course as they evaluate whether to purchase the course or not while examining the online reviews.
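To make the dynamic-testing analysis concrete, here is a minimal R sketch of the Welch two-sample t-tests and the within-category correlation applied to frame-level scores. The data frame ‘frames’, its column names, and the simulated values (seeded with the means and SDs reported in Section 3.1) are placeholders, not the study’s data.

```r
# Hedged sketch: simulate frame-level scores for two ~2-min videos at 30 fps,
# then run the tests used in Section 3.1 on these placeholder data.

set.seed(1)
frames <- data.frame(
  category         = rep(c("negative", "positive"), each = 3600),
  focus            = c(rnorm(3600, 44.83, 11.98), rnorm(3600, 46.46, 14.02)),
  cognitive_demand = c(rnorm(3600, 51.76, 3.92),  rnorm(3600, 51.15, 5.96))
)

# Welch two-sample t-tests (unequal variances assumed by default in R)
t.test(focus ~ category, data = frames)             # Focus: negative vs. positive
t.test(cognitive_demand ~ category, data = frames)  # Cognitive Demand

# Pearson correlation between Focus and Cognitive Demand within one category
with(subset(frames, category == "positive"),
     cor.test(focus, cognitive_demand, method = "pearson"))
```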

2.2. Static Testing: Experimental Design and Methodology

Following this preliminary evaluation, we aimed to investigate additional metrics by employing static testing. To this end, we created screenshots from Udemy’s course website, which we grouped into three distinct categories. Compared with dynamic testing, these categories were further subdivided: the negative category encompassed views from 1-star and 2-star reviews, the neutral category comprised 3-star reviews, and the positive category included views from 4-star and 5-star reviews. Unlike the video data, the image data incorporate the ‘neutral’ category to examine the trend more comprehensively. To assess the emotional valence and perceived credibility of reviews, we utilised various metrics, including ‘Attention’ (Total: overall attention spent by users; Start: attention during the first 2 s; and End: attention during the last 2 s), ‘Engagement’ (user immersion whilst viewing the stimulus), ‘Clarity’ (predicted content clarity for users), ‘Cognitive Demand’, ‘Time Spent’ (predicted viewing time during a 5-s exposure), and ‘Percentage Seen’ (percentage of users expected to view the area at least once during exposure). This analysis was not limited to the ‘overall view’, which included all areas of interest (AOIs), but also encompassed the ‘reviews view’ (focusing solely on reviews as AOIs) and the ‘other view’ (examining screens with elements such as ‘add to cart’ associated with the respective reviews as AOIs). Given that the video data in our AI eye-tracking software provide headline scores for ‘Focus’ and ‘Cognitive Demand’ without underlying metrics, whilst offering a more representative view of the customer experience through frame-by-frame measurement, we opted to utilise these data for the initial analysis of categories. We analysed the variations (employing t-tests and ANOVAs) across all categories for each metric. In eye-tracking research, t-tests and ANOVA are commonly used statistical methods for analysing data due to their simplicity and effectiveness in comparing means across groups. These methods are particularly advantageous when dealing with straightforward experimental designs and when the assumptions of normality and homogeneity of variance are met. They are beneficial for initial exploratory analyses to identify significant differences between groups or conditions [40].
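A minimal R sketch of this per-metric comparison follows, using Engagement as the example. The data frame ‘img’, its column names, and the simulated scores are illustrative assumptions; the paper does not state a multiple-comparison correction for its pair-wise results, so unadjusted p-values are shown here.

```r
# Hedged sketch: one-way ANOVA across the five review categories, followed by
# pair-wise Welch t-tests, on simulated placeholder scores.

set.seed(7)
img <- data.frame(
  category   = factor(rep(c("1-star", "2-star", "3-star", "4-star", "5-star"),
                          each = 20)),
  engagement = rnorm(100, mean = 50, sd = 10)
)

# Overall difference across the five review categories
summary(aov(engagement ~ category, data = img))

# Pair-wise Welch t-tests between categories (no pooled SD, no adjustment)
pairwise.t.test(img$engagement, img$category,
                pool.sd = FALSE, p.adjust.method = "none")
```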

2.3. Dynamics and Static Research Methodology: Final Framework

Additionally, we examined the correlation between scores from these respective categories and navigation to other screen areas, which may indicate purchasing inclination. We aimed to assess the potential of these predictions in forecasting future customer purchasing behaviours and to test our hypothesis. R was utilised for the analysis in this study. Subsequently, we planned to integrate the video analysis with that of the image data, which offered a static representation of user views and provided greater insight into screen impact through a broader range of metrics in our software, to construct a more comprehensive response to our research inquiry. As per the accessibility–diagnosticity theory (ADT), review visibility significantly impacts consumer behaviour. Reviews that are more visible and diagnostically rich are perceived as helpful, influencing consumer decisions more strongly. This suggests that the ‘reviews view’ can enhance consumer engagement by making reviews more accessible and informative [40]. These elements demand user attention, potentially indicating future purchasing behaviour. Additionally, we examined the initial and final attention within each category to determine if any disparities existed. Our final objective was to explore the relationships between metrics within each review category. We adopted a similar approach to analysing these connections, not only for the overall perspective, but also for the separate review and alternative views.

Initially, we examined the overall metric scores across all review categories for both video and image data, comparing them to Predict’s indexed ranges and using them as benchmarks against one another. The software indicates that a score between 0 and 24 represents a low range for the metrics, while 75–100 signifies a high range. A low Focus score indicates that numerous elements compete for attention. In contrast, a high Focus score suggests that one or a few specific areas attract the most attention and are more likely to be noticed. A low Cognitive Demand score implies that the information is straightforward to process, which may consequently reduce viewing time. Conversely, a high Cognitive Demand score indicates that the information is highly complex, potentially overwhelming viewers. Minimal Engagement ratings suggest an absence of emotionally compelling material, whilst elevated Engagement scores indicate strong purchase intent. Reduced Clarity may be associated with low Cognitive Demand scores when the substance or the progression is challenging to comprehend, whereas high Clarity implies a more easily digestible content framework. Regions where viewers dedicate less time are likely to correlate with fewer attention-capturing elements compared to areas that command more viewer time. Percentage Seen provides a more comprehensive view of content consumption across various users and their average attentiveness to the stimuli; lower Percentage Seen scores typically denote reduced importance from a user’s perspective, whilst higher scores suggest increased significance, on average. Attention scores were primarily utilised to generate heat maps and evaluate trends based on these visual representations.

Understanding these dynamics can help businesses not only predict but also shape future consumer behaviour in a more informed and strategic manner [40,41,42,43,44]. Machine learning algorithms have become pivotal in enhancing the accuracy of predictions related to customer purchasing behaviours.
This involves analysing transaction data to identify patterns and associations between products, which can inform inventory management and personalised marketing strategies [45]. By leveraging vast amounts of data and sophisticated algorithms, businesses can gain deeper insights into consumer patterns, enabling more precise forecasting and strategic decision-making. Machine learning can improve the accuracy of these predictions in several ways, as various research studies show [20,37,46]. Deep learning models, including recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, show promise in capturing temporal dependencies in customer behaviour data. These models are adept at processing sequential data, making them suitable for predicting purchase intentions based on time series data such as user interactions over time [20]. We sought to establish a foundation through this research which could serve as a valuable tool for educators to understand the impact of course reviews on prospective students. Future research could focus on integrating machine learning with other technologies, such as natural language processing and computer vision, to further enhance predictive capabilities and provide a more holistic understanding of customer behaviour [17,18].
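The correlation step of the framework described in this section can be sketched in R as follows. The ‘metrics’ data frame, its column names, and the value ranges are illustrative assumptions rather than the study’s actual AOI-level data.

```r
# Hedged sketch: pairwise Pearson correlations between Predict-style metrics
# for one review category and view, on simulated placeholder values.

set.seed(3)
n_aoi <- 24  # hypothetical number of areas of interest
metrics <- data.frame(
  total_attention  = runif(n_aoi, 0, 100),
  engagement       = runif(n_aoi, 0, 100),
  clarity          = runif(n_aoi, 0, 100),
  cognitive_demand = runif(n_aoi, 0, 100),
  time_spent       = runif(n_aoi, 0, 5),   # seconds within a 5-s exposure
  percentage_seen  = runif(n_aoi, 0, 100)
)

# Correlation matrix across all metrics for this view
round(cor(metrics, method = "pearson"), 2)

# Significance test for a single pair of interest
cor.test(metrics$clarity, metrics$cognitive_demand, method = "pearson")
```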

3. Results

3.1. Preliminary Results from the Video Analysis

The negative review video exhibited moderate levels for both Focus (M = 44.83, SD = 11.98, range = 19.05–77.39) and Cognitive Demand (M = 51.76, SD = 3.92, range = 30.01–60.22) on average. Similarly, the positive review video also showed moderate scores for Focus (M = 46.46, SD = 14.02, range = 17.64–73.82) and Cognitive Demand (M = 51.15, SD = 5.96, range = 37.92–62.94). Nevertheless, Welch two-sample t-tests uncovered significant disparities in both Focus (t(5702.1) = −4.86, p < 0.001) and Cognitive Demand (t(4907.4) = 4.62, p < 0.001), with the positive review video scoring notably higher on Focus and lower on Cognitive Demand compared to the negative review video (refer to Figure 2a,b). This finding laid the groundwork for further investigation, suggesting a more favourable impact of positive reviews on readers. Furthermore, Pearson’s correlations revealed a weak association between Focus and Cognitive Demand for negative reviews (cor = 0.24, p < 0.001) but a moderate one for positive reviews (cor = 0.6, p < 0.001). Considering the moderate average scores and Cognitive Demand range, a stronger relationship between these variables might indicate a more beneficial impact, as increased Focus could encourage greater information processing without becoming overly challenging to comprehend. In summary, the video data analysis implied that positive reviews had a more significant influence on users than negative reviews. The impact of reviews is moderated by factors such as review valence, volume, and the sequence in which they are presented. Popular reviews can enhance the effect of review valence on sales, acting as a complementary factor [47]. Our findings might suggest the greater power of positive reviews in positively driving purchasing behaviours compared to the power of negative reviews in negatively driving users against purchases. Despite the negativity bias, negative reviews can sometimes increase sales by raising product awareness, particularly for lesser-known products. This effect is more pronounced when there is a gap between the publicity and the purchase occasion [38]. That is why it is essential to determine the correlation between these two review categories. The relative impact of these reviews can vary based on several factors, including review volume, product awareness, and consumer biases.

3.2. Statistical Differences from the Image Analysis

For this analysis, we decided to examine the metrics for three levels of views: overall view, review view, and other views to evaluate statistically significant differences (using t-tests and ANOVA). While the first two levels revealed a few essential patterns, the other view showed no statistically significant difference across the metrics categories. Below, we report our results and analyses of the statistical discrepancies metric-wise.

3.2.1. Total Attention

We found no significant difference in Total Attention for the overall view across the review categories. Moving to the reviews view, we found a substantial difference in Total Attention between negative (2-star) and positive (5-star) reviews (t(20.102) = 2.3, p = 0.03), with the average Total Attention for negative 2-star reviews being higher than that for positive 5-star reviews (refer to Figure 3a,b). While we considered this an important insight, the other view revealed no statistically significant difference across categories. Hence, the above could have resulted from how the data were dissected.

3.2.2. Engagement

The analysis revealed a notable overall disparity across all Engagement categories (F(4) = 4.07, p = 0.004) for the overall view. Further examination through pair-wise comparison highlighted substantial differences between several review categories: negative 1-star and positive 4-star (t(35.712) = 2.48, p = 0.02), negative 2-star and neutral 3-star (t(39.131) = 2.55, p = 0.02), negative 2-star and positive 4-star (t(24.387) = 3.6, p = 0.001), and negative 2-star and positive 5-star (t(39.561) = 2.8, p = 0.008). For all these differences, the negative reviews emerged as more engaging. To explore this more closely, we looked at the review view to validate these differences. There was again a significant overall difference across all the categories for Engagement (F(4) = 11.5, p < 0.001) for the review view. The pair-wise comparison validated the prior view’s comparison. It showed significant differences between negative 1-star and negative 2-star reviews (t(21.704) = −2.13, p = 0.045), negative 1-star and neutral reviews (t(24.89) = 3.04, p = 0.006), negative 1-star and positive 5-star reviews (t(24.496) = 3.52, p = 0.002), negative 2-star and neutral reviews (t(15.822) = 4.33, p < 0.001), negative 2-star and positive 4-star categories (t(18.261) = 3.52, p = 0.002), and negative 2-star and positive 5-star categories (t(15.714) = 4.64, p < 0.001) (refer to Table 1, where the Engagement heat maps for all review categories are presented). Across these comparisons, the negative reviews (predominantly 2-star reviews) demonstrated higher engagement levels, potentially indicating that users find negative reviews more compelling (refer to Figure 4). Consequently, this may influence users against the decision to purchase and enrol in courses, thereby confirming our hypothesis H1.

3.2.3. Clarity

There was no statistically significant difference across the review categories for the overall view. However, the pair-wise comparison suggested a substantial difference in Clarity between the negative (2-star) and positive (4-star) reviews (t(34.302) = 2.33, p = 0.03) and the negative (2-star) and positive (5-star) reviews (t(38.033) = 2.5, p = 0.017). In these comparisons, the negative (2-star) reviews proved clearer than the positive reviews (both 4-star and 5-star). Upon examining the review view, there was a significant difference across all categories (F(4) = 4.02, p = 0.005). In the pair-wise comparison, there was a substantial difference between the negative (2-star) and neutral reviews (t(23.783) = 2.69, p = 0.013), the negative (2-star) and positive (4-star) reviews (t(23.434) = 2.35, p = 0.03), and the negative (2-star) and positive (5-star) reviews (t(23.904) = 3.28, p = 0.003) (refer to Table 1 for Clarity heat maps). This largely aligned with the overall view, suggesting that negative reviews, especially those with two stars, were clearer than positive reviews. Putting this into perspective, it also explains why the engagement scores for the same reviews were higher: clearer reviews were more engaging, potentially pulling viewers back from purchasing tendencies.

3.2.4. Cognitive Demand

There was no significant difference across all the categories for the overall view. The pair-wise comparison revealed a notable difference between negative (2-star) and positive (4-star) reviews (t(25.256) = 2.652, p = 0.01). The Cognitive Demand here was higher for the negative reviews category. For the review view, there was a significant difference across all the categories (F(4) = 6.072, p < 0.001). Additionally, there was a significant difference between negative (1-star) and positive (5-star) reviews (t(28.319) = 3.1, p = 0.004), negative (2-star) and neutral reviews (t(24.009) = 2.77, p = 0.01), negative (2-star) and positive (4-star) reviews (t(19.659) = 2.83, p = 0.01), negative (2-star) and positive (5-star) reviews (t(21.082) = 4.3007, p < 0.001), and positive (4-star) and positive (5-star) reviews (t(29.64) = 2.302, p = 0.03), with the 4-star positive review category showing the higher Cognitive Demand in the latter comparison (refer to Table 1 for Cognitive Demand heat maps). For the rest of the differences, in alignment with the overall view, Cognitive Demand was observed to be higher for the negative categories, which corroborates our hypothesis H1. This phenomenon is likely attributable to the increased engagement associated with negative reviews, enabling participants to ascribe the demand for additional cognitive resources to these reviews. This suggests that processing negative reviews may require more cognitive effort from consumers as they evaluate the information and its implications for their decision-making.

3.2.5. Start and End Attentions

For any view, there was no significant difference across the categories, including the pair-wise comparisons. Additionally, our analysis of whether there was any significant difference between Start and End Attention within the respective review categories also revealed no differences for any of the review categories and any of the views, suggesting that Attention remained consistent throughout the views.

3.2.6. Time Spent and Percentage Seen

Overall, there was no significant difference in Time Spent or Percentage Seen across any review categories, including pair-wise differences. While there was no significant difference across categories for the review view either, pair-wise differences showed that users were predicted to spend significantly more time (t(21.072) = 2.5, p = 0.02) viewing negative reviews (2-star) as compared to positive reviews (5-star) and also a higher percentage of users (t(20.625) = 2.195, p = 0.04) were predicted to be likely to spend time on negative reviews (2-star) than the positive reviews (5-star), reinforcing the engagement of negative reviews (2-star). These findings confirm our H1 hypothesis.

3.3. Correlations from the Image Analysis

The image analysis revealed significant correlations between metrics across various review categories and views, indicating interconnected factors influencing consumer behaviour. The analysis of the negative (2-star) review category, based on the results presented in Table 2, reveals several significant correlations across different viewing perspectives:
Overall View: Total Attention exhibited strong positive correlations with both Time Spent (r = 0.99) and Percentage Seen (r = 0.99). Time Spent and Percentage Seen demonstrated a perfect positive correlation (r = 1.00). Clarity and Cognitive Demand showed a moderate positive correlation (r = 0.66).
Review View: Similar to the Overall View, Total Attention strongly correlated with Time Spent (r = 0.99) and Percentage Seen (r = 0.99). Time Spent and Percentage Seen maintained a perfect positive correlation (r = 1.00). Notably, Clarity and Cognitive Demand displayed a strong positive correlation (r = 0.83).
Other View: The correlations between Total Attention, Time Spent, and Percentage Seen remained consistent with the previous views (r = 0.99 for Total Attention with both Time Spent and Percentage Seen; r = 1.00 for Time Spent and Percentage Seen). Clarity and Cognitive Demand showed a moderate positive correlation (r = 0.66), identical to the Overall View.
Figure 4 visually represents these correlations for the review view, illustrating the complex relationships between different metrics. These findings suggest several key insights:
1. The consistently strong relationships between Total Attention, Time Spent, and Percentage Seen across all views indicate that, for negative reviews, increased attention is associated with longer viewing times and a higher proportion of content being observed.
2. The positive correlation between Clarity and Cognitive Demand, particularly strong in the Review View, suggests that clearer negative reviews may require more cognitive processing from readers.
3. The consistency of correlations across different views implies robust relationships between these metrics for negative reviews.
4. The pronounced correlations in the Review View suggest that negative reviews may have a particularly significant impact when users focus specifically on the review content.
These results support the hypothesis that negative reviews elicit substantial engagement from readers, potentially influencing their perceptions and decision-making processes. Further research is warranted to explore the implications of these findings on consumer behaviour and the impact of negative reviews in various contexts.
Table 2. Notable correlation values for each perspective of the two-star (negative) review group, with measurements derived from the visual data examination.
View | Metrics | Pearson’s Correlation Score | p-Value
Overall | Total Attention and Engagement | 0.46 | 0.03
Overall | Total Attention and Cognitive Demand | 0.79 | <0.001
Overall | Total Attention and Start Attention | 0.86 | <0.001
Overall | Total Attention and End Attention | 0.94 | <0.001
Overall | Total Attention and Time Spent | 0.99 | <0.001
Overall | Total Attention and Percentage Seen | 0.995 | <0.001
Overall | Engagement and Clarity | 0.43 | 0.04
Overall | Engagement and Cognitive Demand | 0.78 | <0.001
Overall | Engagement and Time Spent | 0.43 | 0.04
Overall | Engagement and Percentage Seen | 0.43 | 0.04
Overall | Clarity and Start Attention | 0.50 | 0.02
Overall | Cognitive Demand and Start Attention | 0.65 | <0.001
Overall | Cognitive Demand and End Attention | 0.73 | <0.001
Overall | Cognitive Demand and Time Spent | 0.77 | <0.001
Overall | Cognitive Demand and Percentage Seen | 0.77 | <0.001
Overall | Start Attention and End Attention | 0.93 | <0.001
Overall | Start Attention and Time Spent | 0.84 | <0.001
Overall | Start Attention and Percentage Seen | 0.87 | <0.001
Overall | End Attention and Time Spent | 0.94 | <0.001
Overall | End Attention and Percentage Seen | 0.94 | <0.001
Overall | Time Spent and Percentage Seen | 0.99 | <0.001
Review | Total Attention and Cognitive Demand | 0.55 | 0.02
Review | Total Attention and Start Attention | 0.80 | <0.001
Review | Total Attention and End Attention | 0.93 | <0.001
Review | Total Attention and Time Spent | 0.99 | <0.001
Review | Total Attention and Percentage Seen | 0.996 | <0.001
Review | Engagement and Cognitive Demand | 0.57 | 0.02
Review | Clarity and Cognitive Demand | 0.60 | 0.01
Review | Cognitive Demand and End Attention | 0.60 | 0.01
Review | Cognitive Demand and Time Spent | 0.59 | 0.01
Review | Cognitive Demand and Percentage Seen | 0.55 | 0.02
Review | Start Attention and End Attention | 0.91 | <0.001
Review | Start Attention and Time Spent | 0.77 | <0.001
Review | Start Attention and Percentage Seen | 0.81 | <0.001
Review | End Attention and Time Spent | 0.92 | <0.001
Review | End Attention and Percentage Seen | 0.92 | <0.001
Review | Time Spent and Percentage Seen | 0.98 | <0.001
Other | Total Attention and Engagement | 0.99 | <0.001
Other | Total Attention and Cognitive Demand | 0.99 | <0.001
Other | Total Attention and Start Attention | 0.98 | <0.001
Other | Total Attention and End Attention | 0.99 | <0.001
Other | Total Attention and Time Spent | 0.998 | <0.001
Other | Total Attention and Percentage Seen | 0.999 | <0.001
Other | Engagement and Cognitive Demand | 0.98 | <0.001
Other | Engagement and Start Attention | 0.99 | <0.001
Other | Engagement and Time Spent | 0.996 | <0.001
Other | Engagement and Percentage Seen | 0.99 | <0.001
Other | Cognitive Demand and Start Attention | 0.96 | 0.002
Other | Cognitive Demand and End Attention | 0.99 | <0.001
Other | Cognitive Demand and Time Spent | 0.99 | <0.001
Other | Cognitive Demand and Percentage Seen | 0.99 | <0.001
Other | Start Attention and End Attention | 0.99 | <0.001
Other | Start Attention and Time Spent | 0.99 | <0.001
Other | Start Attention and Percentage Seen | 0.98 | <0.001
Other | End Attention and Time Spent | 0.996 | <0.001
Other | End Attention and Percentage Seen | 0.99 | <0.001
Other | Time Spent and Percentage Seen | 0.998 | <0.001
Figure 4. Correlation matrix for the review view of the negative (2-star) review category from the image data analysis.
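A correlation matrix of this kind can be visualised in R; the sketch below uses the ‘corrplot’ package (an assumption, as the paper does not name its plotting tool) on a hypothetical per-AOI data frame standing in for the 2-star review-view metrics.

```r
# Hedged sketch of a Figure 4-style correlation matrix; the data frame and
# its values are simulated placeholders, not the study's data.

library(corrplot)

set.seed(11)
metrics <- as.data.frame(matrix(runif(15 * 6, 0, 100), ncol = 6))
names(metrics) <- c("total_attention", "engagement", "clarity",
                    "cognitive_demand", "time_spent", "percentage_seen")

M <- cor(metrics, method = "pearson")  # Pearson correlation matrix
p <- cor.mtest(metrics)$p              # matrix of pairwise p-values

corrplot(M, method = "color", type = "lower",
         p.mat = p, sig.level = 0.05,  # flag non-significant pairs
         addCoef.col = "black", tl.col = "black")
```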

4. Discussion

The study’s findings provide valuable insights into consumer behaviour in the context of online course reviews and their impact on purchasing decisions. The research confirms hypothesis H1, demonstrating that negative reviews, particularly those with 2-star ratings, elicit stronger cognitive and behavioural responses from consumers compared to positive or neutral reviews. This is evidenced by heightened engagement, attention allocation, cognitive demand, viewing duration, and content consumption. The results align with existing literature on the negativity bias in consumer behaviour, where negative information tends to have a more pronounced impact on decision-making processes [39]. This phenomenon is particularly relevant in educational settings, where prospective students carefully evaluate course quality before making a purchase decision. The study’s findings suggest that negative reviews may disproportionately influence consumer perceptions, and consequently, course enrolment rates. Interestingly, the research also highlights the non-linear relationship between review ratings and purchase probability, with the optimal rating range falling between 4.2 and 4.5 stars [6]. This nuanced understanding of consumer behaviour challenges the assumption that highly positive ratings always lead to higher sales, emphasising the complexity of factors influencing purchasing decisions in online education. The study’s methodological approach, combining multiple analytical techniques, proves to be a significant strength. The research provides a more comprehensive and robust analysis of consumer responses to online course reviews by integrating various metrics from neuromarketing settings, including eye-tracking data and engagement measures. This multi-dimensional approach aligns with recent literature emphasising the need for diverse measurement techniques to capture the full spectrum of student engagement [48].
Moreover, the research contributes to the field of neuromarketing by demonstrating the limitations of focusing on narrow aspects of reviews. The findings suggest that a more holistic approach, considering various economic, psychological, technological, and demographic factors, is necessary to understand consumer behaviour in online education fully [35,49,50,51,52,53]. This comprehensive perspective is crucial for educational providers aiming to optimise their offerings and marketing strategies. The study also addresses the importance of external validity in eye-tracking research, acknowledging potential limitations in generalising findings from controlled laboratory settings to real-world online behaviour [54]. By utilising dynamic (video) and static (image) representations of review exposure, the research methodology attempts to enhance the applicability of its findings to practical scenarios. However, it is important to note some limitations of the study. While insightful, the focus on negative reviews may not provide a complete picture of consumer decision-making processes. Future research could benefit from a more balanced examination of positive, neutral, and negative reviews to comprehensively understand their relative impacts on consumer behaviour. In conclusion, this study significantly contributes to neuromarketing in online education. By demonstrating the powerful influence of negative reviews, particularly those with 2-star ratings, on consumer engagement and attention, the research provides actionable insights for online course providers. The findings underscore the importance of addressing negative feedback promptly and effectively to mitigate its potential impact on course enrolment. Furthermore, the study’s methodological approach, combining multiple neuromarketing and statistical techniques and considering various factors influencing consumer behaviour, sets a foundation for future research in AI eye tracking and neuromarketing. This comprehensive approach can lead to more nuanced and practical insights for improving online educational content and marketing strategies, particularly in the context of MOOCs and other online learning platforms.

5. Limitations

While we employed the AI eye-tracking consumer behaviour prediction software for the analysis, which aided this study’s analysis in gaining insights efficiently, some limitations must be mentioned:
  • First and foremost, limited metrics for video data: this AI eye-tracking software can only generate limited metrics (‘Focus’ and ‘Cognitive Demand’) for video data sets, restricting the depth of analysis possible for this format. That is why we could only use the video data to set the basis for this study. This limitation led us to use image screenshots for a more comprehensive examination (using additional metrics such as ‘Engagement’, ‘Clarity’, etc.), potentially missing some nuances of the user experience captured in video format.
  • Inability to measure emotional valence: Predict cannot assess the impact of emotional valence on purchasing behaviours. This is a significant limitation, as research has shown that discrete emotions play a crucial role in consumer decision-making processes beyond simple positive or negative effects [55,56]. Valence, which refers to an event’s intrinsic attractiveness or aversiveness, plays a significant role in consumer decision-making processes [29,30]. However, the complexity of emotions and their discrete nature can influence purchasing behaviours in ways that are not fully captured by valence alone.
  • Lack of consideration for specific emotions: The study’s focus on general metrics may overlook the influence of discrete emotions on consumer behaviour. Research has demonstrated that specific emotions, such as gratitude, regret, and disappointment, can significantly impact consumer judgments and behaviours in ways not fully captured by valence-based measures [57,58]. This suggests that marketers should focus on specific emotions rather than general valence to better predict and influence consumer behaviour [31,32].
  • Absence of emotional priming effects: The study does not account for the potential influence of emotional priming on purchase decisions. Positive emotional primes can increase purchase intentions, while negative primes can decrease them. This suggests that emotional context and priming can be powerful tools in marketing strategies [33]. These limitations highlight the need for future research to address these gaps and expand upon the current findings. Incorporating multi-method approaches, considering discrete emotions, and conducting longitudinal studies for static as well as dynamic testing could provide a more comprehensive understanding of consumer behaviour in digital environments. While future researchers are encouraged to expand this scope further, the metrics in this research helped build solid foundations for the research question, given that the AI eye-tracking tool is trained on a large data set of 180,000 participants, with prerecorded eye-tracking and EEG studies at Stanford University.

Author Contributions

Conceptualization, H.M.Š.; methodology, H.M.Š.; software, H.M.Š.; writing—original draft, H.M.Š. and F.H.Q.; formal analysis, H.M.Š.; supervision, H.M.Š.; visualisation, H.M.Š.; resources and funding acquisition, F.H.Q. and S.K.; writing—review and editing, H.M.Š., F.H.Q. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Institute for Neuromarketing & Intellectual Property, Zagreb, Croatia (whose research activities included designing and conducting the research utilising neuromarketing software and analysing the data) and by Oxford Business College (which paid the article processing charges for this publication).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting this study’s findings are available on Figshare at DOI 10.3390/educsci14090933 (accessed on 24 October 2024). These data were published under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence.

Acknowledgments

We thank Shubhangi Butta from the Institute for Neuromarketing & Intellectual Property for her valuable assistance with the R coding and analysis for this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AOI: Areas of interest
ADT: Accessibility–diagnosticity theory
AI: Artificial intelligence
EEG: Electroencephalography
PREDICT: AI eye-tracking consumer behaviour prediction software

References

  1. Silva-Torres, J.-J.; Martínez-Martínez, L.; Cuesta-Cambra, U. Diseño de un modelo de atención visual para campañas de comunicación. El caso de la COVID-19. Prof. Inf. 2020, 29, e290627. [Google Scholar] [CrossRef]
  2. Stracke, C.M.; Sharma, R.C.; Bozkurt, A.; Burgos, D.; Cassafieres, C.S.; dos Santos, A.I.; Mason, J.; Ossiannilsson, E.; Santos-Hermosa, G.; Shon, J.G.; et al. Impact of COVID-19 on Formal Education: An International Review of Practices and Potentials of Open Education at a Distance. Int. Rev. Res. Open Distrib. Learn. 2022, 23, 1–18. [Google Scholar] [CrossRef]
  3. Lee, P.-C.; Liang, L.-L.; Huang, M.-H.; Huang, C.-Y. A comparative study of positive and negative electronic word-of-mouth on the SERVQUAL scale during the COVID-19 epidemic—Taking a regional teaching hospital in Taiwan as an example. BMC Health Serv. Res. 2022, 22, 1568. [Google Scholar] [CrossRef] [PubMed]
  4. Ho, N.T.T.; Pham, H.-H.; Sivapalan, S.; Dinh, V.-H. The adoption of blended learning using Coursera MOOCs: A case study in a Vietnamese higher education institution. Australas. J. Educ. Technol. 2022, 38, 121–138. [Google Scholar] [CrossRef]
  5. Dong, W.; Liu, Y.; Zhu, Z.; Cao, X. The Impact of Ambivalent Attitudes on the Helpfulness of Web-Based Reviews: Secondary Analysis of Data From a Large Physician Review Website. J. Med. Internet Res. 2023, 25, e38306. [Google Scholar] [CrossRef]
  6. Merle, A.; St-Onge, A.; Sénécal, S. Does it pay to be honest? The effect of retailer-provided negative feedback on consumers’ product choice and shopping experience. J. Bus. Res. 2022, 147, 532–543. [Google Scholar] [CrossRef]
  7. Ai, J.; Gursoy, D.; Liu, Y.; Lv, X. Effects of offering incentives for reviews on trust: Role of review quality and incentive source. Int. J. Hosp. Manag. 2022, 100, 103101. [Google Scholar] [CrossRef]
  8. Zhu, Q.; Lo, L.Y.-H.; Xia, M.; Chen, Z.; Ma, X. Bias-Aware Design for Informed Decisions: Raising Awareness of Self-Selection Bias in User Ratings and Reviews. Proc. ACM Hum. Comput. Interact. 2022, 6, 1–31. [Google Scholar] [CrossRef]
  9. Bilal, M.; Almazroi, A.A. Effectiveness of Fine-tuned BERT Model in Classification of Helpful and Unhelpful Online Customer Reviews. Electron. Commer. Res. 2022, 23, 2737–2757. [Google Scholar] [CrossRef]
  10. Kastrati, Z.; Imran, A.S.; Kurti, A. Weakly Supervised Framework for Aspect-Based Sentiment Analysis on Students’ Reviews of MOOCs. IEEE Access 2020, 8, 106799–106810. [Google Scholar] [CrossRef]
  11. Campos, J.D.S.; Campos, J.R. Evaluating The Impact of Online Product Review Credibility and Online Product Review Quality on Purchase Intention of Online Consumers. Appl. Quant. Anal. 2024, 4, 12–28. [Google Scholar] [CrossRef]
  12. Heesook, H.; Hye-Shin, K.; Sharron, L. The Effects of Perceived Quality and Usefulness of Consumer Reviews on Review Reading and Purchase Intention. J. Consum. Satisf. Dissatisf. Complain. Behav. 2019, 31, 1–19. [Google Scholar]
  13. Mahdi, A. Impact of Online Reviews on Consumer Purchase Decisions. Int. J. Financ. Adm. Econ. Sci. 2023, 2, 19–31. [Google Scholar] [CrossRef]
  14. Dipankar, D. Measurement of Trustworthiness of the Online Reviews. arXiv 2023, arXiv:2210.00815. [Google Scholar] [CrossRef]
  15. Putri, Y.A.; Lukitaningsih, A.; Fadhilah, M. Analisis online consumer reviews dan green product terhadap purchase decision melalui trust sebagai variabel intervening. J. Pendidik. Ekon. (JURKAMI) 2023, 8, 334–346. [Google Scholar] [CrossRef]
  16. Sharma, S.; Kumar, S. Insights into the Impact of Online Product Reviews on Consumer Purchasing Decisions: A Survey-based Analysis of Brands’ Response Strategies. Scholedge Int. J. Manag. Dev. 2023, 10, 1. [Google Scholar] [CrossRef]
  17. Mall, G.; Pandey, A.C.; Tiwari, A.S.; Chauhan, A.R.; Agarwal, D.A.; Asrani, K.A. E-Commerce customer behavior using machine learning. Int. J. Innov. Res. Comput. Sci. Technol. (IJIRCST) 2024, 12, 324–330. [Google Scholar] [CrossRef]
  18. Kumaran, T.E.; Lokesh, B.; Arunkumar, P.; Thirumeni, M. Forecasting Customer Attrition using Machine Learning. In Proceedings of the 2024 10th International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 12–14 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 801–806. [Google Scholar] [CrossRef]
  19. Liu, Z. Analysis of Key Economic Factors in Consumer Behavior and Purchase Decisions in Online Markets. Adv. Econ. Manag. Political Sci. 2024, 77, 26–32. [Google Scholar] [CrossRef]
  20. Liu, D.; Huang, H.; Zhang, H.; Luo, X.; Fan, Z. Enhancing customer behaviour prediction in e-commerce: A comparative analysis of machine learning and deep learning models. Appl. Comput. Eng. 2024, 55, 190–204. [Google Scholar] [CrossRef]
  21. Nuradina, K. Psychological factors affects online buying behaviour. J. Bus. Manag. Ina. 2022, 1, 112–123. [Google Scholar] [CrossRef]
  22. Pokhrel, L. Factor That Influence Online Consumer Buying Behavior with Reference to Nepalgunj city. Acad. Res. J. 2023, 2, 60–69. [Google Scholar] [CrossRef]
  23. Su, Y.; Zhao, L. Research on online education consumer choice behavior path based on informatization. China Commun. 2021, 18, 233–252. [Google Scholar] [CrossRef]
  24. Noor, N.M.; Thanakodi, S.; Fadzlah, A.F.A.; Wahab, N.A.; Talib, M.L.; Manimaran, K. Factors influencing online purchasing behaviour: A case study on Malaysian university students. AIP Conf. Proc. 2022, 2617, 060004. [Google Scholar] [CrossRef]
  25. Ayalew, M.; Zewdie, S. What Factors Determine the Online Consumer Behavior in This Digitalized World? A Systematic Literature. Hum. Behav. Emerg. Technol. 2022, 2022, 1–18. [Google Scholar] [CrossRef]
  26. Freya, Z.A.; Heike, K.S.; Christina, P.; Teresa, K.N.; Rinaldo, K. Measuring selective exposure to online information: Combining eye-tracking and content analysis of users’ actual search behaviour. In ZORA (Zurich Open Repository and Archive), 14th ed.; Halem: Köln, Germany, 2019; Available online: https://www.zora.uzh.ch/id/eprint/176070/ (accessed on 18 October 2024).
  27. Silva, B.B.; Orrego-Carmona, D.; Szarkowska, A. Using linear mixed models to analyse data from eye-tracking research on subtitling. Transl. Spaces 2022, 11, 60–88. [Google Scholar] [CrossRef]
  28. Sharova, T.; Bodyk, O.; Kravchenko, V.; Zemlianska, A.; Nisanoglu, N. Quantitative Analysis of MOOC for Language Training. Int. J. Inf. Educ. Technol. 2022, 12, 421–429. [Google Scholar] [CrossRef]
  29. Floh, A.; Koller, M.; Zauner, A. Taking a deeper look at online reviews: The asymmetric effect of valence intensity on shopping behaviour. J. Mark. Manag. 2013, 29, 646–670. [Google Scholar] [CrossRef]
  30. Yang, J.; Sarathy, R.; Walsh, S.M. Do review valence and review volume impact consumers’ purchase decisions as assumed? Nankai Bus. Rev. Int. 2016, 7, 231–257. [Google Scholar] [CrossRef]
  31. Kranzbühler, A.-M.; Zerres, A.; Kleijnen, M.H.P.; Verlegh, P.W.J. Beyond valence: A meta-analysis of discrete emotions in firm-customer encounters. J. Acad. Mark. Sci. 2020, 48, 478–498. [Google Scholar] [CrossRef]
  32. Zeelenberg, M.; Pieters, R. Beyond valence in customer dissatisfaction. J. Bus. Res. 2004, 57, 445–455. [Google Scholar] [CrossRef]
  33. Bello, E. Unravelling the Consumer Brain: The Role of Emotion in Purchase Behavior. Bachelor’s Thesis, William & Mary, Williamsburg, VA, USA, 2014. Available online: https://scholarworks.wm.edu/honorstheses/48 (accessed on 1 September 2024).
  34. Matzen, L.E.; Stites, M.C.; Gastelum, Z.N. Studying visual search without an eye tracker: An assessment of artificial foveation. Cogn. Res. Princ. Implic. 2021, 6, 45. [Google Scholar] [CrossRef] [PubMed]
  35. Šola, H.M.; Qureshi, F.H.; Khawaja, S. Predicting Behaviour Patterns in Online and PDF Magazines with AI Eye-Tracking. Behav. Sci. 2024, 14, 677. [Google Scholar] [CrossRef] [PubMed]
  36. Chen, T.; Samaranayake, P.; Cen, X.; Qi, M.; Lan, Y.-C. The Impact of Online Reviews on Consumers’ Purchasing Decisions: Evidence From an Eye-Tracking Study. Front. Psychol. 2022, 13, 865702. [Google Scholar] [CrossRef] [PubMed]
  37. Sun, R. Applications of Machine Learning Algorithms in Predicting User’s Purchasing Behavior. Sci. Technol. Eng. Chem. Environ. Prot. 2024, 1, 2–6. [Google Scholar] [CrossRef]
  38. Berger, J.; Sorensen, A.T.; Rasmussen, S.J. Positive Effects of Negative Publicity: When Negative Reviews Increase Sales. Mark. Sci. 2010, 29, 815–827. [Google Scholar] [CrossRef]
  39. Ramachandran, R.; Sudhir, S.; Unnithan, A.B. Exploring the relationship between emotionality and product star ratings in online reviews. IIMB Manag. Rev. 2021, 33, 299–308. [Google Scholar] [CrossRef]
  40. Qu, L.; Chau, P.Y.K. Nudge with interface designs of online product review systems—Effects of online product review system designs on purchase behaviour. Inf. Technol. People 2023, 36, 1555–1579. [Google Scholar] [CrossRef]
  41. Hernandez-Bocanegra, D.C.; Ziegler, J. Effects of Interactivity and Presentation on Review-Based Explanations for Recommendations. In Proceedings of the Human-Computer Interaction—INTERACT 2021, Bari, Italy, 30 August–3 September 2021; pp. 597–618. [Google Scholar] [CrossRef]
  42. Liu, R.; Ford, J.B.; Raajpoot, N. Theoretical investigation of the antecedent role of review valence in building electronic customer relationships. Int. J. Electron. Cust. Relatsh. Manag. 2022, 13, 187. [Google Scholar] [CrossRef]
  43. Du, X.; Zhao, Z.; Cui, X. The Effect of Review Valence, New Product Types and Regulatory Focus on New Product Online Review Usefulness. Acta Psychol. Sin. 2015, 47, 555. [Google Scholar] [CrossRef]
  44. Li, Y.; Geng, L.; Chang, Y.; Ning, P. Research online and purchase offline: The disruptive impact of consumers’ online information on offline sales interaction. Psychol. Mark. 2023, 40, 2642–2652. [Google Scholar] [CrossRef]
  45. Meftah, M.; Ounacer, S.; Azzouazi, M. Enhancing Customer Engagement in Loyalty Programs Through AI-Powered Market Basket Prediction Using Machine Learning Algorithms. In Engineering Applications of Artificial Intelligence; Springer: Cham, Switzerland, 2024; pp. 319–338. [Google Scholar] [CrossRef]
  46. Munde, A.; Kaur, J. Predictive Modelling of Customer Sustainable Jewelry Purchases Using Machine Learning Algorithms. Procedia Comput. Sci. 2024, 235, 683–700. [Google Scholar] [CrossRef]
  47. Su, X.; Niu, M. Too obvious to ignore: Influence of popular reviews on consumer online purchasing decisions. Hum. Syst. Manag. 2021, 40, 211–222. [Google Scholar] [CrossRef]
  48. Kassab, S.E.; Al-Eraky, M.; El-Sayed, W.; Hamdy, H.; Schmidt, H. Measurement of student engagement in health professions education: A review of literature. BMC Med. Educ. 2023, 23, 354. [Google Scholar] [CrossRef]
  49. Tonbuloglu, B. An Evaluation of the Use of Artificial Intelligence Applications in Online Education. J. Educ. Technol. Online Learn. 2023, 6, 866–884. [Google Scholar] [CrossRef]
  50. Shafique, R.; Aljedaani, W.; Rustam, F.; Lee, E.; Mehmood, A.; Choi, G.S. Role of Artificial Intelligence in Online Education: A Systematic Mapping Study. IEEE Access 2023, 11, 52570–52584. [Google Scholar] [CrossRef]
  51. Dogan, M.E.; Dogan, T.G.; Bozkurt, A. The Use of Artificial Intelligence (AI) in Online Learning and Distance Education Processes: A Systematic Review of Empirical Studies. Appl. Sci. 2023, 13, 3056. [Google Scholar] [CrossRef]
  52. Durso, S.D.O.; Arruda, E.P. Artificial intelligence in distance education: A systematic literature review of Brazilian studies. Probl. Educ. 21st Century 2022, 80, 679–692. [Google Scholar] [CrossRef]
  53. Šola, H.M.; Qureshi, F.H.; Khawaja, S. AI Eye-Tracking Technology: A New Era in Managing Cognitive Loads for Online Learners. Educ. Sci. 2024, 14, 933. [Google Scholar] [CrossRef]
  54. Mansor, A.A.; Isa, M.S. Development of Neuromarketing Model in Branding Service. In Proceedings of the 8th International Conference on Education and Information Management (ICEIM-2015), Penang, Malaysia, 16–17 May 2015; pp. 1–10. Available online: https://www.researchgate.net/publication/306396646 (accessed on 3 September 2024).
  55. Armengol-Urpi, A.; Salazar-Gómez, A.F.; Sarma, S.E. Brainwave-Augmented Eye Tracker: High-Frequency SSVEPs Improves Camera-Based Eye Tracking Accuracy. In Proceedings of the 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, 22–25 March 2022; ACM: New York, NY, USA, 2022; pp. 258–276. [Google Scholar] [CrossRef]
  56. Lescroart, M.; Binaee, K.; Shankar, B.; Sinnott, C.; Hart, J.A.; Biswas, A.; Nudnou, I.; Balas, B.; Greene, M.R.; MacNeilage, P. Methodological limits on sampling visual experience with mobile eye tracking. J. Vis. 2022, 22, 3201. [Google Scholar] [CrossRef]
  57. Alateyyat, S.; Soltan, M. Utilizing Artificial Intelligence in Higher Education: A Systematic Review. In Proceedings of the 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS), Manama, Bahrain, 28–29 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 371–374. [Google Scholar] [CrossRef]
  58. Tlili, A.; Huang, R.; Mustafa, M.Y.; Zhao, J.; Bozkurt, A.; Xu, L.; Wang, H.; Salha, S.; Altinay, F.; Affouneh, S.; et al. Speaking of transparency: Are all Artificial Intelligence (AI) literature reviews in education transparent? J. Appl. Learn. Teach. 2023, 6, 45–49. [Google Scholar] [CrossRef]
Figure 1. (a): Research flow chart. (b): A detailed research roadmap is derived from project development, setup, and execution.
Figure 2. (ac): Focus score differences between negative and positive reviews based on the video data analysis with heatmaps selected on reviews. The heat map illustrates the areas that garnered the most significant attention, while the attention itself was evaluated on a frame-by-frame basis throughout the entire video. (df): Cognitive Demand score differences between negative and positive reviews are based on the video data analysis with fog map selected on reviews. The fog map unambiguously reveals the areas not discernible to the human eye when recording the cognitive demand frame by frame. Consequently, the figure appears illegible.
Figure 3. (a): Total Attention-derived focus heat map of the negative (2-star) review category based on the image data analysis. (b): Total Attention-derived heat map of the positive (5-star) review category based on the image data analysis. The ‘both’ figure shows the AOIs selected for each review, which were needed to obtain more insightful findings.
Table 1. Attention heat map of Engagement, Clarity, and Cognitive Demand per review category based on the image data analysis.
Review Category             Metrics             Results
Negative (1-star) Review    Engagement          28
(2-star) Review             Engagement          29
(3-star) Review             Engagement          28
(4-star) Review             Engagement          26
Positive (5-star) Review    Engagement          30
(2-star) Review             Clarity             45
(3-star) Review             Clarity             57
(4-star) Review             Clarity             47
Positive (5-star) Review    Clarity             63
Negative (1-star) Review    Cognitive Demand    24
(2-star) Review             Cognitive Demand    25
(3-star) Review             Cognitive Demand    24
(4-star) Review             Cognitive Demand    24
Positive (5-star) Review    Cognitive Demand    24
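
For readers who wish to explore the Table 1 scores programmatically, the following minimal R sketch re-encodes the reported values (copied directly from the table; the data-frame layout is our own construction, and the unreported 1-star Clarity score is omitted) and identifies the highest-scoring review category per metric.

  # Table 1 scores re-encoded for exploration; values copied verbatim
  # from the table above (1-star Clarity was not reported, so omitted)
  scores <- data.frame(
    category = c("1-star", "2-star", "3-star", "4-star", "5-star",
                 "2-star", "3-star", "4-star", "5-star",
                 "1-star", "2-star", "3-star", "4-star", "5-star"),
    metric = c(rep("Engagement", 5), rep("Clarity", 4),
               rep("Cognitive Demand", 5)),
    result = c(28, 29, 28, 26, 30, 45, 57, 47, 63, 24, 25, 24, 24, 24)
  )

  # Highest-scoring category per metric; note that Cognitive Demand is
  # nearly flat (24-25), while 5-star reviews lead on Engagement and Clarity
  by_metric <- split(scores, scores$metric)
  lapply(by_metric, function(d) d[which.max(d$result), ])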