Systematic Review

Exploring Facilitators and Barriers to Managers’ Adoption of AI-Based Systems in Decision Making: A Systematic Review

by Silvia Marocco 1,*, Barbara Barbieri 2 and Alessandra Talamo 1
1 Department of Social and Developmental Psychology, Sapienza University of Rome, 00185 Rome, Italy
2 Department of Political and Social Sciences, University of Cagliari, 09123 Cagliari, Italy
* Author to whom correspondence should be addressed.
AI 2024, 5(4), 2538-2567; https://doi.org/10.3390/ai5040123
Submission received: 8 October 2024 / Revised: 8 November 2024 / Accepted: 17 November 2024 / Published: 27 November 2024

Abstract:
Introduction—Decision making (DM) is a fundamental responsibility for managers, with significant implications for organizational performance and strategic direction. The increasing complexity of modern business environments, along with the recognition of human reasoning limitations related to cognitive and emotional biases, has led to a heightened interest in harnessing emerging technologies like Artificial Intelligence (AI) to enhance DM processes. However, a notable disparity exists between the potential of AI and its actual adoption within organizations, revealing skepticism and practical challenges associated with integrating AI into complex managerial DM scenarios. This systematic literature review aims to address this gap by examining the factors that influence managers’ adoption of AI in DM. Methods—This study adhered to the PRISMA guidelines. Articles from 2010 to 2024 were selected from the Scopus database using specific keywords. Eligible studies were included after rigorous screening and quality assessment using checklist tools. Results—From 202 articles screened, a data synthesis of 16 eligible studies revealed seven major interconnected factors acting as key facilitators or barriers to AI integration within organizations. These factors—Managers’ Perceptions of AI, Ethical Factors, Psychological and Individual Factors, Social and Psychosocial Factors, Organizational Factors, External Factors, and Technical and Design Characteristics of AI—were then organized into a complex analytical framework informed by existing theoretical constructs. Discussion—This contribution provides valuable insights into how managers perceive and interact with AI systems, as well as the conditions necessary for successful integration into organizational DM processes.

This article is based on a previous PhD Thesis entitled “The Role of Artificial Intelligence in Multi-Actor Decision-Making: a focus on human capital investments”.

1. Introduction

Decision making (DM) is one of the most crucial responsibilities for managers, directly influencing the performance and strategic direction of organizations. The most prevalent studies in the field of DM assume that these processes are driven primarily by rational and cognitive factors. Tversky and Kahneman [1] introduced the concept of “cognitive bias” to describe systematic yet flawed patterns of judgment and DM under uncertainty [2]. They argued that these biases arise from using heuristics—simple cognitive shortcuts that decision makers adopt to ease cognitive or computational demands [3]. This framework was inspired by Herbert Simon’s [4] principle of bounded rationality, which addressed how individuals make decisions despite their limited cognitive resources, motivational constraints, and the need to adapt to complex environments [2,5].
These human DM limitations, combined with the increasing complexity of modern business environments, have spurred interest in how DM can be supported or enhanced by emerging technologies such as Artificial Intelligence (AI) [6,7]. Unlike human decision makers, AI systems, with their ability to process vast amounts of data at high speeds and in a largely rational manner [8], are not constrained by the limitations of cognitive or emotional biases, making them highly efficient, accurate, and flexible [9]. These capabilities have made AI particularly attractive for automating routine tasks, but advances in machine learning, deep learning, and natural language processing have expanded its potential into more complex, higher-level DM roles, like managerial tasks [10,11]. Moreover, in recent years, organizations have begun experimenting with AI systems not only as tools to assist managers but as autonomous decision makers in their own right. Examples of this shift include the Hong Kong-based venture capital firm Deep Knowledge appointing an AI algorithm, VITAL, to its board of directors and Amazon’s warehouse management system autonomously firing workers based on performance data [12,13]. These developments signal the rise of “management by algorithm”, where AI is entrusted with responsibilities traditionally reserved for human managers [14,15].
In the organizational domain, AI has already demonstrated its ability to streamline DM processes by automating labor-intensive tasks like candidate selection, personality assessments, and interview scheduling [16,17,18]. Despite these successes, actual AI usage in HR remains relatively low, with only a small number of companies, such as Unilever, fully embracing AI-driven recruitment systems [19].
This disparity between AI’s potential and its real-world adoption highlights the ongoing skepticism and practical challenges associated with integrating AI into complex managerial DM processes [20].
Possible explanations for this skepticism can be provided by research perspectives that frame the DM model from different viewpoints. For example, studies like Argyris and Schön’s [21] have increasingly emphasized that DM is far from a purely rational, linear process. Their work on double-loop learning suggests that breaking free from repetitive, rational patterns is crucial for finding innovative and superior solutions. This approach shifts the focus toward the non-linear, adaptive nature of learning and DM, where individuals critically reflect on the assumptions, values, and norms that guide their actions. Through double-loop learning, people or organizations examine the root causes of problems, challenge underlying beliefs, and explore alternative strategies. This broader perspective raises significant questions about AI’s capacity to substitute for human reflexivity.
Another core issue is the dichotomy between human DM methods: quick and intuitive versus slow and reasoned methods [22]. Dreyfus and Dreyfus argue that computer systems struggle to achieve the rapid, intuitive DM that characterizes expertise. Instead, these systems remain limited to more deliberate, reasoned processing, which they described as merely “competent”. Kahneman [23] echoed this distinction with his concept of “System 1” (intuitive) and “System 2” (analytical) thinking. More recently, Jarrahi [7] reinforced this view, suggesting that “AI is more useful for supporting analytical rather than intuitive decision-making” (p. 579), highlighting how AI may not possess the capacity to replace human intuition.
On the other hand, acceptance of AI and trust in its outputs is another critical factor. Research on algorithm aversion shows that human decision makers, particularly managers, are often reluctant to trust AI-generated insights [24]. One explanation for this resistance is the “black box” nature of AI and the lack of transparency in algorithmic DM processes, which present considerable challenges and significantly impact user trust—an essential element in the acceptance of AI. In particular, Shin [25] investigated the factors that shape trust and acceptance of AI, demonstrating that perceptions of algorithmic features such as fairness, accountability, and transparency (FAT) directly influence trust, and that causability, or the quality of explanations, plays an antecedent role to explainability in building trust [25,26]. The evidence also suggests that trust plays a crucial role in shaping users’ responses to recommendations [27,28].
In addition to these challenges, ethical concerns about bias and fairness further complicate AI adoption [29]. For instance, Trocin and colleagues [30] highlighted various ethical concerns, including the transparency of data and their usage, the interpretability of DM processes, the risk of unfair or biased outcomes, and concerns about privacy violations. Hence, as organizations continue to explore AI-driven DM, the need for transparency, explainability, and fairness will become increasingly important.
The literature on technology acceptance provides valuable insights into how managers perceive and adopt new technologies, including AI. Established models, such as the Technology Acceptance Model (TAM) [31], which emphasizes perceived ease of use and perceived usefulness as key determinants of technology acceptance, and the Unified Theory of Acceptance and Use of Technology (UTAUT) [32], which integrates multiple constructs such as performance expectancy and social influence, have been widely used to predict and explain technology adoption in several contexts. However, these models primarily focus on functional technologies and may not fully address the complexities of AI adoption, which involves deeper concerns such as trust, risk, fears, and well-being. There is limited research on the development of theoretical models to understand the factors influencing individuals’ technology-avoidance intentions. One of the most-cited models in this context is the Technology Threat Avoidance Theory (TTAT) [33]. According to the TTAT model, the core constructs determining IT users’ avoidance motivation include perceived technology threats, the effectiveness of safeguarding measures, associated costs, and self-efficacy. These constructs interact to shape users’ threat perceptions, which are influenced by both the perceived probability of a threat occurring and the perceived severity of its negative consequences. This theoretical framework emphasizes how negative perceptions about new technologies can significantly impact managers’ decisions to avoid or embrace AI, highlighting the importance of addressing emotional and psychological dimensions in understanding technology adoption.
Thus, while there is already a substantial body of literature on human acceptance of AI, the adoption of AI in organizational and managerial DM—a particularly complex context, as it involves managing people—remains underexplored and deserves a more comprehensive examination. To this aim, this systematic review seeks to address the gap in the literature by providing a comprehensive examination of the factors that influence AI adoption in managerial DM. Specifically, this study aims to map the key facilitators and barriers to AI integration by managers in DM processes within organizational settings, drawing on empirical and theoretical studies. Given the limited literature available, we were unable to perform a more detailed distinction of the types of DM used by managers and thus did not differentiate between individual, group, or multi-actor DM [5], although we recognize that this could represent an important direction for future research.
By organizing these factors into an analytical framework, informed by the existing theoretical constructs in the literature, we aim to offer a deeper understanding of how human managers perceive and interact with AI systems and under what conditions AI can be successfully integrated into DM processes.
The structure of this paper is as follows. First, the methodology used in this study to conduct the systematic literature review (SLR) and to describe the data sources and selection criteria is presented. Next, the findings are discussed, including the main facilitators and barriers to AI adoption. Finally, an organizational framework is proposed to outline potential implications for practitioners in facilitating the integration of AI into organizational DM processes.

2. Material and Methods

2.1. Source of Information and Search Strategy

This study reports a systematic literature review (SLR) of research focused on the integration of AI into various organizational contexts, particularly within managerial DM. It explores the key facilitators and barriers that influence the acceptance of AI in DM among managers. This review methodology was chosen because it is highly recommended in the literature to ensure the transparency, systematicity, and replicability of results [34]. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework [35] was used to conduct the research, along with Rousseau and colleagues’ [36] best-practice recommendations as guidelines for conducting and reporting SLRs. The study protocol was registered in OSF (https://doi.org/10.17605/OSF.IO/EHGZS). Specifically, Rousseau and colleagues [36] suggested a four-stage synthesis process for conducting a review: (1) research purpose and question formulation, where the purpose of the review and the research questions must be clearly defined; (2) extensive identification of relevant research, including precise inclusion/exclusion criteria and multiple types of data; (3) organization and interpretation, which involves the use of multiple extractors, systematic organization of data into accessible formats, and the development of descriptive summaries; and (4) synthesis and organizing framework, which includes integrative explanations that take into account different perspectives, limitations, and contexts. The study team created a review strategy to identify the factors that influence the adoption of AI in managerial DM. The database search was conducted in Scopus during August 2024. Scopus was chosen as it is one of the largest databases of peer-reviewed studies, covering a wide range of disciplines, including social sciences, management, and technology, which are highly relevant to this study. The following keywords were used, incorporating alternative words and combining them using Boolean operators: ((“artificial intelligence” OR “AI” AND “decision-making” OR “decision” OR “managerial decision-making” AND “manager” AND “adoption” OR “acceptance” OR “intention” OR “aversion”)).
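For readers who wish to reproduce the search, the query string can be assembled programmatically. The sketch below is illustrative only: grouping each synonym set in parentheses is our assumption, since the Boolean string as published leaves operator precedence implicit.

```python
# Illustrative reconstruction of the Scopus search string; not the authors'
# exact export. Wrapping each synonym set in parentheses is an assumption,
# as the published query leaves Boolean precedence implicit.
term_groups = [
    ['"artificial intelligence"', '"AI"'],
    ['"decision-making"', '"decision"', '"managerial decision-making"'],
    ['"manager"'],
    ['"adoption"', '"acceptance"', '"intention"', '"aversion"'],
]

def or_group(terms):
    """Join synonyms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_group(group) for group in term_groups)
print(query)
# ("artificial intelligence" OR "AI") AND ("decision-making" OR "decision"
# OR "managerial decision-making") AND ("manager") AND ("adoption" OR
# "acceptance" OR "intention" OR "aversion")
```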

2.2. Research Purpose and Question Formulation

The primary objective of this SLR is to explore the facilitators and barriers that shape managers’ perceptions and acceptance of AI in organizational DM.
The study consists of an SLR of existing conceptual and empirical studies, which makes it possible to systematize the knowledge produced and to identify the factors that either promote or obstruct the integration of AI-based systems into managerial DM processes. More specifically, the research question is: “What factors facilitate or hinder the adoption of AI by managers in DM processes within organizational settings?”.

2.3. Extensive Identification of Relevant Research

Inclusion and Exclusion Criteria

With this question in mind, the eligibility criteria of this SLR were determined. The search was limited to peer-reviewed journal articles published in English. Based on an exploration of scientific research trends in Scopus (see Figure 1), which showed that interest in AI and DM began to spread more consistently starting from the 2010s, it was decided to focus exclusively on research and reviews published from 2010 to the present.
Studies that lacked full texts, were not published in English, were published before 2010, or did not address managers’ acceptance of using AI systems within organizational contexts were excluded. Only research articles and reviews were included in the search criteria, while conference reviews, conference papers, books, and book chapters were excluded. This selection was made to ensure a focus on peer-reviewed, high-quality sources that provide in-depth analysis and empirical evidence, which are essential for a systematic review (see Table 1).
This initial selection revealed a predominant focus in the disciplines of Computer Science (21%) and Business, Management, and Accounting (19.8%), which together accounted for approximately 41% of the total articles. In contrast, Social Sciences and Psychology represented only 7.3% and 2%, respectively, indicating that the human, psychological, and social aspects of AI integration in DM contexts are still significantly underexplored and require greater attention (see Figure 2).

2.4. Data Extraction and Selection

A systematic search was conducted in the Scopus database, identifying a total of 202 records. The data collection and selection process was conducted in blind mode by all three authors. These records were uploaded to Rayyan.ai software (https://www.rayyan.ai/) in order to optimize the papers’ coding and selection. Duplicates were checked using Rayyan.ai software, resulting in 0 duplicates. Four records were excluded due to language limitations. Additionally, 68 records were removed based on their publication type (see Table 1). Following the title and abstract screening, 101 of the remaining records were excluded for the following reasons: 64 for the wrong focus of the paper, 16 for the wrong technology explored (i.e., robot or voice assistant), 11 for the wrong context of application (i.e., clinical/medical DM), and 10 for the wrong population (i.e., students or customers). After the full-text screening using the predefined inclusion and exclusion criteria, an additional 13 records were removed due to the unavailability of the full texts. Ultimately, 16 papers meeting the established criteria were included in the study (Table 1). The information sought on eligible papers included the year of publication, article type, location, and main findings. Below, we present the PRISMA flowchart depicting the article selection process (Figure 3).
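The screening arithmetic can be verified directly from the counts reported above; a minimal sketch (the stage labels are ours, the numbers come from the text):

```python
# Minimal check of the PRISMA flow counts reported in Section 2.4.
records = 202
records -= 0                   # duplicates flagged by Rayyan.ai
records -= 4                   # excluded: language limitations
records -= 68                  # excluded: publication type (see Table 1)
records -= 64 + 16 + 11 + 10   # title/abstract screening: wrong focus,
                               # technology, context, and population (= 101)
records -= 13                  # full texts unavailable
assert records == 16           # the 16 studies included in the review
```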

Characteristics of Included Studies

Table 2 offers insights into the included dataset. The dataset consists of 16 documents in total: 12 quantitative research articles, 3 systematic reviews, and 1 theoretical research article.
The dataset spans the period from 2021 to 2024, highlighting the very recent surge of interest in this topic, which, despite its growing relevance, remains relatively underexplored. Figure 4 displays the distribution of the reviewed articles by year, while Figure 5 shows the distribution across the different countries. Specifically, there were 2 studies conducted in India [37,38] and 1 in South Africa [39]. Additionally, 1 study was conducted jointly in the United Arab Emirates and the United Kingdom [40], 1 in Portugal [41], 1 across Belgium and Singapore [42], 1 in Australia [43], 1 in Malaysia [45], and 1 spanning Germany, Australia, and Austria [44]. Further, 1 study was conducted collaboratively between Finland and Canada [46]; another between Finland and Bangladesh [47]; 1 in the USA [48]; 1 between the United Kingdom and France [49]; 1 involving Italy, South Africa, and Canada [50]; 1 in Vietnam [51]; and 1 in Romania [52].

2.5. Quality Assessment of Included Studies

The quality and risk of bias of the 16 eligible studies were evaluated using the Critical Appraisal Skills Programme (CASP) checklists [53]. This quality assessment was conducted to ensure that the included studies met high methodological standards and minimized potential bias, enhancing the reliability and validity of this systematic review’s findings. The CASP tools offer a structured approach to critically appraising each study’s rigor, validity, and reliability. By assessing study quality, we aimed to strengthen the overall evaluation of evidence regarding factors influencing managers’ adoption of AI in DM processes. Fifteen of the selected studies scored six or higher out of ten on the CASP checklist, demonstrating satisfactory methodological quality. The study of Urbani et al. [50] could not be assessed via CASP criteria as it is a theoretical research study, a category for which CASP does not provide specific guidelines. The CASP tool used in this assessment is available in the Supplementary Materials, with Table S1 for systematic reviews and Table S2 for cross-sectional studies.

2.6. Organizing and Interpreting

A first reading of the papers allowed us to distinguish 7 recurring categories of crucial factors impacting AI acceptance by managers in organizational DM. Specifically, these categories covered the following:
  • Managers’ Perceptions of AI;
  • Psychological and Individual Factors;
  • Ethical Factors;
  • Psychosocial and Social Factors;
  • Organizational Factors;
  • External Factors;
  • Technical and Design Characteristics of AI-Based Technologies.
A second, in-depth reading of the papers followed, organized around these 7 thematic categories. This cross-reading strategy helped us to better understand the content covered by each of the 7 clusters and to highlight possible interactions among them. Throughout this analysis, key factors, facilitators, and barriers related to AI implementation were identified.
Table 3 outlines the categories of factors that were examined and validated in these studies. Each of the aforementioned factor components is categorized into two distinct clusters: facilitators and barriers, with some factors falling into both categories.
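To make the dual classification concrete, Table 3 can be read as a mapping from each thematic category to its facilitator and barrier components. The sketch below is an illustrative subset populated with examples drawn from Section 3, not a reproduction of the full table:

```python
# Illustrative subset of the Table 3 structure: each thematic category maps
# to facilitator and barrier clusters, and some components appear in both.
# Entries are examples taken from Section 3, not the complete table.
framework = {
    "Managers' Perceptions of AI": {
        "facilitators": ["Perceived Ease of Use", "Perceived Usefulness",
                         "User Satisfaction", "Performance Expectancy"],
        "barriers": ["Perceived Threat", "Perceived Severity",
                     "Perceived Susceptibility"],
    },
    "Psychological and Individual Factors": {
        "facilitators": ["Familiarity with AI"],
        "barriers": ["Overconfidence", "Desire for Control",
                     "Personal Well-Being Concerns"],
    },
    "External Factors": {
        "facilitators": ["Government Involvement", "Regulatory Guidance",
                         "Market Pressure"],
        "barriers": [],
    },
}

for category, clusters in framework.items():
    print(f"{category}: +{clusters['facilitators']} / -{clusters['barriers']}")
```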

3. Results

3.1. Synthesis and Elaboration of the Organizing Framework

This section will provide a detailed description of the seven thematic categories addressed in the 16 reviewed studies, analyzing specific aspects as either facilitators or barriers to AI adoption in managerial DM. By summarizing the key findings, this approach helps to organize a framework that not only integrates these results but also highlights the interrelationships and connections between the various factors. These findings are broadly applicable to a wide range of sectors. However, any generalization should consider industry-specific constraints and other contextual factors that could influence the prioritization of certain factors over others. A comprehensive and structured synthesis of the results is provided below. The components within each factor are arranged in order of perceived relevance as determined by the authors.

3.1.1. Managers’ Perceptions of AI

This section focuses on the individual perceptions of managers regarding AI. Several studies have underscored the critical role of Managers’ Perceptions of AI in driving its adoption within organizational DM [40,41,44,46,47,52]. Below, each of these categories will be explored in greater detail, highlighting which aspects serve as facilitators and which act as barriers.

3.1.2. Perceived Ease of Use, Perceived Usefulness, and Effort Expectancy

The study by Vărzaru [52] provides strong validation for the impact of Perceived Ease of Use and Perceived Usefulness in shaping managers’ intentions to adopt AI solutions. This research stands out for introducing a modified TAM model [31] tailored to the acceptance of AI technologies in management contexts. The findings emphasize that managers’ perceptions of AI’s usability and usefulness greatly influence their willingness to adopt these tools. Essentially, managers are more likely to consider adopting AI-based solutions when they view them as user-friendly and advantageous for their tasks. Additionally, the study sheds light on the determinants of Perceived Ease of Use and Perceived Usefulness, with speed and innovation emerging as key factors. This indicates that the speed with which operations are conducted and the presence of innovative features are critical in shaping managers’ views on AI solutions.
Similarly, the study of Cao and colleagues [40] confirms the influence of Effort Expectancy, which represents the perceived ease of using AI technology [32], on managers’ willingness to adopt AI in DM.

3.1.3. User Satisfaction

Vărzaru [52] also affirmed the significance of User Satisfaction, reported after using AI, as a key driver of AI adoption. This satisfaction positively affects both the intention to use and the actual usage of AI solutions. In essence, this suggests that when managers experience satisfaction in their interactions with these solutions, their likelihood of intending to use them in the future, as well as their ongoing engagement with them, is considerably strengthened.

3.1.4. Perceived Threat, Severity, and Susceptibility

Cao and colleagues [40] examined the dimensions of Perceived Threat, Severity, and Susceptibility and their impact on managers’ AI adoption. Specifically, Perceived Threat is defined as the extent to which an individual perceives using AI for DM as dangerous or harmful [54,55]. Perceived Severity reflects an individual’s belief in the extent of potential negative outcomes of using AI in terms of making poor decisions [54,55], while Perceived Susceptibility pertains to an individual’s belief in the likelihood that using AI will lead to poor decisions [54,55].
Based on the Technology Threat Avoidance Theory (TTAT) framework [33], the study shows that Perceived Threat is positively influenced by both Perceived Severity and Susceptibility. Furthermore, findings reveal that Perceived Threat negatively affects both attitude toward and behavioral intention for AI adoption. This novel extension of the TTAT framework to AI adoption highlights the importance of considering potential risks and perceived threats when evaluating AI integration into DM processes. Additionally, the study strengthens empirical evidence supporting the relationship between attitude and intention to use AI solutions, consistent with prior research in the field.

3.1.5. Perceived Adaptability

The investigation conducted by Leyer and Schneider [44] shed light on the aspect of Perceived Adaptability in the context of delegating AI for strategic management decisions. A noteworthy share of participants (5%) attributed their choices to the perceived limited adaptability of AI to specific DM contexts. While this percentage may seem relatively small, it indicates that a segment of managers remains skeptical about AI’s ability to tailor its capabilities to the unique demands of different organizational situations. The theme of adaptability is closely linked to that of specificity and flexibility in relation to context, which together emerge as core aspects of significant importance regarding managers’ acceptance of AI in organizational DM.

3.1.6. Performance Expectancy and Perceived Benefits

The study by Cao and colleagues [40] expands on the idea of Performance Expectancy, defined as the individual’s belief in AI’s ability to improve job performance, as described by Venkatesh and colleagues [32]. This factor was shown to have a significant impact on intentions to adopt AI.
Similarly, Cunha and colleagues [41] emphasize that recognizing AI’s benefits, such as enhanced productivity and reduced operational costs, further motivates managers to integrate AI into their DM processes.
In summary, recognizing performance-related benefits of AI adoption emerges as a crucial factor in motivating the integration of AI into organizational DM processes.

3.1.7. Perceived Value

Mahmud and colleagues [47] provided insights into the impact of managers’ perceptions regarding the substantial changes brought about by innovation adoption, specifically identifying barriers related to usage, value, and risk. Notably, the study found that value barriers—closely associated with the performance-to-price ratio compared to competitors [56,57,58]—have a significant effect on algorithm aversion, contrasting with the influences of usage and risk barriers. This difference in impact suggests a plausible explanation linked to the specific demographics of the sample, particularly within the banking and financial sector. Managers in this field typically possess strong educational backgrounds, extensive technological expertise, and a high level of comfort with technology. Moreover, their professional environments often require them to navigate risk-prone situations.

3.1.8. Perceived Nature of the Task

The systematic review of Mahmud and colleagues [46] identified several factors that influence algorithm aversion among managers, which were categorized into four broad areas: Algorithm Factors, Individual Factors, Task Factors, and High-Level Factors. In this context, the perception of the nature of the task for which the algorithm is used, including its subjectivity, morality, and complexity, has a role in influencing resistance to algorithmic decisions.
More specifically, they indicate that people are more comfortable using algorithms when tasks are perceived as objective, such as in personnel hiring through psychometric tests. Conversely, algorithms are less accepted for tasks involving moral decisions; legal, medical, or military concerns; and simple tasks that do not demand complex computation.

3.2. Ethical Factors

This category addresses individual ethical concerns related to AI, such as potential discrimination or privacy violations. Research by Booyse and Scheepers [39] and Cunha and colleagues [41] underscores ethical concerns as significant obstacles to AI adoption in organizational DM. The specific ethical issues they identify are outlined below.

3.2.1. Making Life-or-Death Decisions, Potential Discrimination, and the Risk of Human Replacement with Machines

In their exploratory study, Booyse and Scheepers [39] identify three key ethical challenges related to AI adoption. First, they discuss the implications of AI making critical life-or-death decisions, such as in self-driving cars, and the ethical principles guiding these choices. Second, they highlight the risk of discrimination, particularly if AI systems are trained on biased data or are inherently programmed with biases. Third, they examine the ethical concerns of replacing human workers with AI, especially when those affected may lack alternative means of livelihood. The managers they interviewed expressed such concerns as potential obstacles to AI adoption.

3.2.2. Violation of Ethical and Privacy Issues

Cunha and colleagues [41] observe that although smart systems based on AI provide numerous advantages to organizations, their adoption is hindered by several challenges. These include ethical and privacy concerns, alongside insufficient funding, a lack of expertise, and inadequate specialized training [59,60,61,62,63,64]. Additionally, these challenges adversely affect managers’ perceptions and understanding of smart systems, as they create obstacles to curiosity and knowledge seeking, thereby reinforcing barriers to their use.

3.3. Psychological and Individual Factors

Individual Factors, although not directly related to interactions with AI, can impact the acceptance of the technology based on inherent dispositional traits or personal characteristics. This category has been explored in studies by Cao and colleagues [40], Cunha and colleagues [41], Haesevoets and colleagues [42], and Leyer and Schneider [44]. Research highlights the following key aspects, predominantly categorized as barriers to the adoption of this technology.

3.3.1. Overconfidence, Desire for Control, and Desire for Human Primacy

In the study conducted by Leyer and Schneider [44], a thorough exploration was carried out to uncover the underlying reasons behind individuals’ choices regarding the delegation of strategic managerial decisions to AI. The findings indicate a range of influential factors contributing to non-delegation behaviors. Foremost among these factors is a marked overconfidence in human capabilities, which accounted for 34.5% of the reasons reported. Additionally, the desire for control emerged as a significant motivator, comprising 19.9% of the responses.
Similarly, the research by Haesevoets and colleagues [42] examined how human managers perceive machine involvement in DM. While managers generally resist scenarios where machines assume a primary role, the study revealed that they are receptive to machine participation as long as the machines contribute less than humans. These findings align with earlier research, such as that of Bigman and Gray [65], who found that people prefer machines in advisory roles, and Dietvorst and colleagues [66], who noted that acceptance of machine-generated input increases when individuals maintain control over the final decision. However, the study of Haesevoets and colleagues goes further by precisely identifying the optimal balance between human and machine involvement. It was found that managers are more willing to accept machine participation as human influence on the final decision increases, reaching up to approximately 70% influence. Beyond this threshold, additional human input does not necessarily enhance acceptance rates.
The study by Leyer and Schneider [44] and the research by Haesevoets et al. [42] collectively emphasize that managerial DM preferences heavily favor human involvement, particularly when the perceived efficacy of human judgment and the desire for control are high.
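One way to picture the ~70% threshold reported by Haesevoets and colleagues [42] is as a weighted blend of human and machine judgments. The sketch below is purely illustrative: the linear aggregation is our own simplifying assumption, since the study measured acceptance of involvement ratios rather than any aggregation formula.

```python
# Illustrative linear blend of human and machine judgments. The default 70%
# human weight mirrors the influence threshold reported by Haesevoets and
# colleagues [42]; the linear aggregation itself is our assumption.
def blended_decision(human_score: float, machine_score: float,
                     human_weight: float = 0.7) -> float:
    """Combine two judgment scores, with human input dominating by default."""
    return human_weight * human_score + (1 - human_weight) * machine_score

# Example: the human rates an option 0.4 and the machine rates it 0.9; with a
# 70% human weight, the blended score stays close to the human judgment.
print(blended_decision(0.4, 0.9))  # ~0.55
```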

3.3.2. Personality Traits

The study by Mahmud and colleagues [46] reveals important insights into the complex interplay between personality traits and individuals’ aversion to algorithms. Central to their findings is the concept of core self-evaluation, which encompasses the fundamental beliefs people hold about themselves. Notably, individuals with high self-esteem may view algorithmic judgments as dehumanizing, leading to resistance against these assessments, especially when they conflict with personal opinions. This resistance aligns with psychological reactance—a response triggered by perceived threats to autonomy. The study also highlights the role of self-efficacy—the belief in one’s ability to influence outcomes. Individuals with strong self-efficacy, particularly experts, tend to favor their own judgment over algorithmic input. Interestingly, in digital contexts, increased online self-efficacy fosters greater reliance on algorithmic systems, indicating how confidence in technological skills can shape DM. This aligns with the findings of Leyer and Schneider [44], who identified human overconfidence as a key factor influencing individuals’ aversion to delegating strategic managerial choices to AI.
Another significant factor that emerged from their study is the role of the locus of control. Those with an internal locus—who believe they can directly influence outcomes—are generally more skeptical of algorithms. This skepticism is particularly pronounced in fields like medicine, where a desire for control leads individuals to prefer human expertise to algorithmic diagnosis. Interestingly, even small elements of control, such as the ability to modify algorithmic parameters, can enhance receptivity to these technologies. Conversely, individuals with high levels of neuroticism, who often experience anxiety and insecurity, are typically less trusting of algorithms. They may perceive algorithmic DM as risky, fearing negative outcomes due to their inability to effectively navigate these systems. This anxiety can also extend to concerns about how reliance on algorithms might disrupt personal relationships, especially in sensitive domains like healthcare.
The study further discusses the implications of the Big Five personality traits. For instance, extraversion correlates with heightened sensitivity to errors in algorithmic decisions, leading extroverted individuals to favor human DM, which they believe is more likely to yield accurate results, despite its risks. Additionally, a distinction emerges between judgers and perceivers: those who prefer intuitive DM are often less inclined to adopt algorithms, while those with an analytical approach are more open to using algorithmic tools.
In summary, the study highlights how individual personality traits significantly shape attitudes toward algorithmic DM. While these factors are less actionable for system design due to their fixed nature, they offer insights into which managerial characteristics align best with the acceptance of AI-based technologies.

3.3.3. Demography

Mahmud and colleagues [46] also found that algorithm acceptance varies according to demographic factors such as age, gender, and education level. Research suggests that older individuals tend to trust algorithms less and find them less useful, often preferring human advisors for tasks like news recommendations [67,68]. However, in areas such as medication management, older adults may rely more on algorithms due to decreased confidence in their own abilities [69]. Interestingly, age does not consistently influence algorithm aversion, as demonstrated in studies involving business and geopolitical DM [70]. Gender also plays a role, with women frequently perceiving algorithms as less useful, even though this perception does not apply across all sectors [67,68]. Furthermore, individuals with lower education levels and less numerical confidence tend to have a diminished appreciation for algorithms [68,70].
These demographic factors, similar to the findings related to personality traits, highlight which managerial characteristics facilitate trust in AI-based technologies in DM.

3.3.4. Personal Well-Being and Personal Development Concerns

Cao and colleagues [40] examined the influence of personal concerns, particularly those related to Personal Well-Being and Personal Development, on attitudes and behavioral intentions regarding AI adoption. Personal Well-Being refers to an individual’s anxiety and stress about the potential negative effects of using AI technology. This aspect aligns with the findings of Agogo and Hess [71] and Brougham and Haar [72]. In contrast, Personal Development pertains to worries about how AI might hinder one’s ability to learn from personal experiences, as discussed in the research by Duan et al. [73] and Edwards et al. [74]. The study’s findings indicate that these personal concerns can negatively impact managers’ attitudes and intentions toward embracing AI technology.

3.3.5. Familiarity with AI

The study by Cunha and colleagues [41] emphasizes the importance of managers’ familiarity with AI technologies. Managers who possess a deeper understanding of AI and its practical applications are more likely to perceive AI as beneficial and are thus more inclined to adopt it. This knowledge is not only shaped by the managers themselves but also by the understanding of AI among other stakeholders within the organization.
Similarly, Mahmud and colleagues [46] highlight how people’s familiarity with algorithms, tasks, and human experts impacts their reliance on decision aids. Familiarity, indeed, tends to reduce algorithm aversion, as individuals develop a sense of comfort and acceptance (status quo bias) with algorithms [75]. However, negative experiences with algorithms can lead to aversion and regret, which increases the tendency to avoid using them in the future [76]. According to their findings, establishing familiarity with algorithms at the outset of the DM process, whether through direct experience or simulated scenarios, can enhance trust in these systems [77,78].

3.4. Psychosocial and Social Factors

This section shifts the focus toward factors that pertain to the individual within their social context, examining the social needs and relationships that can influence AI adoption in organizational settings. These factors, highlighted by the studies of Mahmud and colleagues [46,47] and Booyse and Scheepers [39], are described below.

3.4.1. The Need for Social Interactions

The qualitative study by Booyse and Scheepers [39] employs the Adaptive Structuration Theory (AST) [79] to analyze the barriers to AI adoption in organizational DM. The AST examines the iterative relationship between technology and social action, emphasizing how each continuously influences and shapes the other. According to DeSanctis and Poole [79], the effectiveness of advanced information technologies is contingent on the optimal alignment of both social and technological structures. Focusing on an interpretive paradigm, the researchers carried out exploratory qualitative interviews with 13 senior managers from South African organizations engaged in AI initiatives. This approach aimed to uncover potential obstacles to the integration of AI in automated DM processes. Through thematic analysis, the study identified seven key barriers, which were mapped to the dimensions of AST. Among the identified barriers, the study highlighted the need for social interactions. The interviews revealed that the need for social interactions and dynamics, such as team motivation and the relationships between leaders and followers, could act as obstacles to AI adoption. Additionally, the work environment could become negative or less productive if team members do not perceive the AI decision maker as an integral part of the team. The findings also suggest that in social work settings, AI is more likely to serve as an augmentation tool for human decision makers, as employees tend to relate more effectively to human managers.

3.4.2. Social Influence

Social influence, as identified by Mahmud and colleagues [46], can significantly affect the adoption of AI within organizations. People’s perceptions of algorithms are heavily shaped by the opinions of those around them, such as colleagues, friends, and supervisors [80]. Algorithms are often seen as less professional and less fair, leading to their users being perceived as less capable and intelligent [81,82,83]. These societal views create an unfavorable environment for algorithm use. Furthermore, feedback from current or previous users and insights into how algorithmic decisions have impacted their performance play a crucial role in determining an individual’s willingness to trust and adopt algorithms [84,85].

3.4.3. Tradition and Image Barriers

The investigation conducted by Mahmud and colleagues [47] examined the influence of certain Psychosocial Factors, specifically Tradition Barriers and Image Barriers, on the phenomenon of algorithm aversion among managers. Tradition Barriers arise when individuals face the need to move away from long-standing societal norms due to the introduction of innovations, often leading to resistance manifested through behaviors such as negative word-of-mouth, boycotts, and opposition [86]. On the other hand, Image Barriers refer to the negative perceptions of innovations that stem from preconceived, stereotypical notions held by users themselves [58].
The research findings clearly indicate that managers who perceive higher levels of Tradition and Image Barriers are more likely to exhibit a greater aversion to adopting AI-based solutions. This observation is consistent with the existing literature [56,87,88,89,90], reinforcing a coherent pattern across various studies.

3.5. Organizational Factors

This category focuses on a higher level of analysis, that of the organizational context in which AI is intended to be integrated, focusing on how this context and its characteristics influence managers’ acceptance. A significant number of studies emphasize the crucial role of Organizational Factors in shaping the adoption of AI within managerial DM [37,38,40,43,45,46,49,51]. These factors encompass organizational readiness, industry-specific solutions, organizational norms, and other crucial elements. Some factors exert a direct influence on managers, such as organizational norms, while others, such as the level of organizational readiness or digital transformation, have an indirect impact, as they operate through organizational characteristics that affect managers’ behavioral intentions. Below, we will explore each of these factors in greater detail, identifying which act as facilitators and which serve as barriers to AI adoption within organizations.

3.5.1. Type of Organization

Mahmud and colleagues [46] revealed that people tend to trust AI-based decisions made by not-for-profit organizations, such as government-run firms, more than those made by for-profit companies like banks or insurance firms [91]. However, no significant correlation was found between firm size, industry, or product type and algorithm aversion [92]. The review also highlights that in risky and volatile DM environments, managers tend to reject algorithms, even when they offer optimal solutions. Studies in areas like high-risk financial advice, medical DM, and demand forecasting show a clear preference for humans over algorithmic advisors due to concerns over uncertain outcomes and their consequences [76,85,93].

3.5.2. Organizational Readiness

The study by Phước [51] highlights the importance of organizational readiness in the context of AI adoption. This readiness includes not only Technological Factors, such as infrastructure and data structure, but also the skills of human resources. The availability of AI expertise, access to necessary data for training personnel in AI utilization, and technical knowledge are all crucial for facilitating AI implementation. From this perspective, organizations that are better prepared tend to achieve higher levels of AI adoption. Likewise, the research by Lada and colleagues [45] underscores the vital role of organizational readiness in promoting AI adoption, particularly within small and medium-sized enterprises.
In summary, fostering organizational readiness is essential for enhancing AI adoption, emphasizing the need for both technological infrastructure and skilled human resources.

3.5.3. Level of Digital Transformation

The study conducted by Rodríguez-Espíndola and colleagues [49] emphasizes the positive influence of companies’ engagement in digital transformation on the adoption of cutting-edge and disruptive technologies. Digital transformation involves reconfiguring and advancing processes, activities, and skills to leverage emerging technologies [94]. Organizations with greater technological expertise and knowledge are often early adopters, as they are better positioned to understand new technologies in their initial stages [95]. The findings from Rodríguez-Espíndola and colleagues’ study [49] reveal a positive correlation between digital transformation and both the perceived usefulness and perceived ease of use of AI technologies among managers. These perceptions, consistent with the TAM [31], significantly influence the intention to utilize AI-based solutions.

3.5.4. Organizational Resilience

Within the context of disruptive technologies, Rodríguez-Espíndola and colleagues [49] highlight the vital role of organizational resilience, which has a positive impact on the behavioral intention to adopt AI. Organizational resilience is crucial for empowering business strategies, establishing preparedness, developing emergency operation plans, responding effectively to unexpected disruptions, and achieving efficient recovery from such incidents [96,97]. Resilient organizations, characterized by their flexibility and adaptability, enjoy a competitive advantage in successfully integrating technologies that are less conventionally adopted. This emphasizes the importance of fostering resilience as a fundamental attribute for organizations aiming to embrace advanced technologies and navigate the ever-changing technological landscape.

3.5.5. Influence of Societal and Organizational Norms

Urbani and colleagues [50] emphasize that societal attitudes toward technology and organizational norms shape the adoption of AI. Specifically, factors such as the influence of colleagues, integration with existing systems, and the establishment of a supportive culture are critical for successful AI adoption.
In line with this, Mahmud and colleagues [46] highlight that societal factors are unique in fostering acceptance of AI algorithms, whereas other elements, such as organizational, environmental, and cultural factors, often contribute to greater algorithm aversion. Societal norms, particularly regarding technology adoption, can act as a catalyst for AI acceptance, while resistance may arise when organizational cultures fail to align with the perceived benefits of AI or when there is a lack of understanding and trust in the technology.
The interplay between societal attitudes and organizational culture is crucial for promoting AI adoption, highlighting the need for alignment between organizational values and technological integration.

3.5.6. Facilitating Conditions

Cao and colleagues [40] highlight the importance of facilitating conditions—a concept introduced by Venkatesh and colleagues [32]—which refer to the degree to which individuals believe that an organizational and technical infrastructure exists to support the use of AI. Such conditions provide an effective mechanism for alleviating managers’ concerns and encourage a balanced consideration of both the benefits and the dark side associated with using AI.

3.5.7. Organizationally Driven Decisions

Basu and colleagues [37] demonstrate that collective, organizationally driven decisions to adopt AI technologies, especially non-robotic ones, tend to result in more favorable outcomes compared to individual-driven initiatives. Factors like corporate investments, training programs, and adaptive intentions play a significant role in effective AI integration.

3.5.8. Cost of Adoption and Return on Investment

The systematic review by Jan and colleagues [38] highlights key challenges impeding the adoption of industrial AI solutions, particularly within technical and organizational domains. Organizational challenges include the high cost of adoption, especially for small and medium-sized enterprises, as well as uncertainties regarding return on investment (ROI), which are significant barriers to implementation. Indeed, for small and medium-sized enterprises, the initial investment in AI and advanced manufacturing technologies can be prohibitive. This gap presents a potential area for future research, especially in exploring how AI adoption can be made more accessible and cost-effective for mid-tier industries.

3.6. External Factors

This thematic category elevates the analysis by shifting focus from the organizational context to External Factors that can influence the acceptance of AI within organizations. These External Factors, such as government involvement and market pressure, play a significant role in facilitating or hindering AI adoption in organizational settings. Below, each of these factors, as examined by Jackson and Allen [43], Rodríguez-Espíndola and colleagues [49], and Phước [51], will be discussed in greater detail.

3.6.1. Government Involvement

The study by Phước [51] emphasizes the significant role of government involvement in the adoption of AI-based solutions. Government engagement is essential for promoting IT innovation, as noted by Wang and colleagues [98]. Governments can implement strategies and supportive policies that encourage the commercialization of new technologies, as well as introduce regulations to guide their development. According to Al-Hawamdeh and Alshaer [99], the adoption of new technologies is a complex process, and the regulatory framework established by the government plays a critical role in this process.

3.6.2. Vendor Partnership

The research conducted by Phước [51] also underscores the impact of vendor partnerships on AI adoption. According to Assael [100], vendor involvement can significantly affect the rate of adoption and diffusion of AI solutions among managers. Vendors typically require a considerable amount of data to train their AI technologies, which often involves sensitive consumer information. Consequently, suppliers must collaborate closely with companies to facilitate AI training both during and after the implementation process.

3.6.3. Regulatory Guidance

Rodríguez-Espíndola and colleagues [49] validated the significant influence of External Factors on managers’ perceptions of technology adoption. Specifically, regulatory guidance can greatly shape the perceived ease of using emerging technologies. Regulatory support provides managers with valuable information about these technologies, enhancing their understanding of their utility and reducing the uncertainty that might otherwise lead to user insecurity.

3.6.4. Market Pressure

Rodríguez-Espíndola and colleagues [49] also emphasize that market pressure, which drives firms to strategically plan and innovate their operations [101,102], significantly influences the perceived usefulness of AI technologies. This pressure compels companies to adopt AI as a means to stay competitive and enhance efficiency. As noted earlier, both perceived ease of use and perceived usefulness are critical facilitators for the intention to adopt AI technologies, aligning with established technology adoption models. These factors collectively shape how firms approach AI integration, responding to external market demands while assessing internal capabilities.

3.6.5. Professional Associations

Jackson and Allen [43] highlight the crucial role professional associations play in assisting organizations to navigate both enablers and barriers to AI adoption, helping their members develop tailored strategies for implementation. Specifically, the study provides evidence of the importance of collaboration between educational institutions, professional bodies, and industry to better prepare future professionals and managers, particularly in fields like accounting, for the technological shifts brought on by AI and other emerging tools.

3.7. Technical and Design Characteristics of AI-Based Technologies

This final section centers on the primary focus of the investigation: the characteristics of AI-based systems themselves, aiming to evaluate which technical and design elements may facilitate acceptance within organizational contexts.
Studies by Jan and colleagues [38], Leyer and Schneider [44], Mahmud and colleagues [46], and Misra and colleagues [48] have examined various algorithmic attributes, including transparency, explainability, approaches to AI integration, and design methodologies. The following discussion will explore each of these characteristics in detail.

3.7.1. Transparency and Explainability

The findings of Mahmud and colleagues [46] revealed a range of factors that contribute to managers’ aversion to algorithms, shedding light on how the design, decision, and delivery of algorithms significantly influence trust and acceptance. Notably, they corroborate the extensive literature on AI indicating that the “black box” nature of algorithmic design—where transparency is lacking—plays a major role in fostering managers’ resistance. Managers tend to distrust algorithms when they cannot understand how decisions are made, craving explanations that are clear, accessible, and interactive. The study’s findings emphasize that increasing transparency, by making algorithms explainable and understandable, can enhance trust and reduce aversion.

3.7.2. Interaction and Control

Moreover, Mahmud and colleagues [46] identified that a system’s capacity for interaction and control is crucial for mitigating aversion among managers. Allowing users to engage with and modify input in response to algorithmic feedback satisfies their desire for control and strengthens their confidence in the system. This design aspect aligns with the findings of Haesevoets and colleagues [42], which emphasize the importance of managers’ need to retain control within a partnership where human primacy and oversight are paramount. By fostering this collaborative environment, organizations can effectively integrate AI into DM processes while addressing managers’ concerns about relinquishing control.

3.7.3. Complexity and Speed of Algorithms

Interestingly, the complexity and speed of algorithms also emerged as crucial factors in the systematic review of Mahmud and colleagues [46]; while people expect algorithms to handle tasks quickly and efficiently, they tend to be wary of complex or slow DM processes, which feel unnatural and lead to decreased reliance.
Misra and colleagues [48] add another layer of complexity by highlighting public-sector managers’ concerns about AI. These managers, while not inherently distrustful of AI, express reservations about its implementation, particularly regarding the complexity of outcomes and the ethical implications of AI usage.
In summary, the perceived complexity and speed of algorithms, along with ethical considerations, play significant roles in shaping attitudes toward AI adoption.

3.7.4. Decision Accuracy and Investment in DM

The study by Mahmud and colleagues [46] also found that decision accuracy plays a significant role. Indeed, when algorithms make errors, especially in simple tasks, trust diminishes rapidly. However, when people see algorithms learning from mistakes, their confidence is restored. Additionally, they highlighted that people are more likely to follow algorithms when they have invested in the process—whether financially or in terms of effort—and when the outlook of the decision points to positive outcomes. The authors also emphasize the importance of the timing of algorithmic errors: errors that occur in the early stages of DM tend to have a more detrimental impact, due to the primacy effect, than those that arise later, which are influenced by the recency effect.

3.7.5. Human-Like Decision Delivery

Lastly, according to Mahmud and colleagues [46], the delivery of decisions significantly impacts algorithm acceptance. The findings suggest that human-like delivery, particularly through oral communication or human-like agents, captures more attention than screen-based presentations. This can be achieved through the anthropomorphic design of interfaces, which can take various forms—ranging from a human-like appearance to the language used, the sounds produced, and the display of liveliness. Examples include voice-based communication, chat avatars, or images of experts. These anthropomorphic features create a sense of social presence, which enhances the perceived relatability and trustworthiness of algorithms, ultimately increasing their acceptance among managers. However, the anthropomorphic design must be user-friendly to obtain the desired effect [85,103,104].

3.7.6. Voluntary/Mandatory Integration of AI Systems into Managerial Roles

Leyer and Schneider [44] discuss how AI tools for DM can either augment or automate decisions, impacting managerial roles based on how these tools are designed and implemented. Specifically, the design and integration of AI systems into managerial roles involve balancing voluntary and mandatory AI usage. Voluntary augmentation enables managers to maintain control over decisions, aligning with Haesevoets and colleagues [42], who emphasize the importance of human primacy in DM. In contrast, mandatory or fully automated systems may shift responsibility and alter power dynamics, potentially reducing managerial influence in the DM process.

3.7.7. Industry-Specific Solutions

According to Jan and colleagues [38], different industries encounter distinct challenges related to data collection, quality control, and the integration of AI and machine learning. Solutions that are effective in one sector may not be easily applicable to others, highlighting the necessity of context-specific strategies. For instance, Alawamleh and colleagues [105] emphasize that the limitations of AI in healthcare can differ significantly from those in manufacturing, necessitating tailored implementation strategies.
In line with this perspective, Jackson and Allen [43] emphasize the importance of customized strategies based on organizational size. Larger organizations are more likely to invest in internal infrastructure and training, while smaller entities may depend on external support, such as cloud services. This tailored approach ensures that the unique needs of each organization are effectively addressed.

3.8. A Comprehensive Framework

Insights from the present SLR support the development of a comprehensive framework for managers’ acceptance of AI in organizational DM, encompassing all the influencing factors identified thus far (see Table 3). This framework is represented in Figure 6 and categorizes the factors into seven major interconnected groups: Managers’ Perceptions of AI, Ethical Factors, Psychological and Individual Factors, Social and Psychosocial Factors, Organizational Factors, External Factors, and Technical and Design Characteristics of AI.
This review suggests that the categories of Technical and Design Characteristics of AI, Managers’ Perceptions of AI, Organizational Factors, and External Factors are primarily associated with facilitators of AI acceptance (marked in Figure 6 with “(+)”).
Conversely, barriers (marked in Figure 6 with “(−)”) are predominantly associated with the categories of Psychological and Individual Factors, Social and Psychosocial Factors, and Ethical Factors.
Attitudes serve as critical mediators between the influencing factors and managers’ intentions to use AI in DM, as supported by the TAM [31]. Behavioral intention, in turn, represents a manager’s readiness to perform a given behavior and is the primary predictor of actual AI use, as supported by the Theory of Planned Behavior (TPB) [106].
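For readers less familiar with these models, the TPB’s core prediction is often summarized as a weighted additive model of behavioral intention (an illustrative formalization; the weights are estimated empirically for each behavior and population rather than fixed by the theory):

$BI \propto w_{A}\,A_{B} + w_{SN}\,SN + w_{PBC}\,PBC$

where $A_{B}$ denotes the attitude toward the behavior (here, using AI in DM), $SN$ the subjective norm, and $PBC$ the perceived behavioral control.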
Trust emerged as another vital mediator: it is influenced by various facilitating elements, such as Managers’ Perceptions of AI (including user satisfaction) and the Technical and Design Characteristics of AI-Based Technologies (particularly transparency and explainability, as well as decision accuracy and investment in DM), and it ultimately shapes managers’ intentions to adopt AI for organizational DM.
The relationships illustrated in the framework, both among the macro-categories and between the specific elements within those categories, highlight the interconnectedness of these aspects and the importance of understanding how facilitators and barriers interact in a complex, multi-perspective context. By grasping this complexity, organizations and practitioners can cultivate a supportive environment that effectively encourages the integration of AI into DM processes. In the following section, we present the key findings and practical implications for implementing AI in the context of organizational DM.

4. Discussion

A crucial insight from this SLR is the recognition that the integration of AI in managerial DM necessitates a holistic and integrated approach that goes beyond mere technological development and significantly considers the human and social components. The identified barriers and facilitators span diverse domains that warrant careful consideration, including Managers’ Perceptions of AI, Ethical Factors, Psychological and Individual Factors, Social and Psychosocial Factors, Organizational Factors, External Factors, and Technical and Design Characteristics of AI.
Echoing Gaudin’s theory of innovation [107], it becomes clear that innovation is driven not merely by technological advancements but also by the evolving behaviors of organizations, which function as living entities. For AI adoption to be successful, organizations must align their missions, behaviors, and processes with emerging technologies [20,108]. In line with this perspective, the results of this review highlight how Organizational Factors play a crucial role in either facilitating or hindering the acceptance and adoption of AI in managerial DM, depending on organizational characteristics, norms, and social context. Among the Organizational Factors that emerged, several key aspects stand out: some exert a direct influence on managers, such as organizational norms [50], while others, like organizational readiness [51] or the level of digital transformation [49], have an indirect impact on managers’ behavioral intentions.
Alongside Organizational Factors, several psychosocial and social dimensions were identified as influencing managers’ intentions to use AI in their strategic DM, such as the need for social interactions, as explored by Booyse and Scheepers [39], or social influence, as identified by Mahmud et al. [46]. In line with the UTAUT, the opinions of colleagues, managers, and influential figures within an organization can shape an individual’s willingness to trust and adopt AI. Lastly, Mahmud and colleagues [47] highlight psychosocial obstacles to AI adoption stemming from perceived deviations from established norms and from stereotypes about technology.
The perceptions of managers regarding AI also play a vital role in facilitating its adoption. The reviewed studies support the TAM by revealing that perceived ease of use and perceived usefulness are critical facilitators. As highlighted by Vărzaru [52], managers who find AI systems user-friendly and beneficial to their tasks are more likely to adopt these technologies. This is reinforced by findings from Cao and colleagues [40], which emphasize that the ease with which AI tools can be integrated into daily tasks significantly influences adoption. However, alongside these positive perceptions, managers’ concerns regarding potential risks associated with AI can impact their willingness to adopt such systems. For instance, Cao and colleagues [40] found that fears of negative outcomes, particularly harmful decisions made by AI, can deter managers from fully embracing AI.
Psychological and Individual Factors, while not exclusively and directly tied to AI technology itself, play a significant role in managers’ acceptance of AI in DM. A recurring theme across the studies is managers’ desire for control in their strategic decisions [44]. This desire for control is especially pronounced in high-stakes DM scenarios, where the implications can be substantial. In these situations, managers may be particularly apprehensive about delegating authority to AI. This aligns with findings from Dietvorst and colleagues [66] and Haesevoets and colleagues [42], who suggest that individuals are more likely to trust AI when they maintain the final say in decisions. Other Psychological and Individual Factors include concerns related to personal well-being and personal development, as highlighted by Cunha and colleagues [41]. Additionally, familiarity with and experience in using AI influence managers’ expectations and trust and mediate the effects of social influence.
Demographic factors and personality traits, such as self-esteem, self-efficacy, internal locus of control, neuroticism, and extraversion, also appear as significant barriers to AI acceptance [46]; moreover, older managers, women, and individuals with lower educational backgrounds may demonstrate lower levels of comfort and acceptance when engaging with AI technologies [46].
The interplay between Individual Factors (such as the desire for control), managers’ perceptions, and Technical Factors related to AI system design is also significant. For instance, the review’s findings emphasize the importance of designing AI-based systems that allow managers to interact with, modify, and oversee AI-generated recommendations [46]. This concept of a collaborative partnership aligns with the theory of double-loop learning [21], which emphasizes that while AI systems can provide rational and competent insights, they should not supplant human intuition and reflexivity. Moreover, transparency, the complexity and speed of algorithms, decision accuracy, and the delivery of decisions also play a vital role. For instance, human-like interfaces, such as voice communication or anthropomorphic designs, can significantly enhance the relatability and acceptance of AI systems [46]. The voluntary use of AI in managerial DM can further facilitate its integration [44]. Additionally, industry-specific solutions are crucial, as different sectors encounter unique challenges in AI adoption, requiring tailored strategies that account for organizational size, context, and specific needs [38,43].
Ethical concerns also significantly impact the intention to use AI, particularly by influencing managers’ perceptions of risks and potential threats associated with its use. Booyse and Scheepers [39] identify three primary ethical issues: the implications of AI making life-or-death decisions (e.g., self-driving cars), the risk of discrimination due to biased data or programming, and the ethical challenges of replacing humans with AI. Similarly, Cunha et al. [41] emphasize that while AI offers numerous organizational advantages, ethical concerns—especially regarding privacy—hinder its broader adoption. Managers facing these issues become less curious about AI systems, which diminishes their willingness to adopt the technology.
Finally, the External Factors identified in the findings act as key facilitators, helping organizations create an environment that supports and encourages AI adoption. Indeed, government involvement plays a key role by promoting innovation through supportive policies and regulations. Vendor partnerships are crucial in facilitating AI implementation, providing essential training and addressing data privacy concerns [51]. Regulatory guidance helps reduce uncertainty, boosting managers’ confidence in AI technologies [49]. Additionally, market pressure pushes companies to adopt AI to remain competitive [101], while professional associations guide organizations through challenges, helping prepare managers for AI integration [43].

Practical Implications

From a practical viewpoint, our exploration revealed two overarching areas of implications—organizational and design implications—that can guide organizations and designers in promoting the adoption of AI-based systems by managers in organizational DM.
Organizational Implications:
The following outlines the key organizational implications that can be derived from the insights gained in this study:
  • Infrastructure and Resource Allocation: A significant takeaway is the importance of organizational readiness. Investing in technical infrastructure, such as data management systems and cloud platforms, along with allocating resources for continuous employee training, is crucial for enhancing AI-related competencies.
  • Foster Human–AI Collaboration: Organizations should promote a culture where AI is viewed as a support system rather than an independent decision maker. This approach enables managers to focus on nuanced decisions while AI handles data-heavy tasks, thereby reducing algorithm aversion and enhancing trust.
  • Change Management Strategies: Effective change management is necessary to challenge preconceived notions about AI. Clear communication and ongoing education can help managers understand the benefits and risks associated with AI, overcoming psychological barriers and embracing technological innovations.
  • Ethical Guidelines and Accountability: Establishing ethical guidelines to govern AI usage is critical. Organizations must address potential biases and privacy concerns by ensuring algorithm transparency and creating accountability mechanisms for contested AI-generated decisions.
  • External Partnerships: Collaborating with AI vendors and regulatory bodies can provide valuable expertise and ensure that organizations remain informed about industry-specific AI solutions and compliance requirements.
  • Context-Specific AI Solutions: Tailoring AI systems to meet the unique needs of an organization and its industry is essential. Industry-specific AI solutions can effectively address sector-specific challenges.
Design Implications:
Below are the main design implications that can be derived from this SLR:
  • Customization and Flexibility: AI systems should be adaptable and customizable based on the specific needs and preferences of individual managers or teams. Customization enhances user satisfaction and supports long-term adoption.
  • Interaction and Control: Resistance to AI adoption often stems from concerns over relinquishing control. Therefore, AI systems must incorporate features that allow for human oversight and intervention, which can mitigate psychological discomfort and foster trust.
  • Ease of Use and User-Friendly Interfaces: The perceived ease of use remains a key facilitator of AI adoption. Designing user-friendly interfaces can reduce cognitive load and increase adoption rates.
  • Explainable AI (XAI): Transparency is a major concern. Implementing explainable AI that provides clear, interpretable outputs can help managers understand DM processes, which is essential for gaining their trust.
  • Human-Like Interaction: Incorporating human-like elements in AI interfaces, such as natural language processing (NLP) for communication or anthropomorphic design, can improve user acceptance and foster a sense of familiarity and comfort, making AI systems more approachable and trustworthy for managers.
Therefore, designing AI systems that are customizable, user-friendly, and transparent is crucial for ensuring that managers feel in control and can trust the decisions supported by AI. By adopting a user-centered design approach, organizations can significantly enhance the effectiveness of AI adoption, tailoring these systems to meet the distinct needs of both the organizations and their users. In summary, the critical insight is that successful AI integration hinges on the adaptability of AI systems to meet the specific needs of users and their contexts and the capacity of organizations to foster an environment of trust and collaboration.

5. Conclusions

This SLR offers a comprehensive analysis of the facilitators and barriers influencing managers’ acceptance of AI-based systems in organizational DM. The review identifies seven thematic categories that significantly shape managers’ attitudes and behaviors toward AI, offering a framework that not only integrates the results but also highlights the interrelationships and connections between the various factors.
The complexity of this framework underscores the necessity of exploring the interrelationships among these various dimensions to enhance our understanding of AI adoption by managers. The examination of the findings makes it evident that AI adoption largely depends on the intentions of managers, which are inextricably linked to their attitudes, shaped by a confluence of factors. Trust emerges as a crucial mediating factor, deeply influenced by both the technical characteristics of an AI system and managers’ perceptions. This interplay is further shaped by ethical considerations. Moreover, Organizational Factors shape managers’ intentions both directly and indirectly, with the broader organizational context and support serving as catalysts for cultivating trust [109,110].
External Factors such as government involvement, regulatory guidelines, and professional associations also play a supportive role in shaping an organization’s willingness to adopt AI technologies.
The review also emphasizes the necessity for fostering human–AI collaboration, advocating for a perspective that positions AI as a supportive tool that augments human DM rather than replacing it, in order to respond to managers’ desire for control in their strategic decisions.
It is crucial to specify that some factors are more actionable, particularly those related to design and organizational aspects, while others, such as Individual Factors, serve more as indicators of where managers may be more inclined to trust AI, or of which aspects to leverage to facilitate that trust. Overall, this complex and integrated perspective moves us away from the notion that technological knowledge alone dominates the construction of these systems. Rather, these systems should be designed with the specificities of the individuals who will adopt them in mind. To foster innovation, it is essential not only to focus on development but also to consider how to facilitate adoption, which in turn requires the acceptance of technological innovations. This is why considering the human and psychological dimensions is essential.
Hence, addressing facilitators and barriers through a holistic approach that intertwines technical considerations with human, social, and organizational factors becomes fundamental. Emphasizing user-centered design and user research could be essential for comprehending both the organizational landscape and the specific needs of managers as users [26,111,112]. In fact, by incorporating user and organizational feedback throughout the design and development of AI-based systems, organizations can ensure that AI services not only meet user requirements but also align with the scope and the internal models of the provider organization [26,111,112]. This allows for a focus on different layers, not only on the external side of technology but also on the study of human reasoning models to shape the “internal side of technologies” [20,113].
In conclusion, this contribution seeks to bridge the existing gap in the literature, where only 9.3% of the articles on AI adoption in DM stem from the psychological and social sciences, which we consider essential. Understanding the significance of human and individual factors, along with the interplay between organizational and social contexts and technological and system design, will be crucial for achieving effective AI adoption in the future of organizations.

Strengths, Limitations of the Study, and Future Work

The primary limitation of this review lies in the small number of included studies (16). This could be attributed to the relatively recent application of AI in the organizational DM context. This limited pool of studies may constrain the generalizability of the findings. Furthermore, the heterogeneity of the included study types precluded a meta-analysis.
As research on this topic progresses, it will also be valuable to examine more specialized DM modalities, such as group DM or multi-actor DM, and to explore how these differ from individual DM in collaboration with AI. Future studies could investigate advanced models like Digital Twins (DTs) powered by large language models (LLMs), which are increasingly helping organizations simulate, analyze, and optimize processes in virtual, risk-free environments [114]. Another promising direction is granular computing with three-way DM, which improves decision accuracy by introducing a “non-commitment” option alongside traditional “accept” and “reject” choices [115].
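As a brief illustration of the three-way idea (our sketch, with placeholder thresholds rather than values from [115]), a decision is accepted, rejected, or deferred depending on where its estimated probability of being acceptable falls relative to two thresholds:

```python
def three_way_decision(p: float, alpha: float = 0.75, beta: float = 0.25) -> str:
    """Three-way decision rule: accept, reject, or defer ("non-commitment").

    p is the estimated probability that the option is acceptable; the
    thresholds satisfy 0 <= beta < alpha <= 1 and are illustrative
    placeholders, not values taken from the reviewed studies.
    """
    if p >= alpha:
        return "accept"
    if p <= beta:
        return "reject"
    return "non-commitment"  # defer and gather more evidence before deciding

print(three_way_decision(0.9))   # accept
print(three_way_decision(0.5))   # non-commitment
print(three_way_decision(0.1))   # reject
```

In the decision-theoretic rough set literature, the two thresholds are typically derived from the relative costs of wrong acceptance, wrong rejection, and deferral.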
Prioritizing these research directions could yield more comprehensive insights, allowing for a deeper consideration of specific contextual differences and technological advancements that meaningfully impact AI integration in complex organizational DM processes.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ai5040123/s1: Table S1 “Quality assessment of the eligible studies using the CASP Systematic Review Checklist, adapted from the critical appraisal skills program”, and Table S2 “Quality assessment of the eligible studies using the CASP for Cross-Sectional Studies Checklist, adapted from the critical appraisal skills program”.

Author Contributions

Conceptualization, S.M. and A.T.; writing—original draft preparation, S.M.; writing—review and editing, S.M. and B.B.; supervision, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131.
2. Wilke, A.; Mata, R. Cognitive Bias. In Encyclopedia of Human Behavior, 2nd ed.; Ramachandran, V., Ed.; Academic Press: San Diego, CA, USA, 2012; pp. 531–535.
3. Gigerenzer, G.; Todd, P.M.; The ABC Research Group. Simple Heuristics That Make Us Smart; Oxford University Press: New York, NY, USA, 1999.
4. Simon, H.A. Rational choice and the structure of the environment. Psychol. Rev. 1956, 63, 129–138.
5. Marocco, S.; Talamo, A. The Contribution of Activity Theory to Modeling Multi-Actor Decision-Making: A Focus on Human Capital Investments. Front. Psychol. 2022, 13, 997062.
6. Sterman, J.D. Modeling managerial behavior: Misperceptions of feedback in a dynamic decision-making experiment. Manag. Sci. 1989, 35, 321–339.
7. Jarrahi, M.H. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision-making. Bus. Horiz. 2018, 61, 577–586.
8. Smith, H.A.; McKeen, J. Enabling cooperation with IT. Commun. AIS 2011, 28, 243–254.
9. Agrawal, A.; Gans, J.; Goldfarb, A. How AI will change the way we make decisions. Harv. Bus. Rev. 2017, 26, 1–5.
10. Brynjolfsson, E.; McAfee, A. The business of artificial intelligence. Harv. Bus. Rev. 2017, 1, 1–31.
11. Nenni, M.E.; De Felice, F.; De Luca, C.; Forcina, A. How Artificial Intelligence Will Transform Project Management in the Age of Digitization: A Systematic Literature Review. Manag. Rev. Q. 2024.
12. Nelson, J. AI in the Boardroom—Fantasy or Reality? Available online: https://cglytics.com/ai-in-the-boardroom-fantasy-or-reality (accessed on 1 August 2024).
13. Bort, J. Amazon’s Warehouse-Worker Tracking System Can Automatically Fire People Without a Human Supervisor’s Involvement; Business Insider, 2019; Available online: https://www.businessinsider.com/amazon-system-automatically-fires-warehouse-workers-time-off-task-2019-4 (accessed on 1 August 2024).
14. Schrage, M. 4 Models for Using AI to Make Decisions. Harv. Bus. Rev. 2017. Available online: https://hbr.org/2017/01/4-models-for-using-ai-to-make-decisions (accessed on 1 August 2024).
15. De Cremer, D. Leadership by Algorithm; Harriman House, 2020; Available online: https://www.perlego.com/book/1527138/leadership-by-algorithm-who-leads-and-who-follows-in-the-ai-era-pdf (accessed on 1 August 2024).
16. Albert, E.T. AI in talent acquisition: A review of AI applications used in recruitment and selection. Strateg. HR Rev. 2019, 18, 215–221.
17. Black, J.S.; van Esch, P. AI-enabled recruiting: What is it and how should a manager use it? Bus. Horiz. 2020, 63, 215–226.
18. Michelotti, M.; McColl, R.; Puncheva-Michelotti, P.; Clarke, R.; McNamara, T. The Effects of Medium and Sequence on Personality Trait Assessments in Face-to-Face and Videoconference Selection Interviews: Implications for HR Analytics. Hum. Resour. Manag. J. 2021, 31, 1025–1062.
19. Feloni, R. Consumer Goods Giant Unilever Has Been Hiring Employees Using Brain Games and Artificial Intelligence and It’s a Huge Success. 2017. Available online: https://www.s4ye.org/node/4137 (accessed on 1 August 2024).
20. Talamo, A.; Marocco, S.; Tricol, C. “The Flow in the Funnel”: Modeling Organizational and Individual Decision-Making for Designing Financial AI-Based Systems. Front. Psychol. 2021, 12, 697101.
21. Argyris, C.; Schon, D. Organizational Learning: A Theory of Action Perspective; Addison-Wesley: Boston, MA, USA, 1978.
22. Dreyfus, H.L.; Dreyfus, S. Peripheral vision: Expertise in real world contexts. Organ. Stud. 2005, 26, 779–792.
23. Kahneman, D. Thinking, Fast and Slow; Allen Lane: London, UK, 2011.
24. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. J. Exp. Psychol. Gen. 2015, 144, 114–126.
25. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551.
26. Marocco, S.; Talamo, A.; Quintiliani, F. Applying Design Thinking to Develop AI-Based Multi-Actor Decision-Support Systems: A Case Study on Human Capital Investments. Appl. Sci. 2024, 14, 5613.
27. Pyle, M.A.; Smith, A.N.; Chevtchouk, Y. In eWOM We Trust: Using Naïve Theories To Understand Consumer Trust in a Complex eWOM Marketspace. J. Bus. Res. 2021, 122, 145–158.
28. Sharma, M.; Kaushal, D.; Joshi, S.; Kumar, A.; Luthra, S. Electronic Waste Disposal Behavioral Intention of Millennials: A Moderating Role of Electronic Word of Mouth (eWOM) and Perceived Usage of Online Collection Portal. J. Clean. Prod. 2024, 447, 141121.
29. Floridi, L.; Taddeo, M. What is data ethics? Philos. Trans. R. Soc. A 2016, 374, 20160360.
30. Trocin, C.; Våge Hovland, I.; Mikalef, P.; Dremel, C. How Artificial Intelligence affords digital innovation: A cross-case analysis of Scandinavian companies. Technol. Forecast. Soc. Chang. 2021, 173, 121081.
31. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340.
32. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178.
33. Liang, H.; Xue, Y. Avoidance of information technology threats: A theoretical perspective. MIS Q. 2009, 33, 71–90.
34. Tranfield, D.; Denyer, D.; Smart, P. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br. J. Manag. 2003, 14, 207–222.
35. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Moher, D. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
36. Rousseau, D.M.; Manning, J.; Denyer, D. Evidence in Management and Organizational Science: Assembling the Field’s Full Weight of Scientific Knowledge through Syntheses. Acad. Manag. Ann. 2008, 2, 475–515.
37. Basu, S.; Majumdar, B.; Mukherjee, K.; Munjal, S.; Palaksha, C. Artificial Intelligence–HRM Interactions and Outcomes: A Systematic Review and Causal Configurational Explanation. Hum. Resour. Manag. Rev. 2023, 33, 100893.
38. Jan, Z.; Ahamed, F.; Mayer, W.; Patel, N.; Grossmann, G.; Stumptner, M.; Kuusk, A. Artificial intelligence for industry 4.0: Systematic review of applications, challenges, and opportunities. Expert Syst. Appl. 2023, 216, 119456.
39. Booyse, D.; Scheepers, C.B. Barriers to adopting automated organizational decision-making through the use of artificial intelligence. Manag. Res. Rev. 2024, 47, 64–85.
40. Cao, G.; Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation 2021, 106, 102312.
41. Cunha, S.L.; da Costa, R.L.; Gonçalves, R.; Pereira, L.; Dias, Á.; da Silva, R.V. Smart systems adoption in management. Int. J. Bus. Syst. Res. 2023, 17, 703–727.
42. Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-machine collaboration in managerial decision making. Comput. Hum. Behav. 2021, 119, 106730.
43. Jackson, D.; Allen, C. Enablers, barriers and strategies for adopting new technology in accounting. Int. J. Account. Inf. Syst. 2024, 52, 100666.
44. Leyer, M.; Schneider, S. Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Bus. Horiz. 2021, 64, 711–724.
45. Lada, S.; Chekima, B.; Karim, M.R.A.; Fabeil, N.F.; Ayub, M.S.; Amirul, S.M.; Ansar, R.; Bouteraa, M.; Fook, L.M.; Zaki, H.O. Determining factors related to artificial intelligence (AI) adoption among Malaysia’s small and medium-sized businesses. J. Open Innov. Technol. Mark. Complex. 2023, 9, 100144.
46. Mahmud, H.; Islam, A.K.M.N.; Ahmed, S.I.; Smolander, K. What Influences Algorithmic Decision-Making? A Systematic Literature Review on Algorithm Aversion. Technol. Forecast. Soc. Chang. 2022, 175, 121390.
47. Mahmud, H.; Islam, A.K.M.N.; Mitra, R.K. What Drives Managers Towards Algorithm Aversion and How to Overcome It? Mitigating the Impact of Innovation Resistance through Technology Readiness. Technol. Forecast. Soc. Chang. 2023, 193, 122641.
48. Misra, S.; Katz, B.; Roberts, P.; Carney, M.; Valdivia, I. Toward a Person-Environment Fit Framework for Artificial Intelligence Implementation in the Public Sector. Gov. Inf. Q. 2024, 41, 101962.
49. Rodríguez-Espíndola, O.; Chowdhury, S.; Dey, P.K.; Albores, P.; Emrouznejad, A. Analysis of the Adoption of Emergent Technologies for Risk Management in the Era of Digital Manufacturing. Technol. Forecast. Soc. Chang. 2022, 178, 121562.
50. Urbani, R.; Ferreira, C.; Lam, J. Managerial framework for evaluating AI chatbot integration: Bridging organizational readiness and technological challenges. Bus. Horiz. 2024, 67, 595–606.
51. Phuoc, N.V. The Critical Factors Impacting Artificial Intelligence Applications Adoption in Vietnam: A Structural Equation Modeling Analysis. Economies 2022, 10, 129.
52. Vărzaru, A.A. Assessing Artificial Intelligence Technology Acceptance in Managerial Accounting. Electronics 2022, 11, 2256.
53. Critical Appraisal Skills Programme. CASP Checklist. 2018. Available online: https://casp-uk.net/casp-tools-checklists/ (accessed on 1 August 2024).
54. Chen, Y.; Zahedi, F.M. Individuals’ internet security perceptions and behaviors: Polycontextual contrasts between the United States and China. MIS Q. 2016, 40, 205–222.
55. Liang, H.; Xue, Y. Understanding security behaviors in personal computer usage: A threat avoidance perspective. J. Assoc. Inf. Syst. 2010, 11, 394–413.
56. Laukkanen, T.; Sinkkonen, S.; Kivijärvi, M.; Laukkanen, P. Innovation resistance among mature consumers. J. Consum. Mark. 2007, 24, 419–427.
57. Molesworth, M.; Suortti, J.-P. Buying Cars Online: The Adoption of the Web for High-Involvement, High-Cost Purchases. J. Consum. Behav. 2002, 2, 155–168.
58. Ram, S.; Sheth, J.N. Consumer Resistance to Innovations: The Marketing Problem and Its Solutions. J. Consum. Mark. 1989, 6, 5.
59. Amini, L.; Chen, C.-H.; Cox, D.; Oliva, A.; Torralba, A. Experiences and insights for collaborative industry-academic research in artificial intelligence. AI Mag. 2020, 41, 70–81.
60. Atkinson, R. Don’t Fear AI; European Investment Bank, 2019; Available online: https://www.eib.org/en/publications/eib-big-ideas-dont-fear-ai (accessed on 1 August 2024).
61. Liu, X.; Zhao, M.; Li, S.; Zhang, F.; Trappe, W. A security framework for the internet of things in the future internet architecture. Future Internet 2017, 9, 27.
62. Simon, J.P. Artificial intelligence: Scope, players, markets and geography. Digit. Policy Regul. Gov. 2019, 21, 208–237.
63. Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; Kraus, S.; et al. Artificial Intelligence and Life in 2030, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel; Stanford University: Stanford, CA, USA, 2016; Available online: http://ai100.stanford.edu/2016-report (accessed on 1 September 2020).
64. Wasilow, S.; Thorpe, J.B. Artificial intelligence, robotics, ethics, and the military: A Canadian perspective. AI Mag. 2019, 40, 37–48.
65. Bigman, Y.E.; Gray, K. People are averse to machines making moral decisions. Cognition 2018, 181, 21–34.
66. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Manag. Sci. 2018, 64, 1155–1170.
67. Araujo, T.; Helberger, N.; Kruikemeier, S.; de Vreese, C.H. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 2020, 35, 611–623.
68. Thurman, N.; Moeller, J.; Helberger, N.; Trilling, D. My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digit. Journal. 2019, 7, 447–469.
69. Ho, G.; Wheatley, D.; Scialfa, C.T. Age differences in trust and reliance of a medication management system. Interact. Comput. 2005, 17, 690–710.
70. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
71. Agogo, D.; Hess, T.J. How does tech make you feel? A review and examination of negative affective responses to technology use. Eur. J. Inf. Syst. 2018, 27, 570–599.
72. Brougham, D.; Haar, J. Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees’ perceptions of our future workplace. J. Manag. Organ. 2018, 24, 239–257.
73. Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial intelligence for decision making in the era of Big Data—Evolution, challenges and research agenda. Int. J. Inform. Manag. 2019, 48, 63–71.
74. Edwards, J.S.; Duan, Y.; Robins, P.C. An analysis of expert systems for supplier evaluation and selection. Comput. Ind. 2001, 44, 37–52.
75. Fenneman, A.; Sickmann, J.; Pitz, T.; Sanfey, A.G. Two distinct and separable processes underlie individual differences in algorithm adherence: Differences in predictions and differences in trust thresholds. PLoS ONE 2021, 16, e0247084.
76. Feng, X.; Gao, J. Is optimal recommendation the best? A laboratory investigation under the newsvendor problem. Decis. Support Syst. 2020, 131, 113251.
77. Dijkstra, J.J. User agreement with incorrect expert system advice. Behav. Inform. Technol. 1999, 18, 399–411.
78. Yuviler-Gavish, N.; Gopher, D. Effect of descriptive information and experience on automation reliance. Hum. Factors 2011, 53, 230–244.
79. DeSanctis, G.; Poole, M.S. Capturing the complexity in advanced technology use: Adaptive structuration theory. Organ. Sci. 1994, 5, 121–147.
80. Workman, M. Expert decision support system use, disuse, and misuse: A study using the theory of planned behavior. Comput. Hum. Behav. 2005, 21, 211–231.
81. Arkes, H.R.; Shaffer, V.A.; Medow, M.A. Patients derogate physicians who use a computer-assisted diagnostic aid. Med. Decis. Mak. 2007, 27, 189–202.
82. Diab, D.L.; Pui, S.Y.; Yankelevich, M.; Highhouse, S. Lay perceptions of selection decision aids in US and non-US samples. Int. J. Sel. Assess. 2011, 19, 209–216.
83. Eastwood, J.; Snook, B.; Luther, K. What people want from their professionals: Attitudes toward decision-making strategies. J. Behav. Decis. Mak. 2012, 25, 458–468.
84. Alexander, V.; Blinder, C.; Zak, P.J. Why trust an algorithm? Performance, cognition, and neurophysiology. Comput. Hum. Behav. 2018, 89, 279–288.
85. Zhang, L.; Pentina, I.; Fan, Y. Who do you choose? Comparing perceptions of human vs robo-advisor in the context of financial services. J. Serv. Mark. 2021, 35, 634–646.
86. John, A.; Klein, J. The boycott puzzle: Consumer motivations for purchase sacrifice. Manag. Sci. 2003, 49, 1196–1209.
87. Gupta, A.; Arora, N. Understanding determinants and barriers of mobile shopping adoption using behavioral reasoning theory. J. Retail. Consum. Serv. 2017, 36, 1–7.
88. Leong, L.Y.; Hew, T.S.; Ooi, K.B.; Wei, J. Predicting mobile wallet resistance: A two-staged structural equation modeling-artificial neural network approach. Int. J. Inf. Manag. 2020, 51, 102047.
89. Ma, L.; Lee, C.S. Understanding the Barriers to the Use of MOOCs in a Developing Country: An Innovation Resistance Perspective. J. Educ. Comput. Res. 2017.
90. Moorthy, K.; Suet Ling, C.; Weng Fatt, Y.; Mun Yee, C.; Ket Yin, E.C.; Sin Yee, K.; Kok Wei, L. Barriers of Mobile Commerce Adoption Intention: Perceptions of Generation X in Malaysia. J. Theor. Appl. Electron. Commer. Res. 2017, 12, 37–53.
91. Lourenço, C.J.S.; Dellaert, B.G.C.; Donkers, B. Whose Algorithm Says So: The Relationships between Type of Firm, Perceptions of Trust and Expertise, and the Acceptance of Financial Robo-advice. J. Interact. Mark. 2020, 49, 107–124.
92. Sanders, N.R.; Manrodt, K.B. The Efficacy of Using Judgmental versus Quantitative Forecasting Methods in Practice. Omega 2003, 31, 511–522.
93. Dietvorst, B.J.; Bharti, S. People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychol. Sci. 2020, 31, 1302–1314.
94. He, Q.; Meadows, M.; Angwin, D.; Gomes, E.; Child, J. Strategic alliance research in the era of digital transformation: Perspectives on future research. Br. J. Manag. 2020, 31, 589–617.
95. Geroski, P.A. Models of technology diffusion. Res. Policy 2000, 29, 603–625.
96. Macdonald, J.R.; Zobel, C.W.; Melnyk, S.A.; Griffis, S.E. Supply Chain Risk and Resilience: Theory Building through Structured Experiments and Simulation. Int. J. Prod. Res. 2018, 56, 4337–4355.
97. Sheffi, Y. The Resilient Enterprise: Overcoming Vulnerability for Competitive Advantage, 1st Paperback ed.; MIT Press: Cambridge, MA, USA, 2007.
98. Wang, H.; Hu, X.; Ali, N. Spatial Characteristics and Driving Factors Toward the Digital Economy: Evidence from Prefecture-Level Cities in China. J. Asian Financ. 2022, 9, 419–426.
99. Al-Hawamdeh, M.M.; Alshaer, S.A. Artificial Intelligence Applications as a Modern Trend to Achieve Organizational Innovation in Jordanian Commercial Banks. J. Asian Financ. 2022, 9, 257–263.
100. Assael, H. Consumer Behavior and Marketing Action; Kent Publishing Company: Boston, MA, USA, 1995.
101. Paulraj, A.; Chen, I.J. Environmental Uncertainty and Strategic Supply Management: A Resource Dependence Perspective and Performance Implications. J. Supply Chain Manag. 2007, 43, 29–42.
102. Thanki, S.; Thakkar, J. A quantitative framework for lean and green assessment of supply chain performance. Int. J. Prod. Perform. Manag. 2018, 67, 366–400.
103. Qiu, L.; Benbasat, I. Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. J. Manag. Inf. Syst. 2008, 25, 145–182.
104. Li, Z.; Rau, P.L.P.; Huang, D. Who should provide clothing recommendation services: Artificial intelligence or human experts? J. Inf. Technol. Res. 2020, 13, 113–125.
105. Alawamleh, M.; Shammas, N.; Alawamleh, K.; Bani Ismail, L. Examining the limitations of AI in business and the need for human insights using Interpretive Structural Modelling. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100338.
106. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211.
107. Gaudin, T. L’écoute des Silences; Union Générale d’Éditions: Paris, France, 1978.
108. Talamo, A.; Recupero, A.; Mellini, B.; Ventura, S. Teachers as designers of GBL scenarios: Fostering creativity in the educational settings. Interact. Des. Archit. J. 2016, 29, 10–23.
109. Farnese, M.L.; Benevene, P.; Barbieri, B. Learning to trust in social enterprises: The contribution of organisational culture to trust dynamics. J. Trust Res. 2022, 12, 153–178.
110. Bonaiuto, F.; Fantinelli, S.; Milani, A.; Cortini, M.; Vitiello, M.C.; Bonaiuto, M. Perceived Organizational Support and Work Engagement: The Role of Psychosocial Variables. J. Workplace Learn. 2022, 34, 418–436.
111. Marocco, S.; Marini, M.; Talamo, A. Enhancing Organizational Processes for Service Innovation: Strategic Organizational Counseling and Organizational Network Analysis. Front. Res. Metr. Anal. 2024, 9, 1270501.
112. Marocco, S.; Talamo, A.; Quintiliani, F. From Service Design Thinking to the Third Generation of Activity Theory: A New Model for Designing AI-Based Decision-Support Systems. Front. Artif. Intell. 2024, 7, 1303691.
113. Talamo, A.; Giorgi, S.; Mellini, B. Designing technologies for ageing: Is simplicity always a leading criterion? In Proceedings of the 9th ACM SIGCHI Italian Chapter International Conference on Computer-Human Interaction: Facing Complexity, Alghero, Italy, 13–16 September 2011; ACM: Alghero, Italy, 2011; pp. 33–36.
114. Sun, Y.; Zhang, Q.; Bao, J.; Lu, Y.; Liu, S. Empowering Digital Twins with Large Language Models for Global Temporal Feature Learning. J. Manuf. Syst. 2024, 74, 83–99.
115. Kong, Q.; Zhang, X.; Xu, W.; Long, B. A Novel Granular Computing Model Based on Three-Way Decision. Int. J. Approx. Reason. 2022, 144, 92–112.
Figure 1. Yearly distribution of documents on “Artificial Intelligence” or “AI” and “decision-making” based on research trend analysis of Scopus.
Figure 2. Analysis of the documents by subject area conducted via Scopus.
Figure 3. PRISMA flowchart showing the selection process of the articles.
Figure 4. Distribution of the reviewed articles by year.
Figure 5. Distribution of the reviewed articles across the countries.
Figure 6. Comprehensive framework for AI acceptance in organizational DM.
Table 1. Inclusion and exclusion criteria.

| | Inclusion Criteria | Exclusion Criteria |
|---|---|---|
| Language | English | Non-English |
| Publication Type | Article, Review | Conference Review, Conference Paper, Book Chapter, Book |
| Time Frame | 2010–2024 | <2010 |
| Focus | Studies that investigate factors affecting adoption and usage of AI in DM by managers | Studies that do not address factors affecting adoption and usage of AI in DM by managers |
| | Studies that focus on the application of AI in the organizational context | Studies not applied to the organizational context |
Table 2. General information about the records included.

| Authors | Year | Article Type |
|---|---|---|
| Basu, S., Majumdar, B., Mukherjee, K., Munjal, S., Palaksha, C. | 2023 | Systematic Review |
| Booyse, D., Scheepers, C.B. | 2024 | Quantitative Research Article |
| Cao, G., Duan, Y., Edwards, J.S., Dwivedi, Y.K. | 2021 | Quantitative Research Article |
| Cunha, S.L., da Costa, R.L., Gonçalves, R., Pereira, L., Dias, Á., da Silva, R.V. | 2023 | Quantitative Research Article |
| Haesevoets, T., De Cremer, D., Dierckx, K., Van Hiel, A. | 2021 | Quantitative Research Article |
| Jackson, D., Allen, C. | 2024 | Quantitative Research Article |
| Jan, Z., Ahamed, F., Mayer, W., Patel, N., Grossmann, G., Stumptner, M., Kuusk, A. | 2023 | Systematic Review |
| Lada, S., Chekima, B., Karim, M.R.A., Fabeil, N.F., Ayub, M.S., Amirul, S.M., Ansar, R., Bouteraa, M., Fook, L.M., Zaki, H.O. | 2023 | Quantitative Research Article |
| Leyer, M., Schneider, S. | 2021 | Quantitative Research Article |
| Mahmud, H., Islam, A.K.M.N., Ahmed, S.I., Smolander, K. | 2022 | Systematic Review |
| Mahmud, H., Islam, A.K.M.N., Mitra, R.K. | 2023 | Quantitative Research Article |
| Misra, S., Katz, B., Roberts, P., Carney, M., Valdivia, I. | 2024 | Quantitative Research Article |
| Rodríguez-Espíndola, O., Chowdhury, S., Dey, P.K., Albores, P., Emrouznejad, A. | 2022 | Quantitative Research Article |
| Urbani, R., Ferreira, C., Lam, J. | 2024 | Theoretical Research Article |
| Van Phước, N. | 2022 | Quantitative Research Article |
| Vărzaru, A.A. | 2022 | Quantitative Research Article |
Table 3. Overview of the categories of factors influencing managers’ acceptance of AI within the included studies.

| Categories of Factors | Studies | Facilitators | Barriers |
|---|---|---|---|
| Managers’ Perceptions of AI | Cao et al. [40] | Performance Expectancy; Effort Expectancy | Perceived Threat, Severity, and Susceptibility |
| | Cunha et al. [41] | Familiarity; Perceived Benefits | |
| | Leyer and Schneider [44] | Perceived Adaptability | |
| | Mahmud et al. [46] | Perceived Nature of the Task; Familiarity with AI | Perceived Nature of the Task |
| | Mahmud et al. [47] | Perceived Value | |
| | Vărzaru [52] | Perceived Ease of Use; Perceived Usefulness; User Satisfaction | |
| Ethical Factors | Booyse and Scheepers [39] | | Making Life-or-Death Decisions; Potential Discrimination; The Risk of Human Replacement with Machines |
| | Cunha et al. [41] | | Violation of Ethical and Privacy Issues |
| Psychological and Individual Factors | Cao et al. [40] | | Personal Well-Being Concern; Personal Development Concern |
| | Cunha et al. [41] | Familiarity with AI | |
| | Haesevoets et al. [42] | | Desire for Human Primacy |
| | Leyer and Schneider [44] | | Overconfidence and Desire for Control |
| | Mahmud et al. [46] | | Personality Traits (Self-Esteem; Self-Efficacy; Internal Locus of Control; Neuroticism; Extraversion); Demography (Older People; Women; Lower Education) |
| Social and Psychosocial Factors | Booyse and Scheepers [39] | | The Need for Social Interactions |
| | Mahmud et al. [46] | Social Influence | Social Influence |
| | Mahmud et al. [47] | | Tradition and Image Barriers |
| Organizational Factors | Basu et al. [37] | Organizationally Driven Decisions | |
| | Cao et al. [40] | Facilitating Conditions | |
| | Jan et al. [38] | Cost of Adoption and Return on Investment | |
| | Lada et al. [45] | Organizational Readiness | |
| | Mahmud et al. [46] | Societal and Organizational Norms; Type of Organization | |
| | Rodríguez-Espíndola et al. [49] | Organizational Resilience; Level of Digital Transformation (High) | |
| | Phước [51] | Organizational Readiness | |
| External Factors | Jackson and Allen [43] | Professional Associations | |
| | Rodríguez-Espíndola et al. [49] | Regulatory Guidance; Market Pressure | |
| | Phước [51] | Government Involvement; Vendor Partnership | |
| Technical and Design Characteristics of AI-Based Technologies | Jackson and Allen [43] | Industry-Specific Solutions | |
| | Jan et al. [38] | Industry-Specific Solutions | |
| | Leyer and Schneider [44] | Voluntary Integration of AI | Mandatory Integration of AI |
| | Mahmud et al. [46] | Transparency and Explainability; Interaction and Control; Speed of Algorithms; Decision Accuracy and Investment in DM; Human-Like Decision Delivery | Complexity of Algorithms |
| | Misra et al. [48] | | Complexity of Outcomes |