Exploring Facilitators and Barriers to Managers’ Adoption of AI-Based Systems in Decision Making: A Systematic Review
Figure 1. Yearly distribution of documents on “Artificial Intelligence” or “AI” and “decision-making” based on research trend analysis of Scopus.
Figure 2. Analysis of the documents by subject area conducted via Scopus.
Figure 3. PRISMA flowchart showing the selection process of the articles.
Figure 4. Distribution of the reviewed articles by year.
Figure 5. Distribution of the reviewed articles across the countries.
Figure 6. Comprehensive framework for AI acceptance in organizational DM.
Abstract
1. Introduction
2. Material and Methods
2.1. Source of Information and Search Strategy
2.2. Research Purpose and Question Formulation
2.3. Extensive Identification of Relevant Research
Inclusion and Exclusion Criteria
2.4. Data Extraction and Selection
Characteristics of Included Studies
2.5. Quality Assessment of Included Studies
2.6. Organizing and Interpreting
- Managers’ Perceptions of AI;
- Psychological and Individual Factors;
- Ethical Factors;
- Psychosocial and Social Factors;
- Organizational Factors;
- External Factors;
- Technical and Design Characteristics of AI-Based Technologies.
3. Results
3.1. Synthesis and Elaboration of the Organizing Framework
3.1.1. Managers’ Perceptions of AI
3.1.2. Perceived Ease of Use, Perceived Usefulness, and Effort Expectancy
3.1.3. User Satisfaction
3.1.4. Perceived Threat, Severity, and Susceptibility
3.1.5. Perceived Adaptability
3.1.6. Performance Expectancy and Perceived Benefits
3.1.7. Perceived Value
3.1.8. Perceived Nature of the Task
3.2. Ethical Factors
3.2.1. Making Life-or-Death Decisions, Potential Discrimination, and the Risk of Human Replacement with Machines
3.2.2. Violation of Ethical and Privacy Issues
3.3. Psychological and Individual Factors
3.3.1. Overconfidence, Desire for Control, and Desire for Human Primacy
3.3.2. Personality Traits
3.3.3. Demography
3.3.4. Personal Well-Being and Personal Development Concerns
3.3.5. Familiarity with AI
3.4. Psychosocial and Social Factors
3.4.1. The Need for Social Interactions
3.4.2. Social Influence
3.4.3. Tradition and Image Barriers
3.5. Organizational Factors
3.5.1. Type of Organization
3.5.2. Organizational Readiness
3.5.3. Level of Digital Transformation
3.5.4. Organizational Resilience
3.5.5. Influence of Societal and Organizational Norms
3.5.6. Facilitating Conditions
3.5.7. Organizationally Driven Decisions
3.5.8. Cost of Adoption and Return on Investment
3.6. External Factors
3.6.1. Government Involvement
3.6.2. Vendor Partnership
3.6.3. Regulatory Guidance
3.6.4. Market Pressure
3.6.5. Professional Associations
3.7. Technical and Design Characteristics of AI-Based Technologies
3.7.1. Transparency and Explainability
3.7.2. Interaction and Control
3.7.3. Complexity and Speed of Algorithms
3.7.4. Decision Accuracy and Investment in DM
3.7.5. Human-Like Decision Delivery
3.7.6. Voluntary/Mandatory Integration of AI Systems into Managerial Roles
3.7.7. Industry-Specific Solutions
3.8. A Comprehensive Framework
4. Discussion
Practical Implications
- Infrastructure and Resource Allocation: A significant takeaway is the importance of organizational readiness. Investing in technical infrastructure, such as data management systems and cloud platforms, along with allocating resources for continuous employee training, is crucial for enhancing AI-related competencies.
- Foster Human–AI Collaboration: Organizations should promote a culture where AI is viewed as a support system rather than an independent decision maker. This approach enables managers to focus on nuanced decisions while AI handles data-heavy tasks, thereby reducing algorithm aversion and enhancing trust.
- Change Management Strategies: Effective change management is necessary to challenge preconceived notions about AI. Clear communication and ongoing education can help managers understand the benefits and risks associated with AI, overcome psychological barriers, and embrace technological innovations.
- Ethical Guidelines and Accountability: Establishing ethical guidelines to govern AI usage is critical. Organizations must address potential biases and privacy concerns by ensuring algorithm transparency and creating accountability mechanisms for contested AI-generated decisions.
- External Partnerships: Collaborating with AI vendors and regulatory bodies can provide valuable expertise and ensure that organizations remain informed about industry-specific AI solutions and compliance requirements.
- Context-Specific AI Solutions: Tailoring AI systems to meet the unique needs of an organization and its industry is essential. Industry-specific AI solutions can effectively address sector-specific challenges.
- Customization and Flexibility: AI systems should be adaptable and customizable based on the specific needs and preferences of individual managers or teams. Customization enhances user satisfaction and supports long-term adoption.
- Interaction and Control: Resistance to AI adoption often stems from concerns over relinquishing control. Therefore, AI systems must incorporate features that allow for human oversight and intervention, which can mitigate psychological discomfort and foster trust (a minimal human-in-the-loop sketch follows this list).
- Ease of Use and User-Friendly Interfaces: The perceived ease of use remains a key facilitator of AI adoption. Designing user-friendly interfaces can reduce cognitive load and increase adoption rates.
- Explainable AI (XAI): Transparency is a major concern. Implementing explainable AI that provides clear, interpretable outputs can help managers understand DM processes, which is essential for gaining their trust (an illustrative explainability sketch follows this list).
- Human-Like Interaction: Incorporating human-like elements in AI interfaces, such as natural language processing (NLP) for communication or anthropomorphic design, can improve user acceptance and foster a sense of familiarity and comfort, making AI systems more approachable and trustworthy for managers.
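To make the interaction-and-control point above concrete, the sketch below (a minimal, hypothetical Python example; the Recommendation fields, threshold, and labels are illustrative and not drawn from any reviewed study) shows one way a decision-support system can keep managers in the loop by escalating low-confidence or high-stakes recommendations for human review:

```python
# Hypothetical human-in-the-loop gate: the AI proposes, but low-confidence or
# high-stakes cases are escalated to a manager who retains the final say.
from dataclasses import dataclass


@dataclass
class Recommendation:
    decision: str              # the action proposed by the AI system
    confidence: float          # model's probability for the proposed action
    high_stakes: bool = False  # e.g., hiring, termination, large budget shifts


def route_decision(rec: Recommendation, confidence_threshold: float = 0.85) -> str:
    """Return 'auto' if the AI may act alone, otherwise escalate to a manager."""
    if rec.high_stakes or rec.confidence < confidence_threshold:
        return "human_review"  # manager oversight and the possibility to override
    return "auto"


# A routine, confident recommendation is automated; a high-stakes one is not.
print(route_decision(Recommendation("approve_routine_purchase", 0.92)))  # auto
print(route_decision(Recommendation("terminate_contract", 0.95, True)))  # human_review
```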
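Similarly, for the explainability point, the following sketch (assuming a standard scikit-learn setup; the model, data, and feature names are placeholders) illustrates how model-agnostic permutation importance can expose which inputs drive a recommendation, giving managers the kind of interpretable output that supports trust:

```python
# Illustrative explainability sketch: rank the inputs that drive a
# decision-support model's predictions using permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for an organizational dataset (e.g., screening or budgeting scores).
X, y = make_classification(n_samples=500, n_features=6, random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=42).fit(X, y)

# Permutation importance is model-agnostic: shuffling one feature at a time
# shows how much each input contributes to the model's predictive performance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=42)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In practice, such rankings would be surfaced in the manager-facing interface alongside each recommendation rather than printed to a console.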
5. Conclusions
Strengths, Limitations of the Study, and Future Work
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef] [PubMed]
- Wilke, A.; Mata, R. Cognitive Bias. In Encyclopedia of Human Behavior, 2nd ed.; Ramachandran, V., Ed.; Academic Press: San Diego, CA, USA, 2012; pp. 531–535. [Google Scholar]
- Gigerenzer, G.; Todd, P.M.; The ABC Research Group. Simple Heuristics That Make Us Smart; Oxford University Press: New York, NY, USA, 1999. [Google Scholar]
- Simon, H.A. Rational choice and the structure of the environment. Psychol. Rev. 1956, 63, 129–138. [Google Scholar] [CrossRef] [PubMed]
- Marocco, S.; Talamo, A. The Contribution of Activity Theory to Modeling Multi-Actor Decision-Making: A Focus on Human Capital Investments. Front. Psychol. 2022, 13, 997062. [Google Scholar] [CrossRef]
- Sterman, J.D. Modeling managerial behavior: Misperceptions of feedback in a dynamic decision-making experiment. Manag. Sci. 1989, 35, 321–339. [Google Scholar] [CrossRef]
- Jarrahi, M.H. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision-making. Bus. Horiz. 2018, 61, 577–586. [Google Scholar] [CrossRef]
- Smith, H.A.; McKeen, J. Enabling cooperation with IT. Commun. AIS 2011, 28, 243–254. [Google Scholar]
- Agrawal, A.; Gans, J.; Goldfarb, A. How AI will change the way we make decisions. Harv. Bus. Rev. 2017, 26, 1–5. [Google Scholar]
- Brynjolfsson, E.; McAfee, A. The business of artificial intelligence. Harv. Bus. Rev. 2017, 1, 1–31. [Google Scholar]
- Nenni, M.E.; De Felice, F.; De Luca, C.; Forcina, A. How Artificial Intelligence Will Transform Project Management in the Age of Digitization: A Systematic Literature Review. Manag. Rev. Q. 2024. [Google Scholar] [CrossRef]
- Nelson, J. AI in the Boardroom—Fantasy or Reality? Available online: https://cglytics.com/ai-in-the-boardroom-fantasy-or-reality (accessed on 1 August 2024).
- Bort, J. Amazon’s Warehouse-Worker Tracking System Can Automatically Fire People Without a Human Supervisor’s Involvement; Business Insider, 2019; Available online: https://www.businessinsider.com/amazon-system-automatically-fires-warehouse-workers-time-off-task-2019-4 (accessed on 1 August 2024).
- Schrage, M. 4 Models for Using AI to Make Decisions. Harv. Bus. Rev. 2017. Available online: https://hbr.org/2017/01/4-models-for-using-ai-to-make-decisions (accessed on 1 August 2024).
- De Cremer, D. Leadership by Algorithm; Harriman House, 2020; Available online: https://www.perlego.com/book/1527138/leadership-by-algorithm-who-leads-and-who-follows-in-the-ai-era-pdf (accessed on 1 August 2024).
- Albert, E.T. AI in talent acquisition: A review of AI applications used in recruitment and selection. Strateg. HR Rev. 2019, 18, 215–221. [Google Scholar] [CrossRef]
- Black, J.S.; van Esch, P. AI-enabled recruiting: What is it and how should a manager use it? Bus. Horiz. 2020, 63, 215–226. [Google Scholar] [CrossRef]
- Michelotti, M.; McColl, R.; Puncheva-Michelotti, P.; Clarke, R.; McNamara, T. The Effects of Medium and Sequence on Personality Trait Assessments in Face-to-Face and Videoconference Selection Interviews: Implications for HR Analytics. Hum. Resour. Manag. J. 2021, 31, 1025–1062. [Google Scholar] [CrossRef]
- Feloni, R. Consumer Goods Giant Unilever Has Been Hiring Employees Using Brain Games and Artificial Intelligence and It’s a Huge Success. 2017. Available online: https://www.s4ye.org/node/4137 (accessed on 1 August 2024).
- Talamo, A.; Marocco, S.; Tricol, C. “The Flow in the Funnel”: Modeling Organizational and Individual Decision-Making for Designing Financial AI-Based Systems. Front. Psychol. 2021, 12, 697101. [Google Scholar] [CrossRef]
- Argyris, C.; Schon, D. Organizational Learning: A Theory of Action Perspective; Addison-Wesley: Boston, MA, USA, 1978. [Google Scholar]
- Dreyfus, H.L.; Dreyfus, S. Peripheral vision: Expertise in real world contexts. Organ. Stud. 2005, 26, 779–792. [Google Scholar] [CrossRef]
- Kahneman, D. Thinking, Fast and Slow; Allen Lane: London, UK, 2011. [Google Scholar]
- Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef]
- Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
- Marocco, S.; Talamo, A.; Quintiliani, F. Applying Design Thinking to Develop AI-Based Multi-Actor Decision-Support Systems: A Case Study on Human Capital Investments. Appl. Sci. 2024, 14, 5613. [Google Scholar] [CrossRef]
- Pyle, M.A.; Smith, A.N.; Chevtchouk, Y. In eWOM We Trust: Using Naïve Theories To Understand Consumer Trust in a Complex eWOM Marketspace. J. Bus. Res. 2021, 122, 145–158. [Google Scholar] [CrossRef]
- Sharma, M.; Kaushal, D.; Joshi, S.; Kumar, A.; Luthra, S. Electronic Waste Disposal Behavioral Intention of Millennials: A Moderating Role of Electronic Word of Mouth (eWOM) and Perceived Usage of Online Collection Portal. J. Clean. Prod. 2024, 447, 141121. [Google Scholar] [CrossRef]
- Floridi, L.; Taddeo, M. What is data ethics? Philos. Trans. R. Soc. A 2016, 374, 20160360. [Google Scholar] [CrossRef] [PubMed]
- Trocin, C.; Våge Hovland, I.; Mikalef, P.; Dremel, C. How Artificial Intelligence affords digital innovation: A cross-case analysis of Scandinavian companies. Technol. Forecast. Soc. Chang. 2021, 173, 121081. [Google Scholar] [CrossRef]
- Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
- Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
- Liang, H.; Xue, Y. Avoidance of information technology threats: A theoretical perspective. MIS Q. 2009, 33, 71–90. [Google Scholar] [CrossRef]
- Tranfield, D.; Denyer, D.; Smart, P. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Moher, D. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- Rousseau, D.M.; Manning, J.; Denyer, D. Evidence in Management and Organizational Science: Assembling the Field’s Full Weight of Scientific Knowledge through Syntheses. Acad. Manag. Ann. 2008, 2, 475–515. [Google Scholar] [CrossRef]
- Basu, S.; Majumdar, B.; Mukherjee, K.; Munjal, S.; Palaksha, C. Artificial Intelligence–HRM Interactions and Outcomes: A Systematic Review and Causal Configurational Explanation. Hum. Resour. Manag. Rev. 2023, 33, 100893. [Google Scholar] [CrossRef]
- Jan, Z.; Ahamed, F.; Mayer, W.; Patel, N.; Grossmann, G.; Stumptner, M.; Kuusk, A. Artificial intelligence for industry 4.0: Systematic review of applications, challenges, and opportunities. Expert Syst. Appl. 2023, 216, 119456. [Google Scholar] [CrossRef]
- Booyse, D.; Scheepers, C.B. Barriers to adopting automated organizational decision-making through the use of artificial intelligence. Manag. Res. Rev. 2024, 47, 64–85. [Google Scholar] [CrossRef]
- Cao, G.; Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation 2021, 106, 102312. [Google Scholar] [CrossRef]
- Cunha, S.L.; da Costa, R.L.; Gonçalves, R.; Pereira, L.; Dias, Á.; da Silva, R.V. Smart systems adoption in management. Int. J. Bus. Syst. Res. 2023, 17, 703–727. [Google Scholar] [CrossRef]
- Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-machine collaboration in managerial decision making. Comput. Hum. Behav. 2021, 119, 106730. [Google Scholar] [CrossRef]
- Jackson, D.; Allen, C. Enablers, barriers and strategies for adopting new technology in accounting. Int. J. Account. Inf. Syst. 2024, 52, 100666. [Google Scholar] [CrossRef]
- Leyer, M.; Schneider, S. Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Bus. Horiz. 2021, 64, 711–724. [Google Scholar] [CrossRef]
- Lada, S.; Chekima, B.; Karim, M.R.A.; Fabeil, N.F.; Ayub, M.S.; Amirul, S.M.; Ansar, R.; Bouteraa, M.; Fook, L.M.; Zaki, H.O. Determining factors related to artificial intelligence (AI) adoption among Malaysia’s small and medium-sized businesses. J. Open Innov. Technol. Mark. Complex. 2023, 9, 100144. [Google Scholar] [CrossRef]
- Mahmud, H.; Islam, A.K.M.N.; Ahmed, S.I.; Smolander, K. What Influences Algorithmic Decision-Making? A Systematic Literature Review on Algorithm Aversion. Technol. Forecast. Soc. Chang. 2022, 175, 121390. [Google Scholar] [CrossRef]
- Mahmud, H.; Islam, A.K.M.N.; Mitra, R.K. What Drives Managers Towards Algorithm Aversion and How to Overcome It? Mitigating the Impact of Innovation Resistance through Technology Readiness. Technol. Forecast. Soc. Chang. 2023, 193, 122641. [Google Scholar] [CrossRef]
- Misra, S.; Katz, B.; Roberts, P.; Carney, M.; Valdivia, I. Toward a Person-Environment Fit Framework for Artificial Intelligence Implementation in the Public Sector. Gov. Inf. Q. 2024, 41, 101962. [Google Scholar] [CrossRef]
- Rodríguez-Espíndola, O.; Chowdhury, S.; Dey, P.K.; Albores, P.; Emrouznejad, A. Analysis of the Adoption of Emergent Technologies for Risk Management in the Era of Digital Manufacturing. Technol. Forecast. Soc. Chang. 2022, 178, 121562. [Google Scholar] [CrossRef]
- Urbani, R.; Ferreira, C.; Lam, J. Managerial framework for evaluating AI chatbot integration: Bridging organizational readiness and technological challenges. Bus. Horiz. 2024, 67, 595–606. [Google Scholar] [CrossRef]
- Phuoc, N.V. The Critical Factors Impacting Artificial Intelligence Applications Adoption in Vietnam: A Structural Equation Modeling Analysis. Economies 2022, 10, 129. [Google Scholar] [CrossRef]
- Vărzaru, A.A. Assessing Artificial Intelligence Technology Acceptance in Managerial Accounting. Electronics 2022, 11, 2256. [Google Scholar] [CrossRef]
- Critical Appraisal Skills Programme. CASP Checklist. 2018. Available online: https://casp-uk.net/casp-tools-checklists/ (accessed on 1 August 2024).
- Chen, Y.; Zahedi, F.M. Individuals’ internet security perceptions and behaviors: Polycontextual contrasts between the United States and China. MIS Q. 2016, 40, 205–222. [Google Scholar] [CrossRef]
- Liang, H.; Xue, Y. Understanding security behaviors in personal computer usage: A threat avoidance perspective. J. Assoc. Inf. Syst. 2010, 11, 394–413. [Google Scholar] [CrossRef]
- Laukkanen, T.; Sinkkonen, S.; Kivijärvi, M.; Laukkanen, P. Innovation resistance among mature consumers. J. Consum. Mark. 2007, 24, 419–427. [Google Scholar] [CrossRef]
- Molesworth, M.; Suortti, J.-P. Buying Cars Online: The Adoption of the Web for High-Involvement, High-Cost Purchases. J. Consum. Behav. 2002, 2, 155–168. [Google Scholar] [CrossRef]
- Ram, S.; Sheth, J.N. Consumer Resistance to Innovations: The Marketing Problem and Its Solutions. J. Consum. Mark. 1989, 6, 5–14. [Google Scholar] [CrossRef]
- Amini, L.; Chen, C.-H.; Cox, D.; Oliva, A.; Torralba, A. Experiences and insights for collaborative industry-academic research in artificial intelligence. AI Mag. 2020, 41, 70–81. [Google Scholar] [CrossRef]
- Atkinson, R. Don’t Fear AI; European Investment Bank, 2019; Available online: https://www.eib.org/en/publications/eib-big-ideas-dont-fear-ai (accessed on 1 August 2024).
- Liu, X.; Zhao, M.; Li, S.; Zhang, F.; Trappe, W. A security framework for the internet of things in the future internet architecture. Future Internet 2017, 9, 27. [Google Scholar] [CrossRef]
- Simon, J.P. Artificial intelligence: Scope, players, markets and geography. Digit. Policy Regul. Gov. 2019, 21, 208–237. [Google Scholar] [CrossRef]
- Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; Kraus, S.; et al. Artificial Intelligence and Life in 2030, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel; Stanford University: Stanford, CA, USA, 2016; Available online: http://ai100.stanford.edu/2016-report (accessed on 1 September 2020).
- Wasilow, S.; Thorpe, J.B. Artificial intelligence, robotics, ethics, and the military: A Canadian perspective. AI Mag. 2019, 40, 37–48. [Google Scholar] [CrossRef]
- Bigman, Y.E.; Gray, K. People are averse to machines making moral decisions. Cognition 2018, 181, 21–34. [Google Scholar] [CrossRef] [PubMed]
- Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Manag. Sci. 2018, 64, 1155–1170. [Google Scholar] [CrossRef]
- Araujo, T.; Helberger, N.; Kruikemeier, S.; de Vreese, C.H. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 2020, 35, 611–623. [Google Scholar] [CrossRef]
- Thurman, N.; Moeller, J.; Helberger, N.; Trilling, D. My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digit. Journal. 2019, 7, 447–469. [Google Scholar] [CrossRef]
- Ho, G.; Wheatley, D.; Scialfa, C.T. Age differences in trust and reliance of a medication management system. Interact. Comput. 2005, 17, 690–710. [Google Scholar] [CrossRef]
- Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
- Agogo, D.; Hess, T.J. How does tech make you feel? A review and examination of negative affective responses to technology use. Eur. J. Inf. Syst. 2018, 27, 570–599. [Google Scholar] [CrossRef]
- Brougham, D.; Haar, J. Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees’ perceptions of our future workplace. J. Manag. Organ. 2018, 24, 239–257. [Google Scholar] [CrossRef]
- Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial intelligence for decision making in the era of Big Data—Evolution, challenges and research agenda. Int. J. Inform. Manag. 2019, 48, 63–71. [Google Scholar] [CrossRef]
- Edwards, J.S.; Duan, Y.; Robins, P.C. An analysis of expert systems for supplier evaluation and selection. Comput. Ind. 2001, 44, 37–52. [Google Scholar] [CrossRef]
- Fenneman, A.; Sickmann, J.; Pitz, T.; Sanfey, A.G. Two distinct and separable processes underlie individual differences in algorithm adherence: Differences in predictions and differences in trust thresholds. PLoS ONE 2021, 16, e0247084. [Google Scholar] [CrossRef]
- Feng, X.; Gao, J. Is optimal recommendation the best? A laboratory investigation under the newsvendor problem. Decis. Support Syst. 2020, 131, 113251. [Google Scholar] [CrossRef]
- Dijkstra, J.J. User agreement with incorrect expert system advice. Behav. Inform. Technol. 1999, 18, 399–411. [Google Scholar] [CrossRef]
- Yuviler-Gavish, N.; Gopher, D. Effect of descriptive information and experience on automation reliance. Hum. Factors 2011, 53, 230–244. [Google Scholar] [CrossRef]
- DeSanctis, G.; Poole, M.S. Capturing the complexity in advanced technology use: Adaptive structuration theory. Organ. Sci. 1994, 5, 121–147. [Google Scholar] [CrossRef]
- Workman, M. Expert decision support system use, disuse, and misuse: A study using the theory of planned behavior. Comput. Hum. Behav. 2005, 21, 211–231. [Google Scholar] [CrossRef]
- Arkes, H.R.; Shaffer, V.A.; Medow, M.A. Patients derogate physicians who use a computer-assisted diagnostic aid. Med. Decis. Mak. 2007, 27, 189–202. [Google Scholar] [CrossRef]
- Diab, D.L.; Pui, S.Y.; Yankelevich, M.; Highhouse, S. Lay perceptions of selection decision aids in US and non-US samples. Int. J. Sel. Assess. 2011, 19, 209–216. [Google Scholar] [CrossRef]
- Eastwood, J.; Snook, B.; Luther, K. What people want from their professionals: Attitudes toward decision-making strategies. J. Behav. Decis. Mak. 2012, 25, 458–468. [Google Scholar] [CrossRef]
- Alexander, V.; Blinder, C.; Zak, P.J. Why trust an algorithm? Performance, cognition, and neurophysiology. Comput. Hum. Behav. 2018, 89, 279–288. [Google Scholar] [CrossRef]
- Zhang, L.; Pentina, I.; Fan, Y. Who do you choose? Comparing perceptions of human vs robo-advisor in the context of financial services. J. Serv. Mark. 2021, 35, 634–646. [Google Scholar] [CrossRef]
- John, A.; Klein, J. The boycott puzzle: Consumer motivations for purchase sacrifice. Manag. Sci. 2003, 49, 1196–1209. [Google Scholar] [CrossRef]
- Gupta, A.; Arora, N. Understanding determinants and barriers of mobile shopping adoption using behavioral reasoning theory. J. Retail. Consum. Serv. 2017, 36, 1–7. [Google Scholar] [CrossRef]
- Leong, L.Y.; Hew, T.S.; Ooi, K.B.; Wei, J. Predicting mobile wallet resistance: A two-staged structural equation modeling-artificial neural network approach. Int. J. Inf. Manag. 2020, 51, 102047. [Google Scholar] [CrossRef]
- Ma, L.; Lee, C.S. Understanding the Barriers to the Use of MOOCs in a Developing Country: An Innovation Resistance Perspective. J. Educ. Comput. Res. 2017. [Google Scholar] [CrossRef]
- Moorthy, K.; Suet Ling, C.; Weng Fatt, Y.; Mun Yee, C.; Ket Yin, E.C.; Sin Yee, K.; Kok Wei, L. Barriers of Mobile Commerce Adoption Intention: Perceptions of Generation X in Malaysia. J. Theor. Appl. Electron. Commer. Res. 2017, 12, 37–53. [Google Scholar] [CrossRef]
- Lourenço, C.J.S.; Dellaert, B.G.C.; Donkers, B. Whose Algorithm Says So: The Relationships between Type of Firm, Perceptions of Trust and Expertise, and the Acceptance of Financial Robo-advice. J. Interact. Mark. 2020, 49, 107–124. [Google Scholar] [CrossRef]
- Sanders, N.R.; Manrodt, K.B. The Efficacy of Using Judgmental versus Quantitative Forecasting Methods in Practice. Omega 2003, 31, 511–522. [Google Scholar] [CrossRef]
- Dietvorst, B.J.; Bharti, S. People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychol. Sci. 2020, 31, 1302–1314. [Google Scholar] [CrossRef] [PubMed]
- He, Q.; Meadows, M.; Angwin, D.; Gomes, E.; Child, J. Strategic alliance research in the era of digital transformation: Perspectives on future research. Br. J. Manag. 2020, 31, 589–617. [Google Scholar] [CrossRef]
- Geroski, P.A. Models of technology diffusion. Res. Policy 2000, 29, 603–625. [Google Scholar] [CrossRef]
- Macdonald, J.R.; Zobel, C.W.; Melnyk, S.A.; Griffis, S.E. Supply Chain Risk and Resilience: Theory Building through Structured Experiments and Simulation. Int. J. Prod. Res. 2018, 56, 4337–4355. [Google Scholar] [CrossRef]
- Sheffi, Y. The Resilient Enterprise: Overcoming Vulnerability for Competitive Advantage, 1st Paperback ed.; MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
- Wang, H.; Hu, X.; Ali, N. Spatial Characteristics and Driving Factors Toward the Digital Economy: Evidence from Prefecture-Level Cities in China. J. Asian Financ. 2022, 9, 419–426. [Google Scholar]
- Al-Hawamdeh, M.M.; Alshaer, S.A. Artificial Intelligence Applications as a Modern Trend to Achieve Organizational Innovation in Jordanian Commercial Banks. J. Asian Financ. 2022, 9, 257–263. [Google Scholar]
- Assael, H. Consumer Behavior and Marketing Action; Kent Publishing Company: Boston, MA, USA, 1995. [Google Scholar]
- Paulraj, A.; Chen, I.J. Environmental Uncertainty and Strategic Supply Management: A Resource Dependence Perspective and Performance Implications. J. Supply Chain Manag. 2007, 43, 29–42. [Google Scholar] [CrossRef]
- Thanki, S.; Thakkar, J. A quantitative framework for lean and green assessment of supply chain performance. Int. J. Prod. Perform. Manag. 2018, 67, 366–400. [Google Scholar] [CrossRef]
- Qiu, L.; Benbasat, I. Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. J. Manag. Inf. Syst. 2008, 25, 145–182. [Google Scholar] [CrossRef]
- Li, Z.; Rau, P.L.P.; Huang, D. Who should provide clothing recommendation services: Artificial intelligence or human experts? J. Inf. Technol. Res. 2020, 13, 113–125. [Google Scholar] [CrossRef]
- Alawamleh, M.; Shammas, N.; Alawamleh, K.; Bani Ismail, L. Examining the limitations of AI in business and the need for human insights using Interpretive Structural Modelling. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100338. [Google Scholar] [CrossRef]
- Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
- Gaudin, T. L’écoute des Silences; Union Générale d’Éditions: Paris, France, 1978. [Google Scholar]
- Talamo, A.; Recupero, A.; Mellini, B.; Ventura, S. Teachers as designers of GBL scenarios: Fostering creativity in the educational settings. Interact. Des. Archit. J. 2016, 29, 10–23. [Google Scholar] [CrossRef]
- Farnese, M.L.; Benevene, P.; Barbieri, B. Learning to trust in social enterprises: The contribution of organisational culture to trust dynamics. J. Trust Res. 2022, 12, 153–178. [Google Scholar] [CrossRef]
- Bonaiuto, F.; Fantinelli, S.; Milani, A.; Cortini, M.; Vitiello, M.C.; Bonaiuto, M. Perceived Organizational Support and Work Engagement: The Role of Psychosocial Variables. J. Workplace Learn. 2022, 34, 418–436. [Google Scholar] [CrossRef]
- Marocco, S.; Marini, M.; Talamo, A. Enhancing Organizational Processes for Service Innovation: Strategic Organizational Counseling and Organizational Network Analysis. Front. Res. Metr. Anal. 2024, 9, 1270501. [Google Scholar] [CrossRef]
- Marocco, S.; Talamo, A.; Quintiliani, F. From Service Design Thinking to the Third Generation of Activity Theory: A New Model for Designing AI-Based Decision-Support Systems. Front. Artif. Intell. 2024, 7, 1303691. [Google Scholar] [CrossRef]
- Talamo, A.; Giorgi, S.; Mellini, B. Designing technologies for ageing: Is simplicity always a leading criterion? In Proceedings of the 9th ACM SIGCHI Italian Chapter International Conference on Computer-Human Interaction: Facing Complexity, Alghero, Italy, 13–16 September 2011; ACM: Alghero, Italy, 2011; pp. 33–36. [Google Scholar]
- Sun, Y.; Zhang, Q.; Bao, J.; Lu, Y.; Liu, S. Empowering Digital Twins with Large Language Models for Global Temporal Feature Learning. J. Manuf. Syst. 2024, 74, 83–99. [Google Scholar] [CrossRef]
- Kong, Q.; Zhang, X.; Xu, W.; Long, B. A Novel Granular Computing Model Based on Three-Way Decision. Int. J. Approx. Reason. 2022, 144, 92–112. [Google Scholar] [CrossRef]
| | Inclusion Criteria | Exclusion Criteria |
| --- | --- | --- |
| Language | English | Non-English |
| Publication Type | Article, Review | Conference Review, Conference Paper, Book Chapter, Book |
| Time Frame | 2010–2024 | <2010 |
| Focus | Studies that investigate factors affecting adoption and usage of AI in DM by managers | Studies that do not address factors affecting adoption and usage of AI in DM by managers |
| | Studies that focus on the application of AI in the organizational context | Studies not applied to the organizational context |
Authors | Year | Article Type |
---|---|---|
Basu, S., Majumdar, B., Mukherjee, K., Munjal, S., Palaksha, C. | 2023 | Systematic Review |
Booyse, D., Scheepers, C.B. | 2024 | Quantitative Research Article |
Cao, G., Duan, Y., Edwards, J.S., Dwivedi, Y.K. | 2021 | Quantitative Research Article |
Cunha, S.L., da Costa, R.L., Gonçalves, R., Pereira, L., Dias, Á., da Silva, R.V. | 2023 | Quantitative Research Article |
Haesevoets, T., De Cremer, D., Dierckx, K., Van Hiel, A. | 2021 | Quantitative Research Article |
Jackson, D., Allen, C. | 2024 | Quantitative Research Article |
Jan, Z., Ahamed, F., Mayer, W., Patel, N., Grossmann, G., Stumptner, M., Kuusk, A. | 2023 | Systematic Review |
Lada, S., Chekima, B., Karim, M.R.A., Fabeil, N.F., Ayub, M.S., Amirul, S.M., Ansar, R., Bouteraa, M., Fook, L.M., Zaki, H.O. | 2023 | Quantitative Research Article |
Leyer, M., Schneider, S. | 2021 | Quantitative Research Article |
Mahmud, H., Islam, A.K.M.N., Ahmed, S.I., Smolander, K. | 2022 | Systematic Review |
Mahmud, H., Islam, A.K.M.N., Mitra, R.K. | 2023 | Quantitative Research Article |
Misra, S., Katz, B., Roberts, P., Carney, M., Valdivia, I. | 2024 | Quantitative Research Article |
Rodríguez-Espíndola, O., Chowdhury, S., Dey, P.K., Albores, P., Emrouznejad, A. | 2022 | Quantitative Research Article |
Urbani, R., Ferreira, C., Lam, J. | 2024 | Theoretical Research Article |
Phước, N.V. | 2022 | Quantitative Research Article
Vărzaru, A.A. | 2022 | Quantitative Research Article |
| Categories of Factors | Studies | Facilitators | Barriers |
| --- | --- | --- | --- |
| Managers’ Perceptions of AI | Cao et al. [40] | Performance Expectancy; Effort Expectancy | Perceived Threat, Severity, and Susceptibility |
| | Cunha et al. [41] | Familiarity; Perceived Benefits | |
| | Leyer and Schneider [44] | Perceived Adaptability | |
| | Mahmud et al. [46] | Perceived Nature of the Task; Familiarity with AI | Perceived Nature of the Task |
| | Mahmud et al. [47] | Perceived Value | |
| | Vărzaru [52] | Perceived Ease of Use; Perceived Usefulness; User Satisfaction | |
| Ethical Factors | Booyse and Scheepers [39] | | Making Life-or-Death Decisions; Potential Discrimination; The Risk of Human Replacement with Machines |
| | Cunha et al. [41] | | Violation of Ethical and Privacy Issues |
| Psychological and Individual Factors | Cao et al. [40] | | Personal Well-Being Concern; Personal Development Concern |
| | Cunha et al. [41] | Familiarity with AI | |
| | Haesevoets et al. [42] | | Desire for Human Primacy |
| | Leyer and Schneider [44] | | Overconfidence and Desire for Control |
| | Mahmud et al. [46] | | Personality Traits (Self-Esteem; Self-Efficacy; Internal Locus of Control; Neuroticism; Extraversion); Demography (Older People; Women; Lower Education) |
| Psychosocial and Social Factors | Booyse and Scheepers [39] | | The Need for Social Interactions |
| | Mahmud et al. [46] | Social Influence | Social Influence |
| | Mahmud et al. [47] | | Tradition and Image Barriers |
| Organizational Factors | Basu et al. [37] | Organizationally Driven Decisions | |
| | Cao et al. [40] | Facilitating Conditions | |
| | Jan et al. [38] | | Cost of Adoption and Return on Investment |
| | Lada et al. [45] | Organizational Readiness | |
| | Mahmud et al. [46] | Societal and Organizational Norms; Type of Organization | |
| | Rodríguez-Espíndola et al. [49] | Organizational Resilience; Level of Digital Transformation (High) | |
| | Phước [51] | Organizational Readiness | |
| External Factors | Jackson and Allen [43] | Professional Associations | |
| | Rodríguez-Espíndola et al. [49] | Regulatory Guidance; Market Pressure | |
| | Phước [51] | Government Involvement; Vendor Partnership | |
| Technical and Design Characteristics of AI-Based Technologies | Jackson and Allen [43] | Industry-Specific Solutions | |
| | Jan et al. [38] | Industry-Specific Solutions | |
| | Leyer and Schneider [44] | Voluntary Integration of AI | Mandatory Integration of AI |
| | Mahmud et al. [46] | Transparency and Explainability; Interaction and Control; Speed of Algorithms; Decision Accuracy and Investment in DM; Human-Like Decision Delivery | Complexity of Algorithms |
| | Misra et al. [48] | | Complexity of Outcomes |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).