Introduction
Artificial intelligence (AI) technologies promise new ways to solve existing problems and to create innovations, resulting in new paths toward business value. In this way, novel transformations are on the horizon to realize opportunities we have not yet conceived (or have not yet thought feasible). While the understanding of what constitutes AI technologies has continued to change over time, machine learning (Janiesch et al., 2021) and—more recently—large language models and, more broadly, foundation models have become the focus of applied research and practice (Banh & Strobel, 2023). Besides AI applications’ capabilities to outperform humans in certain tasks, it is the “ability to learn and act autonomously [that] makes intelligent technological actors very different from most technologies historically used in organizations” (Bailey et al., 2019, p. 643).
The fast development and manifold opportunities of AI applications motivate research to gain a deeper understanding of AI-enabled information systems, particularly regarding the role of intelligent agents and their impacts on networked businesses. For this editorial, we understand intelligent agents as “a computer system that is capable of flexible autonomous action in order to meet its design objectives” (Jennings & Wooldridge, 1998, p. 4). However, we argue that the property of flexibility and autonomy should not be implicitly assumed as given and that our discourse benefits from distinguishing automated, semi-autonomous, and autonomous systems. Hence, we recommend a non-binary understanding of autonomy by describing the technical artifact’s self-sufficiency (or autarky), its constraints or independence in fulfilling its goals, and its networking capabilities.
In line with Berente et al. (2021) and Baird and Maruping (2021), we value the richness of the agent perspective for researching contemporary and future socio-technical phenomena. The agent perspective calls into question our existing assumptions about the integration of AI technologies in work systems. Beyond intelligent agents’ capabilities to contribute to work systems, inscrutability issues as technology-implied constraints are particularly salient in today’s research discussions (Berente et al., 2021).
This topical collection focuses on the appropriate design of AI-enabled information systems (IS), their accompanying management, and the transformational processes. This comes with multifaceted and fascinating questions for the IS discourse whose answers take a socio-technical perspective on the changing interaction within organizations (esp. individuals, teams) and between them. Particularly, our focus is on forms of networked business where intelligent and human agents interact for economic purposes within one or among multiple tiers in economic value chains.
This article is organized as follows: In the next section, we outline central issues and topics in the field of AI-enabled information systems structured around the ecological work systems framework that we introduce for that purpose. In the following section, we present our perspective on future research avenues by discussing five major AI transformation headwinds that we derived from our practical experiences and practitioner feedback from our network. In the final section, we provide an overview of the accepted contributions that we included in the initial compilation of this topical collection.
Central issues and topics
We borrowed the conceptualization of the systems surrounding the individual from ecological systems theory (Bronfenbrenner, 1979). First, this allows us to adopt a human-centered perspective on AI-enabled information systems, fostering empathic design (Leonard-Barton, 1995). Second, it enables studying the relationship between (intelligent and) human agents within their environment. Third, it enables us to classify existing research strands and derive future research opportunities.
The ecological work systems framework (see Fig. 1) consists of five levels: human agent, microsystem, mesosystem, exosystem, and macrosystem. At the center of the framework is the human agent, acknowledging the necessity for a human-centered perspective on AI. The microsystem comprises the interaction of the human agent with intelligent agents and other human agents to accomplish tasks in bilateral collaboration. The mesosystem represents the interconnection between different microsystems (i.e., group work in multilateral collaboration). The exosystem characterizes the links between social settings that do not involve the human (i.e., distant work, for example, with other departments or companies). Finally, the macrosystem stands for the surrounding ecosystem relating to the system of networked business, often directed through corporate values, goals, and directives.
Although the impression may arise that this framework has a purely company-internal focus, interactions can also involve external parties at each level, be it customer interaction, joint development projects, or gig workers. Thus, we emphasize the permeability of corporate boundaries in the networked business and, by extension, of the work system within the whole ecosystem.
Human agent
The framework’s center embodies our human-centered approach, taking into account the positive and negative consequences of using AI-enabled information systems for the human agent as well as the resources that must be provided to the human agent. On the one hand, there can be many positive outcomes for the human agent, such as relief from exhausting work or learning from intelligent agents. While consultancies are outdoing each other with business potential estimates, we also want to encourage considering how AI applications could improve work-related fairness and health issues (e.g., compensate for inequalities at work). On the other hand, there can also be adverse outcomes (e.g., increased risk of digital stress) resulting, among other things, from the feeling of job insecurity or the excessive demands of controlling or understanding AI application outcomes. However, the causes of stress from AI-enabled information systems have not yet been completely uncovered. Among others, we motivate researching the temporal rhythm and form of AI-enabled work systems and their consequences for human agents (Orlikowski & Yates, 2002). In order to realize the promised productivity gains, tasks and thus also skill requirements are expected to change, probably faster than with previous general-purpose technologies (McAfee, 2024). It is of great importance that human agents are adequately prepared to deal with intelligent agents by following role- and skill-specific learner paths.
Microsystem
Continuing with the second layer of the ecological work systems framework, we focus on the microsystem that comprises the interaction of the human agent with intelligent agents and other human agents to accomplish tasks in bilateral collaboration. We follow Jakob et al. (2024) and describe the four essential activities of collaboration as follows: Firstly, (human and intelligent) agents need to communicate in order to establish goals, engage in negotiations to determine the course of action, and assess progress and outcomes (Mattessich & Monsey, 1992; Terveen, 1995). Secondly, collaboration requires agents to process tasks, meaning they must take action and jointly manage their tasks to accomplish shared objectives (Wang et al., 2020). Thirdly, agents must determine which actions will be undertaken by each participant and allocate responsibilities for specific tasks (Terveen, 1995). Finally, collaboration involves agents negotiating and deciding on the level of control (i.e., authority) each participant will have over their actions, ensuring coordinated operations (Terveen, 1995).
The types of human-AI relationships can be manifold within the spectrum of automating and augmenting human work. Möllers et al. (2024) organize the design space by distinguishing the most prominent human-AI relationships at the workplace into decision support, AI in the loop, algorithmic management, human-AI collaboration and teams, human in the loop, and full delegation to AI. Intelligent agents may not only perform tasks but function as entities guiding human actions and may even alter the dynamics of task delegation between humans and intelligent agents. By this, we do not mean that intelligent agents achieve complete autonomy but that humans use intelligent agents as tools equipped with delegation capabilities and authority. Future intelligent agents might even blur the lines between participants and technological components within a work system by reversing the flow of work delegation or eliminating humans from these systems (Baird & Maruping, 2021). Research has already demonstrated that human-AI systems can—in some cases—yield superior results when the intelligent agent takes a leading role, delegating tasks to humans rather than the reverse (e.g., Leibig et al., 2022). This delegation reversal holds substantial implications for human-AI collaboration (Benbya et al., 2020; Wesche & Sonderegger, 2019). For instance, the study conducted by Guggenberger et al. (2023) delves into this phenomenon using the theoretical framework of principal-agent theory. It identifies novel sources of tension arising specifically in AI-to-human delegation, emphasizing the need for specialized mechanisms to address the ensuing challenges.
Mesosystem
The third layer of the ecological work systems framework, the mesosystem, represents the interconnection between different microsystems, such as group work in multilateral collaboration. So far, human–computer interaction literature has concentrated on individual interactions, leaving out a work system perspective on the multilateral interactions between human and intelligent agents when processing and delegating tasks as well as defining authorities and responsibilities (Jakob et al., 2024). Building on Hinsen et al. (2022), Jakob et al. (2024) introduce a framework that facilitates describing and analyzing work systems of human and intelligent agents’ collaboration beyond bilateral interaction.
One primary complication within this perspective is the effective coordination and management of tasks among diverse team members (both human and intelligent agents). This is exemplified in the context of software engineers working alongside AI copilots like GitHub Copilot. These collaboration settings necessitate the integration of multiple human and intelligent agents’ contributions into larger software projects. Ensuring seamless collaboration requires clearly defined roles, processes, and an understanding of the mutual strengths and limitations. For instance, while AI copilots can significantly enhance coding efficiency and error detection, human engineers must oversee and validate AI-generated outputs to maintain quality and coherence. On paper, human supervision is easy to establish, but there is a risk that the control effect will fail to materialize in day-to-day practice.
Overall, the mesosystem layer underscores the importance of strategically designed collaboration between human and intelligent agents, highlighting the need for robust frameworks to manage these complex, multilateral work settings. Practice will be confronted with new questions regarding the design and coordination of group work to leverage the complementarities of human and intelligent agents’ capabilities without overburdening humans. Not least, practitioners are increasingly confronted with matters of process redesign.
Exosystem
The fourth layer of the ecological work systems framework is the exosystem, which characterizes the connections between social settings that do not directly involve the individual. It includes distant work relationships, such as those with other departments or external companies. The exosystem is an important layer that explains why synergies between microsystems are left untapped or unintended consequences arise. This is especially crucial since the broad relevance of AI can lead to numerous, often uncoordinated, initiatives.
In practice, companies are setting up or professionalizing their AI operating model to align the microsystems according to the corporate (AI) strategy and ethical principles and values. Lessons from the digitalization literature, such as the “shadow IT” issue, where different parts of an organization implement their own solutions without centralized oversight (Fuerstenau & Rothe, 2014), provide insights, but it is also crucial to recognize the AI-specific aspects. In particular, the inscrutability facet of AI (i.e., being unintelligible to multiple audiences) (Berente et al., 2021) and the limited robustness of probabilistic outcomes lead to new governance challenges. Lämmermann et al. (2024) highlight that managing AI applications effectively requires a robust information exchange among diverse stakeholders. Without adequate information processing, task uncertainty rises, undermining AI operations. Organizations can better manage AI applications by fostering an environment of transparent and efficient information exchange, thus minimizing operational uncertainties and unintended consequences and maximizing their potential benefits. Moreover, ethical questions will become commonplace when developing or applying AI technologies (even if, depending on the context, ethical problems do not always arise). The management challenge lies in identifying and understanding ethically problematic questions early and giving answers aligned with the corporate ethical principles and values.
Macrosystem
Finally, the outer layer of the ecological work systems framework, the macrosystem, represents the surrounding ecosystem of networked businesses, which is often guided by corporate values, goals, and directives. AI should not be pursued merely for its own sake but to reinforce the organizational identity and deliver tangible business value. Within this perspective, companies face the significant challenge of keeping pace with rapid technological advancements while capturing enough value to recoup their investments. Balancing innovation with sustainable business practices requires careful navigation of external pressures and internal directives. As organizations strive to stay competitive, aligning their macrosystem strategies with their overarching corporate vision becomes crucial. The challenge, therefore, is to translate the technology-driven momentum—among others resulting from the fear of missing out (FOMO) at the management level—into problem-solving AI applications. To realize business value, organizations should critically reflect on the business value potential of AI use cases early on and invest in the trustworthiness of AI-enabled information systems. To fulfill the criterion of trustworthiness, AI-enabled information systems should be lawful, ethically aligned, and robust from a technical and social perspective (European Commission, 2019).
A dedicated AI strategy facilitates navigating the complexities of the AI transformation that comprises the identification and realization of AI use cases as well as the enhancement of the organization’s AI maturity (i.e., capabilities to identify and realize future AI use cases effectively and efficiently). Besides defining the strategic targets and AI application fields, the AI strategy should formulate ethical principles and values regarding the development, operation, and use of AI applications. However, this undertaking is not trivial, as risks often cannot be ruled out, so organizations must specify how good is good enough (e.g., performance thresholds). Increasing an organization’s AI maturity requires not only investments in technology but also organizational capabilities and complementary assets (Berg et al., 2023; Duda et al., 2024; Jöhnk et al., 2021). Without a comprehensive understanding of relevant resources and their impacts on developing, operating, and using AI-enabled information systems, organizations risk inefficient resource allocation and overlooked resource dependencies (Duda et al., 2024). With AI being a “moving frontier of both increasing performance and increasing scope” (Berente et al., 2021), an organization’s AI maturity can also decline without maintaining a continuous transformation. Accelerating the AI transformation must be balanced with strategic alignment to ensure coherent progress across the organization. Insights from previous strategy research, such as digital strategy, highlight the importance of adaptability and ongoing evaluation, ensuring that AI initiatives are both innovative and strategically grounded.
Future research perspectives
To create sustainable value through AI applications, organizations must apply AI technologies purposefully and plan and carry out organizational initiatives that increase the organization’s AI maturity. From our practical experiences and practitioner feedback, we identified five significant AI transformation headwinds:
1. Complexity of coordinating initiatives (e.g., due to the broad scope of AI potentials and involved stakeholders)
2. Lack of orientation (e.g., due to the unavailability of blueprints for higher maturity levels)
3. Limited farsight in a fast-paced technology environment (e.g., regarding AI applications’ future capabilities and regulation)
4. Organizational paralysis in executing the AI strategy (e.g., due to PoC paralysis or poorly orchestrated investments)
5. Incompatibility of traditional KPIs and target-setting approaches (e.g., due to uncertainty and risks in machine learning projects)
To counter the AI transformation headwinds, organizations benefit from AI transformation management that bridges the systems’ boundaries and, thus, ensures the permeation from strategic considerations, such as an organization’s AI ambition, to the individual employees and vice versa (cf. Fig. 2).
In the following sections, we take a closer look at three permeation issues that we consider to be important. At the same time, we want to warn against merely putting old wine in new bottles. Our experience suggests that organizational research findings in the AI field are often not exclusive to it. From our point of view, this can also be a strength, as long as a paper clearly defines what constitutes its object of research and does not hide under the AI umbrella.
An architecture perspective on human-AI collaboration in work systems
Research into human–computer or human–machine interactions has a long tradition. Due to the prevailing focus on bilateral interactions, a notable gap exists in guidance regarding the holistic design of collaborative frameworks for human and intelligent agents within work systems. By holistic design, we refer to integrating functional, economic, ecological, and social considerations. We expect this guidance to become more important with the proliferation of AI applications and, thus, advocate an architectural perspective for designing work systems that can effectively exploit the collaborative potential between diverse agents and avoid severe consequences.
Considering the collaborative potential, we look forward to learning more about the complementary capabilities of the different actors beyond the binary classification of human and software agents. We long for research that does not privilege the privileged. For instance, this could include research on AI-enabled work systems with neurodiverse actors or the integration of employees with disabilities (Maddali et al., 2022). We also consider IS research responsible for identifying the negative consequences of inappropriately designed work systems for our planet and its living beings. Explorative research discovering them could focus on an organization, its networked business, or the big picture of our society. An exemplary concern is the uncontrolled cascading of bugs, low-quality data, false or misinterpreted output, or biases.
The operative glue between the systems
While an appropriate design of work systems is a necessary first step, one should also account for its execution. From our practical insights into corporate practice, we introduce four types of operative glue that can go hand in hand in their implementation: technical glue, process glue, information glue, and social glue. Technical glue in the form of glue code or reusable code allows for the integration of tools and resources (Duda et al., 2024). Process glue comprises development, operation, and governance processes that guide actions (e.g., the Hourglass Model of Organizational AI Governance introduced by Mäntymäki et al. (2022)). Information glue results from the satisfaction of the actors’ individual information needs so that they can fulfill their responsibilities (Lämmermann et al., 2024). Social glue results from the sense of social togetherness influencing the actors’ behavior.
For future research, we anticipate exciting questions around the (alleged) tension between control and experimentation. Taking the machine learning lifecycle’s experimental nature into account, requirements such as traceability might be considered a slowing factor. However, lineage or experiment tracking tools might also catalyze experimentation. Thus, research on the use of operative glue for the compliant industrialization of experimentation seems promising. Moreover, in light of the EU AI Act, we see the need to research how to approach the operative glue so that organizations can effectively and efficiently ensure compliance with regulatory requirements. Considering the social glue, we ask ourselves what characterizes behavior in systems in which the proportion of human work is reduced or the (perceived) distance between people is increased. In addition, we are excited to see how organizations can ensure ethical alignment regarding philosophically challenging questions, integrating individual employee values and corporate policies. In this context, we also encourage thinking about the broader construct of public value and how IS research can get ahead of, or keep pace with, examining the social (public) value of AI (Desouza & Dawson, 2023).
Dynamically steering the AI transformation
Since the AI field is constantly changing, companies will be challenged repeatedly to keep pace as technological change can easily outpace the possibilities of organizational change. While big tech companies or highly funded startups might be able to participate in these races, the digital sovereignty of the remaining companies is at risk. However, there is no transformation blueprint that can be generalized for all companies. For instance, there are slow industries, also called asset-intensive industries, where change for digital technologies is slow due to long project time frames, regardless of how exciting innovation potentials might be (Buck et al., 2023). Thus, we deem it essential to find ways that enable all companies to surf the AI waves sovereignly.
We encourage research on how to dynamically steer the AI transformation, integrating all systems of the ecological work systems framework. We highlight three pressing issues from our practical experience and practitioner feedback: (1) What are effective, measurable objectives or key performance indicators for the AI transformation, and how can they be operationalized across the organization in a steering method? (2) How can organizations design robust or adaptive value creation and capture mechanisms? (3) How can continuous and diverse employee development beyond static learning paths be efficiently and effectively approached?
Accepted papers
We have accepted four papers for inclusion in the initial compilation of this topical collection. Each article explores different aspects of the special section’s focus on AI-enabled information systems.
The first article in this topical collection, Information Provision Measures for Voice Agent Product Recommendations – The Effect of Process Explanations and Process Visualizations on Fairness Perceptions, by Helena Weith and Christian Matt (Weith & Matt, 2023), examines the impact of additional information measures on users’ perceptions of fairness and their behavioral responses to voice agent product recommendations (VAPRs). Due to inherent opacities in AI recommendation engines and the limitations of audio-based communication, users may feel unfairly treated during their purchase decisions, potentially harming retailers. The authors utilize information processing and stimulus-organism-response theory to explore how process explanations and process visualizations influence users’ fairness perceptions and behaviors. Through two experimental studies, they discovered that process explanations enhance users’ sense of fairness, whereas process visualizations do not have the same effect. The study highlights that explanations tailored to users’ profiles and past purchase behaviors effectively improve fairness perceptions. This research advances the literature on fair and explainable AI by addressing audio-based constraints in VAPRs and linking these factors to user perceptions and reactions. The findings provide valuable insights for practitioners on employing information provision measures to mitigate perceptions of unfairness and prevent negative customer behaviors.
The second article in the topical collection, AI Literacy for the Top Management: An Upper Echelons Perspective on Corporate AI Orientation and Implementation Ability, by Marc Pinski, Thomas Hofmann, and Alexander Benlian (Pinski et al., 2024), explores the influence of top management team (TMT) AI literacy on a firm’s ability to generate value through AI, focusing on two key characteristics: AI orientation and AI implementation ability. Grounded in upper echelons theory, the study investigates how the AI knowledge of a firm’s TMT affects its capacity to identify AI opportunities (AI orientation) and to execute AI initiatives (AI implementation ability). The authors also consider the moderating role of firm type, distinguishing between startups and incumbent firms. Using observational data from 6986 executives’ LinkedIn profiles and firm data from 10-k statements, the study finds that TMT AI literacy significantly enhances both AI orientation and implementation ability. Moreover, AI orientation mediates the relationship between TMT AI literacy and AI implementation ability. Interestingly, the positive impact of TMT AI literacy on AI implementation ability is more pronounced in startups compared to incumbent firms. This research enriches the upper echelons literature by introducing AI literacy as a critical skill-based dimension of TMTs, complementing existing role-oriented perspectives. It also elucidates the mechanisms through which AI literacy in top management contributes to AI-driven value creation within firms.
The third article in this topical collection, Seeking Empathy or Suggesting a Solution? Effects of Chatbot Messages on Service Failure Recovery, by Martin Haupt, Anna Rozumowski, Jan Freidank, and Alexander Haas (Haupt et al., 2023), investigates the use of failure recovery messages in chatbots to improve user satisfaction and re-use intentions following unsuccessful interactions. As chatbots are increasingly employed for digital customer interactions, their frequent inability to provide appropriate responses can lead to user dissatisfaction, negatively impacting the firm’s service performance. Drawing on the stereotype content model, the authors examine the effects of two types of failure recovery messages—solution-oriented and empathy-seeking—on users’ post-recovery satisfaction. Through three experiments, the study finds that recovery messages positively influence users’ responses, mediated by social cognitions. Specifically, solution-oriented messages enhance perceptions of competence, while empathy-seeking messages increase perceptions of warmth. The study further reveals that the preference for either message type is influenced by how users attribute the failure and the frequency of such failures. These findings offer valuable insights for chatbot developers and marketers on how to design effective recovery strategies that maintain user satisfaction and encourage continued use, thereby enhancing customer experience with digital conversational agents in a cost-effective manner. This research highlights the importance of tailored communication strategies in mitigating the negative impacts of chatbot failures and fostering positive user experiences.
The fourth article in this topical collection, AI-based Chatbots in Conversational Commerce and Their Effects on Product and Price Perceptions, by Justina Sidlauskiene, Yannick Joye and Vilte Auruskeviciene (Sidlauskiene et al., 2023), explores the impact of anthropomorphic verbal design cues in AI-based chatbots on consumer perceptions of product personalization and their willingness to pay higher prices in conversational commerce contexts. Although advancements in natural language processing (NLP) and AI are changing shopping behaviors, many consumers still prefer human interactions over chatbots, which are often seen as impersonal. The study addresses this challenge by examining how human-like characteristics in chatbot communication can enhance the shopping experience. Through a pre-test and two online experiments, the authors find that anthropomorphism significantly enhances perceived product personalization. Additionally, this effect is moderated by situational loneliness, indicating that consumers who feel lonely are more responsive to anthropomorphic cues. The interaction between anthropomorphism and situational loneliness also influences consumers’ willingness to pay a higher price for products. These findings suggest that incorporating human-like verbal elements in chatbot design can improve consumer engagement and satisfaction, particularly for those experiencing situational loneliness. The study provides developers and marketers with valuable insights into the strategic choices when adopting chatbots with human traits, but also sheds more light on the psychological dynamics between loneliness and non-human entities that need to be critically reflected upon.
We would like to thank the authors for their contributions, the reviewers for their valuable and prompt feedback, and Electronic Markets for making this topical collection possible. These joint efforts have enabled us to present new research findings in the rapidly evolving field of AI-enabled information systems.
References
Bailey, D., Faraj, S., Hinds, P., von Krogh, G., & Leonardi, P. (2019). Special issue of Organization Science: Emerging technologies and organizing. Organization Science, 30(3), 642–646. https://doi.org/10.1287/orsc.2019.1299
Baird, A., & Maruping, L. M. (2021). The next generation of research on IS use: A theoretical framework of delegation to and from agentic IS artifacts. Management Information Systems Quarterly, 45(1), 315–341. https://doi.org/10.25300/misq/2021/15882
Banh, L., & Strobel, G. (2023). Generative artificial intelligence. Electronic Markets, 33, 63. https://doi.org/10.1007/s12525-023-00680-1
Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4), 4. https://doi.org/10.2139/ssrn.3741983
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. Management Information Systems Quarterly, 45(3), 1433–1450. https://doi.org/10.25300/MISQ/2021/16274
Berg, J. M., Raj, M., & Seamans, R. (2023). Capturing value from artificial intelligence. Academy of Management Discoveries, 9(4), 424–428. https://doi.org/10.5465/amd.2023.0106
Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Harvard University Press.
Buck, C., Clarke, J., Torres de Oliveira, R., Desouza, K. C., & Maroufkhani, P. (2023). Digital transformation in asset-intensive organisations: The light and the dark side. Journal of Innovation & Knowledge, 8(2), 100335. https://doi.org/10.1016/j.jik.2023.100335
Desouza, K. C., & Dawson, G. S. (2023). Doing strategic information systems research for public value. The Journal of Strategic Information Systems, 32(4), 101805. https://doi.org/10.1016/j.jsis.2023.101805
Duda, S., Hofmann, P., Urbach, N., Völter, F., & Zwickel, A. (2024). The impact of resource allocation on the machine learning lifecycle. Business & Information Systems Engineering, 66(2), 203–219. https://doi.org/10.1007/s12599-023-00842-7
European Commission (2019). Ethics guidelines for trustworthy AI. Available online at https://data.europa.eu/doi/10.2759/346720
Fuerstenau, D., & Rothe, H. (2014). Shadow IT systems: Discerning the good and the evil. ECIS 2014 Proceedings.
Guggenberger, T., Lämmermann, L., Urbach, N., Walter, A. M., & Hofmann, P. (2023). Task delegation from AI to humans: A principal-agent perspective. In Proceedings of the 44th International Conference on Information Systems.
Haupt, M., Rozumowski, A., Freidank, J., & Haas, A. (2023). Seeking empathy or suggesting a solution? Effects of chatbot messages on service failure recovery. Electronic Markets, 33, 56. https://doi.org/10.1007/s12525-023-00673-0
Hinsen, S., Hofmann, P., Jöhnk, J., & Urbach, N. (2022). How can organizations design purposeful human-AI interactions: A practical perspective from existing use cases and interviews. In 55th Hawaii International Conference on System Sciences.
Jakob, A., Schüll, M., Hofmann, P., & Urbach, N. (2024). Teaming up with intelligent agents: A work system perspective on the collaboration with intelligent agents. ECIS 2024 Proceedings.
Janiesch, C., Zschech, P., & Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets, 31(3), 685–695. https://doi.org/10.1007/s12525-021-00475-2
Jennings, N. R., & Wooldridge, M. (Eds.). (1998). Agent technology. Springer.
Jöhnk, J., Weißert, M., & Wyrtki, K. (2021). Ready or not, AI comes: An interview study of organizational AI readiness factors. Business & Information Systems Engineering, 63(1), 5–20. https://doi.org/10.1007/s12599-020-00676-7
Lämmermann, L., Hofmann, P., & Urbach, N. (2024). Managing artificial intelligence applications in healthcare: Promoting information processing among stakeholders. International Journal of Information Management, 75, 102728. https://doi.org/10.1016/j.ijinfomgt.2023.102728
Leibig, C., Brehmer, M., Bunk, S., Byng, D., Pinker, K., & Umutlu, L. (2022). Combining the strengths of radiologists and AI for breast cancer screening: A retrospective analysis. The Lancet Digital Health, 4(7), e507–e519. https://doi.org/10.1016/s2589-7500(22)00070-x
Leonard-Barton, D. (1995). Wellsprings of knowledge: Building and sustaining the sources of innovation. Harvard Business Review Press.
Maddali, H. T., Dixon, E., Pradhan, A., & Lazar, A. (2022). Investigating the potential of artificial intelligence powered interfaces to support different types of memory for people with dementia. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–7). https://doi.org/10.1145/3491101.3519858
Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Putting AI ethics into practice: The hourglass model of organizational AI governance. arXiv preprint arXiv:2206.00335. https://doi.org/10.48550/arXiv.2206.00335
Mattessich, P. W., & Monsey, B. R. (1992). Collaboration: What makes it work. A review of research literature on factors influencing successful collaboration. Amherst H. Wilder Foundation.
McAfee, A. (2024). Generally faster: The economic impact of generative AI. Available online at https://blog.google/technology/ai/a-new-report-explores-the-economic-impact-of-generative-ai/. Accessed 24 September 2024.
Möllers, M., Berger, B., & Klein, S. (2024). Contrasting Human-AI Workplace Relationship Configurations. In I. Constantiou, M. P. Joshi, & M. Stelmaszak (Eds.), Research Handbook on Artificial Intelligence and Decision Making in Organizations (pp. 282–303). Edward Elgar Publishing.
Orlikowski, W. J., & Yates, J. (2002). It’s about time: Temporal structuring in organizations. Organization Science, 13(6), 684–700. https://doi.org/10.1287/orsc.13.6.684.501
Pinski, M., Hofmann, T., & Benlian, A. (2024). AI Literacy for the top management: An upper echelons perspective on corporate AI orientation and implementation ability. Electronic Markets, 34, 24. https://doi.org/10.1007/s12525-024-00707-1
Sidlauskiene, J., Joye, Y., & Auruskeviciene, V. (2023). AI-based chatbots in conversational commerce and their effects on product and price perceptions. Electronic Markets, 33, 24. https://doi.org/10.1007/s12525-023-00633-8
Terveen, L. G. (1995). Overview of human-computer collaboration. Knowledge-Based Systems, 8(2–3), 67–81. https://doi.org/10.1016/0950-7051(95)98369-H
Wang, D., Shneiderman, B., Churchill, E., Shi, Y., Maes, P., Fan, X., & Wang, Q. (2020). From human-human collaboration to human-AI collaboration: Designing AI systems that can work together with people. In CHI 2020: CHI Conference on Human Factors in Computing Systems (pp. 1–6).
Weith, H., & Matt, C. (2023). Information provision measures for voice agent product recommendations— The effect of process explanations and process visualizations on fairness perceptions. Electronic Markets, 33, 57. https://doi.org/10.1007/s12525-023-00668-x
Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101, 197–209. https://doi.org/10.1016/j.chb.2019.07.027
Funding
Open Access funding enabled and organized by Projekt DEAL.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Hofmann, P., Urbach, N., Lanzl, J. et al. AI-enabled information systems: Teaming up with intelligent agents in networked business. Electron Markets 34, 52 (2024). https://doi.org/10.1007/s12525-024-00734-y