Abstract
The problem we address in this paper is that the potential impact of Large Language Models (LLMs) on research practice in information systems is not well understood. Our focus is on how LLMs could support literature review processes. This paper therefore aims to advance knowledge on how LLMs could support knowledge exploration through literature reviews. The knowledge contribution consists of meta-requirements that inform the design of LLM-based tools for knowledge exploration. The meta-requirements are theoretically justified by scrutinizing established IS literature review methodologies, reported challenges of LLMs, and design process experiences. Furthermore, we introduce an LLM-supported literature review process model and map the relationships between the meta-requirements and specific phases of that model. This work contributes a foundation for designing transparent, controllable, and resource-efficient tools for knowledge exploration, thereby supporting the rigor of knowledge exploration in information systems research.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sjöström, J., Cronholm, S. (2024). Meta-requirements for LLM-Based Knowledge Exploration Tools in Information Systems Research. In: Mandviwalla, M., Söllner, M., Tuunanen, T. (eds) Design Science Research for a Resilient Future. DESRIST 2024. Lecture Notes in Computer Science, vol 14621. Springer, Cham. https://doi.org/10.1007/978-3-031-61175-9_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-61174-2
Online ISBN: 978-3-031-61175-9