In this section, we characterize the explanatory model approach, explain the term, and discuss the underlying notion. Figure 1 provides an overview of its characteristics. We then outline the potential role of explanatory models in teaching and learning processes and in computing education research.
3.1 Characterization of Explanatory Models
What is an explanatory model? An explanatory model is a conceptual model that is an idealized representation of computing objects, such as computational concepts, digital artifacts, or socio-technical systems (i.e., compositions of human beings and digital artifacts). Such models serve the educational purpose of providing specific perspectives on, and explanations of, computational concepts and digital artifacts and their behavior. For example, an explanatory model for data-driven technologies can focus on the role of data [see 25, 26]. Explanatory models allow students to explain computational concepts or to explore and make sense of the inner workings and contextual effects of specific digital artifacts.
Why do we use the term 'explanatory models'? While traditional concepts and algorithmic systems could be explained in full technical detail, this is challenged by the complexity of real-life digital artifacts, current technological developments, and paradigm shifts in the discipline (e.g., regarding AI and ML). Based on the notion of models discussed earlier, and especially the of/for distinction, the following perspectives can be adopted: First, explanatory models represent computational concepts and digital artifacts, thereby focusing on specific aspects. Frameworks from computing education can assist in this regard, for example, by providing different levels at which AI systems can be considered [e.g., 47] or a set of aspects of programs [e.g., 44, 33]. This representational perspective of explanatory models can cover different degrees of complexity and technical precision. For example, an explanatory model of large language models could use the metaphor of stochastic parrots [see 5]. Second, explanatory models serve as tools for exploring and explaining digital artifacts, which involves making sense of their inner workings and reasoning about their behavior. From this perspective, an explanatory model highlights different aspects of digital artifacts and serves as a tool for uncovering the inner workings of specific technologies. For example, the aforementioned explanatory model of large language models could be used to explore and explain the outputs of a specific text generation tool, as sketched below.
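To make this concrete, the following minimal Python sketch illustrates the core idea the stochastic-parrots metaphor highlights: text is continued by sampling the next word from co-occurrence statistics rather than from an understanding of meaning. The probability table and function name are invented for illustration and are not taken from an actual language model or from the cited works.

import random

# Toy illustration of the "stochastic parrot" explanatory model: text is
# continued by repeatedly sampling the next word from probabilities derived
# from word co-occurrence statistics, without any model of meaning.
# The probability table below is invented for illustration and is not taken
# from a real language model.
NEXT_WORD_PROBS = {
    "the":     {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat":     {"sat": 0.6, "ran": 0.4},
    "dog":     {"sat": 0.3, "ran": 0.7},
    "weather": {"is": 1.0},
    "sat":     {"down": 1.0},
    "ran":     {"away": 1.0},
}

def continue_text(prompt, max_words=5):
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no statistics for the last word: stop generating
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))  # e.g., "the dog ran away"

Such a sketch can serve as a lens for discussing why a text generation tool produces fluent but sometimes unfounded output, without claiming to describe the actual architecture of any specific system.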
Even though it builds on the notion of models from science, the representational view of models in computing education differs from that in the natural sciences. For example, when considering a stone (as a natural object), the question of whether it is a good stone makes no sense, as it is just a stone; in contrast, digital artifacts are based on human decisions and values, so such a question does make sense. This example illustrates that explanatory models related to digital artifacts or socio-technical systems require characteristics different from those of models in science and science education. This characteristic of explanatory models is supported by the dual nature theory, which stems from philosophical debates on the nature of technical artifacts and is used, for example, in computing and technology education [e.g., 44, 10]. Comparing explanations of natural phenomena with explanations of digital artifacts, de Ridder [13] emphasizes that digital artifacts must be considered in light of human agency, which is not the case for natural phenomena. According to this theory, two views can be adopted when understanding or explaining digital artifacts [30, 44, 54]. While the architecture or structure perspective refers to the inner workings of a digital artifact (e.g., its algorithmic workings), the relevance perspective relates to an interpretation of the artifact's intentions, meanings, and social effects (e.g., why it was designed as it is, or how it can be used). Notably, different aspects of a digital artifact (e.g., an algorithm) can be described from both perspectives. The theory holds that digital artifacts are not neutral, as they embody human purposes and intentions. The two perspectives are interrelated: understanding the relevance is necessary to comprehend the inner workings and, conversely, understanding the architecture is a prerequisite for evaluating the relevance [see 45, 30]. Accordingly, it is argued that a comprehensive understanding of a digital artifact involves both perspectives. For example, digital artifacts often impact individuals and society. Rahwan et al. [42] describe several examples of potential influences of AI systems. For instance, systems select the information (or misinformation) people see in their news feeds, which can influence individuals' behaviors, emotions, and opinions [29, 52]. Accordingly, models of digital artifacts should not be representations of the inner workings alone (the architecture perspective) but, importantly, should also provide explanations of their functions, meanings, and impacts (the relevance perspective).
In summary, explanatory models always draw on both the architecture and the relevance perspective to fulfill their interpretative and explanatory functions. This allows students to reason about digital artifacts, their behavior, and their influences on individuals and society.
Important difference between science and computing. Models in natural science and science education are developed and used to understand natural phenomena. In contrast, in computing education, models and modeling are used to understand and develop digital artifacts rather than primarily to understand given phenomena of the analog world. (Note that some digital artifacts are indeed designed to examine and understand natural phenomena, such as computational simulations, but this specific case deserves an in-depth discussion of its own [e.g., 41].) However, designing digital artifacts also impacts the analog world, which computing does take into account, but always in relation to digital artifacts. Thus, models for computing education have different and additional functions compared to those known in science. Nevertheless, experiences of teaching and learning with and about models in science education can be relevant to teaching and learning explanatory models, especially those related to technologies we struggle to understand and explain. This parallels Rahwan et al. [42], who argue for adopting methods from the natural sciences to investigate and understand the behavior of AI systems.
3.2 Explanatory Models in Teaching and Learning Processes
Based on the characteristics of explanatory models, we discuss use cases for teaching and learning practice and research in computing education.
Students learn explanatory models. Similar to notional machines, which are argued to require explicit teaching [see 17, 48], explanatory models are intended to be made explicit; that is, learning them becomes a concrete learning goal. However, while notional machines are meant as scaffolds or vehicles to support students in learning the ground truth of algorithmic programs, explanatory models are a goal in themselves, which includes teaching students about these models and enabling them to work with them. Thereby, explanatory models aim to support students' metacognitive thinking processes, such as reflecting on different perspectives on computational concepts or on the nature of computing; similarly, in science education, learning about and with models is argued to serve as a metacognitive tool [e.g., 12]. Explanatory models provide a perspective or lens on computational concepts and digital artifacts. For example, this allows students to explain abstract concepts (e.g., algorithms or large language models) or to explore, explain, and reason about specific digital artifacts, their inner workings, and their behavior (e.g., particular chatbots). This could include using an explanatory model as a lens on large language model applications, allowing students to find explanations for their behavior, such as reasoning about the generated text. Explanatory models can also be used for designing digital artifacts, such as applying an explanatory model of neural networks to design an ML application. Explicitly learning and using explanatory models also involves learning about their purposes, allowing students to critique and reflect on the contexts in which a model can be used and when it may not be applicable.
Educators use explanatory models. Beyond serving as learning content, explanatory models can also help educators design teaching units and materials. Teaching about explanatory models is meant to support students in forming corresponding mental models, which are mental constructs (i.e., knowledge structures), for example, related to systems or algorithmic processes [27]. Mental model theory describes how people perceive, explain, and predict the world [20]. It is often used in computing education research, for example, in the approach of notional machines [e.g., 17] or when examining students' conceptions of topics like AI [e.g., 36]. While mental models are personal and internal representations, their external counterparts are conceptual models [20, 38, 48], such as explanatory models. Their representations can include analogies, artifacts, or visual explanations of systems [38, 48]. Conceptual models can be useful in teaching to explain the structures and inner workings of digital artifacts [48]. For example, a study by Ben-Ari and Yeshno [4] indicates that learning a conceptual model of the inner workings of word-processing software supported school students in developing conceptual understanding and in interacting with the software. Similarly, a study suggests that learning an explanatory model can help students understand everyday technologies [26]. However, based on experiences in science education, we should note that presenting conceptual models does not necessarily lead to adequate mental models, as students often lack the disciplinary or domain knowledge to interpret the presented models [20].
This relation of explanatory models to intended mental models points to their potential for diagnosing students' conceptions. Research on students' perspectives on computational concepts or on their preconceptions can be understood as examining the explanatory models students know, as in the study about algorithms mentioned before [see 3]. This can help teachers choose which explanatory models to introduce in a learning group. Additionally, explicating these explanatory models can help students reason about computational concepts. For example, using a recipe analogy as an explanatory model of algorithms could include discussing which aspects of algorithms this analogy adequately represents (e.g., step-wise operations) and where it has its limitations (e.g., loops may be difficult to express); the sketch below illustrates this.
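As an illustration, the following minimal Python sketch (with invented step names, not taken from the cited study) contrasts the aspect of algorithms the recipe analogy captures well, namely step-wise operations, with a point where it becomes strained, namely the implicit loop behind an instruction such as "knead until smooth".

# Toy sketch of the recipe analogy as an explanatory model of algorithms;
# all step names are invented for illustration.

def mix_ingredients():
    print("mixing ingredients")

def knead_dough(folds_until_smooth=10):
    # Here the analogy becomes strained: "knead until smooth" hides a loop
    # with an explicit termination condition, which everyday recipes rarely
    # spell out but algorithms must.
    folds = 0
    while folds < folds_until_smooth:
        folds += 1
    print(f"kneaded dough with {folds} folds")

def let_rise(minutes):
    print(f"letting dough rise for {minutes} minutes")

def bake(temperature_c, minutes):
    print(f"baking at {temperature_c} degrees C for {minutes} minutes")

def bake_bread():
    # Step-wise operations: the aspect of algorithms the analogy captures well.
    mix_ingredients()
    knead_dough()
    let_rise(minutes=60)
    bake(temperature_c=220, minutes=35)

bake_bread()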
While using notional machines or analogies as vehicles to teach computational concepts is not new, the presented approach calls for making explanatory models explicit and for making learning and using them one of the primary learning goals. Explanatory models then serve not only as learning content but also provide diagnostic functions and can support the development of interventions.
3.3 Explanatory Models in Computing Education Research
In addition to using explanatory models in computing education practice, we envision further potential uses in research. The diagnostic function of explanatory models mentioned above also applies to research practice, such as studies of students' conceptions of computational concepts. Thus, explanatory models can serve as lenses for empirical research, inspiring the development of corresponding research instruments. Explicating the explanatory models used in intervention studies could benefit communication about rationales and pedagogical ideas and, hence, cumulative knowledge building. For example, clarifying the explanatory model taught in a study's intervention could help researchers compare the results with other studies that address the same explanatory model but use different pedagogical approaches.
Discussions of explanatory models also relate to discussions of the nature of the CS discipline and of computing education (e.g., prompted by fundamental changes regarding new technologies). The approach challenges the idea that we can teach and explain the outcomes of the discipline through reduction, by breaking them down to the smallest algorithmic details. We may need to rethink whether this approach holds for new computing topics. Practices of working with explanatory models could imply that computing education may need to focus more on scientific traditions [see 50], also to convey a coherent and authentic image of the discipline.