1 Introduction

The dissatisfaction with standard economic models, based on general equilibrium theory and, in the case of macroeconomic research, on analytical dynamic systems of difference equations, led to the development of agent-based computational economics (ACE). What differentiates the two approaches is that the first focuses on dynamic systems characterised solely by negative feedbacks, and therefore having a stable solution (a fixed point), while the second analyses complex systems, featuring constant change of states and emergent properties at the system level, which cannot be inferred from the lower-level elements alone. Such properties appear only during the evolution of a system, through interactions of its different parts, i.e., as an effect of the sum of the agents’ endogenous non-linear interactions.

Such an approach, which differs from the standard tools of mathematical analysis used to study economic systems, offers vast possibilities and can be applied to problems that cannot be addressed using stable systems of difference equations. However, despite considerable advances, the ACE literature faces methodological problems of its own, especially given the direction this strand of economic research has taken.

The aim of this review of agent-based computational economics is not to provide a complete summary of the models that have been devised in the last thirty years, but to identify the main lines along which the research has been proceeding, to evaluate them critically, and to propose a modification of research practices that would lead to a better understanding of the mechanisms of real and artificial complex economic systems.

In this paper, three issues are discussed. The first concerns the evolution of the agent-based economic modelling approach over the last twenty years in the context of uncovering positive and negative feedback mechanisms in complex economic systems, generating endogenous growth, cyclicality as well as acyclical variability, and how the results obtained so far relate to the actual world. This includes a discussion of validation criteria for ACE. The second issue is the departure from the original ACE agenda: the research practice of the last twenty years has become increasingly similar to the mainstream approach with respect to the purposes of inquiry and model building. This is undesirable, because it abandons the goal of uncovering the complex dynamics of structurally rich economic systems and instead aims solely at matching moments of the data, without putting enough focus on the role of real-world structural characteristics of economies in growth, cycles, and policy. Finally, future perspectives for agent-based (AB) model building are discussed, and a new procedure for the investigation of feedback loops and causality channels in complex economic systems is proposed.

2 Complex systems

It is often argued that agent-based models (ABM) are more suitable for representing actual economies than those based only on non-algorithmic systems of difference equations because ABM, just like real-world economies, are complex systems. This does not simply mean that they are composed of many elements and are difficult to understand. Traditionally, a complex system has been characterised as a system that possesses certain properties (Tesfatsion 2006; Mitleton-Kelly 2003; Andriani 2003). First, it consists of many agents, each following a decision rule that specifies its actions in response to information and stimuli. Macro-scale patterns arise from many micro-scale actions; these are emergent properties of a complex system, which cannot be inferred before a simulation of the system. These agents are interconnected and thus form a network, which may evolve over time. Interactions between agents result from their decision rules, which are most often case-dependent, and so the interactions are usually non-linear, thereby often making causal patterns obscure. Due to the interconnections, complex systems feature feedback loops, which can be positive or negative. This is contrary to the usual practice of the DSGE strand of economic research, in which the focus is solely on negative feedbacks, which drive the analysed system back to a steady state. Moreover, complex systems are dissipative and evolutionary; their changes over time are history-dependent.
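
These defining properties can be made concrete with a minimal simulation sketch. The following toy adoption model is purely illustrative (the threshold rule, sample size and all parameter values are assumptions, not drawn from the literature reviewed here): each agent adopts a behaviour once a sufficient share of randomly met peers has adopted it, producing a positive feedback loop and a macro-level cascade that cannot be read off any single agent's rule.

```python
import random

# Toy illustration (all parameters are assumptions): threshold-based
# adoption with positive feedback. Each agent adopts once the share of
# adopters in a random sample of peers reaches its personal threshold.
random.seed(42)

N = 200
thresholds = [random.uniform(0.05, 0.5) for _ in range(N)]  # heterogeneous agents
adopted = [False] * N
adopted[0] = adopted[1] = True  # a small initial seed

for step in range(50):
    for i in range(N):
        if not adopted[i]:
            peers = random.sample(range(N), 10)  # random interaction network
            if sum(adopted[j] for j in peers) / 10 >= thresholds[i]:
                adopted[i] = True

# The macro-level adoption share is an emergent outcome of the interactions.
print(sum(adopted) / N)
```

Whether the cascade takes off depends on the whole configuration of thresholds and interactions, not on any individual rule in isolation; this is the sense in which the macro pattern cannot be inferred without simulating the system.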

Such definitions by characteristics may seem arbitrary or imprecise to some; indeed, the view on what a complex system is—or ought to be—has been evolving in recent years among scientists studying complexity. A “traditional” view of complex systems was that they are characterised foremost by the emergence property (Holland 1998), but the definitions usually comprised many more characteristics, such as the ones enumerated in the paragraph above. Snyder et al. (2011) reduced these descriptions and defined a complex system as one composed of interconnected parts that as a whole exhibit one or more properties not present in the individual parts alone. Thus, in their view, emergence plays the central role in this concept. Ladyman et al. (2013), on the other hand, put the main focus on interactions, proposing that a complex system is an ensemble of many elements interacting in a disordered way, resulting in robust organisation and memory.

Nonetheless, recently Estrada (2023) has come to the conclusion that the existing definitions of complex systems lack clarity and do not successfully separate the concept from other types of systems; he claimed that even emergence is not sufficient for calling a system complex. Estrada (2023) defined a system to be complex if there is a bidirectional non-separability between the identities of the parts and the identity of the whole. Then, not only is the identity of the whole determined by the constituent parts, but the identities of the parts are also determined by the whole, due to the Morinian nature of their interactions. A Morinian interaction between A and B is a relation \(\leftrightarrow\), such that \(\overset{\leftrightarrow }{A_1A_2}\) is different from the mere union of \(A_1\) and \(A_2\), where \(A_1\) and \(A_2\) are two elements of a set of entities S (Estrada 2023). This implies that the interaction has changed the nature of the interacting objects and of the whole they produced. Thus, agent-based complex systems make it possible to uncover patterns that analytical dynamic systems cannot, precisely because the latter do not possess this property.

Accepting the proposition that all real-world economies function as complex systems has profound consequences for economic research. Adopting an agent-based approach allows dynamic analysis of positive economic feedbacks (i.e., amplifiers causing the growth of all or some of the state variables of a system), which is impossible in the dynamic-systems-based, general equilibrium framework. It also enables the study of many iterations of business cycles, contrary to the general-equilibrium approach using dynamic systems of difference equations, wherein researchers study the impulse response functions resulting from only a single shock.

Complex systems may offer other, broader perspectives on economic growth, especially regarding the role of consumer demand or the feedbacks between firms’ policies, including investment, and the demand of households. Agent-based computational economics allows the testing of theories of consumer and firm behaviour that are alternative to the prevailing modelling standard. It also enables theory-building by constructing conjectures and formulating theories on the basis of simulations, without the need for closed-form analytical results, which are impossible to obtain for complex systems. With the help of agent-based models, falsifying examples for existing theories may be found (Judd 2006). Finally, agent-based computational economics allows researchers to build rich theoretical models for empirical tests of new hypotheses and theories, especially in cases where analytical, closed-form solutions cannot be found (Axtell and Farmer 2022).

We may view the complexity of real and artificial economic systems as coming from three general sources: the macrostructure of a system (encompassing all sectors of an economy and their internal characteristics, such as the number of agents, institutions, the qualitative types of supply and demand), the decision rules of individual agents and interactions between them (e.g. within a given market), and the interaction of the macrostructure with a given type of microstructure (i.e., the kind of individuals’ behaviour, or market composition, or a particular form of competition in various sectors). These three sources may be represented with various degrees of complicatedness and number of elements. Moreover, these sources of complex dynamics are interdependent, and the evolution of a system is always altered by all of them.

Henceforth, the terms “complexity” and “more/less complex” in the context of ACE will refer to at least one of the following three notions.

1) Macroeconomic complexity: how many sales and production sectors have been included in a model; does the macroeconomic sectoral network (i.e., the connections between sectors as depicted by Leontief matrices) mimic a real counterpart, and if not, to what extent can it be treated as an approximation of one—how similar are the dynamics of these two static or dynamic Leontief matrices? Are there reduced forms for unmodelled parts of an economy?

2) Mesoeconomic complexity: how various markets are modelled. What is the market composition in terms of competition and concentration – does it match what is known about such markets (in a given real-world economy) in terms of the number of firms, the sizes of market shares and the fierceness of competition? Are firms from different markets connected directly with their suppliers or via reduced-form devices? How does the general form of these connections reflect various industrial organisation theories or the data? Is either the supply or the demand side represented aggregatively?

3) Microeconomic complexity: is the individual behaviour modelled by simple, ad hoc rules or is it based on some theories? Is there empirical evidence for the latter (they need not be very elaborate) or for the applied “rules of thumb”?
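
The macroeconomic criterion above asks how similar a model's sectoral network is to its real counterpart. One crude way to quantify this, sketched below with toy matrices (the numbers and the choice of a relative Frobenius distance are illustrative assumptions, not a standard from the ACE literature), is to compare a model-implied input-output matrix with an empirical one:

```python
# Toy similarity check between a model-implied Leontief (input-output)
# matrix and an empirical one, using a relative Frobenius distance.
# Both matrices below are illustrative placeholders, not real data.

def frobenius(m):
    return sum(x * x for row in m for x in row) ** 0.5

def relative_distance(model_io, empirical_io):
    diff = [[a - b for a, b in zip(ra, rb)]
            for ra, rb in zip(model_io, empirical_io)]
    return frobenius(diff) / frobenius(empirical_io)

empirical_io = [[0.2, 0.1, 0.0],
                [0.3, 0.1, 0.2],
                [0.0, 0.4, 0.1]]
model_io     = [[0.25, 0.1, 0.0],
                [0.30, 0.1, 0.2],
                [0.00, 0.3, 0.1]]

print(round(relative_distance(model_io, empirical_io), 3))
```

A distance of zero indicates identical input-output structures; in practice one would also compare how the two matrices evolve over time, since the criterion refers to dynamic Leontief matrices.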

The idea behind this classification is to keep track of how the construction of an ABM relates to the actually observed structure of the world. This is relevant because it is in principle possible to keep adding layers to a model that would improve its empirical fit and increase its “difficulty”, but that would have no empirical counterparts, and would therefore lead to overfitting. Such a model would also not be informative about how actual economies function.

3 Agent-based research in economics

3.1 The state of the research

It is not the purpose of this paper to describe in detail the existing AB frameworks or the results of their simulations, for this was to a large extent accomplished by Dawid and Delli Gatti (2018). Instead, the focus of this paper is to succinctly present the core features characterising ABM in a way that allows a general view of their mechanical structure and thus enables the evaluation of the degree of their complexity. This is done to establish how far complexity economics research has moved in that respect, and whether it needs extensions or changes to improve our understanding of real-world economic systems.

3.1.1 Macroeconomic agent-based models

All agent-based modelling applied to studying economics features many interacting agents. Each of them acts according to specific decision rules, which are either in the form of simple, ad hoc equations or are described by algorithmic decision trees, without imposing market clearing. Thus, each economic AB model is a form of a complex system, but such models vary in their degree of complexity. Often only the supply side or the demand side is heterogeneous, while the other is either aggregative or populated by representative agents, e.g. workers in the “Keynes+Schumpeter” framework (Dosi et al. 2019, 2018, 2017, 2015, 2010), or a single firm (Salle and Yıldızoğlu 2014; Salle et al. 2013).

Dawid and Delli Gatti (2018) have proposed a typology of macroeconomic ABM based on dividing them into seven families of models. However, many papers in this research strand do not fit into this characterisation (as the aforementioned authors themselves note). Thus, another, topic-based approach is taken here. Macroeconomic research using agent-based models has revolved around a few main topics. These are: endogenous macroeconomic growth; policy effects on economic volatility and the rates of change of output and other variables; business cycle fluctuations and stability; the role of agents’ expectations in economic growth and cycles (Table 1).

All of the existing approaches can be characterised by all or some of the following features: one to three production sectors, differentiated vertically; one to three types of consumers (where a “type” refers to a type of consumer behaviour); a homogeneous banking sector, or one consisting of banks that follow decision rules that are qualitatively the same but subject to idiosyncratic shocks. Given that agent-based models are tools for studying complex systems with a constantly evolving state, there is no stationary steady state. Moreover, the rejection of the general equilibrium framework implies that other measures of the internal consistency of a system must be applied in the case of AB models.

Table 1 A selection of macroeconomic agent-based research, ordered by primary topic and year of publication

The importance of the consistency of the initial values of an agent-based model with its structure was underlined by Caiani et al. (2016, 2019b). They stressed that most macroeconomic agent-based models either are not stock-flow consistent or do not provide the initialisation procedure applied for a given model. Both issues are fundamental for computational economic analysis using such methods. The lack of stock-flow consistency implies that the values of some variables disappear or appear out of nowhere. Moreover, initial conditions ought to be consistent with the model’s structure, i.e., they must not lead to contradictions between the model’s equations in the initial period.

The modelling practice has been mixed in this respect. Some models were initialised from calculated steady states, while others were started from an arbitrary point and allowed to exhibit very volatile dynamics before reaching stationary states (that is, states fluctuating within a small closed interval rather than remaining at a single steady state). This initial volatility stems from the fact that, as noted by Dawid and Delli Gatti (2018), typically no ex-ante assumptions on the coordination of individuals are made. That is, no assumptions are made about what initial values would have resulted from the model’s past behaviour, had it existed. Moreover, as Caiani et al. (2016) noted, many agent-based frameworks do not report the volatile periods occurring before the stationary states.

However, stock-flow consistency defined as keeping track of financial transactions between agents is not enough to ensure that a model has no outflows or inflows other than the random shocks supplied by a researcher. Accounting principles and rules of timing are also crucial: new values are injected into many AB models by the assumption that wages from time t are both used for expenditure in that period and paid out of the very revenues that this spending has generated. This would be possible only if wages were paid in a continuous manner (under the assumption that consumption is larger than investment). Such a mechanism is not present in the real world; what is more, paying wages in advance of receiving revenues from sales, without this mechanism, amounts to firms creating money or financing the entire wage bill with debt. Neither of the two possibilities is justifiable theoretically or observed empirically.
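
The timing problem described above can be made concrete with a toy two-agent ledger (the numbers are illustrative assumptions): if wages are pre-financed from the previous period's balances, total money stocks are conserved; paying them out of the same period's not-yet-received revenues would require money to appear from nowhere.

```python
# Toy stock-flow check (illustrative numbers): wages are paid out of the
# firm's opening balance, households then spend them, and sales revenue
# arrives only after wages were paid. Total money is conserved.

def period(firm_cash, household_cash, wage_bill):
    assert firm_cash >= wage_bill, "wage bill must be pre-financed"
    firm_cash -= wage_bill          # wages paid from last period's balance
    household_cash += wage_bill
    spending = household_cash       # households spend all wage income
    household_cash -= spending
    firm_cash += spending           # revenue arrives after wages were paid
    return firm_cash, household_cash

m0 = 100.0 + 0.0                    # total money before the period
firm, hh = period(100.0, 0.0, 80.0)
m1 = firm + hh                      # total money after the period
print(m0, m1)                       # the two totals coincide
```

Paying the period-t wage bill out of period-t revenues would violate the assertion: the firm would need money it has not yet received, which amounts to money creation or full debt financing of wages.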

3.1.2 Agent-based financial markets

As for financial agent-based models, aimed at modelling the stock exchange, all of the existing approaches use a small number of trader types to explain the volatility and nonstationarity observed on real markets. Typically, an agent can be a chartist or a fundamentalist. In models aimed at explaining instability or herding behaviour, each agent can decide whether to switch between types, or changes type with an exogenous probability (Barde 2016; Chiarella and Di Guilmi 2011; Cont and Bouchaud 2000). Models featuring many types of individual traders are scarce; those featuring institutional traders who use, at least partially, aggressive as well as algorithmic, econometric- and routine-based trading strategies are non-existent.
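
A minimal version of such a two-type market can be sketched as follows (the linear demand rules and all parameter values are illustrative assumptions, not taken from any specific paper): fundamentalists push the price toward a fundamental value, chartists extrapolate the latest price change, and their combined excess demand moves the price.

```python
import random

# Toy chartist-fundamentalist market (all parameters illustrative).
random.seed(0)

fundamental = 100.0
prices = [100.0, 101.0]           # two initial prices seed the trend
n_fund, n_chart = 60, 40          # numbers of traders of each type
kappa_f, kappa_c, impact = 0.05, 0.3, 0.01

for t in range(500):
    p, p_prev = prices[-1], prices[-2]
    demand_fund = n_fund * kappa_f * (fundamental - p)   # mean reversion
    demand_chart = n_chart * kappa_c * (p - p_prev)      # trend extrapolation
    noise = random.gauss(0.0, 0.5)                       # idiosyncratic shocks
    prices.append(p + impact * (demand_fund + demand_chart) + noise)

print(min(prices), max(prices))   # price fluctuates endogenously around 100
```

With a larger chartist weight the trend-following feedback dominates and the price path destabilises; letting agents switch between the two types, as in the models cited above, makes the market composition itself a state variable.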

Table 2 A selection of agent-based research on financial markets, ordered by type and year of publication

Agent-based models of a financial market (usually a stock exchange) can be divided into models featuring complex or simple agent heterogeneity. However, “heterogeneity” always takes the form of a few (at most three) basic types of traders (e.g. fundamentalists, chartists and noise traders), or a variation of types within one of these groups. In the latter case, it is manifested in the existing models of financial markets as a distribution of different values of a parameter in a forecasting or decision rule, or as random draws of information shocks or other signals, which create a distribution of outcomes. Table 2 lists examples of such papers, divided according to the type of financial-market representation used.

Studies using financial agent-based models have focused on reproducing the volatility and abrupt behaviour of stock markets, such as quick surges and collapses of prices, herding and volatility clustering, favouring simple modelling solutions. That is, these models are characterised by only a few trader types using simple rules or having zero intelligence (Gode and Sunder 1993). Nevertheless, such modelling formulations of traders’ behaviour do not reflect real-world traders’ behaviour: professional traders use algorithmic trading and econometric models, such as ARIMA, filters, the GARCH family and stochastic volatility models. High-frequency trading and arbitrage-seeking are also common phenomena. Despite these facts, such features have not been included in financial agent-based models, and the heterogeneity of traders has been investigated only to a small extent. This matters because even though it is now possible to reproduce volatility that resembles real financial market fluctuations and shocks, the existing models suffer from the Lucas critique due to their lack of structurally realistic microfoundations (as discussed in Sect. 3.2).

While the existing models of financial markets do not feature realistic trading rules or trader types (in the sense of the lack of institutional, high-frequency or algorithmic traders), many of them are open-ended and in principle could be easily modified to incorporate such features. The works of Chiarella et al. (2015), Chen and Yeh (2001), LeBaron (2001) are among the most flexible and suitable for this purpose. It is worth noting that the focus on very simple, ad hoc behaviour of agents is not characteristic of only financial ABMs.

3.1.3 Studying the consequences of evidence-based behavioural microfoundations

Agent-based models make it possible to represent behaviour as decision-tree algorithms; thus, agents’ decisions can be context-dependent, optimal, or based on psychological or empirically observed mechanisms. The ACE framework therefore allows researchers to avoid the criticism that standard choice theory has been under, and to apply models of behaviour that are consistent with econometric and experimental evidence, e.g. models that do not feature the independence of irrelevant alternatives property (Train 2003), or that do not rely on optimisation.

Table 3 A selection of agent-based research investigating the consequences of various behavioural-economic and psychological theories

These possibilities, however, have not been fully exploited. While most ACE frameworks feature elements that are interpreted as behavioural, the tendency in agent-based research has been to focus on very simple, ad hoc models of behaviour, rather than trying to represent complex decision processes or psychological and behavioural theories of human and organisational actions. A notable exception is the research concerning learning processes, which often exploits the findings of experiments and uses concepts such as reinforcement learning.

In macroeconomic agent-based models, consumer behaviour is usually represented by constant-spending-rate rules or analogues of the permanent-income or buffer-stock models. Despite the existence of numerous empirical papers questioning the permanent income model (the reader is directed to the works of, among others, Boug et al. (2021), Parker (2017), Canzoneri et al. (2007), Yogo (2004), Zeldes (1989), Mankiw et al. (1985), Hansen and Singleton (1983), Flavin (1981)), and the fact that the buffer-stock theory has not been confirmed using micro data on consumer behaviour (Jappelli et al. 2008; Ludvigson and Michaelides 2001), almost no other theories of consumer behaviour have been used in AB modelling.

Most ABM studies have assumed simple, ad hoc rules governing agents’ actions, or decision schemes that have exact or almost exact counterparts in the standard literature, such as Leontief production functions or the aforementioned types of consumer behaviour. There are only a few exceptions to this pattern. Among the papers that have studied the implications of various psychological and behavioural-economics theories of consumer behaviour are those of Chudziak (2023), Taghikhah et al. (2021), Muelder and Filatova (2018), Lorentz et al. (2016), Kapeller et al. (2013), Valente (2012), Ciarli et al. (2010), Malerba et al. (2007). As for behavioural theories of the firm (i.e., ABM based on actual theories, not ad hoc rules), various learning processes, and game-theory-based inquiries, ACE research is even scarcer (Table 3).

3.1.4 Complex agent-based and econophysics networks

Individual behaviour and interactions may not be the only source of complex dynamics (as defined by Epstein (1999)). Given the defining properties of complex systems, studying economic networks, such as supply chains—inter- as well as intra-sectoral connections, networks between consumers and sellers, etc.—seems to have uniquely large potential for uncovering the causal patterns governing economic fluctuations.

However, papers concerning “complex economic networks” feature only one market or a single final goods market with, usually, a production (supply chain) network, consisting of two sectors of differentiated final-good and intermediate-good producers (sometimes with the addition of a third branch of retailers), or with a network of connections between consumers, or feature only a financial market network (Table 4). The latter has the form of creditors-lenders-depositors connections via a banking sector. A notable exception is the work presented by Beltratti et al. (1996), which presents a very general method of constructing artificial economic and financial networks; however, their approach has not been developed into realistic macroeconomic or financial market models.

There are few examples of papers combining two or three of the aforementioned types of networks in one model. For instance, Gualdi and Mandel (2016, 2019) and Gatti et al. (2010a) analysed model economies consisting of a single sector of heterogeneous firms and a single sector of differentiated consumers. The work of Opolot and Azomahou (2021) is a rare example of representing a network of consumers of a single firm – or customers of the entire sector of producers. Other researchers have focused on creditor-debtor networks featuring banks and firms (Vitali et al. 2016; Riccetti et al. 2013; Cainelli et al. 2012; Gatti et al. 2009). Finally, sometimes the analysed networks are only between peer entities, such as countries, or represent social interactions or geographical dispersion.

Static networks are often modelled in the form of a mathematical graph, with nodes representing agents, while time-varying systems of interconnections change their topology in each period due to random matching or queue-and-rationing processes. For example, it is often assumed that consumers compare the prices of only a subset of randomly chosen firms in each period, as in Gurgone et al. (2018), or that firms randomly select a subset of potential business partners (Gatti et al. 2009). Similarly, Riccetti et al. (2016, 2013) have assumed that the links between firms and banks in the credit networks they have created are random.
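
The random-matching mechanism can be sketched as follows (the firm and consumer counts, the sample size and the fixed prices are illustrative assumptions): each period every consumer samples a few firms and links to the cheapest one, so the consumer-firm network topology is redrawn every period.

```python
import random

# Toy random-matching network (all parameters illustrative): consumers
# compare the prices of a random subset of firms and link to the cheapest.
random.seed(1)

n_firms, n_consumers, k = 20, 100, 3
prices = [random.uniform(1.0, 2.0) for _ in range(n_firms)]

def match_once():
    links = []
    for _ in range(n_consumers):
        sampled = random.sample(range(n_firms), k)   # random subset of firms
        links.append(min(sampled, key=lambda f: prices[f]))
    return links

links_t1 = match_once()
links_t2 = match_once()
changed = sum(a != b for a, b in zip(links_t1, links_t2))
print(changed)   # many consumers switch suppliers between periods
```

Because the links are redrawn from scratch, the network topology changes each period even with fixed prices; in the cited models, prices and matching co-evolve.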

Table 4 A selection of agent-based and econophysics networks research, ordered by primary topic and year of publication

It is worth underlining here that econophysics, despite covering a large part of the economic networks literature, does not entirely overlap with complexity economics, as the stress in the former is on the application of physics-based or mathematical-physics models to economics, often without much economic content (Arthur 2021). That is, within econophysics, large real or artificial data sets (such as networks) are explored, and researchers seek simple mechanisms within them. Conversely, by the very definition of complex systems, elaborate, multi-layer economic representations featuring many interacting agents generate dynamics and endogenous phenomena that cannot be generated by structurally simple mechanisms or non-agent-based systems.

Indeed, no paper has been published in either the economic or the econophysics agent-based literature that incorporates many qualitatively varying final-goods sectors characterised by different consumer behaviour (such as, e.g., food, clothes, electronic durable goods and houses). Some of the most elaborate existing studies of ACE economic networks are those in which authors used models featuring three sectors populated by heterogeneous consumers, firms, and banks, respectively; examples of such works are Alexandre et al. (2023), Gurgone et al. (2018), Poledna and Thurner (2016). Consequently, even more elaborate economic models with many final-goods sectors and differentiated supply chains, i.e., including a macroeconomic, sectoral network in addition to microeconomic intra- and intersectoral networks, have not been studied, and no proof has been presented that their complexity is reducible to simpler models without the loss of crucial information about their dynamics.

Most importantly, there are no AB network models of entire economies that would resemble the multisectorality of final-goods sectors (and the differences between them) or multi-branched supply chains similar to the real ones. By “resemble” and “similar” it is meant that these macroeconomic structures of an ABM would be consistent with the dynamic Leontief matrices of real-world economies. The microeconomic networks of intra-sectoral competition—driven by marketing, including product invention, or cost optimisation—are potentially crucial as well. So far, the most advanced macroeconomic networks have been those of Zhong and He (2022) and Seppecher et al. (2018).

In the first paper, Zhong and He (2022) constructed a stable network of multisectoral artificial economies connected by their Leontief matrices and populated by heterogeneous firms. The network changes over time due to structural change within countries. Nonetheless, the demand side of the economy has received little attention in their model, as have marketing, the type of market composition and competition within each sector, and microeconomic supply networks. Seppecher et al. (2018) analysed a five-sector economy, including a three-sector supply-chain network, i.e., firms operating in capital, intermediate and consumption goods sectors, consumers, and a single bank representing the banking sector. Their model featured networks within supply chains and between final-good sellers and households.

Within the existing models of economic networks, not only have few topologies been analysed, but the existing literature can also be characterised as lacking either a rich, realistic macrostructure—when compared to the real-world dynamic Leontief matrices and multisectorality characterising actual economies—or a rich microstructure. Moreover, it may not be sufficient to analyse an arbitrary artificial heterogeneous market, as its evolution will most likely depend on the type of initial conditions. Without expanding artificial markets, based on the findings of industrial organisation, into networks of networks with an imposed macroeconomic structure, we cannot be sure how much our models tell us about the real world. This leads to validation criteria for agent-based computational economics.

3.2 Criteria of success and empirical estimation of agent-based models

The variety of agent-based macroeconomic, network and financial models stands in contrast to the much more unified DSGE-based approaches to economic analysis. ACE researchers often attempt to validate their models to avoid accusations of arbitrariness and lack of applicability of their frameworks. Currently used validation criteria for contemporary ABMs include estimation and matching stylised facts; realistic microfoundations have not been considered as a correctness measure. In DSGE models, ’proper’ microfoundations are assumed to be those of consumers with complete and transitive preferences and of profit-maximising firms. Economic agent-based models are not based on general equilibrium theory and do not include optimising individuals with rational preferences; instead, bounded rationality is assumed.

Bounded rationality, however, may take many forms. Lux and Zwinkels (2018) claimed that empirical verification of the choices made in building the models enforces discipline in model design. In the last twenty years, the subfield of estimation of agent-based models has seen great progress, and an abundance of methods has been developed. Table 5 lists many of these papers; classifying these studies is not straightforward and could be misleading, since many of them apply different techniques simultaneously. Despite this unquestionable progress, an open issue remains: is the estimation of a model equivalent to its verification?

Table 5 A selection of research papers on estimation and empirical calibration of AB models

Empirical verification of macroeconomic agent-based models has usually been understood as matching the volatility observed in real-world data (Table 5). Many researchers view empirical estimation as a means of model validation (Lux and Zwinkels 2018; Fagiolo et al. 2019). Nevertheless, such a view may not only be vulnerable to a form of the Lucas critique, but also be contrary to the original agenda of economic agent-based research (Tesfatsion 2006).

Firstly, Tesfatsion (2006) underlined that outcome distributions often have a multi-peaked form, suggesting multiple equilibria, rather than a central-tendency form permitting simple point predictions. In contrast, the real world is a single time-series realisation arising from a poorly understood data-generating process (Tesfatsion 2006). Even if an ACE model credibly represented this real-world data-generating process, it might not be possible to verify the degree of this credibility using statistical tests. Tesfatsion reminded that an empirically observed outcome might be a low-probability event lying in a relatively rarely-occurring mode of the outcome distribution of the true data-generating process, or in a thin tail of this distribution.

Secondly, Canova (2009) reminded that an estimation of a theoretical model will be biased unless it correctly represents the real data-generating process. Thus, agent-based research, similarly to the DSGE stream, has moved in the direction of matching moments of the data, without much reflection on the role of multisectorality or on which version of microfoundations is closest to the actual decision processes of consumers and firms.

Fagiolo et al. (2019) argued that although researchers typically do not know the true data-generating process of the phenomena under study, which governs the generation of the unique realisations of the time series and stylised facts that can be empirically observed, the goal of a modeller should be to provide a sufficiently good approximation of the real-world data-generating process by using an agent-based model. They view the empirical validation of an ABM as a process by which one evaluates the extent to which the model is a good representation of the actual data-generating process (Fagiolo et al. 2019). The most often adopted procedure for the development and verification of an agent-based model is the indirect calibration approach, which consists of four steps (Fagiolo et al. 2019; Windrum et al. 2007). The first is the identification of some real-world stylised facts of interest that the modeller wants to explain. Next, the model is specified. Validation and hypothesis testing are performed in the third step, in order to compare the model's output with data on real-world variables by means of comparing their statistics. The fourth step is the application of the model to policy analysis.
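The four steps of the indirect calibration approach can be sketched as a simple simulation loop. The sketch below is illustrative only: the `simulate` function stands in for an actual agent-based model (here a toy AR(1) process), and the target statistics are invented, not taken from real data.

```python
import random
import statistics

def simulate(params, n_periods=500, seed=0):
    """Hypothetical stand-in for an agent-based model run; a toy AR(1)
    process plays the role of the model's aggregate output series."""
    rng = random.Random(seed)
    rho, sigma = params["rho"], params["sigma"]
    y, series = 0.0, []
    for _ in range(n_periods):
        y = rho * y + rng.gauss(0.0, sigma)
        series.append(y)
    return series

def output_stats(series):
    """Statistics of the model's output, to be compared with their
    real-world counterparts in the validation step."""
    lagged, current = series[:-1], series[1:]
    mean_l = sum(lagged) / len(lagged)
    mean_c = sum(current) / len(current)
    cov = sum((a - mean_l) * (b - mean_c) for a, b in zip(lagged, current))
    sd_l = sum((a - mean_l) ** 2 for a in lagged) ** 0.5
    sd_c = sum((b - mean_c) ** 2 for b in current) ** 0.5
    return {"std": statistics.pstdev(series),
            "autocorr": cov / (sd_l * sd_c)}

# Step 1: stylised facts of interest (illustrative targets, not real data).
targets = {"std": 0.02, "autocorr": 0.8}

# Step 2: model specification, here reduced to a parameter grid.
grid = [{"rho": r, "sigma": s}
        for r in (0.5, 0.8, 0.95) for s in (0.005, 0.01, 0.02)]

# Step 3: validation -- retain parameterisations whose output statistics
# lie within a tolerance band around the targets.
def close(s, t, tol=0.5):
    return all(abs(s[k] - t[k]) <= tol * abs(t[k]) for k in t)

accepted = [p for p in grid if close(output_stats(simulate(p)), targets)]

# Step 4 (policy analysis) would re-run `simulate` on the accepted
# parameter sets under counterfactual policy settings.
print(accepted)
```

The sketch makes the critique in the following paragraphs concrete: nothing in steps 1-3 distinguishes between two models whose outputs both pass the tolerance test, regardless of how plausible their microfoundations are.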

In a similar spirit, Lux and Zwinkels (2018) claimed that since AB models are built on the notion of bounded rationality, a researcher faces a large number of degrees of freedom, since deviations from rationality can take many forms. They proposed disciplining the choices made in building the models by confronting AB models with empirical data. They motivated this by arguing that simulation exercises with various configurations might generate similar dynamics, but that confrontation with empirical data might allow inference on the relative goodness of fit in comparison to alternative explanations.

This characterisation, nonetheless, is too vague to provide credibility for a model or to allow one to discern between two different agent-based models aimed at representing the same real-world economy or process. If one of these models has microfoundations that conform better to the findings of empirical research or to the observed economic structure, but the input shocks of the other are tuned in such a way that the volatility of its output is closer to the volatility of real-world time series for a chosen economy, does it imply that the second model is “empirically validated” or “better”?

Moreover, how does one evaluate whether a model is a “good” representation of the real-world data-generating process? If the assumed macrostructure or microfoundations are wrong—or not approximately correct, as psychological and neuroscientific studies have not yet established exactly how humans behave—then the results of a model with such assumptions will be biased and will not guarantee accurate out-of-sample predictions. Canova (2009) argued that econometric estimates of the parameters of a theoretical model will be biased unless it represents the real-world data-generating process. The same reasoning applies to analysing theoretical models and to the conditionality of inferences on the underlying structural assumptions, including those about agents' behaviour. The claim that only the “predictions”, i.e., the output of an agent-based model and its fit to the statistics of empirical time series, matter is equivalent to the justification for models based on non-algorithmic systems of difference equations, often criticised by computational economists for their lack of realism.

The focus solely on a model's output, without attention to the realism of microfoundations and economic structure, is not only contrary to a form of the Lucas critique; it is also very similar to the current practice of the dynamic-systems strand of macroeconomic and financial research. The “epicycle critique” (Fagiolo and Roventini 2017) could also be applied to such moment-matching practices in agent-based research. Contemporary agent-based research repeats the time-series-matching or moment-matching approach of DSGE, including heterogeneous-agents models, criticised by Fagiolo and Roventini (2017), who—in the case of models based on analytical dynamic systems of difference equations—viewed this practice as a form of adding “epicycles”, i.e., non-fundamental features, to models unfit to represent reality accurately.

Even though the view expressed by Fagiolo and Roventini (2017) seems to be shared by most researchers using AB models and studying complexity economics, research practice shows that ACE has not departed from such an approach to model validation. Instead of studying the nature of feedback loops, inter- and intra-sectoral flows and contagion, and comparing various assumptions and theories on consumer and firm behaviour, agent-based research has been directed towards the estimation of structurally simple models and the matching of moments of a chosen economy's time series, treating these estimates as the “true” or “correct” ones (which is unjustified in light of Canova's argument, described above). It has also been driven in the direction of drawing conclusions from models with simple macroeconomic (or market) structures and little fundamental heterogeneity of agents, i.e., heterogeneity grounded in economic interactions or in the individual characteristics of an agent.

Rejecting the assumption that consumers and firms should be modelled as rational, utility-maximising agents implies that alternative behavioural assumptions have to be made. While boundedly rational behaviour can take various forms, the behavioural and procedural features of macroeconomic agent-based models have been, to a large extent, surprisingly nondiverse and not rooted in consumer, behavioural and psychological research. Moreover, the ACE line of inquiry has surprisingly rarely concerned the consequences of various theories of the firm. Instead, researchers have focused on the easy-to-model technological view of the firm, most often using Leontief production functions (explicit or implicit ones). As for the former claim, the two ways of representing consumption in agent-based research, the permanent income and the buffer-stock models, are not supported by econometric studies of consumption. Among other findings, real-world consumption has been found to be excessively smooth yet reactive to changes in current income, while being only weakly responsive or unresponsive to variations in interest rates (Boug et al. 2021; Parker 2017; Canzoneri et al. 2007; Yogo 2004; Campbell and Deaton 1989). Evidence for the buffer-stock theory has not been found in the existing microeconometric investigations (Jappelli et al. 2008; Ludvigson and Michaelides 2001).

As for behaviour representation in financial ABM that focus on interactions on the stock exchange, almost always only a few basic types of agents are used, and if greater heterogeneity is present, it is based on the variation of a parameter in a decision rule that is otherwise the same for all agents. According to LeBaron (2021), in the last few years financial models featuring small numbers of agents have been simplified to enable direct empirical estimation. However, a model's parameters are estimated at the macro level to obtain the best fit to the data, while micro-level decisions are rarely scrutinised. The results of LeBaron's analysis show that models with a small number of agents may impose hidden assumptions about individual agent behaviour. This is of course undesirable, as it means that the model actually being analysed remains unknown.

Lux and Zwinkels (2018) supported the comparison of a model's output with real-world time series by the claim that the introduction of AB models was empirically motivated. However, Tesfatsion (2006) underlined the importance of making plausible assumptions and adopting realistic specifications for the credibility of inferences drawn from the analysed system's behaviour. Judd (2006) and Tesfatsion (2006) argued that agent-based computational economics has a constructive role for economic theory, as it investigates the implications of alternative assumptions about economic systems and allows the analysis of non-standard theories of economic behaviour. Since agent-based models are complex systems, it is difficult or impossible to use the conventional ways of describing theories, i.e., stating and proving theorems using mathematical analysis and finding closed-form solutions. Such models must be simulated, and the simulations either demonstrate that something is possible, indicate potential channels of interaction as well as feedback loops, or provide quasi-statistical robustness of the results, given the adopted assumptions.

Moreover, the argument of Canova (2009) extends to theoretical modelling: if the structure of an economic model is inconsistent with the structures of real-world economies, the results of both simulation and estimation will be biased. Of course, no researcher knows a priori which features of actual economies ought to be represented and which can be omitted for the sake of simplification. Agent-based models, however, offer the possibility to determine this by comparing models of growing complexity.

Thus, estimating or calibrating an ACE model is always conditional on the validity of the representation of the structure of an economy, the processes driving its growth and fluctuations, and the feedback loops at various levels of the economy: between different sectors, among firms from the same branch of sales or production or the same supply chain, between firms and consumers (workers), and within macroeconomic and microeconomic production and sales networks. This view is also supported by the argument of Tesfatsion (2006). Additionally, the presence or absence of financial and government sectors, and whether a given model is an autarky or represents an open economy, will further affect the analysed results. One of the main arguments in favour of reduced-form, single- or two-sector models is that they allow researchers to think intuitively about economic processes. They are also easier to implement and to calibrate or estimate to match empirical moments of selected macroeconomic time series. Matching stylised facts is less difficult for simple models as well.

However, the problem with stylised facts is that, as Brenner and Werker (2007) put it, they remain unmotivated, and it is impossible to determine whether—or to what extent—they are products of structural processes or of chance events. Additionally, Brenner and Werker note that because empirical data are unavailable or difficult to obtain for many aspects of the microspecification of a model, researchers ought to include all logically possible values for those parameters whose value cannot be fixed, or to restrict the range of values. This implies that there is a trade-off between the empirical calibration and the generality of a model (Brenner and Werker 2007), but also that the attempt to fit a model to the data may result in a biased specification. This can happen if one focuses only on matching the target time series by adopting ad hoc solutions at the level of microspecification, rather than allowing the model to remain general and selecting the outcomes close to the target only after simulations for the entire range of parameters have been run. Such a “computational comparative statics” approach (Judd 2006) would allow researchers to investigate which combinations of parameters contribute to results that fall into the same category when judged by their means, variances, maximum and minimum volatility, etc.
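The “computational comparative statics” idea, i.e., simulating the entire admissible parameter range first and only afterwards classifying the outcomes by their summary statistics, might be sketched as follows; `run_model` is a hypothetical placeholder for an agent-based simulation, and the volatility thresholds are arbitrary illustrations.

```python
import itertools
import random
import statistics

def run_model(rho, sigma, n=300, seed=1):
    """Hypothetical placeholder for an agent-based simulation run,
    returning an aggregate output series."""
    rng = random.Random(seed)
    y, series = 0.0, []
    for _ in range(n):
        y = rho * y + rng.gauss(0.0, sigma)
        series.append(y)
    return series

def category(series):
    """Coarse outcome class judged by a summary statistic (here,
    output volatility); thresholds are arbitrary."""
    sd = statistics.pstdev(series)
    return "low" if sd < 0.01 else "moderate" if sd < 0.05 else "high"

# Simulate the entire admissible parameter range first...
by_category = {}
for rho, sigma in itertools.product((0.2, 0.5, 0.8, 0.95),
                                    (0.005, 0.02, 0.08)):
    key = category(run_model(rho, sigma))
    by_category.setdefault(key, []).append((rho, sigma))

# ...and only afterwards ask which parameter combinations fall into
# the same outcome category when judged by their statistics.
for name, combos in sorted(by_category.items()):
    print(name, combos)
```

In contrast to fitting a single parameterisation to a target series, this procedure keeps the model general and reveals whole regions of the parameter space that generate qualitatively similar outcomes.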

If the validity of agent-based models is judged relative to how good a representation of the real world they are—in terms of realistic structures, initial values, parameters, and output—then its assessment runs into the problem of non-testable programs (Pullum and Cui 2012). As defined by Weyuker (1982), programs are “non-testable” if there does not exist an “oracle”, i.e., there is no tester or external mechanism that could accurately decide whether or not the output produced by a program is correct, or the tester must expend some extraordinary amount of time to determine whether or not the output is correct. On the other hand, as discussed above, matching empirical moments or stylised facts is not sufficient for choosing between ACE models or for determining their relation to the real world in terms of unbiased forecasting and policy analysis.

In the context of an economic agent-based model, determining the correctness of a representation of the world in question would require either expanding its structure until all feedback loops, sectors, behaviours, etc. were included in the model, or proving that the simplified system retains the structure of the world's dynamics. This is too large a task at the current state of development of ACE, characterised by models with simple, stylised structures and ad hoc decision rules of agents. Moreover, economic agent-based models fall into the category of programs which were written to determine the answer; if the correct answer were known, there would be no need to write the program (Weyuker 1982).

A different and—as argued later in this paper—more constructive (from a theory perspective) approach is asserting partial validity by adopting, fully or partly, realistic microfoundations and structural assumptions. Epstein (1999) argued that if two models do equally well in generating the macrostructure, preference should go to the one that is best at the micro level. Similar reasoning can be applied to any level of analysis and model-building between the microfoundations and the aggregate output series of a model. A similar point was also stressed by Tesfatsion (2006).

Issues related to determining the appropriate means for the validation of agent-based models are closely connected to the question of the methodological and philosophical framing of this type of tool. Epstein (1999) wrote that computational ABM fall into neither a strictly inductive nor a strictly deductive category of science, but instead permit a distinctive, generative approach to social science. Its defining characteristic is not only generating a macroscopic phenomenon or regularity of interest from the local interactions of agents, but also presenting an exact algorithm that generates it. Moreover, Epstein (1999) underlined the crucial role of the empirical adequacy of a model's microspecification; this keeps the analysis linked to the actual phenomenon, which is not guaranteed if reproducing empirical moments is the only goal. Epstein warned against confusing the explanation of a process with its description—merely reproducing statistics of real time series or stylised facts does not necessarily uncover the mechanisms governing these phenomena, as many different theoretical models may reproduce the same sets of such targets.

3.3 Have complex economic behaviour and systems been analysed enough?

Critics of complicated models often accuse these tools of being too convoluted to understand intuitively and claim that a “desired” output can be obtained by means of a simpler one. But what if, in order to attain analytical tractability or less complexity, the model that is constructed represents a measure-zero set of economically plausible specifications in terms of the microfoundations and macrostructure of an economy (Judd 2006)? Assumptions made for reasons of tractability or parsimony may miss many crucial channels of interaction, feedback loops and phenomena.

The behaviour of complex systems depends on their structure and dimensionality. While it is impossible to guess a priori how many variables and levels of economic activity need to be included in order to obtain an empirically credible and, in terms of predictions and estimation, useful agent-based model, investigating the economy by means of increasingly complex models may prove beneficial. Models that are structurally incomplete but have “correct” microfoundations can inform researchers about the nature of interconnections and feedbacks at various levels of economic activity. This would allow researchers to form theories about the functioning of an economy consisting of many sectors and many agents, the interactions between supply and demand on these markets, and the role of production network structures. Furthermore, the structural estimation of these phenomena and objects would be enabled, owing to the formation of precise theoretical models.

Such multi-level analyses, featuring many structurally different extensions of the same baseline model, have hardly been conducted at all, whether in a single paper or in a series of articles. The only existing examples of such research feature models that have been extended by the addition of a government or a banking sector to a single- or two-stage production sector (in the latter case, featuring capital-producing firms and consumption-good producers) and a single workers/consumers sector, sometimes with agents differentiated by idiosyncratic shocks to their productivity or reservation wages.

Another issue concerns the measures of the internal validity of a model. Stock-flow consistency, together with initial values satisfying all equations of the model, has been treated as “the” two measures of agent-based models' internal consistency. Nonetheless, whether or not these are sufficient conditions for preventing uncontrolled inflows to and outflows from the model depends on how stocks and flows are controlled for. In the majority of macroeconomic agent-based models it is assumed that consumers spend their current wages on current production, and that the revenues from these sales are used to pay these very wages. Such a continuous-time adjustment of wages and revenues does not occur in real-world economies. Actual consumers use either funds obtained in the past (even if “the past” is only the last month) or debt to make current purchases.

Moreover, if the notion of continuous-time adjustment of wages and revenues is rejected, the discussed modelling solution implies money creation by firms themselves, which is, of course, inconsistent with real-world processes. Rejecting continuous-time adjustment nevertheless poses problems for the prevalent modelling standards: if both firms and consumers use only funds accumulated in the past and debt, then nominal and real economic growth is conditional on the growth of money, i.e., of debt. However, the existing macroeconomic agent-based models feature far too little consumer debt relative to the amounts indicated by the real-world banking sectors' balance sheets. In fact, the majority of commercial banks' assets consist of mortgages, mortgage-backed securities, and housing and consumption loans, but ACE macroeconomic research has featured only short-term debts of firms and sometimes also one-period consumer credit of low individual value.

The wages-revenues problem described in the previous paragraph is a symptom of a broader challenge that agent-based research has not yet overcome, namely how to represent firms and corporate finance. The majority of models focus only on net worth as a measure of solvency and feature the neoclassical view of the firm, applying some form of a Leontief production function (sometimes in an implicit form, e.g. by only specifying conditions regarding the marginal or unit costs of labour and capital). Alternative theories of the firm and their consequences for macroeconomic or market-specific fluctuations have rarely been analysed (most of the exceptions are listed in Table 3). Corporate finance has been taken for granted, especially the fact that net cash flow is a different accounting measure than net worth—among other things, depreciation, contrary to real-world practices, is often included in the profits equation in AB models. Similarly, even though a positive change in inventories increases the assets of a firm, this should not mean that the firm has more liquid funds to spend on wages, other types of investment or some other purpose. Nonetheless, this is not accounted for by the single net-worth equations in macroeconomic AB models.
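The accounting point above, that net worth and liquid funds can move independently, can be illustrated with a minimal sketch; the `Firm` fields and figures are hypothetical, not drawn from any model in the literature.

```python
from dataclasses import dataclass

@dataclass
class Firm:
    """Illustrative firm accounts; the fields are hypothetical."""
    cash: float           # liquid funds available for wages etc.
    inventories: float
    fixed_capital: float
    debt: float

    @property
    def net_worth(self) -> float:
        # Assets minus liabilities: the single solvency measure that
        # most macroeconomic AB models track.
        return self.cash + self.inventories + self.fixed_capital - self.debt

firm = Firm(cash=100.0, inventories=50.0, fixed_capital=200.0, debt=80.0)
assert firm.net_worth == 270.0

# A period of unsold production: inventories rise by 30, cash is unchanged.
firm.inventories += 30.0

# Net worth rises by 30...
assert firm.net_worth == 300.0
# ...but the liquid funds available for wages have not changed at all.
assert firm.cash == 100.0

# Depreciation reduces net worth without any cash outflow, so treating
# it as a cash expense in the profits equation misstates liquidity.
firm.fixed_capital -= 10.0
assert firm.net_worth == 290.0
assert firm.cash == 100.0
```

A single net-worth equation collapses these distinct movements into one number, which is precisely why it cannot detect whether a firm has the liquid funds to pay its wage bill.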

Ever since the Lucas critique, macroeconomists have underlined the role of microfoundations for an economic model's validity. Questioning the rational consumer of mathematical-analysis-based models was one of the motivations driving ACE research. Nevertheless, few distinct sets of microfoundations have been investigated in the last two decades of agent-based computational economic research. Likewise, despite the vast possibilities offered by algorithmic approaches, few theories of the firm have been modelled within ACE. Almost all agent-based models implement a version of the technological view of the firm, in which a firm minimises its marginal or unit costs, often with small behavioural additions, such as assumptions on desired leverage ratios or sales targets. Accounting within a firm has been represented only in a simplified way; balance sheets are often confused with profit and loss statements. These aspects are not negligible, and the implications of firms conforming to different rules—behavioural, procedural, accounting-constrained—ought to be explored. This would give more insight into markets' development, especially in dynamic contexts, where analytical tools relying on equilibrium are of little use.

Moreover, the difficulty of modelling investment decisions and independent debt-taking has often led to the assumption that producers have a “target debt level ratio”. While a cap on debt payments relative to available funds would be a reasonable assumption, it is hard to justify the debt-taking behaviour of firms that have no investment opportunities. Additionally, in reality many firms use only their own funds to finance expenditure, or make investment decisions infrequently—because of the lack of market, not technological, opportunities. These features have not yet been analysed in agent-based models.

Apart from featuring some characteristics that do not reflect real-world structures, agent-based models have consisted of very few production sectors and supply chain levels. Network structures have not been thoroughly analysed in a broad sense, i.e., with respect to the number and types of their topologies. There are few papers addressing the role of the economic network structure of supply chains; among the first were the works of Gatti et al. (2009, 2010a), but they did not compare networks with different topologies or economies with more than two production sectors. On the other hand, Gualdi and Mandel (2019, 2016) analysed a constantly changing supply network under the assumption that all goods can serve as inputs and that they are gross substitutes; in their model, changes in the structure of supply chains result from price variations and are allowed in each period.

In the agent-based macroeconomic literature, only the simplest single-sector or two-sector networks have been constructed and analysed (with the exception of the three-sector network artificial economy studied by Seppecher et al. (2018)). Thus, little is known about inter-sectoral contagion, the role of business-to-business transactions, or the passing of costs up or down the supply chain. The differences in business cycles between economies characterised by different network topologies are unknown.

Despite abundant possibilities, little economic (sectoral) structure has been included in agent-based analyses of economic systems. Macroeconomic AB models feature only one, two, or at most three (for the latter see, e.g., the work of Seppecher et al. (2019)) production sectors. This limits the degree of complexity analysed and the number of feedback loops relative to the real world that the models are meant to represent or provide insight into. It also makes it impossible to investigate demand for different categories of goods and how supply–demand interactions shape business fluctuations, among other channels, through marketing and the behavioural rules of consumers. Structural wage heterogeneity has not been studied; the only income distributions present in macroeconomic agent-based models result from random wage or (labour) productivity shocks. This does not reflect real-world processes, as it does not account for the structural characteristics of labour markets, the labour force and the wage distribution.

The fact that only a small number of structural specifications and theories of consumer and producer behaviour have been considered in agent-based economic research is the result of the prevalent approach to economic modelling, i.e., using a “simple” model to “explain” a given phenomenon. This may be viewed as contrary to the original agent-based computational economics agenda, which called for the broad exploration of alternative theories and the analysis of different aspects of economic complexity (Judd 2006).

Similarly to the DSGE approach, macroeconomic AB models have a fairly simple structure: one, two or at most three production sectors; consumers differentiated only by idiosyncratic shocks to labour income, or taken to be representative; and a banking sector, usually an aggregated one. The only drivers of economic growth that have been considered in ACE are random, micro-level shocks drawn from distributions with positive expected values. Various types of disturbances have been considered, such as shocks to the size of investment or random opportunities to invest, shocks to markups, firms' leverage, dividends, production size, technology/productivity, prices, (reservation) wages, interest rates, bank assets and liabilities, and a random number of new firms or random pairing of sellers and buyers. These random sampling mechanisms of ABM allow researchers to generate volatility and sometimes to match moments of the true distributions of some real-world economic variables. Nonetheless, using such reduced-form devices implies that the mechanism of the processes they represent is not explained (Epstein 1999).

These random shocks, paired with the previously discussed meta-time assumption that wages are financed with the revenues from sales of the products on which they are spent, enable economic growth in a model. However, such assumptions also push away the question of money creation: has new debt (or any other source of new money) grown sufficiently to guarantee enough new money to support the nominal growth resulting from the model? This is never verified; stock-flow matrices do not account for these differences. Notice also that if the entire wage bill is not financed with firms' past retained earnings and debt, and households spend more than they earned in the previous period, this means either that firms have created money or that wages are paid in continuous time. Neither of the two options conforms with how real economies function.
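The verification that the paragraph above calls for could in principle be run as a post-simulation diagnostic. The sketch below assumes, purely for illustration, a constant velocity of money, so that nominal growth requires proportional money growth; the function name and the numbers are invented, not taken from any existing model.

```python
def money_consistency_gap(nominal_growth: float,
                          new_debt: float,
                          money_stock: float) -> float:
    """Hypothetical diagnostic: if spending is financed only out of
    accumulated funds and debt, the money stock must grow in step with
    nominal output (here under a constant-velocity assumption).

    Returns the shortfall of money creation relative to the money
    needed to support the observed nominal growth; a positive value
    flags money that the model created without an explicit source."""
    required_new_money = money_stock * nominal_growth
    return required_new_money - new_debt

# A model period with 3% nominal growth, a money stock of 1000 and
# only 10 units of new debt: 30 units of new money are needed, 10
# were created, so 20 units are unaccounted for.
gap = money_consistency_gap(0.03, new_debt=10.0, money_stock=1000.0)
print(gap)
```

Running such a check each period, alongside the usual stock-flow matrix, would reveal whether the model's nominal growth is implicitly financed by money creation that no sector in the model actually performs.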

The narrow focus on simple models and only a few theories of behaviour shows that the contemporary agent-based computational economics approach, similarly to the standard one based on analytical dynamic systems of difference equations, predominantly studies simple models in the search for “the” cause of some economic phenomenon, arguing for a parsimonious explanation of observations that is easily understood intuitively (Judd 2006). Judd argued that such an approach often ignores the possibility that the true processes could be multidimensional, and that the multiple dimensions of real-world economies could interact to produce phenomena that cannot be explained by a single factor.

Nonetheless, agent-based computational research has been quite conservative in increasing the complexity and dimensionality of models. For example, various supply network structures, demand expansions, demand creation by marketing, competition for customers, cost as well as price strategies, and other aspects of industrial organisation have not been studied in macroeconomic or microeconomic agent-based computational economics. Judd (2006) argued that while parsimony is desirable, true parsimony chooses a model as simple as possible without being too simple, and would not limit our thinking to a “default” modelling approach if the latter does not preserve crucial features of real-world economic structure at some level of economic activity. The validity of the implications of both agent-based and heterogeneous-agents macroeconomic models may be questioned, because elements of real economic systems, such as autonomous consumer demand, corporate finance, the timing of wage payments, wage growth relative to sales growth, or large amounts of infrequently taken-out debt, are often forgone in the name of simplicity. Nonetheless, these aspects of economic activity are possibly crucial for the functioning of these systems and for our understanding of them. As Judd (2006) put it,

Many economists dismiss these complexities (along with many other features of real economic life glossed over in conventional models) arguing that they can’t matter. Some will point to convergence theorems and conclude that the convergence problem is “solved” and that factors affecting the convergence process cannot be important “in the long run”. (...) More generally, no matter how good our intuition is, we do not know which features of an economy are important and which are not until we examine them, and do so in a manner that reveals their quantitative importance.

Nevertheless, this implies that incorporating more features such as the ones listed above into analyses using agent-based models is likely to make the intuition for, or the description of, a model's mechanics and processes quite general or difficult to present succinctly. Moreover, the channels of influence in a multi-sector model with many behavioural features are often, or may be, obscure. Therefore, it would be useful to devise a method for analysing complex economic systems that allows one to distinguish the impacts that various factors have on economic systems of different degrees of complexity. A general procedure for such research programmes is proposed in the following section.

4 Future perspectives

The validity of any scientific model is always provisional; any model of a real-world process can be estimated given the availability of data, but the act of estimation or calibration itself does not imply that the representation in question is “true” or unquestionably “validated” (Popper 1959). This is because a theory (or a model) cannot be proven, as there are no blueprints of reality to compare it to; many different theoretical models can match moments of the same time series of real-world variables. What economists can do, however, is compare the dynamics and parameter estimates of models differentiated by their level of economically meaningful structural complexity and its relation to the observed real-world structures. This approach does not reduce to “naive falsificationism” (Boland 2016), but is based on recursively improving complexity economics theory by discovering the possible phenomena generated by complex economic systems, in line with the actual Popperian agenda of constantly improving theories (Popper 1959; Boland 2016). This can be realised by taking models to the data at each stage, and learning how introducing new layers of analysis—expanding the macroeconomic structure by adding qualitatively different sectors, marketing, active behavioural consumers with decision rules based on empirical studies, etc.—and making the microeconomic elements of a model more realistic change our understanding of a system and the estimates of its parameters.

For the reasons outlined above, agent-based computational research should not—or not yet—follow the path of matching aggregate time series by adjusting the distributions of shocks inserted into a model or by estimating its parameters. Since the true processes shaping economic activity are characterised by the existence of multiple sectors satisfying different needs of consumers acting behaviourally or procedurally, limiting the design of models to two production sectors and to people using rules that are rejected by empirical studies is likely to yield misleading results and will not improve our understanding of economic systems. Too little is known about the feedback loops and interrelations between the structures and elements of complex economic systems. Multi-sectoral structures (i.e., including more than one type of final good as well as supply chains) and supply–demand interactions, especially the demand generation possibilities of firms, demand expansions and the role of housing credit and consumer loans, have not been scrutinised enough or at all. If one agrees with the premise that an economy is a complex evolving system, then these processes cannot be taken for granted.

4.1 Recursive discovery of economic complexity

Mathematical-analysis tools are of no help in determining the channels of causality and impact in complex systems. Complicated, multi-variable models, on the other hand, are difficult to construct, understand, and fit successfully to the data. These are the main reasons for the preference, prevailing among ACE researchers, for models with a simple economic structure. Nevertheless, unless it is shown in the future that the dynamics of structurally simple macroeconomic agent-based models can be mapped directly to the complex system that real-world economies constitute, it is very likely that such models are not sufficient for understanding, policy analysis, and prediction. Even if we can currently, in some cases, reproduce the volatility of aggregate macroeconomic time series by calibrating the parameters appropriately, no model has been presented that comes close to reproducing or accurately forecasting the evolution of an actual economy.

This does not mean that the existing research is obsolete; on the contrary, complexity economists have explored a broad class of models, identified and classified causal mechanisms as well as emergent phenomena, and established solid foundations for future research involving economic representations that are larger and structurally richer. Nonetheless, the latter is needed to increase our understanding of complex economic systems, raise the predictive capabilities of models, and make policy recommendations more credible.

For these reasons, a multistep procedure is proposed for the analysis of complex economic systems (procedure 1). One of its distinguishing features is the recognition that the full dynamics of a multi-dimensional complex economic system cannot be analysed in a single economic paper (at least not in one of standard length), because lower-level models are needed to understand how the introduction of new features (such as additional sectors, a change in the shape of the production network, etc.) affects the evolution of the entire system. The distinction between the three general sources of complexity (the macrostructure, its interaction with the microstructure of a system, and individual behaviour) also points to such a path for exploring and testing complex economic dynamics.

It is true that such a procedure runs contrary to the prevailing standard of simplifying a model as much as possible while still matching the statistics of empirical data series (or obtaining a good econometric fit, in the case of standard models based on non-algorithmic dynamic systems of difference equations, which can be estimated directly). The motivation is, again, that any model can be estimated; the problem is that if the structure of the representation does not reflect the actual data-generating process, then the results of the model's estimation will be biased (Canova 2009).

Procedure 1: Expanding the complexity of a macroeconomic AB model and identifying (multi-)causality as well as the effects of feedback loops

The intuition behind this procedure is to investigate interconnections and the role of various parts of the economy by starting from a small model and later expanding its complexity by adding other features of real-world economies. The developed models can be ordered by their economic content as follows, although of course more than one model will belong to each level.

1) A single market with one side modelled as an aggregate, and the rest of the economy either ignored or modelled aggregatively. A banking sector is not modelled, but agents hold deposits and take out loans. It may be introduced as an aggregate entity, but this will require making assumptions on loan dynamics in the economy, even if the analysed market is one of the largest sources of the emergence of bank credit.

2) A single market with both supply and demand being agent-based. The issue of introducing the banking sector is the same as in 1).

3) Multiple markets; consumers and firms heterogeneous, plus a banking sector, and the rest of the economy (other markets, the stock exchange) either ignored or modelled aggregatively.

4) Multiple markets and a network of supply chains; consumers and firms heterogeneous, plus a banking sector.

5.1-5.4) All of the above cases with heterogeneous banks.

6.1-6.5) All of the above cases with a stock exchange.

7.1-7.6) All of the above cases with the addition of other financial markets.

8.1-8.7) All of the above cases for an open economy.
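To make the lower rungs of this ladder concrete, the sketch below implements a toy version of level 2: a single market in which both supply and demand are agent-based, with consumers holding heterogeneous reservation prices and firms following a simple adaptive price rule. All names, decision rules, and parameter values here are illustrative assumptions for exposition only, not taken from any model discussed in this paper.

```python
import random

random.seed(0)

class Firm:
    """A firm with an adaptive price rule: raise the price after selling out,
    cut it after weak demand (an illustrative rule, not an empirical one)."""
    def __init__(self, price, stock):
        self.price = price
        self.stock = stock
        self.sold = 0

    def update_price(self):
        if self.sold == self.stock:       # sold out: raise price
            self.price *= 1.05
        elif self.sold < self.stock / 2:  # weak demand: cut price
            self.price *= 0.95
        self.sold = 0                     # reset sales for the next period

class Consumer:
    """A consumer with a heterogeneous reservation price."""
    def __init__(self, reservation):
        self.reservation = reservation

    def try_buy(self, firms):
        # Visit firms in random order and buy from the first affordable one
        # that still has stock.
        for f in random.sample(firms, len(firms)):
            if f.price <= self.reservation and f.sold < f.stock:
                f.sold += 1
                return f.price
        return None

def simulate(n_firms=10, n_consumers=100, periods=50):
    firms = [Firm(price=random.uniform(0.5, 1.5), stock=12) for _ in range(n_firms)]
    consumers = [Consumer(random.uniform(0.8, 2.0)) for _ in range(n_consumers)]
    avg_prices = []
    for _ in range(periods):
        for c in consumers:
            c.try_buy(firms)
        for f in firms:
            f.update_price()
        avg_prices.append(sum(f.price for f in firms) / n_firms)
    return avg_prices

prices = simulate()
```

Higher levels of the procedure would extend such a skeleton, for instance by adding a banking sector, further markets, or a supply-chain network, rather than replace it.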

Similarly, if a theoretical model does not reflect a market’s or an economy’s structure, its results can hardly be treated as very informative about real processes, given the defining characteristics of complex systems. That is, interactions between their various parts—additional sectors of an economy, income distribution of consumers with heterogeneous behaviour rules, networks of supply chains, networks of sellers and customers, etc.—can change substantially and generate different feedback loops as well as emergent processes, thus altering the dynamics of the modelled economy. Some signs that this approach may be adopted by the AB/complexity economists’ community have appeared: there are a few papers using agent-based methods to study a single market of interest, albeit without the aggregate proxy for the rest of the economy, for example the work of Dosi et al. (2023) or Gräbner and Hornykewycz (2022).

The proposition of Richiardi (2017) that agent-based model building ought to become modular, i.e., that each model should be constructed from prefabricated blocks, partly conforms with the above procedure. However, the questions of what these blocks should look like structurally, what agents and decision rules they ought to contain, and how to represent the network structure of an economy need to be answered in order to avoid drawing definite conclusions about the world from models that do not reflect the structure of real-world economies. It is also unknown which parts of an economy are important and which are not: as argued above, simple moment-matching does not guarantee that the structure of the model mimics the real data-generating process. Therefore, the dynamics of model economies with various degrees of complexity must be understood before drawing causal and policy inferences.
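A minimal sketch of what such modularity could look like is given below. The `SectorBlock` interface, the toy adjustment rules, and all parameter values are hypothetical illustrations chosen for this sketch, not Richiardi's design nor any calibrated model.

```python
from abc import ABC, abstractmethod

class SectorBlock(ABC):
    """A prefabricated model block communicating through a shared state
    (a hypothetical interface, for illustration only)."""
    @abstractmethod
    def step(self, state: dict) -> None:
        ...

class GoodsMarket(SectorBlock):
    def step(self, state):
        # Toy rule: output adjusts halfway toward demand each period.
        state["output"] += 0.5 * (state["demand"] - state["output"])

class BankingSector(SectorBlock):
    def step(self, state):
        # Toy rule: credit expands with output and feeds back into demand,
        # creating a positive feedback loop absent from the one-block economy.
        state["credit"] = 0.8 * state["credit"] + 0.2 * state["output"]
        state["demand"] = 1.0 + 0.3 * state["credit"]

def run(blocks, periods=20):
    state = {"output": 1.0, "demand": 1.0, "credit": 0.0}
    for _ in range(periods):
        for b in blocks:
            b.step(state)
    return state

# Compare a one-block and a two-block economy, as procedure 1 suggests.
baseline = run([GoodsMarket()])
with_bank = run([GoodsMarket(), BankingSector()])
```

Comparing `baseline` with `with_bank` mirrors the logic of the procedure: a block with a real-world counterpart is added, and the change in the system's dynamics is observed before any causal or policy inference is drawn.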

Procedure 1 outlined above conforms with Epstein's notion of generative science, with its emphasis on creating and presenting exact algorithms representing the mechanism of an investigated phenomenon (Epstein 1999). It is also consistent with the postulates of Brenner and Werker (2007) to conduct parameter and initial-value sweeps for those variables for which microeconomic data are unavailable. Such searches, however, ought to be guided by theory or by results of previously run experiments on real-world subjects, in order to anchor them in research findings or generalised observations. Procedure 1 also conforms with the four primary objectives of ACE research described by Tesfatsion: empirical understanding, normative design, qualitative insight and theory generation, and methodological advancement (Tesfatsion 2006, 2017).

Another remark concerns how this approach relates to the proliferation of elaborate methods for the estimation of simulation models. It is not argued here that researchers should abstain from using them until reaching the final stage of theoretical model-building, i.e., until having a multi-sector, open-economy, behaviourally microfounded agent-based model armed with as many realistic features as one can think of. On the contrary, it will be beneficial to attempt to estimate models starting somewhere in the middle of the complexity range, in order to compare various versions of a framework and see how they fit the data.

Building and estimating very large and complicated models can raise natural concerns about overfitting. Nonetheless, overfitting requires estimating a model that has too many parameters, that is, parameters that would not be justified by the data, in this case the observed economic processes. First, the proposed bottom-up procedure makes it possible to investigate the effects of adding another piece to an already existing framework. Second, if this new part of a model has a corresponding entity in the actual world, then it ought to be included at some point. Nevertheless, if many of these building blocks are assumed to feature exogenous shocks as inputs to the system, then overfitting becomes a real concern whenever the model in question is evaluated by its ability to match empirical moments.
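The familiar mechanics of overfitting can be illustrated outside the agent-based context with a toy curve-fitting exercise: a model with many free parameters matches the sample better by construction, without this implying a better representation of the data-generating process. The data and the polynomial “models” below are hypothetical stand-ins for exposition, not an estimation method used in the ACE literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple linear data-generating process plus noise stands in for an
# observed economic series (purely illustrative).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.2, size=x.size)

def in_sample_mse(degree):
    """In-sample fit of a polynomial 'model' with degree + 1 parameters."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((y - np.polyval(coeffs, x)) ** 2))

sparse_fit = in_sample_mse(1)    # 2 parameters: close to the true process
flexible_fit = in_sample_mse(8)  # 9 parameters: fits noise as well as signal

# The flexible model always fits the sample at least as well, even though
# its extra parameters are not justified by the data-generating process.
assert flexible_fit <= sparse_fit
```

In-sample improvement alone therefore cannot distinguish genuine structure from noise-fitting, which is one reason why moment-matching by a heavily parameterised model is weak evidence of structural correctness.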

However, as underlined several times in this paper, estimation by itself does not constitute a verification of a model (or at least not a full one). By oversimplifying the structure of a system under scrutiny, it is possible to omit an important channel of interaction and thus suffer from information loss, in line with the reasoning of Canova (2009). As Judd (2006) has argued, it is not possible to determine which features of an economy matter for the processes under scrutiny and which do not until these features are incorporated into the analysis. Thus, all the ACE research conducted so far is valuable in that it has already explored much of the phenomena of single- and two-sector artificial economies, laying the foundations for more sophisticated models and analysis.

5 Conclusions

In this paper, the development and achievements of agent-based computational economics research were considered. The most important accomplishments in this field are the application of complexity theory to the analysis of economic and financial systems, and the variety of results showing that interactions of autonomous agents, without imposed aggregate resource or market-clearing constraints, produce outcomes considerably different from those obtainable within an equilibrium framework.

Agent-based research, however, has moved too quickly in a direction similar to the line of development of models based on analytical dynamic systems of difference equations. That is, increasing focus has been placed on matching moments of real-world time series, on matching a set of stylised facts, or on estimating a model. This is not necessarily desirable, for three reasons. The first is that, following the reasoning of Canova (2009), the results of estimation or moment-matching will be unbiased only if the structure of the analysed model reflects the structure of the actual data-generating process. The microfoundations and sectoral structures of the existing models are highly stylised, which may cast doubt on the correctness of such methods of model validation.

This also leads to the second reason: the Lucas critique applies to agent-based models with microfoundations based on theories that are dubious due to numerous empirical refutations or little microeconometric and experimental evidence.

Lastly, analysing only “simple” models with a very reduced sectoral structure, or with only random rather than fundamental heterogeneity, precludes consideration of structural intra- and intersectoral feedback loops, interactions between differentiated groups of consumers and firms, differences between various markets, supply-chain networks, and other phenomena which characterise complex systems and are fundamental, not random, elements of economies.

One of the major questions we, economists, must answer is what the purpose of our theoretical and empirical analyses is. Does our goal reduce to discovering the properties of a class of models of our choice, estimating or calibrating them, and assessing them only against our intuition and measures of goodness-of-fit? The validity of a theory or a model is always provisional (Popper 1959), which is another reason why correct microfoundations and structure ought to be stressed more than estimation as measures of the conditional validity of an agent-based economic model. The sole act of estimation cannot be treated as a full verification of a model's correctness or its usefulness for prediction, because of the argument made by Canova (2009) that estimating a model whose structure does not reflect the real-world data-generating process will yield biased results. A similar argument can be formulated for theoretical or calibrated models: the intuition and qualitative insights provided by such frameworks will be erroneous unless the structure of the analysed system is a good approximation to the real processes.

The question, of course, is: what constitutes a “good” approximation? How can it be assessed, if estimation and goodness-of-fit measures are not enough? How can one determine which features of the economy can be safely ignored, which can be dealt with by means of sectoral, representative, or optimising agents, and which cannot? One of the major themes in approaches to scientific inquiry is the trade-off between simplicity, together with the ease of translating a model's mechanics into (uncomplicated) intuition, and the attempt to represent the real world's structure credibly, so as to prevent biased (or “too biased”) results and an erroneous depiction of actual processes.

The original answer of the ACE approach to these questions was: nobody knows which features of the economy are important characteristics that should be accounted for by models and which are not, until it is verified (Judd 2006). That is, to determine which aspects of real-world economies are relevant for understanding and predicting their dynamics, models with and without the studied features should be constructed, and their output as well as their performance compared.

For these reasons, a more systematic research agenda is proposed. One of its characteristics is that a macroeconomic (or financial-markets) research question cannot—at the current state of economic knowledge—be answered in a single paper, simply because there are too many possible channels of interactions and feedback loops. It is proposed to build models targeting a particular problem, or broadening our understanding of a phenomenon, at increasing levels of complexity. In the case of macroeconomic agent-based models this implies analysing the output of various multi-sectoral models, starting with those that feature only one market with heterogeneous consumers and firms. A similar approach may be applied to models representing only a financial market, by introducing more and more real-world elements, such as large institutional traders, forecasting with econometric tools, algorithmic high-frequency trading, etc.

This approach will make it possible to uncover the causal interaction channels and feedback-loop mechanisms of economies. It will enable the analysis of interdependencies between real sectors, as well as between them and the financial sector. Finally, the proposed step-wise analysis of agent-based complex economic systems will facilitate evaluating the nature of causality and the effects that a sector or a phenomenon actually has on the economy, and how these change across various types of sectoral structures and supply networks.