Abstract
Our goal in this paper is to experimentally investigate whether folk conceptions of explanation are psychologistic. In particular, are people more likely to classify speech acts as explanations when they cause understanding in their recipient? The empirical evidence that we present suggests this is so. Using the side-effect effect as a marker of mental state ascriptions, we argue that lay judgments of explanatory status are mediated by judgments of a speaker’s and/or audience’s mental states. First, we show that attributions of both understanding and explanation exhibit a side-effect effect. Next, we show that when the speaker’s and audience’s level of understanding is stipulated, the explanation side-effect effect goes away entirely. These results not only extend the side-effect effect to attributions of understanding, they also suggest that attributions of explanation exhibit a side-effect effect because they depend upon attributions of understanding, supporting the idea that folk conceptions of explanation are psychologistic.
Notes
While there has been a great deal of work exploring the side-effect effect itself, using it as a tool to look at the contours of a concept like explanation is a somewhat different approach.
The authors are at pains to distinguish the question of whether or not understanding-generation is necessary for a model to count as an explanation at all from the further question of whether it is a good-making feature of an explanation.
‘Advocated’ proves a bit awkward here, as it does not obviously involve any mental state. Pettit and Knobe (2009) assume that advocating something requires having a pro-attitude towards it; if correct, ‘advocated’ would fall under the general rubric of pro-attitude attribution. There is reason to suspect that this is right—in an unreported pilot experiment, we measured whether claims of the form ‘X said Y’ exhibit an SEE, and found no significant effect. This suggests that what is doing the work in the case of ‘advocated’ is the pro-attitude on top of what is said.
There is (to our knowledge) one possible exception to the general claim that attributions of all-and-only mental states exhibit an SEE. Knobe and Fraser (2008) find that people are more likely to say that X’s actions caused a certain result if the action was norm-violating. However—as anyone who has taught an introductory ethics class can attest—laypeople often conflate causal responsibility with ethical responsibility. As such, we are not inclined to read too much into this result.
Excluding participants on the basis of an attentional or comprehension check is common practice in psychology, and our exclusion rate is not out of line with prior research. Given the probability of internet participants multi-tasking or otherwise not devoting full attention, it is standard practice to recruit a large number of participants (Crump et al. 2013; Oppenheimer, Meyvis, & Davidenko, 2009), with the expectation that surveys tracking more nuanced differences will exclude upwards of 40% of participants (Downs et al. 2010).
As an added boon, the results of Experiment 1 support the contention that understanding states are evaluated by the same mechanisms that we use to evaluate (other) mental states. This result is a prediction of some (more mentalistic) views of understanding (e.g., Wilkenfeld 2013, Kelp 2015), but a surprising (though not necessarily inconsistent) result on others (e.g., de Regt & Dieks 2005), and directly opposed to still others (e.g., Ylikoski 2009).
One commenter raised the question of whether the SEE for explanation appeared not because participants were psychologistic about explanation, but rather because they were psychologistic about whether something had been offered. To test this hypothesis, we ran a supplementary study (focusing on the moral case) in which we varied whether participants evaluated the statement “The vice-president offered the chairman an explanation of why the new program would harm the environment” or “What the vice-president said amounted to an explanation of why the new program would harm the environment.” We predicted that we would continue to find an SEE, and that it would not interact with wording choice. This is what we found in an initial study with 639 participants (post-exclusion). Due to a typo found in that survey (one statement included “help/harm the environment” rather than simply “harm the environment”), we then ran another 614 participants (post-exclusion) with a corrected copy. We analyzed the combined sample with an ANOVA on explanation rating (i.e., people’s agreement with whichever statement they saw) with norm status (2: conform, violate), wording (2: offered, amounted) and survey number (2: survey 1, survey 2) as between-subjects factors. This analysis revealed significantly higher ratings in the violate condition (N = 625, M = 5.26, SD = 1.799) than in the conform condition (N = 628, M = 4.94, SD = 1.837), F(1, 1245) = 9.79, p = .002, ηp² = .008, with no interaction between norm status and either wording, F(1, 1245) = 1.37, p = .242, or survey number, F(1, 1245) = 2.071, p = .150, and no three-way interaction, F(1, 1245) = .409, p = .523. Interestingly, there was a main effect of wording, with participants giving higher ratings for offered (N = 626, M = 5.43, SD = 1.674) than amounted (N = 627, M = 4.78, SD = 1.913), F(1, 1245) = 39.850, p < .001, ηp² = .031.
It is perhaps surprising that it is easier to offer an explanation than to have what one says amount to an explanation, but this finding is orthogonal to the present concern.
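The main-effect comparison reported in the note above can be illustrated with a minimal one-way ANOVA sketch in Python. This is not the analysis code used in the study (the reported analysis was a three-factor between-subjects ANOVA); the `one_way_f` helper is a hypothetical illustration of how a between-subjects F statistic is computed, and the data below are synthetic, generated only to mirror the reported cell means and standard deviations.

```python
import random
import statistics

def one_way_f(groups):
    """F statistic for a one-way between-subjects ANOVA:
    the ratio of between-group to within-group mean squares."""
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    df_between = len(groups) - 1
    df_within = n - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Synthetic ratings mirroring the reported statistics
# (violate: N = 625, M = 5.26, SD = 1.799;
#  conform: N = 628, M = 4.94, SD = 1.837)
random.seed(0)
violate = [random.gauss(5.26, 1.799) for _ in range(625)]
conform = [random.gauss(4.94, 1.837) for _ in range(628)]

print(round(one_way_f([violate, conform]), 2))
```

Collapsing across the wording and survey factors in this way ignores their variance, so the simulated F will only approximate the reported F(1, 1245) = 9.79; the sketch is meant solely to make the mean-squares computation behind the reported statistics concrete.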
While the finding of an UESEE on Alfano et al.’s (2012) interpretation does suggest that belief is a component of understanding, it does not imply that it is the only component—therefore, understanding could still be manipulated separately.
Our framing of the discussion in terms of understanding attributions sidesteps the question of whether these patterns of behavior speak to the nature of the concept or to how people deploy it. This is arguably a feature, as Machery (2008) persuasively argues that such questions might be unanswerable given the present state of the philosophy of concepts.
Alexander and Weinberg (2007) make a similar point.
This point about mitigating the SEE is even more pressing if the SEE is not restricted to mental states at all, as perhaps suggested by the example of ‘caused’ in Knobe and Fraser (2008). That being said, also note that the obvious way to connect causal judgments to mental-state judgments in terms of responsibility (see n. 7) does not apply to this case, as offering the explanation of why the decision will yield the result does not confer responsibility for that result.
We are grateful to a reviewer and to Joshua Knobe (in conversation) for articulating interesting versions of this proposal.
References
Achinstein, P. (1983). The nature of explanation. Oxford: Oxford University Press.
Alexander, J., & Weinberg, J. M. (2007). Analytic epistemology and experimental philosophy. Philosophy Compass,2(1), 56–80.
Alfano, M., Beebe, J., & Robinson, B. (2012). The centrality of belief and reflection in Knobe-effect cases. The Monist,95(2), 264–289.
Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cognitive neuroscience. Taylor & Francis.
Beebe, J. (2013). A Knobe effect for belief ascriptions. Review of Philosophy and Psychology,4(2), 235–258.
Beebe, J. R., & Buckwalter, W. (2010). The epistemic side-effect effect. Mind and Language,25(4), 474–498.
Bromberger, S. (1966). Why questions. In Mind and cosmos: Essays in contemporary science and philosophy (pp. 86–110). Pittsburgh: Pittsburgh University Press.
Crump, M. J., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS One,8(3), e57410.
Dalbauer, N., & Hergovich, A. (2013). Is what is worse more likely?—The probabilistic explanation of the epistemic side-effect effect. Review of Philosophy and Psychology,4(4), 639–657.
De Regt, H. W., & Dieks, D. (2005). A contextual approach to scientific understanding. Synthese,144(1), 137–170.
Downs, J. S., Holbrook, M. B., Sheng, S., & Cranor, L. F. (2010). Are your participants gaming the system? Screening Mechanical Turk workers. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 2399–2402). ACM. http://dl.acm.org/citation.cfm?id=1753688.
Friedman, M. (1974). Explanation and scientific understanding. Journal of Philosophy,71(1), 5–19.
Garfinkel, A. (1981). Forms of explanation. New Haven: Yale University Press.
Hempel, C. (1965). Aspects of scientific explanation and other essays in the philosophy of science. Mankato: The Free Press.
Hempel, C. G., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science,15(2), 135–175.
Kelp, C. (2015). Understanding phenomena. Synthese,192(12), 3799–3816.
Khalifa, K. (2017). Understanding, explanation, and scientific knowledge. New York: Cambridge University Press.
Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher & W. Salmon (Eds.), Scientific explanation (pp. 410–505). Minneapolis: University of Minnesota Press.
Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis,63(3), 190–194.
Knobe, J. (2007). Reason explanation in folk psychology. Midwest Studies in Philosophy, 31(1), 90–106.
Knobe, J., & Fraser, B. (2008). Causal judgment and moral judgment: Two experiments. In W. Sinnott-Armstrong (Ed.), Moral psychology. Cambridge: MIT Press.
Lewis, D. (1980). Mad pain and Martian pain. Readings in the Philosophy of Psychology,1, 216–222.
Lombrozo, T., & Wilkenfeld, D. (2015). Inference to the best explanation versus explaining for the best inference. Science & Education,24(9–10), 1059–1077.
Machamer, P. K., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science,67(1), 1–25.
Machery, E. (2008). The folk concept of intentional action: Philosophical and experimental issues. Mind and Language,23(2), 165–189.
Murray, D., & Lombrozo, T. (2017). Effects of manipulation on attributions of causation, free will, and moral responsibility. Cognitive science,41(2), 447–481.
Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology,45(4), 867–872.
Pettit, D., & Knobe, J. (2009). The pervasive impact of moral judgment. Mind and Language,24(5), 586–604.
Salmon, W. (1984). Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
Salmon, W. C. (1971). Statistical explanation & statistical relevance. Pittsburgh: University of Pittsburgh Press.
Scriven, M. (1962). Explanations, predictions, and laws. Minnesota Studies in the Philosophy of Science,3, 170–229.
Strevens, M. (2013). No understanding without explanation. Studies in History and Philosophy of Science Part A, 44(3), 510–515.
Uttich, K., & Lombrozo, T. (2010). Norms inform mental state ascriptions: A rational explanation for the side-effect effect. Cognition,116(1), 87–100.
Van Fraassen, B. C. (1980). The scientific image. Oxford: Oxford University Press.
Waskan, J., Harmon, I., Horne, Z., Spino, J., & Clevenger, J. (2014). Explanatory anti-psychologism overturned by lay and scientific case classifications. Synthese,191(5), 1013–1035.
Wilkenfeld, D. A. (2013). Understanding as representation manipulability. Synthese,190(6), 997–1016.
Wilkenfeld, D. A. (2014). Functional explaining: A new approach to the philosophy of explanation. Synthese,191(14), 3367–3391.
Wilkenfeld, D. A., Plunkett, D., & Lombrozo, T. (2016). Depth and deference: When and why we attribute understanding. Philosophical Studies,173(2), 373–393.
Wittgenstein, L. (2013/1921). Tractatus logico-philosophicus. Abingdon: Routledge.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford: Oxford University Press.
Ylikoski, P. (2009). The illusion of depth of understanding in science. In H. D. Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives. Pittsburgh: University of Pittsburgh Press.
Acknowledgements
We would like to thank the University of California, Berkeley, the University of Pittsburgh (including the Center for Philosophy of Science and the department of History and Philosophy of Science), and grants from the John Templeton Foundation and James S. McDonnell Foundation for their generous support. We would also like to thank Joshua Knobe and James Beebe for helpful conversation.
Cite this article
Wilkenfeld, D.A., Lombrozo, T. Explanation classification depends on understanding: extending the epistemic side-effect effect. Synthese 197, 2565–2592 (2020). https://doi.org/10.1007/s11229-018-1835-3