
Introduction to the special issue: Coherence and correspondence in judgment and decision making


Philip T. Dunwoody*
Affiliation: Juniata College
* Address: Philip T. Dunwoody, Juniata College, 1700 Moore St., Huntingdon, PA 16652. Email: dunwoody@juniata.edu.

Copyright
Copyright © The Authors 2009. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

In 2007, Kenneth Hammond published Beyond Rationality: The search for wisdom in a troubled time. In this book and in prior works, Hammond (1990, 1996) promotes the recognition and equal acceptance of two different classes of criteria for the assessment of human judgment and decision making (JDM). Coherence criteria are based on normative standards of logic (or some other formal model), while correspondence criteria are based on the accuracy of predicting or judging empirical events. In his 1996 book, Hammond summarizes correspondence research:

Correspondence theory focuses on the empirical accuracy of judgments, irrespective of whether the cognitive activity of the judge can be justified or even described. Although correspondence researchers may be interested in describing the processes that produce the judgments, they rarely inquire into the question of whether these processes are rational, that is, conform to some normative, or prescribed, model of how a judgment ought to be reached. (p. 106)

Coherence research, he argued, is different:

Coherence theorists have opposite interests; they examine the question of whether an individual's judgment processes meet the test of rationality – internal consistency – irrespective of whether the judgment is empirically accurate. Indeed, no test of empirical accuracy may be available in principle or fact. Thus, for example, if a problem is offered to a subject that is susceptible to a solution by a standard statistical model, the coherence theorist first compares the subject's answer with that produced by the statistical model, declares the answer to be correct or incorrect, tests (if possible) the process by which the answer is produced, and then evaluates the rationality of the cognitive process(es) involved. (p. 106)

Hammond argued that conclusions about human competence differ as a function of the class of criteria used. When JDM is assessed against coherence criteria, humans often appear incompetent and irrational; when JDM is assessed against correspondence criteria, humans appear adaptive, or "ecologically rational," as Gigerenzer and Todd (1999) would say.

In his recent book, Hammond states that "understanding the important field of human judgment cannot go forward, cannot eliminate the current disarray, without our acknowledging the role of coherence and correspondence" (Hammond, 2007, p. 225). This is a strong claim and one that deserves discussion and debate.

In 2007, I organized a symposium at the 23rd Annual International Meeting of the Brunswik Society to address Hammond's claim. Five of the papers in this special issue (Dawson & Gregory, 2009; Dunwoody, 2009; Katsikopoulos, 2009; Mosier, 2009; Tape, 2009) are based on presentations made at that symposium. Following the symposium, Jonathan Baron, the current editor of Judgment and Decision Making, and Robin Hogarth discussed the possibility of a special issue focused on coherence and correspondence. Baron then approached me about acting as guest editor of the special issue, and I happily agreed.

The first paper (Dunwoody, 2009) offers an overview of the terms coherence and correspondence. These terms come from the philosophical literature on truth and represent different ways of answering the question, "How do we know that a belief or judgment is true?" I apply these two terms to the field of JDM and argue that they should be refined to include intra- and interpersonal coherence. I also argue that the third major philosophical theory of truth, pragmatism, is missing from Hammond's coherence/correspondence framework. Pragmatic criteria are those based on the organism's goals, and any framework of JDM criteria should include a category based on goal attainment.

Dawson and Gregory (2009) offer a brief historical perspective on these different criteria and how they have been used in science and medicine. They apply these terms to understanding two historical episodes: Darwinian evolution and Semmelweis's work on childbed fever. Tape (2009) also applies these terms to medicine and argues that many debates within medical science stem from the coherence/correspondence distinction. He argues that many treatments based on theory and logic (coherence criteria) persist despite correspondence evidence against them, and that debates over the appropriate treatment can be better understood as debates over the criteria of coherence and correspondence. Tape convincingly argues that medical science would be greatly improved if these terms were more widely known. Shaffer and Hulsey (2009) argue that research evaluating the effectiveness of patient decision aids has used both coherence and correspondence measures without explicit recognition of these concepts. Like Tape, they argue that the debate over the effectiveness of patient decision aids is better understood as a debate over the relevant criteria, coherence or correspondence. They argue that most research evaluating patient decision aids uses coherence criteria and that more research utilizing correspondence criteria is needed.

Katsikopoulos (2009) makes a similar argument for understanding the different criteria used in engineering decisions. He argues that different rules for making engineering decisions emphasize either coherence or correspondence, although these terms are not explicitly recognized. He also argues that an increase in coherence does not guarantee an increase in correspondence. All of the aforementioned papers identify debates over assessments that utilize coherence or correspondence criteria without explicit recognition of these concepts, and all share the theme that such debates would be more readily understood, and perhaps even resolved, if coherence and correspondence were widely recognized as distinct yet complementary criteria for assessment.

While Katsikopoulos argues that an increase in coherence does not necessarily lead to an increase in correspondence, Mosier (2009) argues that the modern aviation cockpit is engineered so that coherence is the primary strategy needed to achieve correspondence. She makes an important distinction between intuition and analysis as cognitive modes that can be used to achieve coherence, correspondence, or both. The distinction between analysis/intuition and coherence/correspondence is easily confused, and Mosier helps clarify this important difference.

Weiss, Brennan, Thomas, Kirlik, and Miller (2009) continue the examination of the relationship between coherence and correspondence by comparing performance in two different tasks. First, they evaluate a variety of coherence-based criteria, including the CWS measure, in an addition task to see how the different criteria compare. Second, they compare coherence criteria with correspondence criteria in a golf-putting task. In this second task, they show a strong correlation between measures that assess coherence and correspondence criteria.

Ganzach (2009) examines research on numerical predictions using cue probability learning (CPL) to compare research traditions that have typically focused on coherence criteria, such as the “Heuristics and Biases” program, with those that have typically focused on correspondence criteria, such as Social Judgment Theory. He argues that participants shift strategies from initial use of bias-prone heuristics to more ecologically valid heuristics as they gain experience.

The eight papers in this special issue represent a significant contribution to the field of JDM research. They make a strong connection between philosophical theories of truth and the practice of JDM research. They make a convincing case for the utility of the coherence/correspondence distinction across the domains of engineering, medicine, science, aviation, and JDM. They raise intriguing questions for future research, such as: What is the precise relationship between coherence and correspondence? What should be the role of pragmatism (goal attainment) in the assessment of JDM? My hope is that this special issue will increase discussion and awareness of the criteria we use to assess judgment and decision making, and that more researchers will examine the same behaviors via multiple classes of criteria. The development and acceptance of a classification scheme for JDM criteria will help organize the main findings in the field as well as stimulate new research.

Footnotes

* I am indebted to Kenneth Hammond for his important work on this topic (see Hammond, 1996, 2007). Without his efforts, these concepts would be unknown to the vast majority of JDM researchers. I am also indebted to Jon Baron and Robin Hogarth for aiding me in putting together this special issue. Their collective professional seniority provided guidance in avoiding oversimplifications and making tough "editorial" calls. This special issue benefited greatly from their help.

References

Dawson, N. V., & Gregory, F. (2009). Correspondence and coherence in science: A brief historical perspective. Judgment and Decision Making, 4, 126–133.
Dunwoody, P. T. (2009). Theories of truth as criteria in judgment and decision making. Judgment and Decision Making, 4, 116–125.
Ganzach, Y. (2009). Coherence and correspondence in the psychological analysis of numerical predictions: How error-prone heuristics are replaced by ecologically valid heuristics. Judgment and Decision Making, 4, 175–185.
Gigerenzer, G., & Todd, P. M. (1999). The research agenda. In G. Gigerenzer, P. M. Todd, & the ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 1–36). Oxford, England: Oxford University Press.
Hammond, K. R. (1990). Functionalism and illusionism: Can integration be usefully achieved? In R. M. Hogarth (Ed.), Insights in decision making: A tribute to Hillel J. Einhorn. Chicago: University of Chicago Press.
Hammond, K. R. (1996). Human judgment and social policy: Irreducible uncertainty, inevitable error, unavoidable injustice. New York: Oxford University Press.
Hammond, K. R. (2007). Beyond rationality: The search for wisdom in a troubled time. New York: Oxford University Press.
Katsikopoulos, K. V. (2009). Coherence and correspondence in engineering design: Informing the conversation and meeting with JDM research. Judgment and Decision Making, 4, 147–153.
Mosier, K. L. (2009). Searching for coherence in a correspondence world. Judgment and Decision Making, 4, 154–163.
Tape, T. G. (2009). Correspondence and coherence in medicine. Judgment and Decision Making, 4, 134–140.
Weiss, D. J., Brennan, K., Thomas, R., Kirlik, A., & Miller, S. (2009). Criteria for performance evaluation. Judgment and Decision Making, 4, 164–174.