Friday, September 30, 2016

Modeling the incoherence of coherence-based reasoning: report from Law & Cognition 2016

I’ve covered this ground before (in a three-part series last year), but this post supplies a compact recap of how coherence-based reasoning (CBR), the dynamic featured in Session 5 of the Law & Cognition 2016 seminar, subverts truth-convergent information processing.

The degree of subversion is arguably more extreme, in fact, than that associated with any of the decision dynamics we’ve examined so far.

Grounded in an aversion to residual uncertainty, CBR involves a form of rolling, recursive confirmation bias.

Where decisionmaking evinces CBR, the factfinder engages in reasonably unbiased processing of the evidence early in the decisionmaking process. But the more confident she becomes in one outcome, the more she thereafter adjusts the weight—or, in Bayesian terms, the likelihood ratio—associated with subsequent pieces of independent evidence to conform her assessment of them to that outcome.

As her confidence grows, moreover, she revisits what appeared to her earlier on to be pieces of evidence that either contravened that outcome or supported it only weakly, and readjusts the weight afforded to them as well so as to bring them into line with her now-favored view.

By virtue of these feedback effects, decisions informed by CBR are marked by a degree of supreme confidence that belies the potential complexity and equivocality of the trial proof.

Such decisions are also characterized, at least potentially, by arbitrary sensitivity to the order in which pieces of evidence are considered. Where both sides in a case have at least some strong evidence, which side's strong evidence is encountered (or cognitively assimilated) “first” can determine the direction of the feedback dynamics that thereafter determine whether the other side’s proof is given the weight it's due.

It should go without saying that this form of information processing is not truth convergent. 

As reflected in the simple Bayesian model we have been using in the course, truth-convergent reasoning demands not only that the decisionmaker update her factual assessments in proportion to the weight—or likelihood ratio—associated with a piece of evidence; it requires that she determine the likelihood ratio on the basis of valid, truth-convergent criteria.
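For concreteness, here is a minimal sketch of that unbiased benchmark (in Python; this is illustrative code of my own, not the actual simulation code, and all the names are mine): posterior odds are just prior odds multiplied by the successive likelihood ratios, so the order of the proof cannot matter.

```python
from functools import reduce

def bayes_odds(prior_odds, likelihood_ratios):
    """Unbiased Bayesian updating: posterior odds equal prior odds times
    the product of the likelihood ratios; the order of the evidence is
    irrelevant to the result."""
    return reduce(lambda odds, lr: odds * lr, likelihood_ratios, prior_odds)

odds = bayes_odds(1.0, [4.0, 0.5, 2.0])   # 1:1 prior -> 4:1 posterior odds
prob = odds / (1 + odds)                  # 4:1 odds -> probability 0.8
```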

That isn’t happening under CBR.  CBR is driven by an aversion to complexity and equivocality that unconsciously induces the decisionmaker to credit and discredit evidence in patterns that result in a state of supreme overconfidence in an outcome that might well be incorrect.  The preference for coherence across diverse, independent pieces of evidence, then, is an extrinsic motivation that invests the likelihood ratio with qualities unrelated to the truth.

Just how inimical this process is to truth seeking can be usefully illustrated with a simple statistical simulation.

The key to the simulation is the “CBR function,” which inflates the likelihood ratio assigned to the evidence by a factor tied to the factfinder’s existing assessment of the probability of a particular factual proposition.  This element of the simulation models the tendency of the decisionmaker to overvalue evidence in the direction of, and in proportion to, her confidence in a particular outcome.

In the simulation, the CBR factor is set so that a decisionmaker overweights the likelihood ratio by 1 “deciban” for every one-unit increment in the odds in favor of a particular outcome (“1:1” to “2:1” to “3:1” etc.). Accordingly, she overvalues the evidence by a factor of 2 as the odds shift from even money (1:1) to 10:1, and by an amount proportionate to that as the odds grow progressively more lopsided.  I’ve discussed previously why I selected this formula, which is a tribute to Alan Turing & Jack Good and the pioneering work they did in Bayesian decision theory.
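For readers who want to tinker, here's one way the CBR function could be coded. This is a sketch rather than the simulation's actual code; the verbal description above admits more than one parameterization, so the version below is calibrated to the factor-of-2 overweighting at 10:1 odds, with the boost growing in proportion to the log of the odds from there.

```python
import math

def cbr_lr(lr, current_odds, db_per_tenfold=3.0):
    """Inflate the likelihood ratio of an incoming piece of evidence in
    the direction of the currently favored outcome. The boost, measured
    in decibans, grows with how lopsided the current odds are;
    db_per_tenfold=3.0 is an illustrative calibration that yields a
    factor-of-2 overweighting when the odds reach 10:1 (or 1:10)."""
    if current_odds == 1.0:
        return lr                        # no leader yet, so no distortion
    boost_db = db_per_tenfold * abs(math.log10(current_odds))
    boost = 10 ** (boost_db / 10)        # decibans -> multiplicative factor
    # Push the LR toward whichever outcome is currently ahead.
    return lr * boost if current_odds > 1.0 else lr / boost
```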

This table illustrates the distorting impact of the CBR factor. It shows how a case consisting of eight "pieces" of evidence—four pro-prosecution and four pro-defense—that ought to result in a "tie" (odds of 1:1 in favor of a prosecutor’s charge) can generate an extremely confident judgment in favor of either party depending on the order of the trial proof.

In the simulation, we can generate 100 cases, each consisting of four pieces of “prosecution” evidence—pieces of evidence with likelihood ratios drawn randomly from a uniform distribution spanning 1.05 to 20—and four pieces of “defense” evidence—ones with likelihood ratios drawn randomly from the reciprocal values (0.95 to 0.05) of that same uniform distribution.
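Here's a sketch of the case generator and of serial updating with and without the distortion. It reuses the hypothetical cbr_lr function above; again, this is illustrative code, not the actual simulation.

```python
import random

def make_case(n_pro=4, n_def=4, lo=1.05, hi=20.0, rng=random):
    """Four pro-prosecution LRs drawn uniformly from [1.05, 20] and four
    pro-defense LRs that are reciprocals of draws from that distribution."""
    pros = [rng.uniform(lo, hi) for _ in range(n_pro)]
    defs = [1.0 / rng.uniform(lo, hi) for _ in range(n_def)]
    return pros + defs

def try_case(lrs, cbr=True):
    """Update odds serially; with cbr=True, each LR is first distorted by
    the factfinder's running odds (via the cbr_lr sketch above)."""
    odds = 1.0
    for lr in lrs:
        odds *= cbr_lr(lr, odds) if cbr else lr
    return odds

evidence = make_case(rng=random.Random(42))
pro_first = sorted(evidence, reverse=True)   # prosecution's strong proof first
def_first = sorted(evidence)                 # defense's strong proof first
print(try_case(pro_first), try_case(def_first))    # wildly different under CBR
print(try_case(pro_first, cbr=False),
      try_case(def_first, cbr=False))              # identical: order can't matter
```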

The histograms illustrate the nature of the “confidence skew” resulting from the impact of CBR in those 100 cases.  As expected, there are many fewer “close cases” when decisionmaking reflects CBR than there would be if the decisionmaking reflected unbiased Bayesian updating.

The skew exacts a toll on outcome accuracy. The toll, moreover, is asymmetric: if we assume that the prosecution has to establish her case by a probability of 0.95 to satisfy the “beyond a reasonable doubt” standard, many more erroneously decided cases will involve false convictions than false acquittals, since only those cases in which equivocation is incorrectly resolved in favor of exaggerated confidence in guilt will result in incorrect decisions.  (Obviously, if these were civil cases tried under a preponderance of the evidence standard, the error rates for false findings of liability and false findings of no liability would be symmetric.)

This is one “run” of 100 cases. Let’s put together a full-blown Monte Carlo simulation (a tribute to the Americans working on the Manhattan Project; after all, why should the Bletchley Park codebreakers Turing & Good garner all our admiration?) & simulate 1,000 sets of 100 cases so that we can get a more precise sense of the distribution of correctly and incorrectly decided cases given the assumptions built into our coherence-based-reasoning model.
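A sketch of the Monte Carlo loop, building on the hypothetical make_case and try_case functions above; the parameters, like all the numbers here, are ginned up for illustration.

```python
def monte_carlo(n_sets=1000, n_cases=100, brd=0.95, seed=1):
    """Tally how often CBR pushes a case across the 'beyond a reasonable
    doubt' threshold when the unbiased benchmark would not (a false
    conviction), and vice versa (a false acquittal)."""
    rng = random.Random(seed)
    false_conv = false_acq = 0
    for _ in range(n_sets * n_cases):
        evidence = make_case(rng=rng)
        rng.shuffle(evidence)                    # random order of trial proof
        true_odds = try_case(evidence, cbr=False)   # unbiased benchmark
        cbr_odds = try_case(evidence, cbr=True)
        true_p = true_odds / (1 + true_odds)
        cbr_p = cbr_odds / (1 + cbr_odds)
        if cbr_p >= brd and true_p < brd:
            false_conv += 1
        elif cbr_p < brd and true_p >= brd:
            false_acq += 1
    n = n_sets * n_cases
    return false_conv / n, false_acq / n

# The asymmetry claim above predicts false convictions will dominate:
fc_rate, fa_rate = monte_carlo()
```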

If we do that, we see this:

 

Obviously, all these numbers are ginned up for purposes of illustration.

We can’t know (or can’t without a lot of guesswork) what the parameters should be in a model like this.

But even without doing that, we can know that we ought to have grave doubts about the accuracy, and hence legitimacy, of a legal system that relies on decisionmakers subject to this decisionmaking dynamic.

Are jurors subject to this dynamic?  That’s a question that goes to the external validity of the studies we read for this session.

But assuming that they are, would professional decisionmakers likely do better? That’s a question very worthy of additional study.


Reader Comments (7)

Dan, you come down very hard on CBR for not being "truth-convergent", but I think you give inadequate weight to the idea that CBR has evolved to maintain a necessary cognitive function. Some people know nothing and are nevertheless vocal, or are dishonest and vocal. The human reasoning faculties need to be able to discredit untrustworthy sources of information. That's why we adjust likelihood ratios based upon coherence with our prior experience; you call this a cardinal sin, but in reality, such adjustment is a completely -rational- response to a social environment in which we cannot assume that people presenting us with information are presenting it honestly, or for our benefit.

What -is- truth, anyway? You seem to be proceeding from the idea that truth is like some platonic form that can be found, as opposed to a product of historiography.

So I think in this situation, your modus tollens is my modus ponens. Unbiased reasoning doesn't converge to anything when reasonable choices of priors are uncertain, as you point out. In contrast, coherence-based reasoning produces confidence even despite uncertainty in the reasonableness of priors, at the cost of -only- ~30% mistaken judgments in Monte Carlo simulations of non-iterated first judgments. That error rate is plenty good enough to compete with the opportunity costs of indecision, and that error rate would go down with any consistency in the reliability of information sources. CBR is a good evolutionary solution to the problem of rapid, robust decision-making in the face of unreliable social information.

I agree with your conclusion that the standard of "beyond a reasonable doubt", as opposed to the preponderance of the evidence, is inconsistent with CBR and equal protection/rule of law, but I view this as a problem with the standard, not with CBR. I think that the criminal legal system would be more human-friendly if it used the civil law evidence standard for conviction, and if sentences were correspondingly tuned.

October 1, 2016 | Unregistered Commenter dypoon

@Dypoon--

1. Presumably any decisionmaking regularity we observe will be one that is "adaptive" in an evolutionary sense, or at least connected to byproducts of decisionmaking tendencies that are adaptive. But that doesn't mean that the cognitive tendencies we observe in people are therefore not prone to systematic forms of bias or *not* inimical to certain types of decisionmaking objectives and the like. The adaptive mechanisms in evolution don't guarantee optimal design. Moreover, the environment in which we make decisions now is not the one evolution equipped us for. So when we observe that some human decisionmaking tendency is not suited for some task of consequence, knowing that we are who we are by virtue of evolution doesn't give us any reason to doubt the critique.

2. The capacity to discern defects in our decisionmaking tendencies and to fix them by one means or another -- e.g., by changing the nature of the decisionmaking task, by resort to special kinds of training, or by division of cognitive labor & specialization -- is no doubt adaptive & part of our evolutionary heritage as well. So when we identify some recurring cognitive miscue within a decisionmaking system like the law, & motivate ourselves to improve on the situation, we are being our evolved selves every bit as much as we are when we blunder due to reliance on evolutionarily conditioned decisionmaking tendencies that generate biased conclusions.

3. The only question about CBR is whether the tendency to overconfidence that it embodies makes decisionmaking of one sort or another "better" than it would be if decisionmaking were instead appropriately sensitive to genuine equivocality and complexity. In any given setting, if the available information doesn't support a confident conclusion, should we nevertheless be supremely confident in a particular outcome? For sure the decisions of the sort made by most professionals--e.g., physicians and business persons--are not better when the desire for coherence blinds them to complexity. It's pretty hard to imagine the law is better w/ such blindness. If we can do better, we should.

October 1, 2016 | Registered Commenter Dan Kahan

I'm not arguing that CBR is optimal just because it evolved. What I am arguing is that given that humans use CBR instead of unbiased reasoning by instinct, it seems likely to me that we're not confronted by the same problem that unbiased reasoning solves.

The problem of coming to a judgment starting from a known body of evidence, each piece associated with a certain likelihood ratio of rational judgment outcomes, is indeed very different from the problem of coming to a judgment starting from pieces of evidence that are presented to you by people who may be more or less reliable. Doesn't CBR become more robust than simple Bayesian reasoning when your uncertainties in the log-odds of how people are cherry-picking or curating evidence are at least as big as the uncertainties in the log-odds associated with the evidence itself?

Is it accurate to think of CBR as a generalization of Bayesian reasoning? My impression was that it is; one uses an iterative process akin to repeated storytelling to converge upon a model of who's telling the truth, who's lying, and who's clueless, in addition to evaluating the weight of the evidence itself. At the end of this iterative process, you arrive at a model of source reliability that you can overlay upon the evidence they provide, then apply Bayesian reasoning to conclude.

I note this process is not equivalent to the model of CBR that I think you're using, which is more simply confirmation-seeking. So if I'm arguing that apples are good and you're saying that oranges are bad, maybe that's why we are disagreeing?

The only question about CBR is whether the tendency to overconfidence that it embodies makes decisionmaking of one sort or another "better" than it would be if decisionmaking were instead appropriately sensitive to genuine equivocality and complexity. [Should we ever choose to be knowingly overconfident?]

I don't know if this is helpful or not, but chess engine development is one of my hobbies, and the evolution of chess engines over the long term has been towards engines that conduct deep instead of broad searches, with heavy pruning of unfruitful analysis branches. The skeptical purists always argue that the breadth-heavy search engines will catch up some day, but that day hasn't come in roughly 14 years of Moore's law. If chess is any analogy, then yes, purposeful overconfidence is -very- useful, precisely because it guides iterated analysis.

You can argue that I should justify why the experience of chess engine development is relevant to any other form of decision-making, especially of the political sort. These are the features that I think make chess relevant:

First, chess is a game where the players are presented with opportunities to slowly change a position. This is like public policy, and unlike games with more simultaneous moves, like the stock market or a high-speed nuclear arms standoff.
Second, there is a great challenge in offering stable position evaluations despite the existence of deep refutations (refutations many moves down the tree): this factor is akin to the real issue in public policy of not knowing what capacities future society will have, or what options they will be able to take.
Third, the first task that any engine has to resolve, confronted with a position, is whether to keep thinking or make a move. This motif is particularly relevant in the gathering of critical political capital necessary to address any socially relevant issue. Should we act now or later? Should we wait for more data?

For sure the decisions of the sort made by most professionals--e.g., physicians and business persons--are not better when the desire for coherence blinds them to complexity. It's pretty hard to imagine the law is better w/ such blindness.

For sure? I was about to say that this claim wanted support! My impression was that decisions made by professionals and non-professionals alike are better (or at least more consistent, but we can measure both) when the information provided to them is first de-cluttered. That's why people like executive summaries. More information presented is certainly not always better, and unnecessary complexity is just useless. But do you trust the person who's doing the de-cluttering? Only experts can afford to do their own de-cluttering.

October 2, 2016 | Unregistered Commenter dypoon

@Dypoon---

If it's true that CBR is driven by an extrinsic aversion to residual uncertainty, then I don't see why the "who's in the lead" premium that governs the weighting of evidence considered serially will weed out unreliable sources of information. Everything turns on the happenstance of which piece of evidence one happens to hit on first.

It's a "Bayesian" process, sure, but one in which the likelihood ratio is endogenous to the evaluations of previously considered, independent pieces of evidence. That just can't be truth seeking.

One can simulate this process in a setting in which one knows what the right answer is & treat accuracy as normative for decisionmaking that does & doesn't evince CBR.

When one does that w/ professional decisionmaking--such as that displayed by physicians--the conclusion is that CBR is degrading accuracy in a disturbing fashion.

see generally

Kostopoulou, O., Sirota, M., Round, T., Samaranayaka, S. & Delaney, B.C. The Role of Physicians’ First Impressions in the Diagnosis of Possible Cancers without Alarm Symptoms. Medical Decision Making (2016).

Kostopoulou, O., Russo, J.E., Keenan, G., Delaney, B.C. & Douiri, A. Information distortion in physicians’ diagnostic judgments. Medical Decision Making 32, 831-839 (2012).

October 2, 2016 | Registered Commenter Dan Kahan

@Dypoon -- just to make clear, too, there's nothing about the simulated cases that constrains the evidence to be in equipoise with respect to the results. The pro-prosecution and pro-defense pieces of evidence are being drawn independently from random uniform distributions. Accordingly, the cases will vary randomly in how strongly they support one side or the other. The impetus to overconfidence, then, isn't a "Buridan's ass" tie breaker or motivator; it's happy to generate an overconfident result that is contrary to the decided weight of the evidence. In the professional judgment studies, drs are confidently misdiagnosing cases.
Here's what the deviation in confidence level looks like across 100 cases. Presumably when the deviation approaches 100 pct points, the factfinder is being impelled from what would have been a reasonably confident decision in the correct direction into a very confident decision in the wrong one. But I can fool around with the simulation some more to tease that out.

Realize too that the fluctuations get even wilder as you add more pieces of evidence. Eight is pretty small; consider what things look like w/ just 2 more pieces (10)!

I suspect your chess playing machine is not using CBR!

October 2, 2016 | Registered Commenter Dan Kahan

Based on my reading of the studies you linked, I think you are wrongly equating coherence-seeking with the impulse to confirm the dominant impression. These to me are different things; the latter is a primitive form of the former, but other ways of seeking coherence exist. To wit, you can do a relatively simple sensitivity analysis by transposing the order of information you present yourself with just to confront yourself with your own first-impression bias. Not everyone does this, of course, and if you have reason to believe that juries don't, then the sequence of presented evidence matters just as you claim. You're right in saying that the "who's in the lead" effect isn't truth-converging at all.

This is why, to me, it makes more sense to consider coherence-seeking as a process that begins given an unordered set of pieces of evidence, not a sequence. From that view, the critical points are 1) the algorithm you use to assign reliability scores to each piece of evidence, and 2) the decision of when to stop seeking additional information. As far as public policy is concerned, there is an obvious link from point 1) to the issue of expert trust, and from point 2) to curiosity.

How often are the pieces of evidence in real-life situations even close to independent? I'd assume in a usual jury trial context that pieces of evidence are usually highly interdependent. A truly independent piece of evidence, like a shooting distance or a DNA match, is very difficult for the other side to play down.

I suspect your chess playing machine is not using CBR!

Certainly not in any human sense, LOL. What modern chess engines tend to do is focus as much analytical effort on the principal variation as possible. That is to say, after they finish one iteration, they proceed to the next iteration of the algorithm on the assumption that what they have done before (i.e., the bounds they have deduced for the possible worth of the game) will remain correct for one more iteration. It's that inductive step, the assumption of previous correctness, that makes me liken iterative deepening focusing on the principal variation to CBR in humans.

Now when that assertion fails (i.e., evaluation outside known bounds), they backtrack really really hard. I think that's a very salient difference between the simple confirmation-seeking CBR you are modeling here, and possibly more truth-seeking forms of CBR that I suspect more experienced professionals are using.

October 3, 2016 | Unregistered Commenter dypoon

@Dypoon--

then maybe we are talking about different things.

By CBR, I mean only to refer to the decisionmaking tendency that C&R & Simon et al. are demonstrating: one, reflected in the diagram, in which the likelihood ratio is endogenous to the career of one's iteratively determined assessment of the probability of some hypothesis. Because it results in a kind of rolling, recursive confirmation bias, this form of CBR is not truth-seeking. It can be shown to generate estimates that are wildly off relative to what one would obtain if one afforded independent pieces of information the diagnostic significance they are due when considered serially in a Bayesian fashion.

A more general "motivation" to find coherence in the sense of some "inference to the best explanation" in determining the LR for a given piece of evidence is a different thing. So, of course, is sensitivity to how interdependent pieces of information "cohere" and should be considered in connection with one another to determine their evidentiary import.

Maybe the latter are "evolutionarily" programmed in humans; I'm not sure.

But the former type of CBR seems to be a settled and pervasive feature of human decisionmaking that is inimical to accuracy in many types of diagnostic tasks.

In the two articles I linked in the comments, though, I'd say the 1st is unclear. CBR seems to be going on in physicians who get the wrong answer -- but those who get the "right" answer are also displaying a stubborn persistence in their first diagnosis. My guess is that the study is confounded: it is measuring not the quality of updating based on new, independent evidence (freedom from confirmation bias) but the quality of experienced intuition or preconscious perception in forming plausible hypotheses based on pattern recognition, another decisionmaking tendency that is characteristic of *good* professional judgment and that is essential to accuracy in practical settings.

The study's reporting is unsatisfactory in this respect b/c it focuses only on false negatives; it doesn't supply information about false positives in those physicians who immediately diagnosed cancer.

October 3, 2016 | Registered Commenter Dan Kahan
