Monday, September 14, 2015

Is the unreal at least *sometimes* rational and the rational at least *sometimes* unreal?

From something I'm working on . . .

Identity-protective cognition and accuracy

Identity-protective cognition is a form of motivated reasoning—an unconscious tendency to conform information processing to some goal collateral to accuracy (Kunda, 1990). In the case of identity-protective cognition, that goal is protection of one’s status within an affinity group whose members share defining cultural commitments.

Sometimes (for reasons more likely to originate in misadventure than conscious design) positions on a disputed societal risk become conspicuously identified with membership in competing groups of this sort. In those circumstances, individuals can be expected to attend to information in a manner that promotes beliefs that signal their commitment to the position associated with their group (Sherman & Cohen, 2006; Kahan, 2015b).

We can sharpen understanding of identity-protective reasoning by relating this style of information processing to a nuts-and-bolts Bayesian one. Bayes’s Theorem instructs individuals to revise the strength of their current beliefs (“priors”) by a factor that reflects how much more consistent the new evidence is with that belief being true than with it being false. Conceptually, that factor—the likelihood ratio—is the weight the new information is due. Many cognitive biases (e.g., base rate neglect, which involves ignoring the information in one’s “priors”) can be understood to reflect some recurring failure in people’s capacity to assess information in this way.
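The odds form of Bayes's Theorem described above—posterior odds equal prior odds times the likelihood ratio—can be sketched in a few lines of Python (the particular numbers are assumed, purely for illustration):

```python
def bayes_update(prior_prob, likelihood_ratio):
    """Revise a belief via Bayes's Theorem in odds form.

    posterior_odds = prior_odds * likelihood_ratio, where the
    likelihood ratio is P(evidence | hypothesis true) /
    P(evidence | hypothesis false) -- i.e., the weight the new
    information is due. Returns the posterior as a probability.
    """
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A skeptic (prior 0.2, so prior odds 1:4) who deems new evidence
# four times more consistent with the claim being true than false:
p = bayes_update(0.2, 4.0)
print(round(p, 2))  # 0.5
```

Base rate neglect, in these terms, amounts to discarding `prior_odds` and reasoning from the likelihood ratio alone; identity-protective cognition, as the next paragraph explains, leaves the multiplication intact but corrupts the likelihood ratio itself.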

That’s not quite what’s going on, though, with identity-protective cognition. The signature of this dynamic isn’t so much the failure of people to “update” their priors based on new information but rather the role that protecting their identities plays in fixing the likelihood ratio they assign to new information. In effect, when they display identity-protective reasoning, individuals unconsciously adjust the weight they assign to evidence based on its congruency with their group’s position (Kahan, 2015a).

If, e.g., they encounter a highly credentialed scientist, they will deem him an “expert” worthy of deference on a particular issue—but only if he is depicted as endorsing the factual claims on which their group’s position rests (Fig. 1) (Kahan, Jenkins-Smith, & Braman, 2011). Likewise, when shown a video of a political protest, people will report observing violence warranting the demonstrators’ arrest if the demonstrators’ cause was one their group opposes (restricting abortion rights; permitting gays and lesbians to join the military)—but not otherwise (Kahan, Hoffman, Braman, Evans, & Rachlinski, 2012).

In fact, Bayes’s Theorem doesn’t say how to determine the likelihood ratio—only what to do with the resulting factor: multiply one’s prior odds by it. But in order for Bayesian information processing to promote accurate beliefs, the criteria used to determine the weight of new information must themselves be calibrated to truth-seeking. What those criteria are might be open to dispute in some instances. But clearly, whose position the evidence supports—ours or theirs?—is never one of them.

The most persuasive demonstrations of identity-protective cognition show that individuals opportunistically alter the weight they assign one and the same piece of evidence based on experimental manipulation of its congruence with their identities. This design is meant to rule out the possibility that disparate priors or pre-treatment exposure to evidence is what’s blocking convergence when opposing groups evaluate the same information (Druckman, 2012).

But if this is how people assess information outside the lab, then opposing groups will never converge, much less converge on the truth, no matter how much or how compelling the evidence they receive. Or at least they won’t so long as the conventional association of positions with loyalty to opposing identity-defining groups remains part of their “objective social reality.”
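A toy simulation makes the non-convergence claim concrete: if the likelihood ratio assigned to each piece of evidence is fixed by its congruence with one's group's position rather than by its diagnosticity, two Bayesian reasoners with identical priors who view an identical, perfectly balanced evidence stream will polarize rather than converge. All of the weights below are assumed values chosen for illustration, not estimates from any study:

```python
def update(prob, lr):
    """One Bayesian update in odds form."""
    odds = prob / (1.0 - prob) * lr
    return odds / (1.0 + odds)

def identity_protective_weight(direction, group_position):
    """Assumed identity-protective likelihood ratio: evidence congruent
    with the group's position gets full weight (4 or 1/4), while
    incongruent evidence is 'explained away' (weight 1, i.e., ignored)."""
    honest_lr = 4.0 if direction > 0 else 0.25
    return honest_lr if direction == group_position else 1.0

# Perfectly balanced evidence stream: +1 supports the claim, -1 opposes it.
evidence = [+1, -1] * 5

pro = con = 0.5  # identical priors
for d in evidence:
    pro = update(pro, identity_protective_weight(d, +1))  # pro-claim group
    con = update(con, identity_protective_weight(d, -1))  # anti-claim group

print(round(pro, 3), round(con, 3))  # prints 0.999 0.001
```

A truth-convergent reasoner would apply `honest_lr` to every item regardless of group position and, on this balanced stream, would end where she began, at 0.5; the identity-protective pair instead ratchet apart with every round of shared evidence.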

Bounded rationality?

Frustration of truth-convergent Bayesian information processing is the thread that binds together the diverse collection of cognitive biases of the bounded-rationality paradigm. Identity-protective cognition, we’ve seen, frustrates truth-convergent Bayesian information processing. Thus, assimilation of identity-protective reasoning into the paradigm—as has occurred within both behavioral economics (e.g., Sunstein, 2006, 2007) and political science (e.g., Lodge & Taber, 2013)—seems perfectly understandable.

Understandable, but wrong!

The bounded-rationality paradigm rests on a particular conception of dual-process reasoning. This account distinguishes between an affect-driven, “heuristic” form of information processing, and a conscious, “analytical” one. Both styles—typically referred to as System 1 and System 2, respectively—contribute to successful decisionmaking. But it is the limited capacity of human beings to summon System 2 to override errant System 1 intuitions that generates the grotesque assortment of mental miscues—the “availability effect,” “hindsight bias,” the “conjunction fallacy,” “denominator neglect,” “confirmation bias”—on display in decision science’s benighted picture of human reason (Kahneman & Frederick, 2005).

It stands to reason, then, that if identity-protective cognition is properly viewed as a member of the bounded-rationality menagerie of biases, it, too, should be most pronounced among people (the great mass of the population) disposed to rely on System 1 information processing. This assumption is commonplace in work reflecting the bounded-rationality paradigm (e.g., Lilienfeld, Ammirati, & Landfield, 2009; Westen, Blagov, Harenski, Kilts, & Hamann, 2006).

But actual data are to the contrary. Observational studies consistently find that individuals who score highest on the Cognitive Reflection Test and other reliable measures of System 2 reasoning are not less polarized but more so on facts relating to divisive political issues (e.g., Kahan et al., 2012).

Experimental data support the inference that these individuals use their distinctive analytic proficiencies to form identity-congruent assessments of evidence. When assessing quantitative data that predictably trips up those who rely on System 1 processing, individuals disposed to use System 2 are much less likely to miss information that supports their groups’ position. When the evidence contravenes their group’s position, these same individuals are better able to explain it away (Kahan, Peters, Dawson, & Slovic, 2013).

Another study that fits this account addresses the tendency of partisans to form negative impressions of their opposing number (Fig. 2). In the study, subjects selectively credited or dismissed evidence of the validity of the CRT as an “open-mindedness” test depending on whether the subjects were told that individuals who held their political group’s position on climate change had scored higher or lower than those who held the opposing view. Already large among individuals of low to modest cognitive reflection, this effect was substantially more pronounced among those who scored the highest on the CRT (Kahan, 2013b).

The tragic conflict of expressive rationality

As indicated, identity-protective reasoning is routinely included in the roster of cognitive mechanisms that evince bounded rationality. But where an information-processing dynamic is consistently shown to be magnified, not constrained, by exactly the types of reasoning proficiencies that counteract the mental pratfalls associated with heuristic information processing, then one should presumably update one’s classification of that dynamic as a “cognitive bias.”

In fact, the antagonism between identity-protective cognition and perceptual accuracy is not a consequence of too little rationality but too much.

Nothing an ordinary member of the public does as consumer, voter, or participant in public discourse will have any effect on the risk that climate change poses to her or anyone else. Same for gun control, fracking, and nuclear waste disposal: her actions just don’t matter enough to influence collective behavior or policymaking.

But given what positions on these issues signify about the sort of person she is, adopting a mistaken stance on one of these in her everyday interactions with other ordinary people could expose her to devastating consequences, both material and psychic. It is perfectly rational under these circumstances to process information in a manner that promotes formation of the beliefs on these issues that express her group allegiances, and to bring all her cognitive resources to bear in doing so.

Of course, when everyone uses their reason this way at once, collective welfare suffers. In that case, culturally diverse democratic citizens won’t converge, or converge as quickly, on the significance of valid evidence on how to manage societal risks. But that doesn’t change the social incentives that make it rational for any individual—and hence every individual—to engage information in this way.

Only some collective intervention—one that effectively dispels the conflict between the individual’s interest in forming identity-expressive risk perceptions and society’s interest in the formation of accurate ones—could change that (Kahan et al., 2012; Lessig, 1995).

Rationality ≠ accuracy (necessarily)

. . . . Obviously, it isn’t possible to assess the “rationality” of any pattern of information processing unless one knows what the agent processing the information is trying to accomplish. Because forming accurate “factual perceptions” is not the only thing people use information for, a paradigm that motivates empirical researchers to appraise cognition exclusively in relation to that objective will indeed end up painting a distorted picture of human thinking.

But worse, the picture will simply be wrong. The body of science this paradigm generates will fail, in particular, to supply us with the information a pluralistic democratic society needs to manage the forces that create the conflict between the stake citizens have in using their reason to know what’s known and using it to be who they are as members of diverse cultural groups (Kahan, 2015b).

References

Akerlof, G. A., & Kranton, R. E. (2000). Economics and Identity. Quarterly Journal of Economics, 115(3), 715-753.

Anderson, E. (1993). Value in ethics and economics. Cambridge, Mass.: Harvard University Press.

Druckman, J. N. (2012). The Politics of Motivation. Critical Review, 24(2), 199-216.

Kahan, D. M. (2015a). Laws of cognition and the cognition of law. Cognition, 135, 56-60.

Kahan, D. M. (2015b). What is the “science of science communication”? J. Sci. Comm., 14(3), 1-12.

Kahan, D. M., Hoffman, D. A., Braman, D., Evans, D., & Rachlinski, J. J. (2012). They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev., 64, 851-906.

Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural Cognition of Scientific Consensus. J. Risk Res., 14, 147-174.

Kahan, D. M., Peters, E., Dawson, E., & Slovic, P. (2013). Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.

Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. The Cambridge handbook of thinking and reasoning, 267-293.

Kunda, Z. (1990). The Case for Motivated Reasoning. Psychological Bulletin, 108, 480-498.

Lessig, L. (1995). The Regulation of Social Meaning. U. Chi. L. Rev., 62, 943-1045.

Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare? Perspectives on Psychological Science, 4(4), 390-398.

Lodge, M., & Taber, C. S. (2013). The rationalizing voter. Cambridge; New York: Cambridge University Press.

Peirce, C. S. (1877). The Fixation of Belief. Popular Science Monthly, 12, 1-15.

Sherman, D. K., & Cohen, G. L. (2006). The Psychology of Self-defense: Self-Affirmation Theory Advances in Experimental Social Psychology (Vol. 38, pp. 183-242): Academic Press.

Sunstein, C. R. (2006). Misfearing: A reply. Harvard Law Review, 119(4), 1110-1125.

Sunstein, C. R. (2007). On the Divergent American Reactions to Terrorism and Climate Change. Columbia Law Review, 107, 503-557.

Westen, D., Blagov, P. S., Harenski, K., Kilts, C., & Hamann, S. (2006). Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election. Journal of Cognitive Neuroscience, 18(11), 1947-1958.


Reader Comments (25)

Interesting article. Thanks.
From our point of view, Bayesian reasoning is useful and is what a brain does, but the appropriate metric is not whether a particular likelihood function is increased by new data or whether certain priors change. It is how the posterior odds of personal satisfaction or health are changed by accepting the new data. This change in posterior odds is rational, but the calculation of the change is not accessible to most people's thinking. So Bayes' theorem applies, but the priors and likelihood function that need to be considered in order to predict the posterior odds of a certain 'rational' conclusion are hundreds of times more complex than we want to think about.
(The above is a summary of a really long discussion. If you are craving the long version, contact me.)

September 14, 2015 | Unregistered CommenterEric Fairfield

Dan,

Have you seen this new paper?

"The ideologically objectionable premise model (IOPM; Crawford, 2012) posits that people on the political left and right are equally likely to approach political judgments with their ideological blinders on. That said, they will only do so when the premise of a political judgment is ideologically acceptable. If it’s objectionable, any preferences for one group over another will be short-circuited, and biases won’t emerge."

Sounds familiar, eh? What do you think of it?

September 15, 2015 | Unregistered CommenterNiV

@NiV--

Have seen Crawford's paper(s). Haven't read the new paper you cite. Thanks!

September 17, 2015 | Registered CommenterDan Kahan

Lots of crossover here, Dan.

http://righteousmind.com/where-microaggressions-really-come-from/

http://www.theatlantic.com/magazine/archive/2015/09/the-coddling-of-the-american-mind/399356/

Unfortunately, I don't think that these researchers give enough attention to the crossover to Identity Protective Cognition, motivated reasoning, etc.

September 17, 2015 | Unregistered CommenterJoshua

@Joshua-- this "microagression" thing is apparently a new meme ... I'm skeptical of stories with big, vague moving parts like "dignity" & "honor cultures"; they are pretty impervious to evidence...

September 18, 2015 | Registered CommenterDan Kahan

It seems to be exploding. I'm likewise skeptical of this supposed cultural evolution into a victim culture, with college students leading the charge. A cute theory but it seems to lack empirical support. There's no doubt that things like social media have altered social and communicative norms to some extent, but extraordinary claims and all that.

September 18, 2015 | Unregistered CommenterJoshua

Oh, yes. Microaggressions...

We're after power and we mean it [...] There's no way to rule innocent men. The only power any government has is the power to crack down on criminals. Well, when there aren't enough criminals one makes them. One declares so many things to be a crime that it becomes impossible for men to live without breaking laws. Who wants a nation of law-abiding citizens? What's there in that for anyone? But just pass the kind of laws that can neither be observed nor enforced or objectively interpreted – and you create a nation of law-breakers – and then you cash in on guilt. Now that's the system, Mr. Reardon, that's the game, and once you understand it, you'll be much easier to deal with.

It's a standard method of totalitarian societies. People have an innate sense of justice, but if you can persuade them that they're guilty of something, they'll accept anything you do to them without complaint.

The point of "microaggressions" and similar acts of "political incorrectness" is to make virtually any innocent comment or statement a social crime to be determined not by any objective standard, but at the whim of the "victim", who decides whether or not to take offence. This makes people nervous, inclined to placate and desperate to avoid any kind of confrontation or conflict with the person who has such arbitrary power over them. And as people have got ever more constrained in their expression, so the goalposts have moved to maintain the same level of danger and risk. The rules are meant to be broken, because when people break your rules you gain power over them. If you read the history of how totalitarian movements and societies got started, systems of arbitrary and capricious justice, and the demand for systematic mutual denunciations is a common theme.

Exhibiting your own victimhood is a way of demonstrating your "need", and in a system whose overriding principle is "From each according to his ability, and to each according to his needs", need gets you more. People claim to be micro-aggressed against because it gets them lenient treatment, buys them free sympathy and support from the rest of society, gives them power over others, and acts as a pre-emptive defence against others who might do the same to themselves. Human nature never changes, and the response to such perverse incentives is all too predictable.

It's not a new transition in moral culture, it's a very old one. There's nothing new under the sun.

September 18, 2015 | Unregistered CommenterNiV

==> "It's a standard method of totalitarian societies...This makes people nervous, inclined to placate and desperate to avoid any kind of confrontation or conflict with the person who has such arbitrary power over them. And as people have got ever more constrained in their expression, so the goalposts have moved to maintain the same level of danger and risk. "

Yes. I pine for the days when people could fly their Confederate flags on statehouses without being nervous about recrimination. Or when no one was nervous about making innocent homophobic slurs. We were so much better off then.

September 18, 2015 | Unregistered CommenterJoshua

BTW - an excellent case study for examining victim culture.

http://judithcurry.com/2015/09/17/rico/

September 18, 2015 | Unregistered CommenterJoshua

http://rationalwiki.org/wiki/War_on_Christmas

One minute they persecute you for the political incorrectness of saying "Happy Holidays," and the next minute you're in a gulag. I don't know about you, NiV, but my shelter is coming along nicely.

September 18, 2015 | Unregistered CommenterJoshua

@NiV & @Joshua--

I thought it had to do w/ mean nanotechnology robots.

BTW, care to predict *who* fears AI? Stay tuned...

September 18, 2015 | Registered CommenterDan Kahan

"Yes. I pine for the days when people could fly their Confederate flags on statehouses without being nervous about recrimination. Or when no one was nervous about making innocent homophoic slurs. We were so much better off then."

Yes. You were. People invented this "free speech" thing for a reason. But as history teaches us, we only realise that after it's too late.

Those who do not remember the lessons of the past are condemned to repeat it. What, you think nobody else in history has ever cried: "Oh, but that sort of thing could never happen here"?

"I thought it had to do w/ mean nanotechnology robots."

Like bacteria? Oh, yes, that too.
:-)

"bTW, care to predict *who* fears AI?"

The Luddites? :-)

I think the main fear people have is for their jobs.

But assuming you're asking about the Terminator-style robot apocalypse, I would hazard a guess that it goes together with fear of technology, industrialisation, and environmental degradation. The sort of people who are pessimistic about human intervention in the world.

Although I'd also expect there to be several discrete groups with different characteristics. People who see nature as red in tooth and claw may assume that the same principles will apply to artificial life as well. People who like scifi and technology are more likely to be aware of the possibilities and speculations, and so on.

September 18, 2015 |