
Weekend Up(back)date: 3 theories of risk, 2 conceptions of emotion

From Kahan, D.M. Emotion in Risk Regulation: Competing Theories, in Emotions and Risky Technologies. (ed. S. Roeser) 159-175 (Springer Netherlands, 2010).

2. Three Theories of Risk Perception, Two Conceptions of Emotion

The profound impact of emotion on risk perception cannot be seriously disputed.  Distinct emotional states--from fear to dread to anger to disgust (Slovic, 2000)--and distinct emotional phenomena--from affective orientations to symbolic associations and imagery (Peters & Slovic, 2007)--have been found to explain perceptions of the dangerousness of all manner of activities and things--from pesticides (Alhakami & Slovic, 1994) to mobile phones (Siegrist, Earle, Gutscher, & Keller, 2005), from red meat consumption (Berndsen & van der Pligt, 2005) to cigarette smoking (Slovic, et al., 2005).

More amenable to dispute, however, is exactly why emotions exert this influence.  Obviously, emotions work in conjunction with more discrete mechanisms of cognition in some fashion.  But which ones and how?  To sharpen the assessment of the evidence that bears on these questions, I will now sketch out three alternative models of risk perception--the rational weigher, the irrational weigher, and the cultural evaluator theories--and their respective accounts of what (if anything) emotions contribute to the cognition of risk.

2.1. The Rational Weigher Theory: Emotion as Byproduct

Based on the premises of neoclassical economics, the rational weigher theory asserts that individuals, over time and in aggregate, process information about risky undertakings in a way that maximizes their expected utility.  The decision whether to accept hazardous occupations in exchange for higher wages (Viscusi, 1983), to engage in unhealthy forms of recreation in exchange for hedonic pleasure (Philipson & Posner, 1993), or to accept intrusive regulation to mitigate threats to national security (Posner, 2006) or the environment (Posner, 2004)--all turn on a utilitarian balancing of costs and benefits.

On this theory, emotions don’t make any contribution to the cognition of risk.  They enter into the process, if they do at all, only as reactive byproducts of individuals’ processing of information:  if a risk appears high relative to benefits, individuals will likely experience a negative emotion--perhaps fear, dread, or anger--whereas if the risk appears low they will likely experience a positive one--such as hope or relief (Loewenstein, et al., 2001). This relationship is depicted in Figure 2.1.

2.2. The Irrational Weigher Theory: Emotion as Bias

The irrational weigher theory asserts that individuals lack the capacity to process information in a way that maximizes their expected utility.  Because of constraints on information, time, and computational power, ordinary individuals must resort to heuristic substitutes for considered analysis; those heuristics, moreover, invariably cause individuals’ evaluations of risks to err in substantial and recurring ways (Jolls, Sunstein, & Thaler, 1998). Much of contemporary social psychology and behavioral economics has been dedicated to cataloging the myriad distortions--from “availability cascades” (Kuran & Sunstein, 1998) to “probability neglect” (Sunstein, 2002) to “overconfidence” bias (Fischhoff, Slovic, & Lichtenstein, 1977) to “status quo bias” (Kahneman, 1991)--that systematically skew risk perceptions, particularly those of the lay public.

For the irrational weigher theory, the contribution that emotion makes to risk perception is, in the first instance, a heuristic one.  Individuals rely on their visceral, affective reactions to compensate for the limits on their ability to engage in more considered assessments (Loewenstein, et al., 2001).  More specifically, irrational weigher theorists have identified emotion or affect as a central component of “System 1 reasoning,” which is “fast, automatic, effortless, associative, and often emotionally charged,” as opposed to “System 2 reasoning,” which is “slower, serial, effortful, and deliberately controlled” (Kahneman, 2003, p. 1451), and typically involves “execution of learned rules” (Frederick, 2005, p. 26).  System 1 is clearly adaptive in the main--heuristic reasoning furnishes guidance when lack of time, information, and cognitive ability make more systematic forms of reasoning infeasible--but it remains obviously “error prone” in comparison to the “more deliberative [and] calculative” System 2 (Sunstein, 2005, p. 68).

Indeed, according to the irrational weigher theory, emotion-pervaded forms of heuristic reasoning can readily transmute into bias.  The point isn’t merely that emotion-pervaded reasoning is less accurate than cooler, calculative reasoning; rather it’s that habitual submission to its emotional logic ultimately displaces reflective thinking, inducing “behavioral responses that depart from what individuals view as the best course of action”--or at least would view as best if their judgment were not impaired (Loewenstein, et al., 2001).  Proponents of this view have thus linked emotion to nearly all the cognitive biases shown to distort risk perceptions (Fischhoff, et al., 1977; Sunstein, 2005). The relationship between emotion, rational calculation of expected utility, and risk perception that results is depicted in Figure 2.2.

2.3. The Cultural Evaluator Theory: Emotion as Expressive Perception

Finally there’s the cultural evaluator theory of risk perception.  This model rests on a view of rational agency that sees individuals as concerned not merely with maximizing their welfare in some narrow consequentialist sense but also with adopting stances toward states of affairs that appropriately express the values that define their identities (Anderson, 1993).  Often when an individual is assessing what position to take on a putatively dangerous activity, she is, on this account, not weighing (rationally or irrationally) her expected utility but rather evaluating the social meaning of that activity (Lessig, 1995).  Against the background of cultural norms (particularly contested ones), would the law’s designation of that activity as inimical to society’s well-being affirm her values or denigrate them (Kahan, et al., 2006)?

Like the irrational weigher theory, the cultural evaluator theory treats emotions as entering into the cognition of risk.  But it offers a very different account of how--one firmly aligned with the position that sees emotions as constituents of reason.

Martha Nussbaum describes emotions as “judgments of value” (Nussbaum, 2001). They orient a person who values some good, endowing her with the attitude that appropriately expresses her regard for that good in the face of a contingency that either threatens or advances it.  On this account, for example, grief is the uniquely appropriate and accurate judgment for someone who values another who has died; fear is the appropriate and accurate judgment for someone who values her or another’s well-being in the face of an impending threat to it; anger is the appropriate and accurate judgment for someone who values her own honor in response to an action that conveys insufficient respect.  People who fail to experience these emotions under such circumstances--or who experience these or other emotions in circumstances that do not warrant them--lack a capacity of discernment essential to their flourishing as agents capable of holding values and pursuing them.

Rooted heavily in Aristotelian philosophy, Nussbaum’s account is, as she herself points out, amply grounded in modern empirical work in psychology and neuroscience.  Antonio Damasio’s influential “somatic marker” account, for example, identifies emotions with a particular area in the brain (Damasio, 1994).  Persons who have suffered damage to that part of the brain display impaired capacity to recognize or imagine conditions that might affect goods they care about, and thus lack motivation to respond accordingly.  They are perceived by others and often by themselves as mentally disabled in a distinctive way, as suffering from a profound kind of moral and social obtuseness that makes them incapable of engaging the world in a way that matches their own ends.  If being rational consists, at least in part, of “see[ing] which values [we] hold” and knowing how to “deploy these values in [our] judgments,” then “those who are unaware of their emotions or of their emotional lacks” will necessarily be deficient in a capacity essential to being “a rational person” (Stocker & Hegeman, 1996, p. 105).

The cultural evaluator theory views emotions as enabling individuals to perceive what stance toward risks coheres with their values.  Cultural norms obviously play a role in shaping the emotional reactions people form toward activities such as nuclear power, handgun possession, homosexuality, and the like (Elster, 1999). When people draw on their emotions to judge the risk that such an activity poses, they form an expressively rational attitude about what it would mean for their cultural worldviews for society to credit the claim that that activity is dangerous and worthy of regulation, as depicted in Figure 2.3.  Persons who subscribe to an egalitarian ethic, for example, have been shown to be particularly sensitive to environmental and technological risks, the recognition of which coheres with condemnation of commercial activities that generate distinctions in wealth and status.  Persons who hold individualist values, in contrast, tend to dismiss concerns about global warming, nuclear waste disposal, food additives, and the like--an attitude that expresses their commitment to the autonomy of markets and other private orderings (Douglas, 1966).  Individualistic persons worry instead about the risk that gun control--a policy that denigrates individualist values--will render law-abiding citizens defenseless (Kahan, Braman, Gastil, Slovic, & Mertz, 2007).  Persons who subscribe to hierarchical values worry about the dangers of drug distribution, homosexuality, and other forms of behavior that defy traditional norms (Wildavsky & Dake, 1990).

This account of emotion doesn’t see its function as a heuristic one.  That is, emotions don’t just enable a person to latch onto a position in the absence of time to acquire and reflect on information.  Rather, as a distinctive faculty of cognition, emotions perform a unique role in enabling her to identify the stance that is expressively rational for someone with her commitments.  Without the contribution that emotion makes to her powers of expressive perception, she would be lacking this vital incident of rational agency, no matter how much information, no matter how much time, and no matter how much computational acumen she possessed.



Law & Cognition 2016, Sessions 6 & 7 recap: To bias or not to debias--that is the question about deliberation

So how does deliberation affect the “bounded rationality” of jurors? Does it mitigate it? Make it worse?

Those were the questions we took up in the last couple of sessions of Law & Cognition

The answer, I’d say, is . . . who the hell knows!

The basis for this assessment is a pair of excellent studies, one of which seems to put deliberation in a really good light, and another that seems to put it in a really bad one.

Seems to is the key part of the assessment in both cases.

The first study was Sommers, S.R., On Racial Diversity and Group Decision Making: Identifying Multiple Effects of Racial Composition on Jury Deliberations, Journal of Personality and Social Psychology 90, 597-612 (2006).  I identified this one several yrs ago as the “coolest debiasing study I’ve ever read,” and I haven’t read anything since that affects its ranking.

Sommers examines the effect of varying the racial composition of mock jury panels assigned to hear a case against an African-American who is alleged to have sexually assaulted a white victim.  White jurors, he reports, formed more pro-defense views and also engaged in higher quality deliberations when they were on racially mixed panels as opposed to all-white ones.

But the key finding was that this effect had nothing to do with actual deliberations; instead it had to do with the anticipation of them.

White members of the mixed panels were more disposed to see the defendant as innocent even before deliberations began.

Once deliberations did start, moreover, the whites on the mixed panels were less likely to make erroneous statements and more likely to make correct ones independently of any contributions to the discussion made by the African-American jurors.

The prospect of having to give an account to a racially mixed panel, Sommers convincingly surmises, activated unconscious processes that accentuated the attention that whites on the mixed-race panels paid to the trial proof and thus improved the accuracy of their information processing. 

It’s a really great example of how environmental cues can achieve a debiasing effect that a conscious instruction to “be objective” or “fair” or to “pay attention” etc. demonstrably cannot (indeed, such instructions, the readings for this week reminded us, often have a perverse effect).

I’m not sure, though, that the result tells us anything about whether and when deliberation in general can be expected to have a positive effect on information processing in legal settings.

Indeed, the second study we read, Schkade, D., Sunstein, C.R. & Kahneman, D., Deliberating about dollars: The severity shift, Columbia Law Rev. 100, 1139-1175 (2000), furnished us with reason to think that deliberation can be expected to exacerbate legal reasoning biases, at least in some circumstances.

SSK did a massive study in which 500 6-member panels deliberated on 15 separate civil cases presenting demands for punitive damages.  After watching films of these cases, the subjects individually completed forms that solicited their rankings of the “level” of punishment that was appropriate on a 0-8 scale and their assessment of the amount of punitive damages that should be awarded.  They then deliberated with their fellow mock jurors and made collective determinations on the same issues.

SSK found two interesting things.

First, in relation to the 0-8 “level of punishment” judgments, there was a group-polarization dynamic. Group panels tended to reach punishment-level judgments that were less severe than those of their median members in cases that presented relatively less egregious behavior. In cases that presented relatively more egregious behavior, they tended to reach punishment-level judgments that were more severe than those of their median members.

Yet second, in all cases, there was a “severity shift” in the dollar amount of punitive damages awarded.  That is, in both the less egregious and the more egregious cases, the jury panels tended to agree on damage awards larger than the one favored by their median members—and indeed, in many cases, larger than the biggest one favored by any individual jury member before deliberation.

This is just plain weird, right?  I mean, the damage awards got bigger relative to what individual jurors favored on average even in the cases in which the panels’ deliberations produced a “punishment level” assessment that was less severe than that of the median member of the panel!


As SSK show, moreover, the resulting punitive awards displayed a massive amount of variability.  SSK don't supply any graphic displays of the distributions (the biggest shortcoming of the paper, in my view), but they do supply enough information in tabular form to demonstrate that the distribution of awards was massively right skewed.

Indeed, SSK gravely rehearse just how severely the variability generated by the dynamics they uncovered would hamper the efforts of parties to predict the outcome of cases, something that generally is bad for the rational operation of law and for the decisionmaking of people who have to live with it.

But I have to be honest: I’m not 100% sure they really made the case on unpredictability.

They argued that it’s really difficult to pin down the likely outcome if one is drawing results randomly from a massively skewed distribution.  But they didn’t show that someone who knows about the dynamics they uncovered would be unable to use that information to improve his or her predictions of likely case outcomes.

For sure, those dynamics involved some pretty whacky shit at the micro-level—in the formation of individual jury verdicts.

But the question is whether the resulting macro-level pattern of judgments admit of statistical explanation based on the available information.

That information consists of the “punishment level” ratings of the individual jurors and the 6-member panels; what was the relationship between those and the resulting punitive verdicts?

SSK don’t say anything about that!

Just for fun, I created a little simulation (here’s the Stata code) to see if it might at least be possible that something that looked as whacky as what SSK observed might still be amenable to a measure of statistical discipline.

In the simulation, I created 3000 jurors, each of whose members, like SSK’s subjects, individually rated a “case” on a 0-8 “punishment level” scale. 

I then put the jurors on 500 juries, whose members, like SSK’s subjects, evinced (by design, of course) a group-polarization effect in their collective “punishment level” judgments. 

Then, to generate massively skewed punitive awards like SSK’s, I multiplied those jury-level “punishment level” judgments by a factor drawn from a massively right-skewed distribution of values.  The resulting array of punitive awards looked just as chaotically lopsided as SSK’s.

Nevertheless, when I regressed the damage awards on the jury verdicts I was able to explain 33% of the variance. Not bad!

I was able to do even better--40% of the variance explained--when I regressed the log-transformed values on the verdicts, a conventional statistical trick when one is dealing with right-skewed data.
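For anyone who wants to play along at home, here’s a minimal sketch of the same idea in Python (the actual code was Stata; the jury sizes match SSK’s design, but the size of the polarization shift, the lognormal skew parameters, and the seed are purely illustrative assumptions, not the values I used):

```python
import numpy as np

rng = np.random.default_rng(42)
N_JURIES, JURY_SIZE = 500, 6

# Individual 0-8 "punishment level" ratings for 3,000 mock jurors,
# arranged into 500 six-member juries.
individual = rng.integers(0, 9, size=(N_JURIES, JURY_SIZE))
medians = np.median(individual, axis=1)

# Build in a group-polarization effect by hand: juries below the scale
# midpoint shift toward leniency, juries above it toward severity.
jury_level = np.clip(np.where(medians < 4, medians - 1, medians + 1), 0, 8)

# Generate punitive awards by scaling each jury-level judgment with a
# draw from a heavily right-skewed (lognormal) multiplier.
awards = jury_level * rng.lognormal(mean=10.0, sigma=1.5, size=N_JURIES)

# Regress log-transformed awards on the jury punishment level
# (simple degree-1 least-squares fit).
logged = np.log1p(awards)
slope, intercept = np.polyfit(jury_level, logged, 1)
residuals = logged - (slope * jury_level + intercept)
r_squared = 1 - residuals.var() / logged.var()
print(f"share of variance explained: {r_squared:.2f}")
```

Even though the raw awards come out chaotically lopsided, the log-transform regression recovers a sizable share of the variance from the punishment-level judgments alone, which is the whole point.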

This result turned out to be very much in line with observational studies, which suggest that a simple model that regresses punitive awards on compensatory ones can explain over half the variance in punitive judgments (Eisenberg et al. 1997)!

Practically speaking, then, there’s potentially still a lot one can do to predict results even in a world as whacky as SSK’s.  All a lawyer would have to be able to do to make such predictions is form a reasonable estimate of the punishment-level assessment jurors would make of a particular case, and then he or she would be able to give advice reflecting an analysis that explains 40% of the variance in the resulting punitive damage awards.

Making the punishment-level estimate, moreover, wouldn’t be that hard.  SSK demonstrated that, unlike their damage-award judgments, the study subjects’ 0-8 punishment level assessments displayed a remarkable degree of coherence. People basically agreed, in other words, how egregious the behavior in the experiment's 15 cases was.

An experienced lawyer would thus likely be able to intuit “how bad” an average juror would think the behavior in such a case was.  And if the lawyer were really on the ball, then he or she could fortify his or her judgment with the results of a mock-juror experiment that solicited 150 or so mock jurors’ assessments.

I definitely can’t be sure that the data in the SSK experiment would be as well behaved as my simulated data were, of course.

But I think we can be sure that looking inside the kitchen door of individual juries’ deliberations is not actually the right way to figure out how predictable their judgments are.  One has to take a nice statistical bite of the results and see how much variance one can digest!

But that said, SSK definitely is in the running for my “coolest biased-deliberation study I’ve ever read” award. . . .

Class dismissed!


Eisenberg, T., Goerdt, J., Ostrom, B., Rottman, D. & Wells, M.T. The predictability of punitive damages. The Journal of Legal Studies 26, 623-661 (1997).



Pew on Climate Polarization: Glimpses of cognitive dualism . . .

I’ve now digested the Pew Research Center’s “The Politics of Climate" Report. I think it’s right on the money—and delivers a wealth of insight.

What most readers seem to view as the highlights are interesting, certainly, but you have to dig down a bit to get to the really good “believe it or not” stuff. . . .

1. Conservation of polarization. People have been focusing on what is in fact the Report headline, namely, that there’s deep political polarization on all matters climate.

That’s not news, of course.

Still even in the “not news” portion of the Report there is something of informational value.

The Report documents the astonishing level of stability in public attitudes—with individuals of diverse political outlooks being highly divided, and only about 50% of the population accepting human-caused global warming overall—for over a decade.

It’s easy for people to get confused about the immense inertia of public opinion on climate change because advocacy pollsters are constantly “messaging” an “upsurge,” “shift,” “swing,” etc., in public perceptions of climate change.

Likely they are doing this based on the theory that “saying it will make it so.”  It doesn’t.  It just confuses people who are trying to figure out how to improve public engagement with the best evidence.

Good for Pew for recognizing that the most valuable thing a public opinion researcher can do is tell people what they need to know and not just what they want to hear.

2. New & improved science literacy. There's also been a fair amount of attention to what Pew finds on science literacy: that more of it doesn’t mitigate polarization but in fact accentuates it across a range of climate change issues.

Again, that's not news.  The perverse relationship between science literacy and climate change polarization was emphasized in a recently issued National Academy of Sciences report, which synthesized data that included various CCP studies, including the one featured in our 2012 Nature Climate Change paper.

But what is new and potentially really important about the Pew Report is the instrument it has constructed to measure public science literacy.

The need for a better public science literacy measure was the primary message of the National Academy Report, which concluded that the NSF Science Indicators battery—the traditional measure—is too easy and lacks sufficient connection to critical reasoning.  Addressing these shortcomings was the motivation behind the development of CCP’s “Ordinary Science Intelligence” assessment.

It’s really great that Pew is now clearly devoting itself to this project, too. Its new test, it’s clear, contains items that are more difficult than the ones in its previous tests. Moreover, the Report indicates that Pew is using item response theory, a critical tool in developing a valid and reliable science comprehension assessment, to determine which array of items to include.
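For readers unfamiliar with item response theory, the basic building block is easy to sketch. Here’s the two-parameter logistic (2PL) model--one common IRT specification; I’m not claiming it’s the one Pew actually fit, and the item parameters below are made up for illustration:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: the probability that a respondent
    with latent ability theta answers correctly an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A hypothetical easy item (b = -1.0) vs. a hard one (b = +1.5), both
# moderately discriminating (a = 1.5), for an average respondent
# (theta = 0):
easy = p_correct(0.0, 1.5, -1.0)   # ~0.82
hard = p_correct(0.0, 1.5, 1.5)    # ~0.10
```

Fitting such curves to response data tells you which items actually spread respondents out along the latent ability scale--exactly what a battery of too-easy NSF Indicators items can’t do.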

It would be super useful to have even more information on Pew’s new science literacy test. I’ll say more about this “tomorrow.”

But it is certainly worth noting today that this is exactly the sort of work that distinguishes Pew, a genuine creator of insight into public opinion, from the pack of 5&dime commercial public opinion purveyors.

3.  Cognitive Dualism.  As the 14 billion readers of this blog know, “cognitive dualism” refers to the phenomenon in which people who use their reason for “identity-protective” ends switch to using it for “science-knowledge acquiring” ones when they are engaged in activities that depend on the latter.

One example is Salman Hameed’s Pakistani doctor, who disbelieves in evolution “at home,” where he is a practicing Muslim, but believes in it “at work,” in order to be an oncologist and also a person who takes pride in his identity as a science-trained professional.

We see the same thing in science curious evolution non-believers who, when furnished with a superbly done documentary that doesn’t proselytize but just wows them with human ingenuity, can appreciatively agree that it has deepened their insight into the natural history of our species.

Cognitive dualism is also on display in the reasoning of the Kentucky Farmer: his experience of membership in his cultural group is enabled by believing that climate change “hasn’t been scientifically proven”; but to succeed as a farmer he engages in no-till farming, buys more crop insurance, changes his crop selection and rotation, and excitedly purchases Monsanto Fieldview Pro “climate forecaster” (powered by the world’s best climate change science!)—because he accepts the best available evidence on climate change for purposes of being a successful farmer.

Well, Pew’s survey tells us that there are a lot more cognitive dualists out there. 

E.g., although only 15% of “conservative Republicans” say they believe that the “earth is warming mostly due to human activity,” almost double that percentage agree that “restrictions on power plant carbon emissions” (29%) and “international agreements to limit carbon emissions” (27%) would “make a big difference” in “address[ing] climate change”!

Plainly, many who are answering the “do you believe in human-caused climate change?” question in an identity-protective fashion are answering the “what would make a difference in reducing climate change?” question in a “what do you know, what should we do?” one.

Outside of SE Florida, the question posed by our politics is the first and not the second. 

That’s what has to change if we are to make progress as a self-governing society in addressing the issues that climate change poses about how to secure our well-being.

4.  Attitudes toward climate scientists.  Last is definitely not least: there is some really grim news for scientists in this poll . . . .

Generally speaking, I’ve been very skeptical that distrust of scientists, by anyone, explains conflict over decision-relevant science.  The U.S. is a pro-science society by any sensible measure (including multiple ones that Pew has developed).  On climate change in particular--as on other contested science issues--both sides think their position is consistent with scientific consensus.

But this survey had some responses that are making me reassess my understanding.

The survey items in question weren't ones that have to do with the skepticism of conservative climate change disbelievers, though. They were ones suggesting that even liberal climate change "believers" are a bit skeptical about what "climate scientists" are saying.

According to the survey, only 55% of “Liberal Democrats”--a group 79% of whom accept that human-caused climate change is occurring--believe that climate scientists’ “research findings . . . are influenced by” the “best available evidence . . . most of the time . . . .”

That’s really an eye opener. Apparently, even among those most disposed to believe in human-caused climate change, there are a substantial number of people who think “climate scientists” aren’t being entirely straight with them. . . .

What could explain this sort of cognitive dualism?

More study is warranted, certainly, to figure this out.

But since we know that “liberal Democrats” don’t watch Fox News and instinctively dismiss everything that conservative advocacy groups say, a plausible hypothesis is that the advocates whom these individuals do credit have imprinted in their minds a highly politicized picture of who climate scientists are and what they are up to.

That wouldn’t be particularly surprising.  The principal groups speaking for climate scientists have played a central role in making “who are you, whose side are you on?” the dominant question in our climate-change discourse.

That’s a science communication problem that needs to be fixed.


Weekend update: "Culturally Antagonistic Memes and the Zika Virus: An Experimental Test" now "in press" 

This paper is now officially "forthcoming" in Journal of Risk Research.


Today's activity: read new Pew Report on climate change polarization

Read twice, comment once, as they say



Et tu, AOT? Fracking freaks me out part 3

What happens to politically diverse citizens’ perceptions of the risk of fracking as those individuals' scores on an "Actively Open-minded Thinking" (AOT) battery increase?

Why, their perceptions become more polarized, of course!

Actually, this is a weird result. It's another reason why “Fracking freaks me out!”

To be sure, fracking is not the only putative risk that twists, distorts, eviscerates reason in this way.

Climate change does, too--something that Jonathan Corbin & I demonstrate in connection with AOT in our forthcoming Research & Politics paper, and that I and collaborators have observed in connection with various other measures of critical thinking as well.

But not every putative risk exerts this effect; indeed, most don’t.

Consider nuclear power: citizens are politically polarized over the risks it poses in general, but as they score higher on AOT their perceptions converge.

That fracking is part of the toxic family of risk sources that generate more disagreement as reasoning proficiency increases might not be so amazing but for its relative youth.  The basic technology is in fact quite old, but fracking really didn’t assume a large profile in U.S. energy production, and certainly not in public consciousness, until at least 2010, when large-scale operations started to ramp up in the massive Marcellus formation.

In that short interval, fracking has catapulted from “huh?” to “whaaaaa!,” leaping over blue-chip polarizers like nuclear, not to mention long-standing pseudo-polarizing junk bonds like GM foods.

Anyone who thinks he or she can “easily” explain this development for sure earns a low score on actively open-minded thinking and science-of-science communication curiosity.


Law & Cognition 2016, Session 6: Reading list & questions

Have at it!


Weekend up(back)date: cultural cognition vs. Bayesian updating of scientific consensus

We've been having so much fun with "Bayesian vs. X" diagrams in Law & Cognition 2016 that I thought I'd dredge up a vintage use of this heuristic. This is from Kahan, D.M., Jenkins-Smith, H. & Braman, D., Cultural Cognition of Scientific Consensus, J. Risk Res. 14, 147-174 (2011).

 5.1. Summary of findings

The goal of the study was to examine a distinctive explanation for the failure of members of the public to form beliefs consistent with apparent scientific consensus on climate change and other issues of risk. We hypothesized that scientific opinion fails to quiet societal dispute on such issues not because members of the public are unwilling to defer to experts but because culturally diverse persons tend to form opposing perceptions of what experts believe. Individuals systematically overestimate the degree of scientific support for positions they are culturally predisposed to accept as a result of a cultural availability effect that influences how readily they can recall instances of expert endorsement of those positions.

The study furnished two forms of evidence in support of this basic hypothesis. The first was the existence of a strong correlation between individuals’ cultural values and their perceptions of scientific consensus on risks known to divide persons of opposing worldviews. Subjects holding hierarchical and individualistic outlooks, on the one hand, and ones holding egalitarian and communitarian outlooks, on the other, significantly disagreed about the state of expert opinion on climate change, nuclear waste disposal, and handgun regulation. It is possible, of course, that one or the other of these groups is better at discerning scientific consensus than the other. But because the impressions of both groups converged with and diverged from positions endorsed in NAS ‘expert consensus’ in a pattern reflective of their respective predispositions, it seems more likely that both hierarchical individualists and egalitarian communitarians are fitting their perceptions of scientific consensus to their values.

The second finding identified a mechanism that could explain this effect. When asked to evaluate whether an individual of elite academic credentials, including membership in the NAS, was a ‘knowledgeable and trustworthy expert’, subjects’ answers proved conditional on the fit between the position the putative expert was depicted as adopting (on climate change, on nuclear waste disposal, or on handgun regulation) and the position associated with the subjects’ cultural outlooks. . . .

5.2. Understanding the cultural cognition of risk

Adding this dynamic to the set of mechanisms through which cultural cognition shapes perceptions of risk and related facts, it is possible to envision a more complete picture of how these processes work in concert. On this view, cultural cognition can be seen as injecting a biasing form of endogeneity into a process roughly akin to Bayesian updating.

Even as an idealized normative model of rational decision-making, Bayesian information processing is necessarily incomplete. Bayesianism furnishes an algorithm for rationally updating one’s beliefs in light of new evidence: one’s estimate of the likelihood of some proposition should be revised in proportion to the probative weight of any new evidence (by multiplying one’s ‘prior odds’ by a ‘likelihood ratio’ that represents how much more consistent new evidence is with that proposition than with its negation; Raiffa 1968). This instruction, however, merely tells a person how a prior estimate and new evidence of a particular degree of probity should be combined to produce a revised estimate; it has nothing to say about what her prior estimate should be or, even more importantly, how she should determine the probative force (if any) of a putatively new piece of evidence.

Consistently with Bayesianism, an individual can use pretty much any process she wants – including some prior application of the Bayesian algorithm itself – to determine the probity of new evidence (Raiffa 1968), but any process that gauges the weight (or likelihood ratio) of the new evidence based on its consistency with the individual’s prior estimate of the proposition in question will run into an obvious difficulty. In the extreme, an individual might adopt the rule that she will assign no probative weight to any asserted piece of evidence that contradicts her prior belief. If she does that, she will of course never change her mind and hence never revise a mistaken belief, since she will necessarily dismiss all contrary evidence, no matter how well founded, as lacking credibility. In a less extreme variant, an individual might decide merely to assign new information that contradicts her prior belief less probative weight than she otherwise would have; in that case, a person who starts with a mistaken belief might eventually correct it, but only after being furnished with more evidence than would have been necessary if she had not discounted any particular item of contrary evidence based on her mistaken starting point. A person who employs Bayesian updating is more likely to correct a mistaken belief, and to do so sooner, if she has a reliable basis exogenous to her prior belief for identifying the probative force of evidence that contravenes that belief (Rabin and Schrag 1999).
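The contrast between unbiased updating and prior-dependent discounting can be made concrete with a few lines of code. In this sketch the “less extreme variant” shrinks the weight of contrary evidence by an exponent of one-half; that figure, and the helper names, are purely illustrative:

```python
def bayes_update(prior_odds, lr):
    """Standard Bayesian updating: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * lr

def discounting_update(prior_odds, lr, discount=0.5):
    """The 'less extreme variant': evidence that cuts against the prior is
    given only a fraction of its probative weight. Illustrative only."""
    contrary = (prior_odds > 1) != (lr > 1)
    if contrary:
        lr = lr ** discount          # shrink the likelihood ratio toward 1
    return prior_odds * lr

# Start mistakenly at 4:1 against the true proposition, then receive four
# independent items of contrary evidence, each with likelihood ratio 2.
fair = biased = 0.25
for _ in range(4):
    fair = bayes_update(fair, 2.0)
    biased = discounting_update(biased, 2.0)

# The unbiased updater ends at 4:1 in favor of the truth; the discounter
# is only back to even money and needs additional evidence to catch up.
```

The discounter never becomes unreachable, but every item of corrective evidence buys less than it should, which is exactly the slower-than-rational convergence described above.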

When mechanisms of cultural cognition figure in her reasoning, a person processes information in a manner that is equivalent to one who is assigning new information probative weight based on its consistency with her prior estimation (Figure 9). Because of identity protective cognition (Sherman and Cohen 2006; Kahan et al. 2007) and affect (Peters, Burraston, and Mertz 2004), such a person is highly likely to start with a risk perception that is associated with her cultural values. She might resolve to evaluate the strength of contrary evidence without reference to her prior beliefs. However, because of culturally biased information search and culturally biased assimilation (Kahan et al. 2009), she is likely to attend to the information in a way that reinforces her prior beliefs and affective orientation (Jenkins-Smith 2001).

Perhaps mindful of the limits of her ability to gather and interpret evidence on her own, such an individual might choose to defer or to give considerable weight to the views of experts. But through the cultural availability effect examined in our study, she is likely to overestimate the proportion of experts who hold the view consistent with her own predispositions. Like the closed-minded Bayesian whose assessment of the probative value of new information is endogenous to his prior beliefs, then, such an individual will either not change her mind or will change it much more slowly than she should, because the same predisposition that informs her priors will also be unconsciously shaping her ability to recognize and assign weight to all manner of evidence, including the opinion of scientists (Zimper and Ludwig 2009).


Jenkins-Smith, H. 2001. Modeling stigma: An empirical analysis of nuclear waste images of Nevada. In Risk, media, and stigma: Understanding public challenges to modern science and technology, ed. J. Flynn, P. Slovic, and H. Kunreuther, 107–32. London/Sterling, VA: Earthscan.

Kahan, D.M., D. Braman, J. Gastil, P. Slovic, and C.K. Mertz. 2007. Culture and identity-protective cognition: Explaining the white-male effect in risk perception. Journal of Empirical Legal Studies 4, no. 3: 465–505.

Kahan, D.M., D. Braman, P. Slovic, J. Gastil, and G. Cohen. 2009. Cultural cognition of the risks and benefits of nanotechnology. Nature Nanotechnology 4, no. 2: 87–91.

Peters, E.M., B. Burraston, and C.K. Mertz. 2004. An emotion-based model of risk perception and stigma susceptibility: Cognitive appraisals of emotion, affective reactivity, worldviews, and risk perceptions in the generation of technological stigma. Risk Analysis 24, no. 5: 1349–67.

Rabin, M., and J.L. Schrag. 1999. First impressions matter: A model of confirmatory bias. Quarterly Journal of Economics 114, no. 1: 37–82.

Raiffa, H. 1968. Decision analysis. Reading, MA: Addison-Wesley.

Sherman, D.K., and G.L. Cohen. 2006. The psychology of self-defense: Self-affirmation theory. In Advances in experimental social psychology, ed. M.P. Zanna, 183–242. San Diego, CA: Academic Press.

Zimper, A., and A. Ludwig. 2009. On attitude polarization under Bayesian learning with non-additive beliefs. Journal of Risk and Uncertainty 39, no. 2: 181–212.


Modeling the incoherence of coherence-based reasoning: report from Law & Cognition 2016

I’ve covered this ground before (in a 3-part set last yr), but this post supplies a compact recap of how coherence-based reasoning (CBR), the dynamic featured in Session 5 of the Law & Cognition 2016 seminar, subverts truth-convergent information processing.

The degree of subversion is arguably more extreme, in fact, than that associated with any of the decision dynamics we’ve examined so far.

Grounded in aversion to residual uncertainty, CBR involves a form of rolling, recursive confirmation bias.

Where decisionmaking evinces CBR, the factfinder engages in reasonably unbiased processing of the evidence early on in the decisionmaking process. But the more confident she becomes in one outcome, the more she thereafter adjusts the weight—or in Bayesian terms the likelihood ratio—associated with subsequent pieces of independent evidence to conform her assessment of them to that outcome.

As her confidence grows, moreover, she revisits what appeared to her earlier on to be pieces of evidence that either contravened that outcome or supported it only weakly, and readjusts the weight afforded to them as well so as to bring them into line with her now-favored view.

By virtue of these feedback effects, decisions informed by CBR are marked by a degree of supreme confidence that belies the potential complexity and equivocality of the trial proof.

Such decisions are also characterized, at least potentially, by arbitrary sensitivity to the order in which pieces of evidence are considered. Where both sides in a case have at least some strong evidence, which side's strong evidence is encountered (or cognitively assimilated) “first” can determine the direction of the feedback dynamics that thereafter determine whether the other side’s proof is given the weight it's due.

It should go without saying that this form of information processing is not truth convergent. 

As reflected in the simple Bayesian model we have been using in the course, truth-convergent reasoning demands not only that the decisionmaker update her factual assessments in proportion to the weight—or likelihood ratio—associated with a piece of evidence; it requires that she determine the likelihood ratio on the basis of valid, truth-convergent criteria.

That isn’t happening under CBR. CBR is driven by an aversion to complexity and equivocality that unconsciously induces the decisionmaker to credit and discredit evidence in patterns that result in a state of supreme overconfidence in an outcome that might well be incorrect. The preference for coherence across diverse, independent pieces of evidence, then, is an extrinsic motivation that invests the likelihood ratio with qualities unrelated to the truth.

Just how inimical this process is to truth seeking can be usefully illustrated with a simple statistical simulation.

The key to the simulation is the “CBR function,” which inflates the likelihood ratio assigned to the evidence by a factor tied to the factfinder’s existing assessment of the probability of a particular factual proposition.  This element of the simulation models the tendency of the decisionmaker to overvalue evidence in the direction and in proportion to her confidence in a particular outcome.

In the simulation, the CBR factor is set so that a decisionmaker overweights the likelihood ratio by 1 “deciban” for every one-unit increment in the odds in favor of a particular outcome (“1:1” to “2:1” to “3:1” etc.). Accordingly, she overvalues the evidence by a factor of 2 as the odds shift from even money (1:1) to 10:1, and by an amount proportionate to that as the odds grow progressively more lopsided. I’ve discussed previously why I selected this formula, which is a tribute to Alan Turing & Jack Good & the pioneering work they did in Bayesian decision theory.
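For concreteness, here is one way such a CBR function might be coded in Python. The schedule (1 deciban of extra weight per unit of odds beyond even money, applied only to evidence that fits the currently favored outcome) is a literal reading of the description above, so treat it as a sketch of the mechanism rather than the exact function used in the simulation:

```python
def cbr_likelihood_ratio(lr, odds):
    """Inflate the weight of evidence that points the way the current odds
    already lean, by 1 deciban (a factor of 10**0.1) for every unit the
    odds sit beyond even money. Single-update sketch; illustrative only."""
    congenial = (lr > 1) == (odds >= 1)      # does the evidence fit the favored outcome?
    excess = odds - 1 if odds >= 1 else 1 / odds - 1
    if not congenial or excess <= 0:
        return lr                             # contrary evidence is left alone here
    boost = 10 ** (excess / 10)               # decibans -> multiplicative factor
    return lr * boost if lr > 1 else lr / boost

# At even odds nothing changes; the overvaluation grows as the odds become
# more lopsided, in either direction.
```

Note that the distortion is symmetric: once the odds lean toward the defense, pro-defense likelihood ratios (values below 1) are pushed further below 1 by the same schedule.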

This table illustrates the distorting impact of the CBR factor. It shows how a case consisting of eight "pieces" of evidence--four pro-prosecution and four pro-defense--that ought to result in a "tie" (odds of 1:1 in favor of a prosecutor’s charge) can generate an extremely confident judgment in favor of either party depending on the order of the trial proof.

In the simulation, we can generate 100 cases, each consisting of 4 pieces of “prosecution” evidence—pieces of evidence with likelihood ratios drawn randomly from a uniform distribution of 1.05 to 20—and 4 pieces of “defense” evidence--ones with likelihood ratios drawn randomly from the reciprocal values (0.95 to 0.05) of that same uniform distribution.

The histograms illustrate the nature of the “confidence skew” resulting from the impact of CBR in those 100 cases.  As expected, there are many fewer “close cases” when decisionmaking reflects CBR than there would be if the decisionmaking reflected unbiased Bayesian updating.
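A rough Python sketch of that exercise. The CBR schedule, the cap on the boost (a numerical guard against overflow), and the 3:1 cutoff for a “close case” are all illustrative choices of mine, not parameters taken from the post:

```python
import random

def cbr_lr(lr, odds):
    """Illustrative CBR inflation: congenial evidence gains 1 deciban of
    weight per unit of odds beyond even money (capped as a numerical guard)."""
    congenial = (lr > 1) == (odds >= 1)
    excess = odds - 1 if odds >= 1 else 1 / odds - 1
    if not congenial or excess <= 0:
        return lr
    boost = 10 ** (min(excess, 120) / 10)
    return lr * boost if lr > 1 else lr / boost

def final_odds(evidence, biased):
    """Fold the evidence into posterior odds of guilt, in the order given."""
    odds = 1.0
    for lr in evidence:
        odds *= cbr_lr(lr, odds) if biased else lr
    return odds

random.seed(1)
cases = []
for _ in range(100):
    pros = [random.uniform(1.05, 20) for _ in range(4)]          # pro-prosecution
    defense = [1 / random.uniform(1.05, 20) for _ in range(4)]   # pro-defense reciprocals
    ev = pros + defense
    random.shuffle(ev)               # order of proof matters under CBR
    cases.append(ev)

def n_close(biased):
    # count cases whose final odds stay within 3:1 either way
    return sum(1 for ev in cases if 1 / 3 < final_odds(ev, biased) < 3)

# Far fewer close cases survive when updating runs through the CBR function.
```

Running `n_close(False)` against `n_close(True)` on the same 100 cases reproduces the confidence skew: CBR piles the verdicts up at the extremes.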

The skew exacts a toll on outcome accuracy. The toll, moreover, is asymmetric: if we assume that the prosecution has to establish her case by a probability of 0.95 to satisfy the “beyond a reasonable doubt” standard, many more erroneously decided cases will involve false convictions than false acquittals, since only those cases in which equivocation is incorrectly resolved in favor of exaggerated confidence in guilt will result in incorrect decisions. (Obviously, if these were civil cases tried under a preponderance of the evidence standard, the error rates for false findings of liability and false findings of no liability would be symmetric.)

This is one “run” of 100 cases. Let’s put together a full-blown Monte Carlo simulation (a tribute to the Americans working on the Manhattan Project; after all, why should the Bletchley Park codebreakers Turing & Good garner all our admiration?) & simulate 1,000 sets of 100 cases so that we can get a more precise sense of the distribution of correctly and incorrectly decided cases given the assumptions built into our coherence-based-reasoning model.
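A scaled-down sketch of that Monte Carlo exercise (200 sets rather than 1,000, for speed). Here “ground truth” is defined, by assumption, as what an unbiased Bayesian would conclude from the same proof; the CBR helper is restated so the snippet runs on its own:

```python
import random

def cbr_lr(lr, odds):
    # Illustrative CBR inflation: congenial evidence gains 1 deciban of
    # weight per unit of odds beyond even money, capped as a numerical guard.
    congenial = (lr > 1) == (odds >= 1)
    excess = odds - 1 if odds >= 1 else 1 / odds - 1
    if not congenial or excess <= 0:
        return lr
    boost = 10 ** (min(excess, 120) / 10)
    return lr * boost if lr > 1 else lr / boost

def prob_guilt(evidence, biased):
    odds = 1.0
    for lr in evidence:
        odds *= cbr_lr(lr, odds) if biased else lr
    return odds / (1 + odds)          # posterior probability of guilt

random.seed(2)
THRESHOLD = 0.95                      # "beyond a reasonable doubt"
false_conv = false_acq = 0
for _ in range(200):                  # 200 sets of 100 cases
    for _ in range(100):
        ev = [random.uniform(1.05, 20) for _ in range(4)] + \
             [1 / random.uniform(1.05, 20) for _ in range(4)]
        random.shuffle(ev)
        fair = prob_guilt(ev, biased=False)
        cbr = prob_guilt(ev, biased=True)
        if cbr >= THRESHOLD > fair:
            false_conv += 1           # CBR convicts where unbiased updating would not
        elif fair >= THRESHOLD > cbr:
            false_acq += 1
# Expect the toll to be asymmetric: false convictions swamp false acquittals.
```

With the conviction threshold at 0.95, the only way for equivocal proof to yield an error is for CBR to resolve it into exaggerated confidence in guilt, which is why the false-conviction count dwarfs the false-acquittal count.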

If we do that, we see this:


Obviously, all these numbers are ginned up for purposes of illustration.

We can’t know (or can’t without a lot of guesswork) what the parameters should be in a model like this.

But we can know even without doing that that we ought to have grave doubts about the accuracy, and hence legitimacy, of a legal system that relies on decisionmakers subject to this decisionmaking dynamic.

Are jurors subject to this dynamic?  That’s a question that goes to the external validity of the studies we read for this session.

But assuming that they are, would professional decisionmakers likely do better? That’s a question very worthy of additional study.


Compromise effects as motivated reasoning: report from Law & Cognition 2016

Bit behind . . . doing best I can!

1. Overview. In session 4, we started looking at cognitive dynamics that evince “bounded rationality”—decisionmaking patterns that reflect human beings’ imperfect computational capacities. “Context-dependent preferences”—the tendency of people to shift their relative evaluations of paired options when irrelevant alternatives are added to the choice set—fall into this category. We read Kelman, M., Rottenstreich, Y. & Tversky, A., Context-Dependence in Legal Decision Making, J. Legal Stud. 25, 287- (1996) (KRT), which reported several experiments showing how context-dependent preferences could distort legal determinations. One issue posed by the class discussion was whether we could assimilate KRT’s account of the operation of context-dependent preferences to our expanding account of biased factual judgments in law.

2. “Compromise effects” generally. Context-dependent preferences are of two types: ones reflecting “compromise effects” and others reflecting “contrast effects.” I will confine my discussion here to the former.

Reflecting an implicit aversion to extremeness, a compromise effect—or CE—occurs when a person’s preference for one option shifts to another that has been rendered “intermediate” along some salient decisionmaking dimension by the addition of a third, irrelevant option. The classic example is the decision of consumers who would have purchased “regular” rather than “premium” gas to select the latter when “super premium” is offered as well.

3. KRT in particular. KRT conduct multiple experiments that show subjects’ propensity to select one homicide grade over another in patterns reflecting CE.  Rather than reproduce them, I will present a composite representation based on the “Self-defense?” problem from Session 1.

Let’s imagine that case had been tried on alternative theories of murder, defined as intentional killing, and voluntary manslaughter, defined as intentional killing based on an honest but unreasonable belief that deadly force was necessary to avert an immediate threat of death or great bodily harm.  Imagine further that were the case tried on this basis to multiple juries, 50% of them would find Rick (the defendant) guilty of murder and 50% of voluntary manslaughter.

Now imagine the case is tried on these two theories plus a third: either “hate crime” murder, defined as an intentional killing motivated by animus against the victim based on his or her group identity; or complete self-defense, defined as an honest and reasonable belief that deadly force was necessary to avert an immediate threat of death or great bodily harm.

Consistent with KRT studies, we might predict that CEs would alter the proportion of murder and voluntary manslaughter verdicts even if the juries rejected the third theory.

Where the hate-crime theory was added to the charge array, murder would be rendered intermediate in extremeness. The KRT studies thus predict that it would thereby be rendered more attractive relative to the least extreme option of voluntary manslaughter. Let’s imagine that murder would now be the option selected by 75% of the juries presented with these three theories and manslaughter the one selected by the remaining 25%.

In contrast, where the complete self-defense theory was added, voluntary manslaughter would become intermediate, and thus gain in attractiveness relative to the most extreme option of murder.  Consistent with KRT, we might imagine that murder would now be preferred by only 25% of the juries and manslaughter preferred by the remaining 75%.

4. Compromise effects as motivated factual cognition. Ordinarily, decisionmaking shifts reflecting CE are viewed as evincing the lability of individuals’ preferences. Clearly that is so in a case like the classic example involving the shift from selection of regular to selection of premium gas.

But in our “Self-defense?” thought experiment, as in the actual KRT experiments, CEs are operating over alternative factual perceptions.  The choice between voluntary manslaughter and murder verdicts turns on whether or not the juries credit the defense claim that Rick honestly believed himself to be at risk of immediate lethal harm when he intentionally shot Frank. In altering the proportion of murder and manslaughter verdicts, then, the addition of the irrelevant option—either “hate crime” murder or perfect self-defense—must therefore be understood to be inducing jurors to shift in their assessments of that key fact.

The operation of CE on fact perceptions makes it possible to assimilate context-dependent preferences to the course’s growing taxonomy of non-truth-convergent cognitive dynamics.

Members of that taxonomy are spelled out in relation to a simple Bayesian model in which the assessed probability of a particular factual proposition is revised by a factor equal to how much more consistent a piece of evidence is with that proposition than with some alternative. For decisionmaking consistent with this model to be truth convergent, the likelihood ratio—the factor reflecting the weight assigned to that evidence—must be determined on the basis of valid, truth-convergent criteria.

That isn’t so, e.g., when decisionmaking reflects confirmation bias, in which case the likelihood ratio is determined by the conformity of the new evidence with one’s priors. Nor is it so when decisionmaking reflects the “storytelling model,” in which case the likelihood ratio is determined by the conformity of the evidence to a story template selected prior to evaluation of all the evidence in the case.

Where some unconscious preference unrelated to truth-seeking determines the likelihood ratio or weight associated with a piece of information, the decisionmaking process can be said to reflect motivated reasoning.  Cultural cognition is a species of motivated reasoning: it reflects the stake that individuals have in conforming their factual perceptions to conclusions that affirm rather than threaten their cultural identities.

Where a CE shapes jurors’ factual determinations, context-dependent preferences can be seen as a species of motivated reasoning, too.  In that case, the likelihood ratio is being determined not by valid truth-convergent criteria but rather by the conformity of the evidence to an outcome consistent with jurors’ unconscious preference for a non-extreme outcome. 


2 climate changes & "debating" them in the Presidential race

Here is what I said when asked, by the author of this story, for a comment on questions on climate change (or the lack thereof) in the presidential debate:

I think there are two "climate changes" in America: one in relation to which nearly all citizens form beliefs & take stances that express their identity as members of opposing cultural groups; and another in relation to which at least some citizens (a subset of the first)  are already making practical decisions -- as business actors, individual property owners, and citizens -- aimed at protecting their tangible interests.

Politicians won't make much progress & could well get themselves into trouble when they discuss or get into debates on the first climate change.

But if they can succeed in addressing the second, they have the potential not only to gain support but to move the country forward in addressing an issue of immense consequence to our well-being.

Easier said than done, I suppose.  

But I think there are a lot of people out there, Republicans and Democrats, who know that they and their communities need a lot of support. Smart, public-spirited politicians in places like S.E. Florida (the congressional delegation of which recently created a bipartisan climate action caucus) are figuring out how to show that they are committed to getting them that help.

Anyone smart enough to be president ought to recognize that he or she should be giving those people the same sort of assurance that he or she is going to be there for them in the next 4 yrs.



Science curiosity: who, why, what, & WTF (talk summary & slides)

What I recall saying, essentially, at SMASH--Science Media Awards & Summit in the Hub--last Thursday. Slides here.

1. SCS.  This talk describes a tool for use in the evidence-based production of science films and related science media.  The tool, the “Science Curiosity Scale” (SCS), enables individuals to be profiled in relation to their disposition to consume science-related media for personal edification and enjoyment.  SCS also has other interesting, unexpected properties, the existence of which suggests the affinity between the craft of science filmmaking and the project to promote more constructive public engagement with science generally.

2. Why. (a) The book/movie Moneyball furnishes a useful backdrop for explication of the philosophy behind SCS.  A supposed account of how statistical techniques were used to improve the general management of a professional baseball team, Moneyball rests on the premise that intuition and experience are unreliable guides for complex decisionmaking.

The philosophy behind SCS regards Moneyball’s premise as sheer, unadulterated bullshit. There is no substitute for craft sense—a perceptive faculty informed by immersion in professional norms and calibrated by personal experience—in complex decisionmaking, including science filmmaking.

But in science filmmaking as in other domains, the currency of craft sense is information. The premise of the “science of science communication,” of which SCS is a product, is that the methods of science can be used to augment the stock of information available to science filmmakers and other professional science communicators so that they can exercise their craft sense in a manner that they have reason to be even more confident will generate the outcomes they are seeking to produce.

(b) The most compelling sign that the science of science communication can be of value to science filmmakers is that they themselves so often disagree about how to conduct their craft. Some of the disagreements are general and recurring—the subject, in fact, of perennial panel discussions at annual gatherings like this one—while others are specific to the production of particular films. But whether systemic or episodic, issues that defy resolution on the basis of professionals’ collective judgment testify to their need for information beyond that to which they have ready access through their shared experience.  In such circumstances, the empirical methods featured in the science of science communication aim not to supplant professional judgment but to aid it by generating information that those who possess such judgment would agree will help them to assess the relative plausibility of competing positions on the issues that divide them and thereafter supply the basis for action, the common assessment of which will promote the continued evolution of their common craft sense.

(c) The science curiosity scale was self-consciously designed in response to a disputed conjecture among science communication professionals: the missing audience hypothesis. At least some science documentary filmmakers believe that the number of persons who view premier science films, on public television and in other venues, is smaller than it should be. Pointing to audience demographic disparities—ones founded in age, gender, region of the country, income, and even ideology—they surmise that there are correctible features of these offerings, collateral to their science content, that are unintentionally signaling to people with certain cultural identities that these materials are “not for them” and discouraging them from turning to these films to satisfy their interest in knowing what is known by science. Other science-communication professionals disagree: the size of the audience for premier science films, and its composition, they argue, are a simple reflection of how the taste for learning what is known by science is distributed in the general population. Call this the natural audience hypothesis explanation for the constrained appeal of premier science films.

Working with science filmmakers and related science-communication professionals on both sides of this issue, the CCP/APPC/TB science-of-science filmmaking team devised SCS to help resolve the impasse between proponents of these competing positions. The idea was to develop a measure of the disposition to seek out and consume science films and related science media for personal enjoyment. MAH and NAH make opposing predictions about what such a measure will show: NAH implies it will reveal that the taste for enjoyment of high-quality science films just is uneven in the general population, whereas MAH predicts that it will show that there are segments of the population whom high-quality science films are failing to engage notwithstanding those individuals' appetite to seek out and consume such material for personal enjoyment and edification.

3. What.  SCS is a standardized assessment instrument.  The idea behind it is to measure a latent or unobserved disposition to seek out and consume science-related material for personal enjoyment.

Such measures have a tortured history. Built on absurd self-report items, they invariably generate skewed results with no predictive validity. Many in the decision sciences had simply given up on the possibility of devising a valid science curiosity measure.

Our scale development strategy was geared to overcoming these difficulties. To avoid the “social desirability bias” associated with self-report measures, the scale embedded such items in a larger array of “interest” questions disguised as a social-marketing survey. It also used more reliable and objective behavioral and performance-based measures.

The resulting scale had very satisfactory psychometric qualities, meaning that its constituent items cohered in a manner that suggested they were measuring a real disposition that varied continuously and normally in the general population.

But most importantly of all, we were able to behaviorally validate it.  That is, we were able to show that in fact it did very powerfully predict who would engage with science documentary films and who wouldn’t.

4. Who. SCS can be used to assess the MAH/NAH dispute in a couple of different ways. First, one can examine the distribution of SCS across demographic groups of interest. NAH implies we should see disparities—among racial, age, gender, and ideological groups—that reflect observed audience disparities for premier science films. It doesn’t seem that we do; SCS is remarkably uniform across the population.

Second, we can try to use SCS to explain observed disparities in audiences for science films. Here there is at least limited support for the MAH. Individuals who are more hierarchical and individualistic, conservative, female, white, & older seem to be less engaged with at least certain films that we tested than one might expect. Why might that be the case? That is something that can be tested in additional experimental studies using SCS.

5. WTF? Now the “what the fuck” features of SCS . . . . There are two!

The first concerns the engagement with evolution science films of individuals who don’t believe in evolution. There is only a modest discrepancy in the science curiosity of individuals who do and don’t believe in evolution.  Moreover, we found that conditional on the same level of science curiosity, individuals who do and don’t believe in evolution have comparable levels of engagement with evolution-science films. They are not the missing audience!

The second WTF concerns the relationship between science curiosity and political information processing. We all know that Americans are polarized on a variety of science issues; many of us know that these divisions actually get worse as science comprehension increases. But it turns out, surprisingly, that these divisions don’t get worse as science curiosity goes up; instead they abate. When we observed this, we decided to do an experiment to investigate. What we found was that unlike most individuals, those high in SCS willingly sought out information that was contrary to their political predispositions. This result plausibly explains why they are less polarized in general and why polarization among them, unlike in others, doesn’t go up as their science comprehension increases.

This result highlights the likely synergies in using scientific methods to study science communication across domains. What one learns in one is likely to have unexpected payoffs in others. Actually, for that reason the payoffs should be expected, although what they’ll be will be a surprise.

That sort of surprise is exactly what moves science curious people.


Weekend update: "Note on Perverse Effect of Actively Open-minded Thinking" now "in press" in Research & Politics

This paper is now officially "forthcoming" in Research & Politics . . . 


On the complexity of Expressive Rationality

from Rationality and Belief in Evolution . . .

5. Addressing the complexity of expressive rationality 

5.1.  Two tiers of two forms of rationality

Human rationality is complex.  Instrumental rationality (maximizing goal/desire fulfillment) and epistemic rationality (how accurately beliefs map the world) can both be conceived as having two tiers (Stanovich, 2013). 

Following Elster (1983), a so-called thin theory of instrumental rationality evaluates only whether desire-fulfillment is being maximized given current desires. Its sole criterion of appraisal is whether the classic axioms of choice are being adhered to. But people aspire to rationality more broadly conceived (Elster, 1983; Stanovich, 2004). They want their desires satisfied, true, but they are also concerned about having the right desires. The instrumental rationality a person achieves must be contextualized by taking into account what actions signify about a person’s character (as someone who follows through on one’s plans, who is honorable and loyal, who respects the sanctity of nature, and so forth). Narrow instrumental rationality is thus sometimes sacrificed when one’s immediate desires compete with one’s higher commitments to being a particular kind of person (Stanovich, 2013).

Epistemic rationality has levels of analysis parallel to those of instrumental rationality. Coherence axioms and the probability rules supply a thin theory of epistemic rationality, one that appraises beliefs solely in terms of their contribution to accuracy. But because what one believes, no less than what one values, can signify the kind of person one is, a broader level of epistemic rationality places a constraint—one discussed under many different labels (symbolic utility, expressive rationality, ethical preferences, and commitment [Stanovich, 2004])—on truth seeking in certain contexts. Just as immediate desires can be subordinated to “higher ends” in the domain of instrumental rationality, so in the domain of epistemic rationality truth seeking can sometimes be sacrificed to symbolic ends.

5.2. Separating the rationality tiers from the irrationality chaff

These two tiers of instrumental and epistemic rationality make studying rationality complicated, too. How is one to know whether decisionmaking that deviates from the first tier of either instrumental or epistemic rationality is expressively rational on the second or is instead simply irrational? The conflict between what we referred to as the “bounded rationality” and “expressive rationality” theories of “disbelief” in evolution posed exactly that question.

The answer we supplied rests on a particular inferential strategy forged in response to the so-called Great Rationality Debate—the scholarly disagreement about how much human irrationality to infer from non-optimal responses on heuristics-and-biases tasks (Cohen, 1981; Gigerenzer, 1996; Kahneman & Tversky, 1996; Stanovich, 1999; Stein, 1996; Tetlock & Mellers, 2002). Some researchers have argued against inferring irrationality from nonoptimal responses in such experiments on the ground that the study designs evaluate subjects’ responses against an inapt normative model. The observed patterns of responses, these scholars argue, turn out not to be irrational at all once the subjects’ construal of the problem is properly specified and once the correct normative standard is applied (see Stanovich, 1999; Stein, 1996).

Spearman’s positive manifold—the fact that different measures of cognitive competence always correlate with each other (Carroll, 1993; Spearman, 1904)—can be used to assess when such an objection is sound (Stanovich, 1999; Stanovich & West, 2000). Indicators of cognitive sophistication (cognitive ability, rational thinking dispositions, age in developmental studies) should be positively correlated with the correct norm on a rational thinking task. If one observes a negative correlation between such measures and the modal response of the study subjects, then one is warranted in concluding that the experimenter was indeed using the wrong normative model to judge the rationality of the decisionmaking in question. For surely it is more likely that the experimenter was in error than the subjects were when the individuals with more computational power systematically selected the response that the experimenter regards as nonnormative.
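A minimal simulation may make this inferential logic concrete. Everything below is hypothetical—invented variable names, effect sizes, and data, not code or data from any study discussed here. It sketches the scenario in which the response the experimenter coded as "nonnormative" is in fact chosen more often by higher-ability subjects, the pattern that counsels doubting the normative model rather than the subjects:

```python
# Illustrative sketch of the Stanovich/West positive-manifold check.
# All names and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical standardized cognitive-ability scores.
ability = rng.normal(size=n)

# Scenario: the modal response scored as "nonnormative" is MORE likely
# among higher-ability subjects (1 = gave the modal response).
p_modal = 1 / (1 + np.exp(-ability))
gave_modal = rng.random(n) < p_modal

# Point-biserial correlation between ability and the modal response.
r = np.corrcoef(ability, gave_modal.astype(float))[0, 1]
print(f"r(ability, modal response) = {r:.2f}")

# Per the strategy in the text: a clearly positive correlation with the
# response the experimenter scored as "wrong" suggests the normative
# model, not the subjects, is at fault.
if r > 0.1:
    print("Suspect the experimenter's normative model.")
```

The same check run on a genuine bias (say, the gambler's fallacy) would instead show ability correlating negatively with the error, vindicating the norm.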

We used a variant of this strategy in weighing the evidence generated by our data analyses.  The magnification, rather than the dissipation, of conflict among those who scored highest on the CRT, we argued, furnishes a reason to be extremely skeptical of the conclusion that controversy over evolution can be chalked up to a deficit in one side’s capacity for “analytic thinking.”

In existing literature, this strategy has been applied at what might be termed the micro-level—that of applying a particular quantitative norm to a specific task.  The way we have interpreted our findings here might be viewed as applying the strategy at a macro-level, one that tries to understand what kind of rational reasoning the subject is engaged in: a narrow epistemic rationality of truth-seeking, or a broader one of identity signaling and symbolic affirmation of group identity.

5.3. The tragic calculus of expressive rationality

What choices and beliefs mean is intensely context specific. Part of what makes stripped-down “rational choice” models so appealing is that they ruthlessly prune away all these elements of the decisionmaking context. But the simplification, we’ve suggested, comes at a steep price: the mistaken conflation of all manner of expressively rational decisionmaking with behavior evincing genuine bias (Stanovich, 2013).

Accounts that efface expressive rationality are popular, however, not just because they are simple; they are attractive, too, because behavior that is expressively rational is often admittedly ugly. Among the “higher ends” to which people intent on experiencing particular identities have historically subordinated their immediate material desires are spite, honor, and vengeance, not to mention one or another species of group supremacy.

Clearly, it would be obtuse to view all expressive desires and beliefs as malicious. But as Stephen Holmes (1995), Albert Hirschman (1977), Steven Pinker (2011), and others have taught us, there was genuine wisdom in the Enlightenment-era project to elevate the moral status of self-interest as a civilizing passion distinctively suited for extinguishing the sources of selfless cruelty (Holmes, 1995, p. 48) that marked human relations before the triumph of liberal market institutions.

The species of expressive rationality to which we have linked disbelief in evolution should fill us with a certain measure of moral trepidation as well.  It is, we’ve explained, individually rational, in an expressive sense, for persons to be guided by the habits of mind that conform their beliefs on culturally disputed issues to ones that predominate in their group. But when all individuals do this all at once, the results can be collectively disastrous.  In such circumstances, citizens of pluralistic self-governing societies are less likely to converge, or converge nearly so quickly, on the best available evidence on societal risks that genuinely threaten them all.  What’s more, their public discourse is much more likely to be polluted with the sort of recrimination and contempt characteristic of public stance-taking on factual claims that have become identified with the status of contending cultural groups (Kahan et al., 2016).

These predictable consequences, however, will do nothing to diminish the psychic incentives that make it individually rational to process information in an expressive fashion.  Only disentangling positions on facts from identity-expressive meanings—and thus counteracting the incentives that rational persons of all outlooks have to adopt opposing expressive stances to protect their cultural identities—can extricate them from this sort of collective action dilemma (Lessig, 1996; Kahan, 2015a, 2015b).

The sort of analysis presented in this paper is intended to aid in that process.  Exposing the contribution that expressive rationality makes to one specific instance of this public-reason pathology not only helps to inform those committed to dispelling it. It also helps clear the air of the toxic meme that such conflict is a product of one side or the other’s “bounded rationality” (Stanovich & West, 2007, 2008; Kahan, Jamieson et al., 2016). 


WSMD? JA! Cultural outlooks & science curiosity

This is approximately the 6,533rd episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

So a colleague gave a presentation in which an audience member asked what the relationship was between science curiosity and cultural worldviews.

Well, here's a couple of ways to look at that:

From this perspective, it's clear that science curiosity is pretty normally distributed in all the cultural worldview quadrants.  They will all have a mix of types, some of whom really want to watch Your Inner Fish & others of whom would prefer to watch Hollywood Rundown.

But if one bears down a bit, one sees this:

The distributions aren't perfectly aligned. And while it's obviously pretty unusual to be in the 90th percentile or above for any "group," Egalitarian Communitarians, about 15% of whom score that high, are over 2x as likely to have an SCS score above that threshold as either a Hierarch Individualist or Hierarch Communitarian.

This is a bit greater than the disparity one sees for gender (men are about 2x more likely to score at or above the 90th percentile on SCS) and noticeably greater than the disparity one observes in relation to religiosity (secular individuals are about 1.6x more likely to score at or above the 90th percentile than are religious individuals).

Is this significant in practical terms? I'm really not sure.  

We know that SCS scores predict greater engagement with science entertainment material and also greater willingness to expose oneself to information that is contrary to one's political predispositions on an issue like climate change.

But I don't feel I have enough experience yet with SCS to say what the score "thresholds" or "cutoffs" are that make a big practical difference, and hence enough experience to say what sorts of disparities in science curiosity matter for what ends.

I'm curious about these things, and about what explains disparities of this sort.

How about you? 


More on (non)relationship between disgust & perceived risks of vaccines & GM foods

Yesterday I presented some evidence that vaccine attitudes are unrelated to disgust. Today I’ll present some more.

Yesterday’s evidence consisted of a comparison of how disgust sensibilities relate to support for the policy of universal vaccination, on the one hand, and how they relate to a bunch of other policies one would expect either to be disgust driven or completely unrelated to disgust.

It turned out that the disgust-vaccine relationship was much more like the relationship between disgust and policies unaffected by disgust sensitivity—like campaign finance reform and tax increases—than like the relation between disgust and policies like gay marriage and legalization of prostitution. Which is to say, there really wasn’t any meaningful relationship between disgust and attitudes toward mandatory vaccination at all.

Today’s post will use a similar strategy to probe the link (or lack thereof) between disgust and vaccine risk perceptions.

To measure disgust sensitivity, we’ll again use the conventional “pathogen disgust” scale, which other researchers have reported to be correlated—although only weakly and unevenly—with vaccine attitudes.

To measure vaccine risk perceptions, we’ll use the trusty (indeed, some would say miraculously discerning) Industrial Strength Risk Perception Measure.

The ISRPM solicits subjects’ appraisals of “how serious” a risk is on a 0-7 scale. It has been shown to be highly correlated with more fine-grained appraisals of putative risks and even with risk-taking behaviors.

There is a correlation between perceptions of the risk of childhood vaccines, measured with the ISRPM, and the pathogen disgust scale. It is r = 0.17. 

Is that big? I don’t think so.

But the more important point is that it is smaller than the correlation between the disgust scale and a host of other risk perceptions relating to activities that no one would think have anything to do with disgust.

These include airplane crashes, elevator accidents, kids drowning in swimming pools, and mass shootings.

The correlation between vaccine risks and disgust sensitivities was about the same as the correlation between disgust sensitivities and fear of artificial intelligence and of workplace accidents.

Again, no one believes that these other concerns are driven by disgust.  They are just a random collection of risk perceptions that are kind of odd. 

Since it’s not plausible to see the correlation between these ISRPMs and the pathogen disgust scale as evidence that differences in disgust sensitivities explain variance in fear of falling down elevator shafts, of getting impaled by a broken-off aileron from an exploding DC-10, of having one’s car appropriated by a gun-wielding, meth-infused maniac, or of seeing a drowned toddler floating in a swimming pool, we shouldn’t take the correlation between the vaccine ISRPM and the pathogen disgust scale as evidence that differences in people’s disgust sensitivities explain variance in perceptions of vaccine risks either.

In an earlier post I showed that this random assortment of ISRPMs forms a scale, which I proposed to call the “scaredy-cat” or SCAT index. The SCAT index measures a random-ass (sorry for the technical jargon) propensity to worry about things generally.

That makes SCAT a nice validator, or test index. If anyone asserts that something explains variance in a risk perception, it had better explain variance in that risk perception better than SCAT does; otherwise we’ll have no more reason to believe that the thing in question explains variance than that nothing in particular besides an undifferentiated propensity to worry does.

Well, when SCAT goes head to head with disgust, it blows it away, on both vaccine risk perceptions and genetically modified food risk perceptions.

When they are both modeled as predictors of vaccine risk perceptions, the effect size of the SCAT predictor is 9x as big as that of the pathogen-disgust predictor.

And guess what? Its effect size (measured in terms of respective squared semi-partial correlations; see Cohen et al. 2003, pp. 72-74) is 4x as big as the effect size of the disgust scale when the two are treated as predictors of GM food risk perceptions.

That’s strong evidence that neither of these risk perceptions is explained in any meaningful way by disgust.
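For readers who want to see the mechanics of that comparison, here is a sketch using simulated data (the variable names, effect sizes, and sample are invented for illustration; this is not the CCP dataset or analysis). A predictor's squared semi-partial correlation is the drop in model R² when that predictor is removed from the full regression (Cohen et al. 2003):

```python
# Hypothetical illustration of comparing predictors via squared
# semi-partial correlations. Simulated data, not the CCP data.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

scat = rng.normal(size=n)                  # general "scaredy-cat" worry
disgust = 0.3 * scat + rng.normal(size=n)  # modestly correlated with SCAT
# Vaccine risk perception driven mostly by SCAT, barely by disgust.
vax_risk = 0.6 * scat + 0.05 * disgust + rng.normal(size=n)

def r2(y, predictors):
    """R-squared from an OLS fit of y on the predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r2(vax_risk, [scat, disgust])
# Squared semi-partial correlation = unique R^2 each predictor adds.
sr2_scat = full - r2(vax_risk, [disgust])
sr2_disgust = full - r2(vax_risk, [scat])
print(f"sr2 SCAT = {sr2_scat:.3f}, sr2 disgust = {sr2_disgust:.3f}")
```

In a setup like this, nearly all of disgust's raw correlation with the outcome is absorbed by SCAT, which is the pattern the post describes in the real data.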

There's at least one very well done & interesting empirical study finding a correlation between vaccine & GM food attitudes & disgust sensibilities (Clifford & Wendell 2015).

But to conclude that disgust “explains” variance in a risk perception, one has to show more than that the risk perception in question correlates with disgust. One has to show that it correlates with disgust (validly measured) more powerfully than do risk perceptions that clearly have zilch to do with disgust.

Based on this evidence and that featured in my earlier post, I'm now of the view that that can’t be done in the case of vaccine and GM food risk perceptions. 


Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (L. Erlbaum Associates, Mahwah, N.J., 2003).

 Clifford, S., & Wendell, D. G. (2015). How Disgust Influences Health Purity Attitudes. Political Behavior, 1-24. doi: 10.1007/s11109-015-9310-z



More on the rationality of (dis)belief in evolution

Yesterday I posted a new paper coauthored by me and Keith Stanovich of the University of Toronto. The paper presented data showing that public controversy in the U.S. over the reality of human evolution is best accounted for by a theory of expressive rationality. Today I’ll say a bit about what that claim means.

The idea that expressive rationality explains controversy over evolution is an alternative to another position, which sees the controversy as originating in bounded rationality.

All manner of cognitive miscue, it’s now clear, is rooted in the tendency of people to rely overmuch on heuristic information processing, which is rapid, intuitive, and affect driven (Kahneman & Frederick 2005).  

What we call the “bounded rationality theory of disbelief”—or BRD—seeks to assimilate rejection of the theory of human evolution to this species of reasoning. Because our lives are filled with functional systems designed to operate that way by human beings, we naturally intuit, the argument goes, that all functional “objects in the world, including living things,” must have been “intentionally designed by some external agent” (Gervais 2015, p. 313).

It’s hard for people to resist that intuition—in the same way that it’s hard for them to stifle the expectation that tails is “due” after three consecutive tosses of heads (the “gambler’s fallacy”) or to suppress the conviction that the outcome of a battle was foreordained once they know how it ended (“hindsight bias”).

Only those who are proficient in checking intuition with conscious, effortful information processing are likely to be able to overcome it.

Well, this is a plausible enough conjecture.  Indeed, BRD proponents have supported it with evidence—namely, data showing a positive correlation between belief in evolution and scores on the Cognitive Reflection Test (Frederick 2005), a critical reasoning assessment that measures the disposition of individuals to interrogate intuitions in light of available data.

But this evidence doesn’t in fact rule out an alternative hypothesis, which we call the “expressive rationality theory of disbelief” or “ERD.”

ERD assimilates conflicts over evolution to cultural conflicts over empirical issues such as the reality of climate change, the safety of nuclear power, and the impact of gun control.

Positions on these issues have become suffused with antagonistic social meanings, turning them into badges of membership in and loyalty to competing groups.  Under such circumstances, we should expect individuals not only to form beliefs that protect their standing within their groups but also to use all the cognitive resources at their disposal, including their capacity for conscious effortful information processing, to do so.

And that’s what we do see on issues like climate change, nuclear power, and guns, where higher CRT scores are associated with even greater cultural polarization (Kahan 2015).

ERD predicts that that’s what we should see on beliefs about evolution, too. Positions on evolution, like positions on climate change, nuclear power, guns, etc., signify what sort of person one is and whose side one is on in incessant cultural status competition, this one between people who vary in their level of religiosity. Accordingly, the individuals who are most proficient in critical reasoning—the ones who score highest on the Cognitive Reflection Test—should be the most polarized along religious lines over the reality of human evolution.

That’s the test that needs to be applied, then, to figure out whether public controversy over evolution, like the controversies over these other issues, is an expression of individuals’ stake in forming identity-expressive beliefs or instead a consequence of their overreliance on heuristic information processing.

BRD needn’t be seen as implying the silly claim that “culture doesn’t matter” for beliefs about evolution. But if it’s true that “individuals who are better able to analytically control their thoughts are more likely to eventually endorse evolution’s role in the diversity of life and the origin of our species” (Gervais 2015, p. 321), then relatively religious individuals who score high on the CRT should be more inclined to believe in evolution than those who score low on that assessment.

If, in contrast, individuals are using all the cognitive resources at their disposal to form identity-congruent beliefs on evolution, those highest in CRT should be the most divided on the reality of human evolution.

That’s what we found in our empirical tests.

These tests included both a re-analysis of the data that BRD proponents had relied on and an analysis of data from an independent, nationally representative sample.

In both sets of analyses, higher CRT scores did not uniformly predict greater belief in evolution. Rather, they did so only conditional on holding a relatively secular or nonreligious cultural style. For individuals who were more religious, in contrast, higher CRT scores were associated with either no change or even a slight intensification (in the national sample) of resistance to belief in evolution.

As a result, polarization intensified as CRT scores increased.
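The ERD-predicted interaction can be illustrated with a toy simulation. To be clear, every coefficient and group split below is invented for illustration; this is not our data, our model, or our analysis. In the simulated world, CRT raises belief in evolution among secular respondents while leaving religious respondents flat or slightly more resistant, so the secular-religious gap widens as CRT goes up:

```python
# Hypothetical sketch of an ERD-style CRT-by-religiosity interaction.
# Simulated data only; all parameters invented.
import numpy as np

rng = np.random.default_rng(2)
n = 5000

crt = rng.integers(0, 4, size=n)    # CRT score, 0-3
religious = rng.random(n) < 0.5     # hypothetical 50/50 split

# Identity-congruent beliefs: CRT raises belief for secular respondents
# and slightly lowers it for religious ones (the posited interaction).
logit = -0.2 + np.where(religious, -0.4 * crt - 1.0, 0.6 * crt + 0.5)
believe = rng.random(n) < 1 / (1 + np.exp(-logit))

for grp, mask in [("secular", ~religious), ("religious", religious)]:
    lo = believe[mask & (crt == 0)].mean()
    hi = believe[mask & (crt == 3)].mean()
    print(f"{grp:9s}: belief at CRT=0 -> {lo:.2f}, at CRT=3 -> {hi:.2f}")

# Polarization (secular minus religious belief rate) grows with CRT.
gap = [believe[~religious & (crt == k)].mean()
       - believe[religious & (crt == k)].mean() for k in range(4)]
print("gap by CRT score:", [f"{g:.2f}" for g in gap])
```

A BRD-style simulation would instead give both groups a positive CRT slope, shrinking the gap at high CRT—which is the contrast the empirical tests exploit.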

In the paper, we relate these findings to the inherent complexity of rationality, which seeks to maximize not only the accuracy of beliefs but also their compatibility with people’s self-conceptions, a matter Keith has written about extensively (e.g., Stanovich 2004, 2013).

I’ll say more about that “tomorrow.”


Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Gervais, W. Override the Controversy: Analytic Thinking Predicts Endorsement of Evolution, Cognition 142, 312-321 (2015).

Kahan, D.M. Climate-Science Communication and the Measurement Problem, Advances in Political Psychology 36, 1-43 (2015).

Kahneman, D. & Frederick, S. A model of heuristic judgment. in The Cambridge Handbook of Thinking and Reasoning (eds. K.J. Holyoak & R.G. Morrison) 267-293 (Cambridge University Press, 2005).

Stanovich, K.E. The Robot's Rebellion : Finding Meaning in the Age of Darwin (Univ. Chicago Press, Chicago, 2004).

Stanovich, K.E. Why Humans Are (Sometimes) Less Rational Than Other Animals: Cognitive Complexity and the Axioms of Rational Choice, Thinking & Reasoning 19, 1-26 (2013).


On the rationality of (dis)belief in evolution -- new paper

More on this anon . . . .

Rationality and Belief in Human Evolution


Dan M. Kahan
Yale University

Keith E. Stanovich
University of Toronto


This paper examines two opposing theories of disbelief in evolution. One, the “bounded rationality” account, attributes disbelief to the inability of individuals to suppress the strongly held intuition that all functional systems, including living beings, originate in intentional agency. The other, the “expressive rationality” account, holds that positions on evolution arise from individuals’ tendency to form beliefs that signal their membership in and loyalty to identity-defining cultural groups. To assess the relative plausibility of these theories, the paper analyzes data on the relationship between study subjects’ beliefs in evolution, their religiosity, and their scores on the Cognitive Reflection Test (CRT), a measure of critical-reasoning proficiencies including the disposition to interrogate intuitions in light of available evidence.  Far from uniformly inclining individuals to believe in evolution, higher CRT scores magnified the division between relatively religious and relatively nonreligious study subjects.  This result was inconsistent with the bounded rationality theory, which predicts that belief in evolution should increase in tandem with CRT scores for all individuals, regardless of cultural identity.  It was more consistent with the expressive rationality theory, under which individuals of opposing cultural identities can be expected to use all the cognitive resources at their disposal to form identity-congruent beliefs. The paper discusses the implications for both the study of public controversy over evolution and the study of rationality and conflicts over scientific knowledge generally.




Does disgust drive anti-vax sentiment? Doesn't look like it to me . . . .

Is disgust a source of popular opposition to universal childhood immunization?

This is a familiar contention.

But there’s not much evidence for it.

The principal basis for the claim consists in impressionistic reconstructions of popular and historical sources.

Some thoughtful researchers have also presented empirical data (Clifford & Wendell 2015). Although interesting and suggestive, these data show only a weak, uneven correlation between popular attitudes toward vaccination & disgust sensitivity.

What’s more, the study in which these data were presented didn't examine the relationship between disgust sensibilities and attitudes toward any other issues.

If disgust “drives” antivax sentiment, then presumably disgust’s influence on vaccine attitudes should “look like” its influence on other policy and risk attitudes that we have good reason to think are disgust driven. By “look like” I mean that its effect should be comparably strong and in the same direction as the impact of disgust on those attitudes and policies.

By the same token, if disgust is a meaningful influence on vaccine attitudes, then the relationship between the two shouldn’t “look like” the relationship, or basically nonrelationship, that exists between disgust and policy and risk attitudes that we have good reason to think aren’t disgust driven.

This sort of external validation test is essential given how spotty the reported correlations are between disgust sensitivities and vaccine attitudes.

Well, some colleagues and I collected data that enable this sort of evaluation. In my view, it weighs strongly against the asserted disgust-antivax thesis.

There are more data than I’ll present today, but for a start, consider how disgust relates to support for the policy of mandatory universal childhood immunization.

To measure disgust, we used the conventional “pathogen disgust” scale, which other researchers (Clifford & Wendell 2015) have reported to be correlated with vaccine attitudes.

To measure subjects’ attitudes toward mandatory universal childhood immunizations, we asked them to tell us on a six-point scale how strongly they supported or opposed “requiring children who are not exempt for medical reasons to be vaccinated against measles, mumps, and rubella.”

To enable the comparison that I described, we also measured how strongly subjects supported or opposed a collection of other policies that one would expect to be either related or unrelated to disgust sensitivities.

In relation to the former, we observed the expected result. Disgust sensitivities (modestly) predicted opposition to gay marriage and legalization of prostitution.

They also predicted support for making Christianity the “official religion” of the US and for imposing the death penalty for murder, policies that reflect moral evaluations—“purity” in connection with the former and “punitiveness” in relation to the latter (e.g., Stevenson et al. 2015)—that are understood to have a nexus with disgust.

Likewise we observed that disgust sensitivities were inert in relation to policies one would expect not to be related to disgust. There was no meaningful relationship between disgust, e.g., and support for raising taxes for the wealthiest Americans, for legalizing on-line poker, or for amending the Constitution to permit prohibiting corporate campaign contributions.

Okay, then.  So what about universal mandatory vaccination?

Well, contrary to the disgust-antivax thesis, it turned out that there was no meaningful relationship between support for or opposition to that policy and disgust, as reflected in this standard measure. Indeed, the very small effect we observed was in the opposite direction from the one that thesis posits—that is, as disgust sensitivities increased, so did support for universal immunization, although by a factor no serious researcher would take seriously (r = 0.07, p < 0.05).

In sum, the relationship between disgust sensitivities and vaccine policy attitudes “looks” identical to the relationship between disgust and disgust-unrelated policies, and nothing like the relationship between disgust and disgust-related ones. Not what one would expect to see in the evidence if in fact the disgust-antivax hypothesis were correct.

There’s more, as I said. I’ll get to it “tomorrow.”

But if disgust doesn’t drive antivax sensibilities, what does?

The answer, I think, is that nothing systematically does.

Contrary to the popular media trope, there is tremendous support for mandatory vaccination in the US (Kahan 2016; CCP 2014; Kahan 2013)—a point I’ve stressed repeatedly in this blog & that is reaffirmed by the 80%-level of support reflected in the policy item featured here.

As also emphasized a zillion times, this level of support is uniform across cultural and political and religious groups of all descriptions. Among the groups that bitterly disagree on issues like climate change and evolution, there is consensus that universal immunization against common childhood diseases is a great idea.

There are segments of society who feel otherwise. But they are small in number and, more importantly, consist of people who are outliers in any cultural group of which they are a part.

This makes vaccine hesitancy a “boutique” risk perception—one that is held only by fringe elements for reasons that have no wider resonance with the groups of which those individuals are a part & in which risk perceptions normally take shape.

For that reason, what “drives” anti-vaccine sentiment will always evade detection by broad-based survey techniques.

To help address the problem of vaccine hesitancy—and it is a problem, even if it is confined to opinion-group fringes and geographic enclaves—researchers shouldn’t be using survey methods but should instead be using more fine-grained tools like behaviorally validated screening instruments (Opel et al. 2013).

This is one of the points made in an excellent recent report by the Department of Health and Human Services’ National Vaccine Advisory Committee (2015).

Researchers should read it. Everyone else should, too.  



Clifford, S., & Wendell, D. G. (2015). How Disgust Influences Health Purity Attitudes. Political Behavior, 1-24. doi: 10.1007/s11109-015-9310-z

Cultural Cognition Project Lab. Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Analysis. CCP Risk Studies Report No. 17 (2014).

Horberg, E., Oveis, C., Keltner, D., & Cohen, A. B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology, 97(6), 963-.

Kahan, D. M. (2013). A Risky Science Communication Environment for Vaccines. Science, 342(6154), 53-54. doi: 10.1126/science.1245724

Kahan, D.M. Protecting the Science Communication Environment: the Case of Childhood Vaccines, CCP Working Paper No. 244, (2016).

National Vaccine Advisory Committee, Assessing the State of Vaccine Confidence in the United States: Recommendations from the National Vaccine Advisory Committee (2015).

Opel, D. J., Taylor, J. A., Zhou, C., Catz, S., Myaing, M., & Mangione-Smith, R. (2013). The relationship between parent attitudes about childhood vaccines survey scores and future child immunization status: A validation study. JAMA Pediatr, 167(11), 1065-1071. doi: 10.1001/jamapediatrics.2013.2483

Stevenson, M. C., Malik, S. E., Totton, R. R., & Reeves, R. D. (2015). Disgust Sensitivity Predicts Punitive Treatment of Juvenile Sex Offenders: The Role of Empathy, Dehumanization, and Fear. Analyses of Social Issues and Public Policy, 15(1), 177-197. doi: 10.1111/asap.12068





Leveraging science curiosity ... a fragment

People keep asking me, "How can we increase science curiosity to counter polarization?!" I dunno. We need more studies to figure that out. But my hunch is that we are likely better off trying to figure out, with more studies, how to leverage science curiosity--that is, how to get the widest possible benefit we can in public discourse out of the contributions that "naturally" science curious people make to it.... From our paper "Science Curiosity and Political Information Processing" (in press, Advances in Pol. Psych.):

5. Now what?

We believe the data we’ve presented paint a surprising picture.  The successful construction of a psychometrically sound science curiosity measure—even one with the constrained focus of the scale described in this paper—might already have seemed improbable. Much more improbable, however, would have been the prospect that such a disposition, in marked contrast to others integral to science comprehension, would offset rather than amplify politically biased information processing. Our provisional explanation (the one that guided the experimental component of the study) is that the intrinsic pleasure that science curious individuals uniquely take in contemplating surprising insights derived by empirical study counteracts the motivation most partisans experience to shun evidence that would defeat their preconceptions. For that reason science curious individuals form a more balanced, and across the political spectrum a more uniform, understanding of the significance of such information on contested societal risks.

We stress, however, the provisionality of these conclusions. It ought to go without saying that all empirical findings are provisional—that valid empirical evidence never conclusively “settles” an issue but instead merely furnishes information to be weighed in relation to everything else one already knows and might yet discover in future investigations. In this case in particular, moreover, the novelty of the findings and the formative nature of the research from which they were derived would make it reasonable for any critical reader to demand a regime of “stress testing” before she treats the results as a basis for substantially reorganizing her understanding of the dynamics of political information processing.

Obviously, the same measures and designs we have featured can and should be applied to additional issues. But potentially even more edifying, we believe, would be the development of additional experimental designs that would furnish more reason to credit or to discount the interpretation of the data we’ve presented here. We describe the basic outlines of some potential studies of that sort.

* * *

5.3. Science communication

Also worthy of further study is the significance of science curiosity for effective science communication.  We have presented evidence that science curiosity negates the defensive information processing characteristic of politically motivated reasoning (PMR). If this is correct, we can think of at least two implications worthy of further study.

The most obvious concerns the possibility of promoting greater science curiosity in the general population.  If in fact science curiosity does negate the polarizing effects of PMR, then it should be regarded as a disposition essential to good civic character, and cultivated self-consciously among the citizens of the Liberal Republic of Science so that they may enjoy the benefit of the knowledge their way of life makes possible (Kahan 2015b).

This is easier said than done, however. Indeed, much, much easier. As difficult as the project to measure science curiosity has historically proven to be, the project to identify effective teaching techniques for inculcating it and other dispositions integral to science comprehension has proven many times as complicated. There’s no reason not to try, of course, but there is good reason to doubt the utility of simply admonishing educators and others to “promote” science curiosity as a remedy for the myriad deleterious consequences that PMR poses to the practice of enlightened self-government.  If people knew how to do this, they’d have done it already.

Better, we suspect, would be to furnish science communicators with concrete guidance on how to get the benefit of that quantum of science curiosity that already exists in the general population (Jamieson & Hardy 2014).  This objective is likely to prove especially important if the cognitive-dualism account of how science curiosity counters PMR proves correct.  This account, as we have emphasized, stresses that individuals can use their reason for two ends—to form beliefs that evince who they are, and to form beliefs that are consistent with the best available scientific evidence.  They are more likely to do the latter, though, when there isn’t a conflict between the two; indeed, many of the difficulties in effective science communication, we believe, are a consequence of forms of communication that needlessly put people in the position of having to choose between using their reason to be who they are and using it to know what is known by science—a dilemma that individuals understandably tend to resolve in favor of the former goal (Kahan 2015a). To avoid squandering the value that open-minded, science curious citizens can contribute to political discourse and to the broader science communication environment, science communicators should scrupulously avoid putting them in that position.

Indeed, helping science filmmakers learn how to avoid inadvertently putting science curious individuals to that choice is one of the aims of the research project that generated the findings reported in this paper. If we are right about science curiosity and PMR, then this is an objective that science communicators in the political realm must tackle too. 


Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm, 14, 1-12 (2015b).