Does reliance on heuristic information processing predict religiosity? Yes, if one is a liberal, but not so much if one is a conservative . . . 

A colleague and I were talking about the relationship between religiosity, conservativism, and scores on the Cognitive Reflection Test (CRT), and poking around in our data as we did so, and something kind of interesting popped out.

It’s generally accepted that religiosity is associated with greater reliance on heuristic (System 1) as opposed to conscious, effortful (System 2) information processing (Gervais & Norenzayan 2012; Pennycook et al. 2012; Shenhav, Rand & Greene 2012).

But it turns out that that effect is conditional, at least to a fairly significant extent, on political outlooks!

That is, among liberals there is a strong negative association between the disposition to use conscious, effortful information processing—as measured by the CRT—and religiosity.

But the story is different for conservatives. For them, there isn’t much of a relationship at all between the disposition to use System 2 vs. System 1 information processing and religiosity; the most reflective—the ones who score highest on the CRT—are about as committed to religion as those who are most disposed to rely on heuristic information processing.
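For the curious, the basic shape of such an analysis can be sketched in a few lines. This is simulated data with invented effect sizes, not the CCP sample; the point is just that a conditional relationship of this kind shows up as a positive CRT × conservatism interaction in a regression of religiosity:

```python
# Illustrative simulation (invented coefficients, NOT the CCP data):
# religiosity declines with CRT for liberals but not conservatives,
# which surfaces as a positive CRT x conservatism interaction term.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
crt = rng.normal(size=n)        # cognitive-reflection disposition (standardized)
conserv = rng.normal(size=n)    # political outlooks, left to right (standardized)

# Simulated pattern: CRT slope is about -0.6 at conserv = -1 (liberals)
# and about 0 at conserv = +1 (conservatives).
relig = 0.4 * conserv - 0.3 * crt + 0.3 * crt * conserv + rng.normal(size=n)

X = np.column_stack([np.ones(n), crt, conserv, crt * conserv])
b, *_ = np.linalg.lstsq(X, relig, rcond=None)
print(b)  # [intercept, CRT, conservatism, CRT x conservatism]
```

A negative CRT coefficient together with a positive interaction coefficient is the statistical signature of “CRT predicts lower religiosity, but mainly for liberals.”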

Jeez, what do the 14 billion readers of this blog make of this??


Click for your raw data, plus regression output--essential elements of a balanced statistical diet!

1. As per usual, I measured political outlooks with a standardized scale comprising the (standardized) sums of a 5-point liberal-conservative ideology item and a 7-point partisan-identification item (alpha = 0.78); and “religiosity” with a standardized scale comprising the (standardized) sum of a 4-point importance-of-religion item, a 6-point frequency-of-church-attendance item, and a 7-point frequency-of-prayer item (alpha = 0.88).

2. CRT had a correlation of r = 0.00 with Left_right, which is consistent with what studies using nationally representative samples tend to find (Kahan 2013; Baron 2015).
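For anyone who wants to see mechanically what footnote 1 describes, here’s a hedged sketch of forming a standardized composite from three religiosity items. The item responses are made up, and the `cronbach_alpha` function is the standard textbook formula, not necessarily the exact routine that produced the reported alphas:

```python
# Sketch of footnote 1's scale construction, using simulated items.
import numpy as np

def standardize(x):
    """Center to mean 0 and scale to SD 1."""
    return (x - x.mean()) / x.std(ddof=0)

def cronbach_alpha(items):
    """items: (n_respondents, k_items). Standard textbook formula."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=500)                  # latent religiosity
importance = latent + rng.normal(0, 0.5, 500)  # 4-pt importance item
attendance = latent + rng.normal(0, 0.5, 500)  # 6-pt attendance item
prayer = latent + rng.normal(0, 0.5, 500)      # 7-pt prayer item

items = np.column_stack([standardize(importance),
                         standardize(attendance),
                         standardize(prayer)])
religiosity = standardize(items.sum(axis=1))   # the composite scale

print(round(cronbach_alpha(items), 2))         # high alpha, as in fn. 1
```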



Baron, J. Supplement to Deppe et al. (2015). Judgment and Decision Making 10, 2 (2015).

Gervais, W.M. & Norenzayan, A. Analytic thinking promotes religious disbelief. Science 336, 493-496 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Pennycook, G., Cheyne, J.A., Seli, P., Koehler, D.J. & Fugelsang, J.A. Analytic cognitive style predicts religious and paranormal belief. Cognition 123, 335-346 (2012). 

Shenhav, A., Rand, D.G. & Greene, J.D. Divine intuition: cognitive style influences belief in God. Journal of Experimental Psychology: General 141, 423 (2012).



Law & Cognition 2016, Session 2 recap: Models--a start

1. Bayesian information processing (BIP). In BIP, the factfinder is treated as determining facts in a manner consistent with Bayes’s theorem. Bayes’s theorem specifies the logical process for combining or aggregating probabilistic assessments of some hypothesis. One rendering of the theorem is prior odds x likelihood ratio = posterior odds. “Prior odds” refer to one’s initial or current assessment, and “posterior odds” one’s revised assessment, of the likelihood of the proposition. The “likelihood ratio” is how much more consistent a piece of information or evidence is with the hypothesis than with the negation of the hypothesis. By way of illustration:

Prior odds. My prior assessment that Lance Headstrong used performance-enhancing drugs is 0.01 or 1 chance in 100 or 1:99.

Likelihood ratio. I learn that Headstrong has tested positive for performance-enhancing drug use. The test is 99% accurate. Because 99 of 100 drug users, but only 1 of 100 nonusers, would test positive, the positive drug test is 99 times more consistent with the hypothesis that Headstrong used drugs than with the contrary hypothesis (i.e., that he did not).

Posterior odds. Using Bayes’s theorem, I now estimate that the likelihood Headstrong used drugs is 1:99 x 99 = 99:99 = 1:1 = 50%. Why? Imagine we took 10,000 people, 100 of whom (1%) we knew used performance-enhancing drugs and 9,900 of whom (99%) we knew had not. If we tested all of them, we’d expect 99 of the users to test positive (0.99 x 100), and 99 nonusers (0.01 x 9,900) to test positive as well. If all we knew was that a particular individual among the 10,000 tested positive, we would know that he or she was either one of the 99 “true positives” or one of the 99 “false positives.” Accordingly, we’d view the probability that he or she was a true user as 50%.
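The arithmetic is easy to check. A minimal sketch of the update, using the numbers from the example above:

```python
# Bayes's theorem in odds form: posterior odds = prior odds x LR.
def bayes_update(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1 / 99            # 1:99 that Headstrong used drugs
lr = 99                        # positive test: 99x more consistent with use
posterior_odds = bayes_update(prior_odds, lr)

print(round(odds_to_prob(posterior_odds), 6))  # 0.5 -- the 50% figure above
```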

In practical terms, you can think of the likelihood ratio as the weight or probative force of a piece of evidence.  Evidence that supports a hypothesis will have a likelihood ratio greater than one; evidence that contradicts a hypothesis will have a likelihood ratio less than one (but still greater than zero). When the likelihood ratio associated with a piece of information equals one, that information is just as consistent with the hypothesis as it is with the negation of the hypothesis; or in practical terms, it is irrelevant.

Fig. 1. BIP. Under BIP, the decisionmaker combines his or her existing estimation with new information in the manner contemplated by Bayes’s theorem—that is, by multiplying the former (expressed in odds) by the likelihood ratio associated with the latter and treating the product as his or her new estimate. Note that the value of the prior odds for the hypothesis and the likelihood ratio for the new evidence are presupposed by Bayes’s theorem, which merely instructs the decisionmaker how to combine the two.

2. Confirmation bias (CB). CB refers to a tendency to selectively credit or dismiss new evidence in a manner supportive of one’s existing beliefs. Accordingly, when displaying CB, a person who considers the probative value of new evidence is precommitted to assigning to new information a likelihood ratio that “fits” his or her prior odds—that is, a likelihood ratio that is greater than one if he or she currently thinks the hypothesis is true or a likelihood ratio that is either one or less than one if he or she currently thinks the hypothesis is false. So imagine I believe the odds are 100:1 that Headstrong used steroids. You tell me that Headstrong was drug tested and ask me if I’d like to know the result, and I say yes. If you tell me that he tested positive, I will assign a likelihood ratio of 99 to the test (because it has an accuracy rate of 0.99), and conclude the odds are therefore now 9900:1 that Headstrong used drugs. However, if you tell me that Headstrong tested negative, I will conclude that you are a very unreliable source of information, assign your report of the test results a likelihood ratio of 1, and thereby persist in my belief that the likelihood Headstrong is a user is 100:1. Note that CB is not contrary to BIP, which has nothing to say about what the likelihood ratio is associated with a piece of information. But unless a person has a means of determining the likelihood ratio for new evidence that is independent of his or her priors, that person will never correct a mistaken estimation—even if he or she is supplied with copious amounts of evidence and religiously adheres to BIP in assessing it.
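Here’s a toy rendering of CB (my own caricature; the particular LR values are invented). The decisionmaker assigns likelihood ratios to fit his or her priors, so even ten straight negative drug tests leave the estimate exactly where it started:

```python
# Confirmation bias as a likelihood-ratio-assignment rule.
def cb_likelihood_ratio(prior_odds, evidence_supports_guilt):
    """Credit congenial evidence; dismiss (LR = 1) the rest."""
    if prior_odds > 1:        # currently believes Headstrong used drugs
        return 99 if evidence_supports_guilt else 1
    else:                     # currently believes he did not
        return 1 if evidence_supports_guilt else 1 / 99

odds = 100.0                  # prior: 100:1 that Headstrong used drugs
for _ in range(10):           # ten negative drug tests in a row...
    odds *= cb_likelihood_ratio(odds, evidence_supports_guilt=False)

print(odds)  # 100.0 -- the mistaken estimate is never corrected
```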


Fig. 2. CB. “Confirmation bias” can be thought of as a reasoning process in which the decisionmaker determines the likelihood ratio for new evidence in a manner that reinforces (or at least does not diminish) his or her prior odds. Such a person can still be seen to be engaged in Bayesian updating, but since new information is always given an effect consistent with what he or she already believes, the decisionmaker will not correct a mistaken estimate, no matter how much evidence the person is supplied.

3. Story telling model (STM) & motivated reasoning (MR). Using the BIP framework, one can understand STM and MR as supplying a person’s prior odds (another thing simply assumed rather than calculated by BIP) and as determining the likelihood ratio to be assigned to evidence.  For example, if I am induced to select the “opportunistic, amoral cheater who will stop at nothing” story template, I might start with a very strong suspicion—prior odds of 99:1—that Headstrong used performance-enhancing drugs and thereafter construe pieces of evidence in a manner that supports that conclusion (that is, as having a likelihood ratio greater than one). If Headstrong is a member of a rival of my own favorite team, identity-protective cognition might exert the same impact on my cognition. Alternatively, if Headstrong is a member of my favorite team, or if I am induced to select the “virtuous hero envied by less talented and morally vicious competitors” template, then I might start with a strong conviction that Headstrong is not a drug user (prior odds of 1:99), and construe any evidence to the contrary as unentitled to weight (likelihood ratio of 1 or less than 1).  

It is possible, too, that STM and MR work together.  For example, identity-protective cognition might induce me to select a particular story template, which then determines my priors and shapes my assignment of likelihood ratios. If STM and MR, individually or in conjunction, operate in this fashion, then a person under the influence of either or both will reason in exactly the same manner as CB, for in that case, his or her priors and his or her likelihood-ratio assessments will arise from a common cause (cf. Kahan, Cultural Cognition of Consent).
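To see how a template can act as the common cause of priors and likelihood ratios, consider this sketch (template names, LR values, and the evidence sequence are all invented for illustration):

```python
# Same trial proof, opposite story templates, opposite verdicts.
def evaluate(template, evidence_items):
    """The template supplies the prior odds AND the weighting rule."""
    odds = template["prior_odds"]
    for supports_guilt in evidence_items:
        odds *= template["lr"](supports_guilt)
    return odds / (1 + odds)  # posterior probability of guilt

cheater = {"prior_odds": 99.0,              # "amoral cheater" template
           "lr": lambda g: 10 if g else 1}  # exculpatory proof gets LR 1
hero = {"prior_odds": 1 / 99,               # "virtuous hero" template
        "lr": lambda g: 1 if g else 1 / 10} # inculpatory proof gets LR 1

evidence = [True, False, True, False]       # mixed trial proof

print(evaluate(cheater, evidence))  # near 1
print(evaluate(hero, evidence))     # near 0
```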

Fig. 3. STM & MR. STM and MR can be understood as determinants of the decisionmaker’s prior odds and of the likelihood ratio he or she assigns to new evidence. They might operate independently (left) or in conjunction with one another (right; other complementary relations are possible, too). In this model, the decisionmaker will appear to display confirmation bias, since the prior odds and likelihood ratio have a common cause.

4.  What else? As we encounter additional mechanisms of cognition, consider how they relate to these “models.”


Law & Cognition 2016, Session 2 reading list & q's

Next Tuesday's assignment. See you "in class" (feel free to comment away!)


Law & Cognition 2016, Session 1 recap: Whence the Likelihood Ratio?

I'm going to do my gosh darned best to recap each session of the seminar this yr. Here's Session 1 ...

The objective of session 1 was two-fold: first, to introduce Pennington & Hastie’s “Story Telling Model” (STM) as a mechanism of jury information processing; and second, to establish the “missing likelihood ratio” (MLR) as the heuristic foundation for engaging mechanisms of jury information processing generally.

The “Self-defense?” problem puts the MLR problem in stark terms.

In the problem, we are presented with a series of facts the significance of which is simultaneously indisputable and highly disputed. What’s undeniable is that each of these facts plainly matters for the outcome. What’s unclear, though, is how.

Rick paused for a period of time after exiting the building and viewed Frank as he approached him from across the street. Was Rick frozen in fear? Adopting a stance of cautious circumspection? Or was he effectively laying a trap, allowing Frank to advance close enough to enable a point-blank fatal shot and create a credible claim of his need to have fired it? 

Likewise, Rick emerged from a secured building lobby accessible only by use of an electronic key. Was his failure to seek immediate refuge in it upon spying Frank evidence of his intention to lure Frank close enough to him to make a deadly encounter appear imminent—or would it possibly have put Rick in deadly peril to turn his back on Frank in order to re-enter with use of the electronic key?

Were Frank’s words—“What are you looking at, you freak? I’m going to cut your damned throat!”—a ground for perceiving Frank as harboring violent intentions? Or was the very audacity and openness of the threat inconsistent with the stealth that one would associate with an actor intent on robbing another?

Frank had begun to lurch toward Rick moments before Rick fired the shot. Was Frank’s erratic advance grounds for viewing him as a lethal risk or for seeing him as too stupefied by drink to reach Rick at all, much less apprehend him had Rick made any effort to escape?

Rick immediately called 911; doesn’t that show he harbored law-abiding intentions? But doesn’t the calm matter-of-fact tone of his communication show he wasn’t genuinely in fear for his life?

What if we roll back the tape?  Rick had read of the string of robberies in his neighborhood; didn’t that give him grounds for fearing Frank? But what did it give him grounds for fear of? One cannot lawfully resort to deadly force to repel the taking of one's property, even the forcible taking of it.

Rick started to carry a concealed gun after reading of the robberies. Was that the reaction of a person who honestly feared for his life—or one of a person who lacked regard for the supreme value of life embodied in the self-defense standard, which confines use of deadly force to protection of one’s own vital physical interests?

In the face of these competing views of the facts, “Bayesian fact-finding” is an exercise in cognitive wheel spinning.

Formally, Bayes Theorem says that a factfinder should revise his prior estimate of some factual proposition or like hypothesis (expressed in odds) by multiplying it by a factor equivalent to how much more consistent a new piece of information is with that proposition than with an alternative one: posterior odds = prior odds x  likelihood ratio.

Legal theorists argue about whether this is a psychologically realistic picture of juror decisionmaking in even a rough-and-ready sense.

But as the “Self-defense?” problem helps to show, the Bayesian fact-finding instruction is bootless in such a case.

The decisionmaking issue there is all about what “likelihood ratio” or weight to assign to the various pieces of evidence in the case.

Do we assign a likelihood ratio “greater than 1” or “less than 1” to Rick’s behavior in buying the gun, in standing motionless outside the building as Frank approached, in failing to seek protection inside the lobby, in placing a call to 911 in the manner he did; ditto for Frank’s tottering advance and his bombastic threat?

Bayes’s Theorem tells us what to do with the likelihood ratio but only after we have derived it—and has nothing to say about how to do that.

This is the MLR dilemma.  It’s endemic to dynamics of juror decisionmaking.  And it’s the problem that theories like Hastie and Pennington’s Story Telling Model (STM) are trying to solve.
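One way to make the dilemma concrete (the numbers are invented): start from even prior odds and run the same five contested facts through Bayes’s theorem twice, once with each side’s preferred likelihood ratios.

```python
# Bayes's theorem combines LRs mechanically; the verdict turns
# entirely on which LRs we choose to assign to the contested facts.
from math import prod

def posterior_prob(prior_odds, likelihood_ratios):
    odds = prior_odds * prod(likelihood_ratios)
    return odds / (1 + odds)

# Five contested facts: the gun purchase, standing motionless, not
# re-entering the lobby, the 911 call, Frank's threat and advance.
prosecution_lrs = [2, 2, 2, 2, 2]        # each fact read as inculpatory
defense_lrs = [1/2, 1/2, 1/2, 1/2, 1/2]  # each fact read as exculpatory

print(round(posterior_prob(1, prosecution_lrs), 2))  # 0.97
print(round(posterior_prob(1, defense_lrs), 2))      # 0.03
```

Identical evidence, identical theorem, opposite verdicts: everything rides on the likelihood ratios, which the theorem itself does nothing to supply.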

STM says that jurors are narrative processors. They assimilate the facts to a pre-existing story template, one replete with accounts of human goals and intentions, the states of affairs that trigger them, and the consequences they give rise to.

In a rational reconstruction of jury fact-finding, the story template is cognitively prior to any Bayesian updating. That is, rather than being an outcome constructed after jurors perform a Bayesian appraisal of all the pieces of evidence in the case, the template exists before the jurors hear the case, and once activated it functions as an orienting guide that motivates the jury to conform the individual pieces of evidence adduced by the parties to the outcome it envisions.

Indeed, it also operates to fill in the inevitable interstitial gaps relating to intentionality, causation, and other unobservables that form the muscle and sinew necessary to transform the always skeletal trial proof into a full-bodied reconstruction of some real-world past event.

Schematically, we can think of the story template as shaping jurors’ priors, as supplying information or evidence over and above what is introduced at trial, and as determining the likelihood ratio or weight to be assigned to all the elements of the trial proof.

Click it! Click it good!

Whence the template? Every juror, P&H suggest, comes equipped with an inventory of templates stocked by personal experience and social learning.

The trial is not a conveyor belt of facts presented to the jury for it to use, one-by-one, to fabricate a trial outcome.

It is a contest in which each litigant endeavors to trigger as quickly and decisively as possible selection of the most favorable case-shaping template from the jury’s inventory . . . .

Or so one would gather from P&H.

The questions for us about such an account are always three: (1) is it true; if so, (2) what use is it for a lawyer; and (3) what significance does it have for those intent on making the law work as well as it possibly can?

What are the answers?

You tell me!


Law & Cognition 2016: On-line seminar

Well, it’s that time of year again . . . school starts Monday!

Just as I did last year, I’ll be teaching the seminar Law & Cognition this fall. This document on “course information & topics” is what passes for a syllabus, although if you like you can explore the complete set of “reading lists” for an earlier version of the seminar.

Now, last year I did manage to post occasional “class reports,” including a three-part set on the “impossibility” of rules of evidence (parts one, two & three).

But this year, I’m going to try to be more diligent about posting class summaries of the sort that would allow “virtual participation,” including on-line discussion.  

Indeed, it would be great if this course developed the sort of on-line presence that the 2015 Science of Science Communication one did—the weekly discussions there were amazing, mainly owing to Tamar Wilner’s regular and insightful essays.

Well, we’ll see anyway!

But without further ado, let’s turn to week one. 

The reading list & study questions are posted below (and here, if you prefer to download .pdf versions).  If you can’t get ahold of Pennington & Hastie, then a great substitute is unfirewalled here.

But also read the “How would a Bayesian Factfinder behave?” document—I anticipate that will be the lynchpin of discussion, at least at the start, when class meets on Tuesday.

“See you” there!


Protecting the Vaccine Science Communication Environment (new paper)



Let me at 'em -- again! SCS returns to the cage to whup on CRT & evolution belief...

Well, the Science Curiosity Scale (SCS), having watched from the sidelines “yesterday” as CRT and AOT went head-to-head (surely not toe-to-toe) on belief in evolution, got pretty restless & decided she had to climb back into the ring—I mean steel cage—to get a piece of the action.

oooooo.... the carnage!

As we all know, SCS mauled the hapless trio of Ordinary Science Intelligence (OSI), Actively Open-minded Thinking (AOT), and the Cognitive Reflection Test (CRT) on belief in human-caused climate change.

Whereas the latter three were all associated with the magnification of political polarization over climate change, SCS alone was associated with greater acceptance of it regardless of partisan identity.  It is plausible to see this result as reflecting the power of science curiosity to counteract “motivated system 2 reasoning” (MS2R), the tendency of cognitively sophisticated individuals to use their advanced reasoning proficiency to reinforce their identity-defining beliefs.

Well, SCS decided to come out of retirement & duel CRT again, this time on evolution.

As Jonathan Corbin noted “yesterday,” CRT predicts belief in evolution only conditional on religiosity. That is, it predicts greater belief for non-religious folks, but not for religious ones.

This is consistent with MS2R: disbelief in evolution being an identity-defining belief, one would expect religious individuals who are higher in cognitive proficiency to be even less likely to believe in it.

One can corroborate this more readily with the Ordinary Science Intelligence Assessment, a measure of cognitive proficiency that is more discerning than the 3-item CRT test. Because the CRT throws away information on variance for half the population, the picture is blurrier, although with a large enough sample, one can still see that the trend in belief in evolution is negative, not just flat, as it appears in the left panel here:

But anyways, it’s not negative or flat—it’s positive for SCS, as shown in the right panel.

Here's your raw data, btw! Drink up!

That is, SCS, unlike CRT, predicts greater acceptance of evolution unconditionally, or regardless of religiosity (which is defined here via a scale that aggregates frequency of prayer, church attendance, and importance of religion in life).

Well, there you go: another day, another steel-cage motivated reasoning mauling at the hands of SCS!

The question is why? And who--who are these guys??

b/c someone will surely ask: the relationships between religiosity & CRT & SCS

You know, I thought I “had this all figured out.” Then we just happened to take a peek at how SCS, developed to advance God’s plan (she’s got a sense of humor just like everyone else!) of promoting enjoyment of cool movies about evolution & other science topics, relates to polarized science issues.

Now I’m confused as hell all over again.

Asked & already answered: CRT & SCS relationship

Good. If there is a hell, the suffering one is made to experience there forever isn’t being too hot (that’s just Connecticut in August).

It’s being bored b/c everything you look at comes out the way you expected.


So is AOT measuring AOT? I think so; the steel cage results don't necessarily imply otherwise

This is an excerpt from my and Jonathan aka "cognitive steel-cage match Don King" Corbin's paper on AOT and climate change polarization. I'm posting it as a follow-up to my own response to @MaineWayne's perceptive question in response to Jonathan's post from "yesterday" on the grisly AOT vs. CRT steel cage match.

The results of the study [showing that higher AOT scores magnify rather than mitigate political polarization over the reality of climate change] could be understood to suggest that the standard measure of AOT included in the data we analyzed is not valid.  Actively Open-minded Thinking is supposed to evince a motivation to resist “my side” bias in information processing (Stanovich et al., 2013). Thus, one might naturally expect the individuals highest in AOT to converge, not polarize all the more forcefully, on contested issues like climate change.  Because our evidence contravenes this expectation, it could be that the AOT scale on which our results are based is not faithfully measuring any genuine AOT disposition.

We do not ourselves find this last possibility convincing. Again, the results we report here are consistent with those reported in many studies that show political polarization to be associated with higher scores on externally validated, objective measures of cognitive proficiency such as the CRT test, Numeracy, and science literacy (Lewandowsky & Oberauer 2016; National Research Council 2016; Kahan, 2013, 2016; Kahan et al., 2012).  Because such results do nothing to call these measures into doubt, we do not see why our results would cast any doubt on the validity of the AOT scale we used, which in fact has also been validated in other studies (e.g., Haran et al., 2013; Baron et al. 2015; Mellers et al., 2015). 

Instead we think the most convincing conclusion is that the disposition measured by the standard AOT scale, like the dispositions measured by these other cognitive-proficiency measures, is one that has become tragically entangled in the social dynamics that give rise to pointed, persistent forms of political conflict (Kahan, in press_b). As do other studies, ours “suggest[s] it might not be people who are characterised by more or less myside bias, but beliefs that differ in the degree of myside bias they engender” (Stanovich & West 2008, p. 159). “Beliefs” about human-caused climate change and a few select other highly divisive empirical issues are ones that people use to express who they are, an end that has little to do with the truth of what people, “liberal” or “conservative,” know (National Research Council 2016; Kahan 2015).[1]


[1] Science curiosity might be an individual difference in cognition that evades this entanglement and promotes genuine  receptivity to counter-attitudinal evidence among persons of opposing political outlooks (Kahan et al. in press).


Baron J, Scott S, Fincher K, and Metz, SE (2015) Why does the cognitive reflection test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition 4: 265-284.

Haran U, Ritov I, and Mellers BA (2013) The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making 8: 188.

Jost JT, Glaser J, Kruglanski AW, and Sulloway FJ (2003) Political conservatism as motivated social cognition. Psych. Bull. 129: 339-375.

Jost JT, Hennes, EP, and Lavine H (2013) “Hot” political cognition: Its self-, group-, and system-serving purposes. In: Carlson DE (ed.) Oxford handbook of social cognition. New York: Oxford University Press, 851-875.

Kahan DM (2013) Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8: 407-424.

Kahan DM (2016) “Ordinary science intelligence”: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J. Risk Res., available at

Kahan DM, Landrum AR, Carpenter K, Helft L, and Jamieson KH Science curiosity and political information processing (in press). Advances in Political Psychology. Available at:

Kahan DM, Peters E, Dawson E and Slovic P (2013) Motivated numeracy and enlightened self-government. Cultural Cognition Project Working Paper No. 116. Available at:

Kahan DM, Peters E, Wittlin M, Slovic P, Ouellette LL, Braman D, and Mandel G (2012) The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2: 732-735.

Kahan, DM (2015). Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 36, 1-43.

Lewandowsky S, and Oberauer K (2016) Motivated Rejection of Science. Current Directions in Psych. Sci., DOI: 10.1177/0963721416654436.

Mellers, B, Stone, E, Atanasov, P, Rohrbaugh, N, Metz, SE, Ungar, L, Bishop, M., Horowitz, M, Merkle E and Tetlock, P  (2015)  The psychology of intelligence analysis: Drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied 21: 1-14.

National Research Council (2016) Science Literacy: Concepts, Contexts and Consequences: A Report of the National Academies of Sciences, Engineering, and Medicine. Washington, DC: National Academies Press.

Stanovich, K and West R (2008) On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning 14: 129-167.

Stanovich KE, West RF, and Toplak ME (2013) Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science 22: 259-264.


Still another cognitive-style steel cage match: CRT vs. AOT go "head to head" on belief in climate change & belief in evolution

Jonathan aka "Don King" Corbin

The carnage continues! SCS (aka "Science Curiosity Scale") is taking a rest after having bashed its way to the top of the open-mindedness rankings, but this week we bring you CRT vs. AOT in a match arranged by the Don King of the cognitive-style steel cage match world, Jonathan Corbin!

If You Open Your Mind Too Much Your Brain Might Fall Out, But At Least It’ll Have a Parachute to Soften the Landing

Jonathan Corbin

Frank Zappa said, “A mind is like a parachute. It doesn’t work if it is not open” (though Thomas Dewar may have coined the phrase).  This is the motivation behind the psychological scale measuring actively open-minded thinking (AOT; Baron, 2008).  AOT is a self-report scale meant to measure the extent to which an individual seeks out information that conflicts with his or her own beliefs.  So, simply having an open mind is only half of what this scale tries to measure – the other half is the propensity to look for information that disagrees with your current beliefs.  At first look, AOT seems like a silver bullet in terms of understanding why some people seem so resistant to scientific information that threatens their beliefs.

Recent work by Dan Kahan and colleagues has shown that another individual difference measure – Science Curiosity – has been shown to relate to increased acceptance of human-caused global warming regardless of political affiliation.  Whereas performance measures like the Cognitive Reflection Test (which measures some combination of impulse control/inhibition and mathematical ability) and measures of scientific knowledge predicted increased polarization on politically charged scientific issues like climate change, science curiosity predicted the opposite!  As soon as I saw this result, I was immediately curious about how the AOT would do in such a comparison.  The obvious prediction is that AOT should perform just like science curiosity – an increased predilection for seeking out information that disagrees with one’s beliefs should definitely predict increased acceptance of human-caused climate change!

Dan was nice enough to direct me to his publicly available dataset in which they measured climate change beliefs as well as AOT (along with CRT, science knowledge, and many other variables), allowing us to test the hypothesis that individuals higher in AOT should be more accepting of climate change regardless of political affiliation.  As you’ve probably guessed if you read Dan’s previous post, it turns out that AOT was more similar to performance measures like the CRT, showing greater polarization with higher scores on the scale.

So, unfortunately it appears to be the case that AOT is not the silver bullet that I once thought it could be.  Perhaps, rather than Zappa’s quote of the mind as a parachute, I should be looking to Tim Minchin, who said, “If you open your mind too much, your brain might fall out.”  To further explore this pattern, I looked at another contentious topic – evolution.  Rather than examining political identification, for this analysis, I relied on religiosity (given that there is also a reason for many highly religious individuals to deny evolution as an identity protective measure).  The other reason I looked at religiosity is that there is a lot of AOT research linking higher religiosity with lower AOT.  This is interpreted as evidence that greater religiosity is associated with a heavier reliance on associative intuition (or “going with your gut”) as opposed to deliberative thinking (Gervais & Norenzayan, 2012; Pennycook et al., 2013).  Few (if any) other studies collect nationally representative samples with such a large number of participants, so Kahan’s ordinary science intelligence dataset allowed us to test whether greater AOT in religious individuals relates to increased acceptance of evolution.

Results show a similar pattern to the climate change question, with CRT and AOT behaving similarly in that higher AOT failed to predict greater acceptance of evolution in the highly religious.

If there is any consolation, it is that we can say that higher AOT in the highly religious did not predict decreased belief in evolution.  However, these data certainly do not give hope for the prediction that belief should increase with greater AOT among the highly religious.  As with political identity and climate change, whereas the overall relationship between AOT and belief in evolution remains positive, broken down by religiosity the picture quickly becomes more complicated.

By now, you are probably asking yourself whether there is any real difference between the CRT and AOT.  Definitionally, they are distinct (though expected to share variance); so far, however, I haven’t given you much data to encourage that belief.  Well, first of all, there is other research out there to support a difference.  For example, Haran, Ritov, and Mellers (2013) examined both AOT and CRT scores in relation to forecasting accuracy and information acquisition (basically, what predicts how much information you’re willing to take in, as well as your accuracy in predicting an outcome related to that information).  They demonstrated that AOT predicted superior forecasting over and above any effect of CRT (and this was mediated by information acquisition).

We can also look for differences in the ordinary science intelligence dataset that we previously examined.  Rather than looking at individuals' belief in evolution, I analyzed level of agreement with the question “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?”  This question differed from the last in that it does not ask about agreement with human-caused climate change – it only asks whether there is solid evidence based on “what you’ve read and heard.”  The data showed no main effect of CRT and no interaction between CRT and political affiliation (political affiliation did predict agreement, with conservatives less likely to agree than liberals).  However, AOT did show a significant relationship, predicting greater agreement.
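The analysis pattern described here (a CRT main effect, a CRT × politics interaction, and an AOT effect estimated in the same model) can be sketched in a few lines. This is a generic illustration on simulated data with hypothetical variable names (`crt`, `conserv`, `aot`, `agree`), not the actual dataset or the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated stand-ins for the survey variables (all names hypothetical):
# crt: 0-3 score; conserv: political outlook (-1 = liberal, +1 = conservative);
# aot: standardized Actively Open-minded Thinking score.
crt = rng.integers(0, 4, n).astype(float)
conserv = rng.uniform(-1, 1, n)
aot = rng.normal(0, 1, n)

# Generate agreement with the "solid evidence of warming" item under the
# pattern the post reports: a political main effect and an AOT effect, but
# no CRT effect and no CRT x politics interaction.
agree = 0.5 - 0.2 * conserv + 0.1 * aot + rng.normal(0, 0.2, n)

# Least-squares fit with an explicit CRT x politics interaction term.
X = np.column_stack([np.ones(n), crt, conserv, aot, crt * conserv])
beta, *_ = np.linalg.lstsq(X, agree, rcond=None)
for name, b in zip(["intercept", "crt", "conserv", "aot", "crt:conserv"], beta):
    print(f"{name:12s} {b:+.3f}")
```

Fit this way, the `crt` and `crt:conserv` coefficients come out near zero while `conserv` and `aot` do not, which is the shape of the result reported above.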

So, where does this leave us? It seems that although AOT is likely distinct from performance measures like the CRT, it falls into the same trap when it comes to science issues that generate conflicts with individuals’ identities.  Despite the fact that AOT is meant to measure one’s propensity toward seeking out belief-inconsistent information, it fails to predict higher levels of agreement with evidence-based claims that cue these identities. 

Given the final analysis reported here (and the literature as a whole), claiming that the result boils down to measurement error is probably incorrect.  It is more likely that one’s propensity to seek out information (particularly information that conflicts with one’s beliefs) is simply insufficient to counter the strength of cultural identity in swaying reasoning.  With regard to the evidence for human-caused climate change, there is an enormous amount of information available online, including whole websites cataloging the arguments for and against human-caused climate change.  That seems to be the perfect resource for someone high in AOT.  However, many of these arguments on both sides are technical, and it is possible that someone high in AOT may not be satisfied with trusting experts’ interpretation of the evidence, and would rather judge for themselves.  The need to judge for themselves, mixed with the desire to come to conclusions that support one’s identity, could very well increase polarization (or at the very least lead to no increase in support among those whose identities support disagreement).  (It is worth the reminder that these are post-hoc explanations that require testing.)

So, is Zappa correct that an open mind is a parachute, or should we listen to Minchin, who says that it is a recipe for losing one’s brain?  Well, the answer (because it is psychology) is--it depends!  When dealing with non-polluted science topics you should expect a positive relationship between AOT and agreement (maybe above and beyond performance measures like the CRT).  However, once you throw in the need to protect one’s identity, AOT is not going to be the solution.  So, why is science curiosity different from AOT?  Perhaps science curiosity is less about belief formation and more of a competing identity.  Whereas AOT is focused on how someone forms and changes beliefs, science curiosity is simply the need to consume scientific information.  Maybe instead of trying to throw information at people hoping that it’ll change their minds, we should start fostering a fascination with science.


Baron, J. (2008). Thinking and Deciding. Cambridge University Press.

Gervais, W. M., & Norenzayan, A. (2012). Analytic thinking promotes religious disbelief. Science, 336(6080), 493-496.

Haran, U., Ritov, I., & Mellers, B. A. (2013). The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making, 8(3), 188.

Kahan, D. M. (2016). ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. Journal of Risk Research, 1-22.

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). Cognitive style and religiosity: The role of conflict detection. Memory & Cognition, 42(1), 1-10.


New NAS Report on Science Literacy, Cultural Values & Political Conflict on Policy-relevant Facts

Why don't we all spend the day reading this? Looks important & interesting . . . .


SCS vs. AOT ... latest "politically motivated reasoning steel cage match" 

So here’s a bit more on the new paper that I mentioned yesterday.

In it, Jonathan Corbin & I analyze how Actively Open-minded Thinking (AOT) relates to acceptance of (“belief in”) human-caused climate change.  AOT reflects the disposition to  seek out, engage, and give appropriate weight to evidence that challenges one’s existing beliefs (Baron 2008; Stanovich and West, 1997).

But we found that higher levels of AOT, as measured by a standard scale (Baron et al. 2015; Haran et al. 2013), magnify political polarization over the reality of human-caused climate change.

This is surprising because AOT consists in a tendency to resist confirmation bias of the sort that would predictably reinforce partisan divisions on contested issues. So one might well have expected AOT to result in some degree of convergence, not enhanced divergence, in the beliefs of those partisans who score highest on a standard AOT measure.

As I’ve noted in some posts relating to a recent paper in the Annenberg Public Policy Center/Cultural Cognition Project Science of Science Communication Initiative series (Kahan, Landrum, Carpenter, Helft & Jamieson in press), science curiosity does seem to generate that sort of convergence.  As partisans’ science curiosity, measured by the APPC/CCP Science Curiosity Scale (SCS), increases, their acceptance of human-caused climate change uniformly increases.

Indeed, the magnification of polarization perversely associated with greater science comprehension generally is negated in individuals who score high in SCS.

Jonathan and I wanted to figure out if this was a feature SCS shared with AOT.

But in fact, the greater magnification of polarization that these reasoning dispositions manifest—a dynamic I’ve referred to as “motivated system 2 reasoning”—seems to affect AOT, too.

So in this regard, AOT, like numeracy, CRT, and Ordinary Science Intelligence, is recruited as a foot soldier in the imperial campaign of our identity-protective selves to rule over the empire of our cognitive life . . . .

Or that’s one interpretation. Maybe something else is going on!

But in any case, SCS alone seems to resist this tendency.

So in this sense, the paper is an outgrowth of the latest string of motivated-reasoning “steel-cage matches,” in which SCS has gone toe-to-toe, neuron-to-neuron against an all-star cast of reasoning-disposition measures and bested all of them in the search for an individual difference that counteracts the tendency of people to form and persist in beliefs that cohere with their identity-defining group affiliations.

In this case, AOT and SCS were not in the same dataset, so it was sort of a virtual cage match. Critical, reflective readers should take that into account as well in taking stock of the results.

SCS continues on its undefeated rampage in brutal cognitive steel-cage matches ... I hope everyone is wearing protective headgear!

Take that into account along with all the other considerations that bear on the weight to be assigned any bit of evidence relevant to an issue or set of issues that no one study or even set of studies should ever be taken to “definitively resolve.”

The advancement of knowledge consists in the permanent assimilation of all that is known with all that we may yet come to know in our assessment of the relative plausibility of competing conjectures.

Now there is at least one other thing to say about my and Jon’s new paper: its inconsistency with the so-called “asymmetry thesis,” which posits that the incidences of politically motivated reasoning are a feature uniquely or at least predominantly associated with ideological conservatism as a personality trait (e.g., Jost et al. 2003).

More on that “tomorrow. . . .”


Baron J (2008) Thinking and deciding. New York: Cambridge University Press.

Baron J, Scott S, Fincher K, and Metz, SE (2015) Why does the cognitive reflection test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition 4: 265-284.

Haran U, Ritov I, and Mellers BA (2013) The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making 8: 188.

Jost JT, Glaser J, Kruglanski AW, and Sulloway FJ (2003) Political conservatism as motivated social cognition. Psych. Bull. 129: 339-375.

Kahan, D.M., Landrum, A.R., Carpenter, K., Helft, L., & Jamieson, K.H. (in press). Science curiosity and political information processing. Advances in Political Psychology.

Stanovich KE, West RF (1997) Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology 2: 342-357.


Does "Actively Open Minded Thinking" Aggravate Political Polarization on Climate Change? Looks like it . . . (New paper)

I’ll likely have more to say about this little (it really is little—< 2,000 words) paper.  But for now suffice it to say that I’ve fallen off the wagon and am back to the asymmetry thesis . . . . It will never let me go!


SCS vs. CRT: Another politically motivated reasoning steel cage match!

I’ve been getting a lot of questions about how the Annenberg Public Policy Center/Cultural Cognition Project “Science Curiosity Scale” (SCS) relates to other measures of open-mindedness.

The Cognitive Reflection Test (Frederick 2005), which assesses the disposition of individuals to consciously and deliberately interrogate their intuitions, is often viewed as such a measure (e.g., Campitelli & Labollita 2010; Pennycook, Cheyne et al. 2013).

One would expect there to be a modest correlation between a measure of open-mindedness and science curiosity, and there is one between CRT and SCS:

But the correlation is only modest: someone in the top general population decile of CRT—someone who scores a perfect 3—is only 2x as likely as someone who scores zero on CRT to be in the top general population decile of SCS.
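To see why a roughly 2x likelihood ratio corresponds to only a modest correlation, one can simulate two weakly correlated latent dispositions and compute the decile probabilities directly. The correlation of 0.15 below is an assumption chosen so the ratio lands near 2x; it is not the reported estimate, and the CRT cutpoints are likewise illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two latent dispositions with a weak assumed correlation (0.15 is an
# illustrative value, picked so the ratio comes out near the ~2x figure).
rho = 0.15
z_crt = rng.normal(size=n)
z_scs = rho * z_crt + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Coarsen the CRT latent score onto the familiar 0-3 scale (cutpoints are
# arbitrary; the top bin plays the role of a "perfect 3").
crt = np.digitize(z_crt, np.quantile(z_crt, [0.4, 0.65, 0.85]))

top_scs = z_scs >= np.quantile(z_scs, 0.9)  # top general-population decile

p3 = top_scs[crt == 3].mean()  # P(top SCS decile | CRT == 3)
p0 = top_scs[crt == 0].mean()  # P(top SCS decile | CRT == 0)
print(f"likelihood ratio: {p3 / p0:.1f}")
```

Even a correlation this small is enough to double the odds of landing in the top SCS decile, which is exactly why "modest correlation" and "2x as likely" are compatible claims.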

Obviously, the two aren’t measuring the same thing.

This is a picture of the "raw" data for the figure below. Always, always consume raw data before exposing yourself to a multivariate analysis--or you could end up bamboozled!

They also don’t predict the same style of political information processing.

Despite arguably being the best measure of reflective thinking (Toplak, West & Stanovich 2014), CRT magnifies politically motivated reasoning (Kahan 2013).

That’s why polarization on an issue like human-caused climate change increases as CRT scores go up. As their SCS scores go up, in contrast, individuals don’t become more polarized but rather become more accepting, regardless of their political outlooks.

Here's the regression model for the last figure. If someone shows you only this when reporting his or her data, demand your money back!

Indeed, the polarizing influence of science comprehension is suppressed by higher science curiosity as measured by SCS.

As I explained “yesterday,” this is plausibly attributed to the willingness of individuals who are high in science curiosity to expose themselves to information that contravenes their political predispositions, something that partisans ordinarily are loath to do and that is not predicted by other dispositions, including CRT, associated with greater science comprehension.

This is one of the findings in the APPC/CCP paper “Science Curiosity and Political Information Processing,” which is forthcoming in Advances in Political Psychology.

There’s no question that the CRT measures an important species of critical reasoning.  I’d experience a degree of shock that even the subjects in the Milgram experiment would have balked at imposing were it to turn out that SCS came within a mile of CRT in predicting resistance to heuristic information processing generally.

But when it comes to predicting resistance to politically biased information processing, SCS picks up on an individual difference in cognition that evades CRT.


Campitelli, G. & Labollita, M. Correlations of cognitive reflection with judgments and choices. Judgment and Decision Making 5, 182-191 (2010).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M., Landrum, A.R., Carpenter, K., Helft, L. & Jamieson, K.H. Science Curiosity and Political Information Processing. Advances in Political Psychology  (in press).

Pennycook, G., Cheyne, J.A., Barr, N., Koehler, D.J. & Fugelsang, J.A. Cognitive style and religiosity: The role of conflict detection. Memory & Cognition 42, 1-10 (2013).

Toplak, M.E., West, R.F. & Stanovich, K.E. Assessing miserly information processing: An expansion of the Cognitive Reflection Test. Thinking & Reasoning 20, 147-168 (2014).




Cultural cognition of weather: a cool (or warm) guest post!

And now for something completely different-- a guest post from someone who knows what he's talking about! (And is this just my politics speaking or was July really friggin hot?!)

Cultural Cognition of Weather
by Larry Hamilton
Carsey School of Public Policy, Univ. New Hampshire

December of 2015 was the warmest ever recorded in New Hampshire, by far. Indeed, in temperature anomaly terms (degrees above or below average) it was the warmest of any month for at least 121 years. January, February and March of 2016 were less extreme but each still ranked among the top 15, making winter 2015–2016 overall the state’s warmest on record — eclipsing previous records set successively in 1998, 2002 and 2012 (Figure 1).

Seeing in this record a research opportunity, colleagues and I added a question to a statewide telephone survey conducted in February 2016, to ask whether respondents thought that temperatures in the recent December had been warmer, cooler, or about average for the state. Two months later (April), we asked a similar question about the past winter as a whole. Physical signs of the warm winter had been unmistakable, including mostly bare ground, little shoveling or plowing needed, poor skiing, spring-like temperatures on Christmas day, and early blooming in a state where winters often are snowy and springs late. Not surprisingly, a majority of respondents correctly recalled the warm season. Their accuracy displayed mild but statistically significant political differences, however. Tea Party supporters, and people who do not think that humans are changing the climate, less often recalled recent warmth (Hamilton & Lemcke-Stampone 2016). Although percentage differences were not large, these patterns echoed greater differences seen in studies that asked about longer-term changes. Our February and April surveys had found counterparts on a much more immediate, tangible scale.

Fig. 1

Although the February and April 2016 results fit with broader patterns, they were not overwhelming by themselves. Believing in the value of replication, we asked the question one more time on a July 2016 survey, with winter several months behind. Most people still recalled the unseasonable warmth. Our July wording and results are as follows:

Thinking back to earlier this year, would you say that THIS PAST WINTER, the weather where you live was generally colder, warmer, or about average for winter in your area? ROTATE 1–3

1          Colder than average winter for your area (4%)
2          Warmer than average winter for your area
3          About average winter for your area
98        DK/NA (4%)

Political and climate-belief gaps now appeared wider than they had been earlier in the year. Figure 2 shows one striking example: a 21-point gap between supporters of Clinton and Trump (this was, after all, primarily a political poll).

Fig. 2

Figure 3 breaks down the percentage of “warmer” responses on the July survey by other respondent characteristics, including their beliefs about climate change. P-values summarize tests from probability-weighted logit regression.

Fig. 3

One notable pattern in Figure 3 involves political identification; we see a 17-point gradient from Tea Party supporters to Democrats in recollections about the winter they had all just experienced. Climate-change beliefs produce wider differences: respondents who don’t believe that climate is changing, or that climate is changing but for natural reasons, were much less likely to recall the warm winter.

Figure 4 places this July poll in context with political gradients (using the same 4-party scheme) from five previous surveys that asked longer-term climate/weather questions. Panels (a) and (b) involve atmospheric CO2 levels and Arctic sea ice (Hamilton 2012, 2015). Panels (c) and (d) depict results from two Northeast Oregon surveys that asked whether summers there had become warmer in the past two decades (Hamilton et al. 2016). Panel (e) charts responses to a question about whether flooding in New Hampshire had increased over the past decade (Hamilton et al. in press). Panel (f) repeats the unpublished July survey results described earlier, on whether New Hampshire’s recent winter had been warm.

Fig. 4

What underlies this replicable pattern? Atmospheric CO2 levels and Arctic sea ice are not directly experienced by most people. They are measured and communicated mainly by scientists, so public resistance to these well-observed realities might be conceived as a problem of science communication, highlighting the need for ideologically-tailored methods. But science communication on these topics already involves many different organizations, research teams, and individual scientists taking diverse and ofttimes innovative approaches. An alternative hypothesis is that the partisan gradients reflect not shortcomings of science communication but the efficacy of counter-science communication, convincing ideologically receptive audiences that undisputed facts are false. The sociological literature about such counter-messaging has recently been summarized by Dunlap and McCright (2015).

Science communication seems distant, moreover, from panels c–f, which involve phenomena that can be directly experienced. Warmer, drier summers in Northeast Oregon have exacerbated insect and disease threats to forests, both directly and indirectly contributing to the frequency of large wildfires. Such changes are visible, and in isolation seem equally compatible with individual beliefs that climate change is happening either for natural or anthropogenic reasons — which together comprise 85% of the respondents in both Oregon surveys. Nevertheless, we find steep political gradients. Similar observations apply to flooding in New Hampshire, which has caused significant damage, and is most salient not through scientific reports but through news coverage if not personal experience. Again, most news coverage made no explicit connections with climate change, and most people (89% on these surveys) agreed anyway that climate is changing, whether from human or natural causes.

Although wildfires and floods might not impact everyone, or impress them with decadal change, the snowiness or un-snowiness of a winter affects daily life for just about everyone living in New Hampshire. Panel f depicts ideology-influenced perceptions at the mundane scale of recent weather.


Dunlap, R.E. & A.M. McCright. 2015. “Challenging climate change: The denial countermovement.” Pp. 300–332 in R.E. Dunlap & R.J. Brulle (eds), Climate Change and Society: Sociological Perspectives. New York: Oxford University Press.

Hamilton, L.C. 2012. “Did the Arctic ice recover? Demographics of true and false climate facts.” Weather, Climate, and Society 4(4):236–249. doi: 10.1175/WCAS-D-12-00008.1

Hamilton, L.C. 2015. “Polar facts in the age of polarization.” Polar Geography 38(2):89–106. doi: 10.1080/1088937X.2015.1051158

Hamilton, L.C., J. Hartter, B.D. Keim, A.E. Boag, M.W. Palace, F.R. Stevens & M.J. Ducey. 2016. “Wildfire, climate, and perceptions in northeast Oregon.” Regional Environmental Change doi: 10.1007/s10113-015-0914-y

Hamilton, L.C. & M. Lemcke-Stampone. 2016. “Was December warm? Family, politics, and recollections of weather.” Durham, NH: Carsey School of Public Policy.

Hamilton, L.C., C.P. Wake, J. Hartter, T.G. Safford & A. Puchlopek. in press. “Flood realities, perceptions, and the depth of divisions on climate.” Sociology doi: 10.1177/0038038516648547



Science curiosity vs. politically motivated reasoning: An experimental steel cage match!

So today I’ll finally tell you what we did in the information-seeking experiment featured in our new paper “Science Curiosity and Political Information Processing.”

It was pretty darn simple.

We assigned subjects to one of two conditions. In each, subjects were presented with two news story headlines: a “climate realist” one, which announced that scientists had uncovered evidence consistent with human-caused climate change; and a “climate skeptical” one, which announced that scientists had uncovered evidence that qualified or called into question the human contribution to climate change.

The difference in the conditions concerned the relative novelty of the opposing pieces of scientific evidence being featured in the respective headlines.

Thus, in Condition 1—“Realist unsurprising, Skeptical surprising”—the respective newspaper headlines were “Scientists Find Still More Evidence that Global Warming Actually Slowed in Last Decade” and “Scientists Report Surprising Evidence: Arctic Ice Melting Even Faster Than Expected.”

In contrast, in Condition 2—“Realist surprising, Skeptical unsurprising” condition—the respective headlines read, “Scientists Report Surprising Evidence: Ice Increasing in Antarctic, Not Currently Contributing To Sea Level Rise” and “Scientists Find Still More Evidence Linking Global Warming to Extreme Weather.”

Subjects were instructed to “pick the story most interesting to you,” and told they’d be asked some questions after they finished reading it.

Aversion to “counterattitudinal” information—that is, information that is contrary to one’s political outlooks—is one of the incidences of politically motivated reasoning.  When given the option, partisans tend to seek out information that is consistent with their predispositions rather than information that is contrary to them (Hart, Albarracín et al. 2009).

That’s exactly what we observed among subjects who were relatively low in science curiosity.

Among subjects who were relatively high in science curiosity, however, we saw the opposite effect.  Thus, relatively right-leaning science-curious subjects—who tended to be climate skeptical—nevertheless preferred the novel or “surprising” realist news story over the unsurprising skeptical story.

Likewise, relatively left-leaning science-curious subjects—who tended to be climate concerned—preferred the surprising skeptical story over the unsurprising realist one.

The effect sizes, moreover, were quite large: moderately science-curious subjects were on average 32 percentage points (± 19, LC = 0.95) more likely to select the story that was contrary to their political predispositions than were moderately science-incurious ones.
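For readers curious how a margin like ±19 points arises around a difference of this kind, here is a standard Wald interval for the difference between two independent proportions. The selection rates and cell sizes below are hypothetical, chosen only so the output resembles the reported figures; they are not the study's actual cell counts:

```python
import math

def diff_of_proportions_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% confidence interval for p1 - p2 (independent samples)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical rates of choosing the counterattitudinal story, and cell
# sizes picked only to show how a 32-point gap with a roughly +/-19-point
# margin could arise.
diff, lo, hi = diff_of_proportions_ci(p1=0.58, n1=45, p2=0.26, n2=45)
print(f"difference: {diff:.2f}  95% CI: ({lo:.2f}, {hi:.2f})")
```

The margin shrinks with larger cells and with rates closer to 0 or 1, which is why reporting the interval alongside the point estimate matters.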

We were motivated to investigate this hypothesis by an unexpected observation in our “science of science filmmaking” studies.  As subjects’ science curiosity increased, their perceptions of contentious risks tended to move in the same direction. Moreover, high-curiosity subjects seemed to resist the normal tendency of individuals to polarize as their proficiency in science comprehension increased.

We surmised that these individuals might be indulging their appetite for surprise by more readily examining evidence that contravened their political predispositions.  Being exposed to a greater volume of “counterattitudinal data,” they’d form views that were more uniform, and less prone to polarization conditional on science comprehension.

The experiment results supported this hypothesis.

Does this “prove” that science curiosity negates politically motivated reasoning?


It’s a mistake to think empirical evidence ever proves anything.

What it does, if it is the product of a valid design, is furnish more reason than one otherwise would have had for crediting one competing account of some phenomenon over another. 

As I explained yesterday, the hypothesis that science curiosity offsets politically motivated reasoning is a plausible conjecture—but so is the hypothesis that science curiosity, like other cognitive elements of science comprehension, magnifies this biased form of information processing.

On the scale that registers the strength of the evidence for these respective hypotheses, the experiment result puts an increment of weight down on the side of the first hypothesis.

How much weight?

Well, you can decide that!
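One way to decide is to make the "increment of weight" explicit in Bayesian terms: the posterior odds on a hypothesis equal the prior odds times the likelihood ratio (how much more probable the evidence is under that hypothesis than under its rival). A minimal sketch, where the prior odds and the likelihood ratio of 3 are purely illustrative numbers, not values from the paper:

```python
import math

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds x likelihood ratio (Bayes' rule in odds form)."""
    return prior_odds * likelihood_ratio

prior_odds = 1.0  # start indifferent between "offsets" and "magnifies"

# Suppose (purely for illustration) the experimental result is 3x more
# probable under "curiosity offsets PMR" than under "curiosity magnifies PMR".
posterior_odds = update_odds(prior_odds, 3.0)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of 'offsets': {posterior_prob:.2f}")  # 0.75
print(f"weight of evidence: {10 * math.log10(3.0):.1f} decibans")
```

How much weight the experiment really deserves comes down to what likelihood ratio a reader thinks the design supports, which is exactly the judgment left to you here.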

But if you are curious about our own views, read the paper: it catalogs our own qualifications and sources of residual uncertainty—and outlines a set of questions for further investigation.

We’re really curious to see if this result stands up to even more critical testing!


Hart, W., Albarracín, D., Eagly, A.H., Brechan, I., Lindberg, M.J. & Merrill, L. Feeling validated versus being correct: a meta-analysis of selective exposure to information. Psychological Bulletin 135, 555-588 (2009).

Kahan, D.M., Landrum, A.R., Carpenter, K., Helft, L. & Jamieson, K.H. Science Curiosity and Political Information Processing. Advances in Political Psychology  (in press).


Science curiosity & politically biased information processing--the (inevitable!) OCTUSW response

Okay, yesterday I promised to say more about the information-exposure experiment we conducted to test the conjecture that “science curiosity” tends to negate politically biased information processing.

Maybe first though I should say something about why this sort of result isn’t an obvious one.

Or actually why it is obvious--but why a result the other way would have been obvious, too!

The best studies, in my view, are ones that test opposing plausible conjectures.  This is the upshot of the “more things are plausible than are true” principle, which I attribute to Duncan Watts (2011).

It’s the premise, basically, of his cool book “Everything is Obvious: Once you know the Answer.”  Because more explanations for interesting social phenomena are plausible than are actually true, if one doesn’t use empirical methods to extricate the true (or more likely true) from the sea of plausible but false explanations, one drowns in a sea of just-so story-telling.

But of course, once one does the work of presenting valid empirical evidence that furnishes more reason to believe one plausible conjecture rather than its rival, someone will inevitably trot out the boring OCTUSW--"Of course--that's unsurprising--so what"—response.

To which the answer is, YAIIFTOTBTTWHBEO!, or “Yup; and if I’d found the opposite to be true, that would have been equally ‘obvious’! Aren’t you glad, then, that I actually went to the trouble of trying to generate some actual evidence instead of just lazily taking a bunch of plausible behavioral mechanisms, adding water & stirring—to produce the instant pseudo-science profundity that passes for decision science in op-ed pages & best-selling books?”

Indeed, I make a point of doing only studies about which someone could say, "Of course, that's unsurprising, so what" no matter which way the study result comes out.

But by the time you say all this, of course, Mr. or Ms. OCTUSW has moved on to some other topic about which he or she can make this or some equally penetrating remark.


Why would a result the opposite of what we found—viz., that highly science curious individuals, unlike less curious ones, willingly expose themselves to evidence that confounds their political predispositions—not have been particularly surprising?

The answer is “motivated system 2 reasoning” – or MS2R.

MS2R refers to the tendency of the reasoning proficiencies associated with science comprehension to magnify rather than abate politically motivated reasoning (PMR)—the tendency to conform evidence to one’s political predispositions. 

Cognitive reflection, numeracy, science literacy—they all do that (Kahan in press).

Does that outcome seem obvious, too?


But the opposite effect—the tendency of proficiency in these sorts of reasoning abilities to temper political polarization—certainly is plausible, and any evidence that they do would certainly have been “obvious—once one knew the answer.”

Most cognitive biases—from base rate neglect to the availability heuristic, from ratio bias to the conjunction fallacy—reflect an overreliance on the rapid, intuitive, affect-driven “System 1” information processing as opposed to the more deliberative, conscious, dispassionate “System 2” kind characteristic of good “scientific” reasoning.

PMR compromises truth-convergent Bayesian reasoning in a manner akin to these biases. So why wouldn’t one expect it, too, to be attributable to overreliance on heuristic, System 1 reasoning?


But false.  Tons of observational & experimental data at this point show that cognitive reflection, numeracy, science literacy, etc.,  are all associated with greater political polarization.

Under the conditions that generate PMR, people use their science-comprehension reasoning proficiencies to reinforce their biased assimilation of evidence to the position that coheres with their political predispositions.

That’s MS2R!

Now science curiosity—just like cognitive reflection, numeracy, knowledge of basic science facts, etc.—is a cognitive element of science comprehension.

I went over this in a post a couple of days ago that showed  that people high in science curiosity are significantly more likely to be high in science comprehension than are those who are low in science curiosity.

Can you see now why it would have been perfectly plausible to surmise—and perfectly obvious to find—that science curiosity, like these other elements of science comprehension, magnifies political polarization?

But there’s a perfectly respectable conjecture the other way: that unlike these other elements of science comprehension, science curiosity involves an appetite to be surprised—to experience the awe and wonder of contemplating surprising insights derived from the signature methods of science.  Maybe the habitual exercise of that disposition develops habits of mind that counteract rather than accentuate PMR.

Maybe! Or maybe not!

Only one way to tell . . . . Do a valid empirical study.

Oh-- & then do another, & another & another –and progressively update one’s views on the respective probabilities of these two perfectly plausible hypotheses—viz., science curiosity amplifies & science curiosity mitigates PMR.

So there you go, Mr./Ms. OCTUSW.

We are now ready to (re)turn to the more interesting question: what was the evidence we relied on, and how much reason does it give us to credit the “science curiosity mitigates PMR” hypothesis?

But I've said enough for one day, so  I’ll have to do that “tomorrow.”

Again, though, if your insides are being consumed by curiosity on the experiment design and results, don't suffer--just download our paper & read it . . . . right now!


Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press_b).

Kahan, D.M., Landrum, A.R., Carpenter, K., Helft, L. & Jamieson, K.H. Science Curiosity and Political Information Processing. Advances in Political Psychology  (in press).

Watts, D.J. Everything is Obvious: Once You Know the Answer: How Common Sense Fails (Atlantic Books, 2011).



Science curiosity & climate change (de)polarization (new paper)

Consider four propositions of ascending curiousness.

  1. Increasing science curiosity is associated with greater acceptance of human-caused climate change in the general population.

  2. This effect holds regardless of political outlooks.

  3. Increasing science curiosity counteracts the association between increased science comprehension and political polarization on societal risks such as climate change and fracking.

  4. As science curiosity goes up, individuals of all political outlooks become more interested in engaging information contrary to their political predispositions on climate change.

Proposition (1) is kind of interesting, but until it is combined with proposition (2), it doesn’t tell one much of anything.  A population-wide association between some disposition and a belief or attitude is interesting only if there isn’t significant variation in that relationship among different sorts of people. If there is, then the population-wide effect obscures that and invites specious inferences about how the disposition in question influences the relevant belief or attitude.

Let’s call this class of specious inferences the “Pat” fallacy: because “Pat,” who is “average” along every conceivable dimension, doesn’t exist, it is a meaningless exercise to ask how some disposition affects “Pat’s” beliefs, attitudes, etc., if in fact relevant dimensions of identity affect the relationship between the disposition and those beliefs, attitudes, etc., in real-life, truly existing people.
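The “Pat” fallacy is easy to make concrete with a quick simulation (a purely hypothetical illustration, not CCP data): if a disposition relates to an attitude in opposite directions in two groups, the pooled “average person” relationship can be essentially zero—obscuring two strong, opposing real-world relationships.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # members per group

# Hypothetical setup: disposition is unrelated to group membership, but
# the disposition-attitude slope has opposite signs in the two groups,
# and group membership also shifts the attitude's baseline.
disposition = rng.normal(size=2 * n)
group = np.repeat([1, -1], n)  # +1 = group A, -1 = group B
attitude = group * disposition + 2.0 * group + rng.normal(size=2 * n)

# Population-wide ("Pat") correlation vs. within-group correlations
pooled_r = np.corrcoef(disposition, attitude)[0, 1]
r_a = np.corrcoef(disposition[group == 1], attitude[group == 1])[0, 1]
r_b = np.corrcoef(disposition[group == -1], attitude[group == -1])[0, 1]

print(f"pooled r = {pooled_r:.2f}, group A r = {r_a:.2f}, group B r = {r_b:.2f}")
```

The pooled correlation hovers near zero even though the disposition strongly predicts the attitude within each group—exactly why proposition (1) means little until proposition (2) rules out this kind of heterogeneity.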

But once we know that there is a uniform relationship between some disposition and some belief or attitude (or one that is uniform in relation to some meaningful aspect of individuals' identities), then we can start to assess the significance of that.

The clue to the significance here is revealed by (3).  We know (because it’s been shown 15x10^3 times) that pretty much every conceivable reasoning disposition relevant to science comprehension magnifies rather than ameliorates political polarization on societal risks.  That happens because where positions on a risk or like fact become badges of membership in and loyalty to one or another tribal group, people will face strong psychic pressure to use their reasoning proficiencies to filter information in a manner that promotes the beliefs that predominate in their groups.

Science curiosity is a reasoning disposition that can reasonably be understood to be integral to science comprehension. So one might expect it to magnify polarization on issues like climate change, too.

But it doesn’t. It has the opposite effect!

Why? Why?? Why???

This is the question that the 14 billion readers of this blog were left to grapple with about 5 mos. ago, when propositions 1-3 were observed in Study No. 1 of the Cultural Cognition Project/Annenberg Public Policy Center “Science of Science Filmmaking Initiative.”

One conjecture was that science-curious individuals might be using their reason in a way that counteracts the usual consequences of politically motivated reasoning (PMR). 

Generally speaking, PMR is associated with biased information search: that is, partisans tend not only to fit their assessments of information to their predispositions, but to focus their attention on information sources that can be expected to confirm rather than challenge the positions that cohere with their political outlooks (Hart, Albarracín et al. 2009).

But scientifically curious people have an appetite to be surprised by the insights generated by the use of science’s signature methods of disciplined observation, measurement, and inference.  That appetite might impel them, unconsciously, to expose themselves more readily than their less curious political peers to information that is contrary to their predispositions.  If so, they might end up with perceptions of risk that are at least a bit closer to those of their political opposites who are scientifically curious and doing the same thing.

That was the animating hypothesis of an experiment, the outcome of which is the basis of proposition 4.  In that experiment, we—my collaborators at CCP and APPC—tested just how readily partisans would expose themselves to surprising scientific evidence on climate change when that evidence was contrary to their political predispositions (Kahan, Landrum, Carpenter, Helft & Jamieson in press).

We found that individuals who were low to moderate in curiosity wouldn’t do it. They opted for “familiar” evidence supportive of the position associated with their own political outlooks.

But highly curious subjects behaved differently. Confronted with the chance to peruse some surprising evidence that challenged their existing views, they went for it.

I guess they just couldn’t resist!

What exactly did we do to elicit this observation? Well, I’ll tell you about that “tomorrow.”

Or if you are just so curious you can’t wait until then, you can check out our new CCP/APPC Science of Science Communication Initiative paper, “Science Curiosity and Political Information Processing” for details!


Hart, W., Albarracín, D., Eagly, A.H., Brechan, I., Lindberg, M.J. & Merrill, L. Feeling validated versus being correct: a meta-analysis of selective exposure to information. Psychological bulletin 135, 555 (2009).

Kahan, D.M., Landrum, A.R., Carpenter, K., Helft, L. & Jamieson, K.H. Science Curiosity and Political Information Processing. Advances in Political Psychology  (in press).


"Mirror mirror on the wall ... who is the most partisan of all?!" MAPKIA Episode No. 978!

Hey, everybody, I think you know what it's time for . . . .

That’s right-- another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

To get the technicalities out of the way, here's the posting of the "official statement of contest terms & conditions,"  as mandated by the Gaming Commission:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)

Okay, this is a tricky one!

It’s going to take (a) a Feynmanite/Selbstian level of analytical thought, (b) a Fredrickian resistance to the seductive tug of WEKS, plus (c) a Barry-Bonds-sized dose of political-psychology HGH (& yes former Freud expert & current stats legend Andrew Gelman and Josh " 'Hot Hand Fallacy' Fallacy" Miller both remain eligible for this MAPKIA pending their appeals for testing positive in the aftermath of their stunning post “CCP-APPC Political Polarization IQ Test”™ victories).

Let’s start by creating a “political partisanship index.”  The recipe for that is as follows:

  1. Take a left-right political outlook scale formed by standardizing the sum of responses to conventional 7-point political-party identification and 5-point liberal-conservative ideology survey items. A very nice feature of this approach when one uses it with a nationally representative sample is that “0” is “moderate Independent,” while -1 and +1 SD are “liberal Democrat” and “conservative Republican,” respectively. Scores in the vicinity of -1.8 and +1.8 will be “Extremely liberal, Strong Democrat” and “Extremely conservative, Strong Republican,” respectively. In case you’ve forgotten how nicely this simple scale performs in picking up partisan polarization on contested issues, check out the policy-polarization figure below or watch a re-run of the wildly popular episode on the “CCP-APPC PPQ IQ Test”™.

  2. Then take the absolute value of the scores on this Left_right scale. The result is a “Partisanship Index” (PI), one that registers the intensity of one’s left-right outlooks without regard to their valence. Thus, if one is either a “liberal Democrat” or a “conservative Republican,” one gets a PI score of “1.0.” If one is either an “Extremely liberal, Strong Democrat” or an “Extremely conservative, Strong Republican,” one gets a PI score of 1.8. A milquetoast political sissy who is a “moderate Independent” will get a score of “0.”

Okay, got that?  Good. (If you are curious what the relationship between Left_right and PI looks like without smoothing--and why the intercept at zero on the y-axis is slightly above zero--good for you! Click here).
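For the programmatically inclined, here is a minimal sketch of that two-step recipe—using simulated survey responses, not CCP data, with illustrative item scorings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000  # hypothetical respondents

# Step 1: sum a 7-point party-identification item and a 5-point
# liberal-conservative ideology item, then standardize (mean 0, SD 1).
party_id = rng.integers(1, 8, size=n)   # 1 = Strong Democrat ... 7 = Strong Republican
ideology = rng.integers(1, 6, size=n)   # 1 = Extremely liberal ... 5 = Extremely conservative
raw = party_id + ideology
left_right = (raw - raw.mean()) / raw.std()

# Step 2: take the absolute value -- intensity of partisanship,
# regardless of left/right valence.
pi = np.abs(left_right)

print(f"Left_right: mean {left_right.mean():.2f}, SD {left_right.std():.2f}")
print(f"PI range: {pi.min():.2f} to {pi.max():.2f}")
```

With a representative sample, a respondent one SD out in either direction (a “liberal Democrat” or “conservative Republican”) lands at PI = 1.0, just as described above.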

Now here is the MAPKIA question:

What is the profile of a “super partisan”? On the basis of characteristics like (a) gender, (b) race, (c) income, (d) education, (e) science comprehension (measured by OSI), (f) science curiosity (measured with SCS), (g) religiosity, (h) cultural worldviews (measured with the CCW scales) etc. or appropriate combinations thereof, who is the most partisan “type” of person (i.e., gets the highest PI score) in U.S. society????

You know the rules: don’t just gesture toward an answer in some vague discursive way; be specific, both about what your conjecture is and why, and tell me how to test it using the sort of data that typically appears in a CCP data set.

Realize that basically the question is, What's the relationship between the specified characteristics and partisanship? If you want to specify simple correlations between partisanship and one or more of these attributes or (better still) combinations of them, that's fine!

But if you have some more clever way to specify how the characteristics should be combined into some latent-variable "identity" variable or how the relationship between the characteristics (individually or in combination) should be related to the Partisanship index (in that regard, you might want to check out "yesterday's" post on how science curiosity and science comprehension relate to each other), go for it!

Now, an important proviso: Do not tell me to just jam every one of these characteristics onto the right hand side of a goddam linear regression and “see what comes out statistically significant.”  The reason  is that the results of such an analysis will be gibberish. 

Actually, the R² will be fine & might be interesting if you want to get an idea of the upper limit of the possibilities for explaining PI. But the parameter estimates will be meaningless in relation to our task, which is to identify the sorts of real-world people who are super partisans.

And with that . . . on your mark, get set, MAPKIA!


What is the relationship between science curiosity & science comprehension? A fragment . . .

From something I’m working on (and a useful refinement of this discussion of how to think about the size of individual differences in one or another reasoning disposition). . .

c. Compared to ordinary science intelligence. Science curiosity—generally or as measured here—ought to have some relationship to science comprehension. It is difficult to experience the pleasure of contemplating scientific insight if one is utterly devoid of any capacity for making sense of scientific evidence. Similarly, if one is aggressively uncurious about scientific insights, one is less likely to acquire the knowledge or the experience-based habits of mind needed to reason well about them.

Yet the two dispositions shouldn’t be viewed as one and the same.  Many people who can detect covariances and successfully compute conditional probabilities—analytical tasks essential to making sense of empirical evidence—are nevertheless uninterested in science for its own sake.  Even more obviously, many people who are only modestly proficient in these technical aspects of assessing empirical evidence are interested—passionate even—about science. In sum, one would expect a science-curiosity measure, if valid, to be modestly correlated with but definitely not equivalent to a valid science-comprehension measure.

SCS, the science-curiosity measure we formed (Kahan, Landrum & Carpenter 2015), has these properties.  The association between SCS and the Ordinary Science Intelligence (OSI) assessment (Kahan 2016) was r = 0.26 in our two data collections. To make this effect more practically meaningful, the relationship between these measures implies that individuals in the top quartile of SCS are over four times more likely than those in the bottom quartile to score in the 90th percentile or above on the OSI assessment (Figure 6).  This is a degree of association consistent with the expectation that higher science curiosity contributes materially to higher science comprehension. Nevertheless, in both studies science comprehension lacked meaningful predictive power in relation to engagement with the three science videos featured in our two studies (Figure 7). In other words, SCS measures a disposition that is apparently integral to the kind of proficiency in scientific reasoning measured by OSI, yet generates a form of behavior—the self-motivated consumption of science information for its own sake—that is unassociated with science comprehension by itself.
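As a rough sanity check on how a modest r = 0.26 translates into quartile odds, one can simulate two standardized measures with that correlation—assuming bivariate normality, which real SCS and OSI scores need not satisfy—and compare the chance of a 90th-percentile-or-better score across quartiles of the other measure. The simulated ratio comes out severalfold, of the same order as the figure reported from the actual data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
r = 0.26  # target correlation between the two simulated measures

# Simulate standardized "SCS" and "OSI" scores correlated at r,
# under an assumed bivariate normal model (illustrative only).
scs = rng.normal(size=n)
osi = r * scs + np.sqrt(1 - r**2) * rng.normal(size=n)

top_q = scs >= np.quantile(scs, 0.75)   # top SCS quartile
bot_q = scs <= np.quantile(scs, 0.25)   # bottom SCS quartile
p90 = np.quantile(osi, 0.90)            # 90th percentile cutoff on OSI

p_top = (osi[top_q] >= p90).mean()
p_bot = (osi[bot_q] >= p90).mean()
print(f"P(OSI >= 90th pct): top quartile {p_top:.3f}, "
      f"bottom quartile {p_bot:.3f}, ratio {p_top / p_bot:.1f}")
```

The point of the exercise is just that a correlation that sounds “modest” at the level of r can still imply a practically large difference at the tails of the distribution.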


Kahan, D.M. ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J. Risk Res. (2016), advance online.

Kahan, D., Landrum, A. & Carpenter, C. Evidence-based Science Filmmaking Initiative, Study No. 1 (2015).





What antagonistic memes look like: the case of the HPV Vaccine

From the new APPC/CCP Working Paper, Culturally Antagonistic Memes & the Zika Virus

2.1. In general

* * *

“Memes” refer to ideas and practices that enjoy wide circulation and arouse self-reinforcing forms of attention as well as spontaneous adaptation and elaboration (Balkin 1998; Blackmore 1999). A small subset of these sorts of self-replicating ideas and practices—the ones we call “culturally antagonistic memes”—consists of highly evocative, highly inflammatory argumentative tropes used by members of one group to stigmatize another.

When they figure in debates over risk, these contempt-pervaded tropes invest positions on them with affective resonances symbolic of opposing groups’ values or identities.  In the resulting discourse climate, individuals will come to perceive risk regulation as “express[ing] the public worth of one subculture’s norms relative to those of others, demonstrating which cultures have legitimacy and public domination” and thereby “enhanc[ing] the social status of groups carrying the affirmed culture and degrad[ing] groups carrying that which is condemned as deviant” (Gusfield 1968, p. 59). Conducted in the idiom of instrumental consequences, the stances diverse citizens adopt on which activities genuinely threaten society and which policies truly mitigate the attendant dangers become rhetorical subterfuges in an “ongoing debate about the ideal society” (Douglas & Wildavsky 1982, p. 36).

This process is effected through a decisive switch in the sort of information processing that is characteristic of the AH-CCT model. From a reliable and consensus-generating guide to valid, decision-relevant science, the affect heuristic and cultural cognition at this point combine to generate a divisive, nontruth-convergent source of identity-protective cognition (Sherman & Cohen 2002; Kahan 2010).

By fusing contending positions on a risk or like facts to opposing group identities, antagonistic memes effectively transform positions on them into badges of membership in, and loyalty to, competing groups. Because this state of affairs pits opposing groups’ knowledge-certification systems against one another, the forms of information-processing associated with cultural cognition and the affect heuristic will under these conditions necessarily lose their power to generate truth-convergent forms of consensus across them.

This switch will not cause such information processing to abate, however.  There is rarely any personal action that an individual can take that will affect the level of danger that a societal risk poses to him or anyone he cares about; his decisions as a consumer, voter, or participant in public debate won’t matter enough, for example, to affect the course of climate change, or the regulation of fracking, or the siting of a nuclear waste facility.  In contrast, such an individual’s personal behavior, including the attitudes he evinces on issues infused with social meanings, will typically have tremendous significance for the impressions that others form of his character (Sherman & Cohen 2002; Lessig 1996).  As a result, it will be individually rational, if collectively disastrous, for individuals to form habits of mind that reliably produce identity-affirming rather than accurate beliefs when societal risks become infused with meanings that divide their groups from others (Kahan 2015b).

Indeed, these habits of mind will become seamlessly interwoven into the capacities essential for assessing scientific information. “Motivated System 2 reasoning” refers to the tendency of individuals to use their proficiency in numeracy, cognitive reflection, and science comprehension to ferret out and credit identity-congruent evidence and explain away the rest (Kahan in press_b).  Much as a virus does to the genetic material of an otherwise healthy cell, identity-protective cognition effectively insinuates itself into reasoning dispositions essential to recognizing the best available evidence (Kahan 2013; Kahan, Peters et al. 2013).  Their cognitive faculties having been redirected in this fashion, the individuals most adept in these forms of reasoning will end up the most polarized on culturally contentious risks (Hamilton 2011, 2012; Kahan, Peters et al. 2012).

Identity-protective cognition is thus not a natural outgrowth of, but rather a pathological deformation of, the processes associated with the AH-CCT model. The trigger of this pathology, moreover, is the advent of culturally antagonistic memes (Figure 1).

2.2. A concrete illustration

Many persistently contested science issues fit this pattern.  But we will focus on one that we believe is particularly well suited for illustration: the U.S. experience with the HPV vaccine.


The HPV vaccine confers (near-perfect) immunity to the human papilloma virus, an extremely common sexually transmitted disease that causes cervical cancer.  The vaccine also has the distinction of being the only childhood immunization recommended for universal administration by the U.S. Centers for Disease Control that is not now on the schedule of mandatory school-enrollment immunizations in the United States.  Legislative proposals to add it were defeated in dozens of states in the years from 2007 to 2008 as a result of intense political controversy over the safety and effectiveness of the vaccine (Kahan 2013).

Although the proposal to add the HPV vaccine to the list of mandatory vaccinations divided the public along predictable lines, the conflict over it was in fact not inevitable.  Only a few years before, nearly every state had endorsed the CDC’s proposal for universal administration of the HBV vaccine, which likewise confers immunity to a sexually transmitted disease, hepatitis B, that causes cancer (of the liver).  The HBV vaccine is now given in infancy, but at that time it was an adolescent shot, just like the HPV vaccine.  During the years in which legislative battles were raging over the latter vaccine, nationwide vaccination rates for the former were well over 90% (ibid.).

Like every other childhood vaccine that preceded it, the HBV vaccine was considered and approved for inclusion in state universal-immunization schedules by non-political public health agencies delegated this expert task by state legislatures.  The vast majority of parents thus learned of the vaccine for the first time when consent to administer it was sought from their pediatricians, trusted experts who advised them the vaccine was a safe addition to the array of prophylactic treatments for keeping their children healthy.  Just as important, regardless of who these parents were—Republican or Democrat, devout evangelical or atheist—they were all afforded ample evidence that parents just like them were getting their kids vaccinated for HBV.  This is a science communication environment in which the AH-CCT model can be expected to generate largely convergent affective reactions across all groups—exactly the outcome that was observed.

The HPV vaccine’s path to public awareness, in contrast, was much more treacherous. Seeking to establish a dominant position in the market before the approval of a competing shot, the manufacturer of the HPV vaccine orchestrated a nationwide campaign to establish immunization mandates by statutes enacted by state legislatures.  What was normally a routine, nonpolitical decision—the administrative updating of states’ mandatory-vaccination schedules—thus became a high-profile, highly partisan dispute.  People became acquainted with the vaccine not during visits to their pediatricians’ offices but while viewing Fox News, MSNBC, and other political news outlets. There they were bombarded with reports on the “slut shot” (Taormino 2006) and “virgin vaccine” (Page 2006) for school girls, a framing enabled by the manufacturer’s decision to seek fast-track FDA approval of a women’s-only shot as part of the company’s plan to vault over the conventional, less speedy, depoliticized administrative-approval process (Gollust, LoRusso et al. 2015).

These media stories and resulting social media reaction were replete with what we are referring to as “culturally antagonistic memes.”  “Trust us: Vioxx, Now Gardasil,” declared a viral internet feature that mocked the manufacturer’s own advertising campaign (Figure 2). “HPV vaccine: Republicans prove themselves morons once again,” sneered liberal commentators (2011). “They value your virginity more than your life,” another righteously intoned; “there was a time when only the loony left believed that the loony right favored death over sex; not any more” (Goodman 2005).  Individualist-oriented commentators retorted: “Let’s use teenage girls as lab rats for a monopoly” (Erickson 2011).

These are exactly the conditions one would expect to fuse a risk issue to antagonistic social meanings, thereby triggering identity-protective cognition on the vaccine’s risks and benefits (Fowler & Gollust 2015; Bolsen, Druckman & Cook 2013).  Studies confirmed that exactly that happened (Gollust, Dempsey et al.  2010; Kahan et al.  2010).


“HPV Vaccine: Republicans Prove Themselves Morons Once Again.” Why Evolution Is True. (Sept. 14, 2011).

Bolsen, T., Druckman, J.  & Cook, F.L.  The effects of the politicization of science on public support for emergent technologies.  Institute for Policy Research Northwestern University Working Paper Series (2013).

Bolsen, T., Druckman, J.N.  & Cook, F.L.  The influence of partisan motivated reasoning on public opinion.  Political Behav. 36, 235-262 (2014).

Bolsen, T., Druckman, J.N. & Cook, F.L. Citizens’, scientists’, and policy advisors’ beliefs about global warming. The ANNALS of the American Academy of Political and Social Science 658, 271-295 (2015).

Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers (University of California Press, Berkeley, 1982).

Douglas, M. Purity and Danger: An Analysis of Concepts of Pollution and Taboo (1966).

Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011).

Erickson, Erick. Let’s Use Teenage Girls as Lab Rats for a Monopoly. RedState (Aug. 17, 2011).

Fowler, E.F. & Gollust, S.E. The content and effect of politicized health controversies. The ANNALS of the American Academy of Political and Social Science 658, 155-171 (2015).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy undermines support for state mandates on the human papillomavirus vaccine. Health Affair 29, 2041-2046 (2010).

Gollust, S.E., LoRusso, S.M., Nagler, R.H. & Fowler, E.F. Understanding the role of the news media in HPV vaccine uptake in the United States: Synthesis and commentary. Human Vaccines & Immunotherapeutics, 1-5 (2015).

Goodman, Ellen. Abstinence-only crowd laments cancer breakthrough. Boston Globe (Nov. 14, 2005).

Gusfield, J.R. On Legislating Morals: The Symbolic Process of Designating Deviance. Cal. L. Rev. 56, 54 (1968).

Hamilton, L.C. Education, politics and opinions about climate change evidence for interaction effects. Climatic Change 104, 231-242 (2011).

Hamilton, L.C., Cutler, M.J. & Schaefer, A. Public knowledge and concern about polar-region warming. Polar Geography 35, 155-168 (2012). 

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013b).

Page, Christina. The Virgin Vaccine. Nerve (June 28, 2006).

Sherman, D.K. & Cohen, G.L. Accepting threatening information: Self-affirmation and the reduction of defensive biases. Current Directions in Psychological Science 11, 119-123 (2002).

Taormino, Tristan. The Slut Shot. Village Voice (Aug. 15, 2006).



