
Bookends in the study of individual differences in politically biased comprehension of science

Did a talk at the University of Oklahoma Center for Risk & Crisis Management last Thurs. The questions & discussion were really great.

Here are the main points, rationally reconstructed, that I made (slides here).

1. We know a lot about politically motivated reasoning (PMR) as a “main effect” in the processing of policy-relevant facts.  Generally speaking PMR refocuses individual attention away from “truth-convergent” and toward identity-protective styles of information processing, the goal of which is to promote formation of beliefs that effectively express individuals’ membership in and loyalty to opposing cultural groups.

2. We don’t know as much about individual differences in PMR.  That is, researchers so far have not paid as much attention to dispositions or personality traits that might either accentuate or mitigate the impact of PMR on information processing.

3. One thing we do know something about, however, is politically motivated system 2 reasoning: various forms of cognitive proficiency—ones that no doubt help individuals determine the truth in most settings—seem to aggravate or magnify PMR.  A good number of observational studies suggest this.  And CCP’s “Motivated Numeracy” study supplies experimental data indicating that individuals high in dispositions essential to science comprehension use their cognitive proficiency to form and persist in identity-evincing beliefs.

4. There is also at least one measure of reasoning style that appears to have the opposite effect—i.e., that appears to constrain PMR.  That disposition is science curiosity.  Other science-comprehension-related dispositions seem to magnify PMR as the strength of those dispositions increases. The general effect of increased science curiosity, however, is the same on individuals of varying political outlooks. In addition, individuals who score highest on the Science Curiosity Scale (SCS) do not polarize as much as their scores on the Ordinary Science Intelligence assessment increase.

5. What are the implications of all this? 

Well, first, it is a mistake to read this literature to imply that increased science comprehension is “bad.” The problem isn’t with that disposition; it is with a science communication environment that has become infused with antagonistic social meanings that transform positions on disputed decision-relevant forms of science into badges of membership in and loyalty to opposing cultural groups.  The upshot, then, is that we should identify means of protecting the science communication environment from being polluted with such meanings, so that we can get the benefit of the insights of those citizens who are most proficient in science comprehension.

Second, we should be exploring how science curiosity can be used to help detoxify a polluted science communication environment.  Can we foster science curiosity in the population, either as a fixed trait or as a state that characterizes people’s engagement with controversial issues?  Can we feature the open-mindedness of individuals high in science curiosity as a model of the way in which citizens in a pluralistic self-governing community should reason?

You tell me!


*Now* where am I? Oklahoma City!

Am heading out early (today) to see what cool things the researchers at OU Center for Risk and Crisis Management are up to!

Will send postcards.


Science of Science Communication seminar: Session 8 reading list (climate change 2)

Feel free to comment if you are playing along at home . . . .


Hurry up & get your copy of "Expressive rationality of inaccurate perceptions" before sold out!

Now in print --

If you can't leap the paywall, the preprint is pretty close to final.



3 forms of "trust in science" ... a fragment

From something I'm working on . . . 

Three forms of trust in science

There are a variety of plausible claims about the role of science attitudes in controversies over decision-relevant science. These claims should be disentangled.

One such claim attributes public controversy to disagreements over the reliability of science. Generally speaking, people make decisions based on their understandings of the consequences of selecting one course of action over another. Science purports to give them information relevant to identifying such consequences: that vaccinating one’s children will protect them (and others) from serious harms; that the prevailing reliance on fossil fuels as an energy source will generate environmental changes inimical to human wellbeing, etc. How readily people will make use of this type of information will obviously depend on an attitude toward science—viz., that it knows what it is talking about.

We will call this attitude decisional trust in science.  Trust is often used to denote a willingness to surrender judgment to another under conditions that make the deferring party vulnerable.  People evince what we will call “decisional trust” in science when they treat the claims that science makes as worthy of being relied on under conditions  in which misplaced confidence would in fact be potentially very costly to them.

That attitude can be distinguished from what we’ll call institutional trust of science.  We have in mind here the claim that controversy over decision-relevant science often arises not from distrust of validly derived scientific knowledge but from distrust of those who purport to be doing the deriving.  People who want to rely on science for guidance might still be filled with suspicion of powerful institutions—universities, government regulatory authorities, professions and professional associations—charged with supplying them with scientific information.  They might not be willing, then, to repose confidence in, and make themselves vulnerable to, these actors when making important decisions.

Both of these attitudes should be distinguished from still another kind of attitude that figures in some accounts of how science attitudes generate public controversy. We’ll call this one acceptance of the authority of science.

Science in fact competes for authority with alternative ways of knowing—albeit less fiercely today in liberal democratic societies than in other types of societies.  Religions, for example, tend to identify criteria for ascertaining truth that involve divine revelation and the privileged access to the same by particular individuals identified by their status or office.  Science confers the status of knowledge, in contrast, only on what can be ascertained by disciplined observation—in theory, anybody’s—and thereafter adjudicated by human reason—anyone’s—as a valid basis for inference.

The Royal Society motto Nullius in verba—“take no one’s word for it”—reflects a bold and defiant statement of commitment to the authority of science’s way of knowing in relation to alternatives that involve privileged access to revealed truths. This is—or certainly was at the time the Royal Society was founded—a profound stance to adopt.

 But it would of course be silly to think that the volume of knowledge science generates could possibly be made use of without “taking the word” of a great many people committed to generating knowledge in this way.  The authority of science as a way of knowing, in a practical sense, presupposes decisional trust in and institutional trust of science.

But it is perfectly plausible—perfectly obvious—that some people could be uneasy with science because they really don’t view its way of knowing as authoritative relative to one of its competitors.  We should be sure we are equipped to recognize that attitude toward science when we see it, so that we can measure the contribution it could be making to conflicts over science.


Where am I? Knoxville!

A triple header of talks today at U. Tenn.  I've been warmly greeted here consistent with the historic friendship of our respective states' university systems....


I think the Tennessee player is out of bounds? What do you think?



Science of Science Communication seminar: Session 9 reading list (teaching evolution)

Here's another! (Session 7 was on "science of science filmmaking"; session 8 on "climate, part 2" ... I'll post those "tomorrow"™.)


Science of Science Communication seminar: Session 6 reading list (climate change 1)

Here you go!


Trust in science vs. reliance on religious faith--another fun GSS item

Any surprises here? (In case you don't remember, relatively religious people have more "confidence" in "those running" the "science community" than in "those running" "organized religion.")


Here's the model on which the 2nd figure is based.


Tomorrow: Sea level science communication panel


What do you make of *this*? More on partisan differences in trustworthiness of "university" scientists

Careful now . . . .

Like “yesterday's”™ item (WHICHSCI), this one (SCIIMP1) made a one-time-only appearance in the 2006 GSS.

Companion items asked whether it was important that "the people who do [science] have advanced degrees in their field"; that "conclusions [be] based on solid evidence"; that "researchers carefully examine different interpretations, even ones they disagree with"; and that "the results are consistent with religious beliefs." Responses were all skewed toward a pro-science outlook; check out the GSS codebook if you are curious about the toplines.

Here is the regression model, in case anyone is interested.


Should I update my priors on partisanship & trust in industry vs. university scientists? By how much & in what direction?!

I'm still stuck in the GSS can.  Actually, it's more like a bag of potato chips: you can't stop munching until you've emptied the thing.

But anyway, the 2006 GSS had an item that solicited respondents' attitudes toward "industry" vs. "university scientists." 

Well, "we all know" that conservatives hold university scientists in contempt for their effeminate, elitist ways & that liberals regard industry scientists as shills.

But here's what GSS says about partisanship & industry vs. university scientists . . . .

WEKS strikes again? Or is this just more survey artifact?

Maybe this ...

is more informative?  Or will people w/ different priors just disagree about the practical significance of this difference in the probability of finding industry scientists less reliable than university ones?...


What can we *really* conclude from the GSS's 2010 item on the risk of GM/GE crops? An expert weighs in

Never fails! My posts from “yesterday”™ and “the day before yesterday”™ have lured a real, bona fide expert to come forward. The expert in this case is William Hallman, the Chair of the Department of Human Ecology and faculty member of the Department of Nutritional Sciences and of the Bloustein School of Planning and Public Policy at Rutgers University. He is also currently a Resident Scholar in the Science of Science Communication initiative at the Annenberg Public Policy Center.

William Hallman:

As you probably suspect, I am sympathetic to your argument that because so few Americans really know anything about them, asking people about the safety of GM crops is problematic in general.  So, starting with the premise that most Americans are unlikely to have a pre-formed opinion about the safety of GM crops before being asked to think about the issue in the survey, I think that we should assume that most of the answers given to the question are impressionistic, and likely influenced by the wording of the question itself. Which is:

“Do you think that modifying the genes of certain crops is: ‘Extremely dangerous for the environment . . . Not dangerous at all for the environment.’”

I agree with the idea suggested by @Joshua, that because the risk targeted is “danger to the environment,” it is plausible that the differences seen are because conservative Republicans may be less likely to endorse the idea that anything is dangerous for the environment.  If you were to ask about risks to human health, you might get a different pattern of responses.

But that’s not all.  The root of the question refers to crops—that is, to plants/agriculture, and not to food.  So, are conservative Republicans also less likely to view crops/agriculture as a threat to the environment in general? My guess is ‘probably,’ but I don’t have good data to back up that assertion.

But wait, there’s more. . .  The question doesn’t actually refer to GMOs.  It asks whether modifying the genes of crops is dangerous.  I don’t know where the specific question falls in the overall line of questioning.  Were there questions about GMOs preceding it?  If not, participants may not have grasped that the question was really about genetic engineering.  Technically, you can “modify the genes of certain crops” through standard crossbreeding/hybridization methods. It is, in part, why the FDA has never liked the broad term “Genetic Modification.”  If the question had asked, “Do you think the genetic engineering of crops is dangerous for the environment,” I think you would get a different pattern of responses.  As a side note, I have ancient data showing that more than a decade ago, Americans were as likely to approve of foods produced through crossbreeding as of foods produced through genetic engineering.


More GM food risk data to chew on--compliments of GSS

Okay, then. 

Here are some simple data analyses that reflect how a wider range of GSS science-attitude variables relate to perceptions that GM crops harm the environment, and how that relationship is affected by partisanship.

I’d say they tell basically the same story as my initial analysis of CONSCI, the item that measures “confidence” in “those running” the “scientific community”: basically, that higher, pro-science scores on these measures are associated with less concern about GM crops. This is so particularly among right-leaning respondents; indeed, left-leaning ones don't really move at all when one looks at risk perceptions in relation to the composite "proscience" scale.

There is also a small zero-order correlation (r(1189) = -0.12, p < 0.01) between GENEGEN—the GSS’s 2010 GM risk perception item—and the composite left-right scale that I constructed, which is coded so that higher scores denote greater conservatism.
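For readers playing along at home, here is how a zero-order correlation of that form is computed. The data below are simulated purely to illustrate the mechanics; the direction and size of the real relationship are whatever the GSS data say, not what this toy produces.

```python
import numpy as np

# r(1189) reports degrees of freedom = N - 2, so N = 1191 respondents.
rng = np.random.default_rng(0)
n = 1191

conserv = rng.normal(size=n)  # stand-in composite left-right score
# Toy 5-point outcome built to covary weakly with the scale
genegen = np.clip(np.round(3 + 0.4 * conserv + rng.normal(size=n)), 1, 5)

# "Zero-order" = the plain bivariate Pearson r, with nothing partialed out
r = np.corrcoef(conserv, genegen)[0, 1]
print(f"r({n - 2}) = {r:.2f}")
```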

All of this is out of keeping with the usual finding of a lack of partisan influence on GM food risks. I have reported many times that there is no partisan effect when GM food risks are measured with the Industrial Strength Risk Perception measure.  Surveys conducted by other opinion analysts using different measures have shown the same thing.

So what’s going on?

One possibility, suggested by loyal listener @Joshua, is that the GSS’s GM-concern item looks at people’s anxiety about the impact of GM crops on “the environment” as opposed to the safety of consuming GM foods.  The “environmental risk” cue is enough information for the public—which is otherwise pretty clueless (“cueless”?) about GM risks—to recognize how the issue ought to cohere with their political outlooks.

Seems persuasive to me . . . but what do you—the 14 billion daily readers of this blog—think?!

Oh, one more thing: I did a quick search and found only one paper that addresses partisanship and the GSS’s “GENEGEN” item. If others know of additional ones, please let me & all the readers know.

Oh, one more "one more" thing. Here are the raw data:



More GSS "science attitudes" measures & their effect on perception of GM crop risks

So on reflection, what I posted here now seems less cogent than I had thought.  I'm sending it back to the shop so that it can be replaced with something more enlightening on the various additional (that is, in addition to "CONSCI") GSS "science attitude"  items and concern over GM crop risks.


Trust in science & perceptions of GM food risks -- does the GSS have something to say on this?

So here’s something fun.

I found it while scraping the bottom of the barrel of a can of GSS data that I had consulted to prepare remarks on the role of “trust in science/of scientists” that I gave at a National Academy of Sciences event a couple of weeks ago.

The GSS has a variety of measures that could be construed as measuring “trust” of this sort. The most famous one is its “Confidence in Institutions” query. It solicits how much “confidence” respondents have in  “those running” various  institutions, including the “science community.” The item permits one of three responses: “hardly any,” “only some,” and “a great deal.”

The wording is kind of odd, but the item is a classic, having been included in every GSS study conducted since 1974.  One can find dozens of studies that use it for one thing or another, including proofs of partisan differences in trust of science.

Well, it turns out that in 2010, the GSS asked this question:

Do you think that modifying the genes of certain crops is:

1. Extremely dangerous for the environment

2. Very dangerous

3. Somewhat dangerous

4. Not very dangerous

5. Not dangerous at all for the environment.

So I decided to see what would happen when one uses trust in science, as measured by the institutional confidence item, to predict responses to the genetically modified crops item. I also included a political orientation measure formed by aggregating responses to the GSS’s 7-point liberal-conservative ideology item and its 7-point party identification item.

In my analysis, I measured the probability  that a respondent would select a response from 1-3—the ones that evince a perception of danger.
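A minimal sketch of that pipeline on simulated data follows. The variable names and codings are assumptions on my part, and the estimates reported below come from a regression model (with a partisanship × confidence interaction), not from raw cell proportions like these; the sketch just shows the contrast being estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Composite political orientation: z-score the 7-point ideology and
# 7-point party-identification items, then average (higher = more conservative).
ideology = rng.integers(1, 8, n).astype(float)
party = rng.integers(1, 8, n).astype(float)
z = lambda x: (x - x.mean()) / x.std()
conserv = (z(ideology) + z(party)) / 2

# GENEGEN: 1 = "extremely dangerous" ... 5 = "not dangerous at all".
# Responses 1-3 are coded as perceiving danger.
genegen = rng.integers(1, 6, n)
danger = (genegen <= 3).astype(int)

# CONSCI: 0 = "hardly any", 1 = "only some", 2 = "a great deal".
consci = rng.integers(0, 3, n)

# Contrast of interest: among right-leaning respondents, the share
# perceiving danger at "a great deal" vs. "hardly any" confidence.
right = conserv > 0
hi = danger[right & (consci == 2)].mean()
lo = danger[right & (consci == 0)].mean()
print(f"difference in Pr(danger): {hi - lo:+.2f}")
```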

Here’s what I found:



I hadn’t expected partisan identity to matter at all, given that surveys now typically find no meaningful correlation between attitudes toward GM foods and party identity. You can see, though, that there is a bit of a partisan effect here, with right-leaning respondents inclined to find less danger in GM crops as their “confidence” in the “scientific community” increases.  For a "conservative Republican," the estimated difference in the probability of finding GM crops to be environmentally dangerous at the "great deal" vs. "hardly any" response levels is -18% (+/- 15% at 0.95 LOC).

Left-leaning respondents, in contrast, don't budge a centimeter as their science-community “confidence” increases (-3%, +/- 12%). 

What should we make of this, if anything? 

I’m not sure, actually.  I still am inclined to see responses to GM food questions as meaningless—a survey artifact—given how few people are actually aware of what GM foods are. Obviously, here, the level of concern expressed is way out of line with people’s behavior in consuming prodigious amounts of GM food.

We also don’t have any decent validation of the “confidence in science” measure: I’ve never encountered it being used to explain other attitudes in a way that would give one confidence that it really measures trust in science. The same goes, moreover, for all the other “trust” measures in the GSS, which consistently find high levels of trust in science among politically diverse citizens.

But maybe this finding should nudge me in the other direction?

You  tell me what you think & maybe I’ll revise my view!




Check it out -- a matchmaking site for scholarly collaborators!

This is pretty cool.  I'm going to go out on a limb & predict it will eventually be bought out by one of the on-line dating services, which will then offer one-stop shopping for scholars looking for professional & personal matches.


Potential Zika polarization in pictures

Maybe you can get the gist of the experiment in pictures?  If not, you can always read the (open-source) paper (Kahan, D.M., Jamieson, K.H., Landrum, A. & Winneg, K., Culturally antagonistic memes and the Zika virus: an experimental test, J. Risk Res. 20, 1-40 (2017)).

A model

Toxic memes ...

Affective impacts ...

Information-processing degradation




The trust-in-science *particularity thesis* ... a fragment

From something I'm working on . . . .

It is almost surely a mistake to think that highly divisive conflicts over science are attributable to general distrust of science or scientists.  Most Americans—regardless of their cultural identities—hold scientists in high regard, and don’t give a second’s thought to whether they should rely on what science knows when making important decisions.  The sorts of  disagreements we see over climate change and a small number of additional factual issues stem from considerations particular to those issues (National Research Council 2016). The most consequential of these considerations are toxic memes, which have transformed positions on these issues into badges of membership in and loyalty to competing cultural groups (Kahan et al 2017; Stanovich & West 2008).

We will call this position the “particularity thesis.”  We will distinguish it from competing accounts of how “attitudes toward science” relate to controversy on policy-relevant facts. We’ve already adverted to two related ones: the “public ambivalence” thesis, which posits a widespread public unease toward science or scientists; and the “right-wing anti-science” thesis, which asserts that distrust of science is a consequence of holding a conservative political orientation or like cultural disposition. . . .


Kahan, D.M., K.H. Jamieson, A. Landrum & K. Winneg, 2017. Culturally antagonistic memes and the Zika virus: an experimental test. Journal of Risk Research, 20(1), 1-40.

National Research Council 2016. Science Literacy: Concepts, Contexts and Consequences. A Report of the National Academies of Science, Engineering and Medicine. Washington DC: National Academies Press.

Stanovich, K. & R. West, 2008. On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning, 14, 129-67.


Some more canned data on religiosity & science attitudes

As I mentioned, in putting together a show for the National Academy of Sciences, I took a look at the 2014 GSS data.  

Here's a bit more of what's in there:

Actually, the left-hand panel is based on GSS 2010 data. But I hadn't looked at that particular item before.

The right-hand panel is based on GSS 2008, 2010, 2012, & 2014.  It is an update of a data display I created before the 2014 data (the most recent that has been released by the GSS) were available.

If, as is reasonable, you want confirmation that the underlying scales I've constructed reliably measure the disposition that we independently have good reason to associate with religiosity, here is how these survey respondents respond to the GSS's "evolution" item:
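(Another generic way to check that a set of items hangs together as a single scale is an internal-consistency statistic such as Cronbach's α. The sketch below uses simulated indicators of a made-up latent disposition, not the actual GSS religiosity items.)

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, cols = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Three noisy indicators of one simulated latent disposition
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
items = np.column_stack(
    [latent + rng.normal(scale=0.8, size=500) for _ in range(3)]
)
print(round(cronbach_alpha(items), 2))  # high internal consistency by construction
```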

I still find it astonishing that there isn't a more meaningful difference in the attitudes of religious & non-religious respondents on the "science attitude" measures.  Guess I had a case of WEKS on this.  

But these data do reinforce my view that religion is not the enemy of the Liberal Republic of Science.

There are  much more serious destructive forces to worry about . . . .