
External validity of climate-science-communication studies: ruminations part 1

The following is an excerpt from a paper I'm writing.  One of the paper's central themes is external validity.

Roughly, "internal validity" refers to the quality of a design that warrants drawing inferences from the results to what is going on in the study. "External validity" refers to the quality of a design that warrants drawing inferences from the results of a study to the real-world phenomenon the study is supposed to be engaging or modeling.

I'm convinced that the study and practice of climate-science communication both reflect insufficient attention to external validity issues, and that this disregard is significantly diminishing the effectiveness of -- and wasting the resources committed to -- communicating climate science.

I'll post the paper in the near future. It has some cool data in it!

But in the meantime, I'll post a few bits -- somewhere between 2 and 17 -- as blog posts.

* * *

3. What does “belief in” global warming measure?

Just as we can use empirical methods to determine that “belief in evolution” measures “who one is” rather than “what one knows,” so we can use these methods to assess what “belief in global warming” measures. An illuminating way to start is by seeing what a valid measure of “belief in global warming” looks like.

Figure 3 presents a scatter plot of the responses to a survey item that asked respondents (1800 members of a nationally representative sample) to rate “how much risk … global warming poses to human health, safety, or prosperity” in “our society.” The item, which I’ll call the “Industrial Strength Measure” (ISM), used an eight-point response scale, running from “none at all” to “extremely high risk,” with each point in between assigned a descriptive label. The survey participants are arrayed along the y-axis in relation to their score on “Left_Right,” a reliable (α = 0.78) composite scale formed by aggregating their responses to a seven-point “party self-identification” measure (“Strong Republican” to “Strong Democrat”) and a five-point “ideology” one (“Very liberal” to “Very conservative”). The color-coding of the observations—orange to red for higher risk ratings, yellow for middling ones, and green to blue for lower ones—helps to reveal the strength of the correlation between the global-warming risk ISM and left-right political outlooks.
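The mechanics of forming a composite scale of this kind are straightforward: standardize each item and average. A minimal sketch in Python (the toy responses, the direction of coding, and the variable names are mine for illustration, not the study's data):

```python
import statistics

def zscore(xs):
    """Standardize a list of responses to mean 0, standard deviation 1."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical respondents: a 7-point party self-identification item
# and a 5-point ideology item (higher = further toward one pole)
party_id = [1, 2, 6, 7, 4, 3, 5, 6]
ideology = [1, 1, 4, 5, 3, 2, 4, 5]

# Composite political-outlook score: mean of the standardized items
left_right = [(p + i) / 2 for p, i in zip(zscore(party_id), zscore(ideology))]
```

Averaging this way assumes both items tap the same latent outlook; the scale's reliability (the α = 0.78 reported above) is what licenses that assumption.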

Exactly how “strong,” though, is that correlation? An r of -0.65 might intuitively seem pretty big, but determining its practical significance requires a meaningful benchmark. As it turns out, subjects’ responses to the party self-identification and liberal-conservative ideology items are correlated with each other to almost exactly the same degree (r = 0.64, p < 0.01). So in this nationally representative sample, perceptions of the risk of global warming are as strongly associated with respondents’ left-right political outlooks as the indicators of their political outlooks are with one another.
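For readers who want to check a benchmark like this themselves, Pearson's r is easy to compute directly. A sketch with invented toy numbers (not the study's data):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two response vectors."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy illustration: risk ratings fall as right-leaning scores rise,
# producing a strong negative correlation
left_right = [-2, -1, 0, 1, 2]
ism_risk = [7, 6, 5, 3, 1]
r = pearson_r(left_right, ism_risk)
```

The point of the benchmark in the text is that a single r has no intuitive scale of its own; comparing it to the correlation between two items everyone agrees measure the same thing gives it practical meaning.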

We could thus combine the global-warming ISM with the party-identification and liberal-conservative ideology items to create an even more reliable political outlook scale (α = 0.81), one with which we could predict with even greater accuracy people's positions on issues like Obamacare and Roe v. Wade.  From a psychometric perspective, all three of these items are measuring the same thing—a latent (unobserved) disposition that causes different groups of people to adopt coherent sets of opposing stances on political matters (DeVellis 2012).

The global-warming ISM has another interesting property, one it shares with ISMs for other putative hazards: it coheres very strongly with professed beliefs about the facts relevant to assessing the specified risk source (Dohmen et al. 2011; Ganzach et al. 2008; Weber et al. 2002). “Cronbach’s α” is a conventional measure of scale reliability that ranges from 0.0 to 1.0; a score of 0.70 is generally regarded as signifying that a set of indicators displays the requisite degree of intercorrelation to qualify as a measure of some underlying latent variable. When the global-warming ISM is combined with items measuring whether people believe that “average global temperatures are increasing,” that “[h]uman activity is causing global temperatures to rise,” and that global warming will result in various “bad consequences for human beings” if not “counteracted,” the resulting scale has a Cronbach’s α of 0.95. These “belief” items, then, can also be viewed as measuring the same thing as the “risk seriousness” item—viz., a latent disposition to form coherent sets of beliefs about the facts and consequences of climate change.
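Cronbach's α itself is just a function of the item variances and the variance of respondents' summed scores. A minimal sketch (the formula is the standard one; the toy "belief" responses are hypothetical, not the study's data):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of responses
    from the same respondents in the same order:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_var_sum = sum(statistics.pvariance(it) for it in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent summed score
    return (k / (k - 1)) * (1 - item_var_sum / statistics.pvariance(totals))

# Hypothetical, tightly intercorrelated "belief" items
beliefs = [[1, 2, 5, 6],
           [1, 3, 5, 7],
           [2, 2, 6, 6]]
alpha = cronbach_alpha(beliefs)
```

On intercorrelated items like these, α comes out high (here above 0.9), the same pattern the text reports for the actual belief items; on uncorrelated items it falls toward zero.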

Not surprisingly—indeed, as a matter of simple logic—there is a comparably high degree of coherence between “belief in climate change” and political outlooks. In this sample, some 75% of the individuals whose scores placed them to the “left” of the mean on the political outlook scale indicated that they believe human activity is the primary source of global warming. Only 22% of those whose scores placed them to the “right” of the mean indicated that they believed that, and 58% of them indicated that they did not believe there was “solid evidence that the average temperature on earth has been getting warmer over the past few decades.” These figures are in accord with ones consistently reported by scholars and public opinion research centers for over a decade.

Nevertheless, advocacy groups regularly report polls that paint a very different picture. “A new study,” their press releases announce, shows that “an overwhelming majority of Americans”—“Blue State and Red ones alike,” enough “to swing” an upcoming presidential election, etc.—“support taking action” immediately to combat global warming. Disturbingly, the producers of such polls do not always release information about the survey’s wording or the (mysteriously characterized) methods used to analyze the responses. But when they do, informed observers point out that the questions posed were likely to confuse, mislead, or herd the survey respondents toward desired answers (Kohut 2010).

Given the source of these surveys, one could infer that they reflect an advocacy strategy aimed at fostering “overwhelming majority support” for “action on climate change” by insisting that such support already exists. If so, the continued generation of these surveys itself displays determined inattention to over a decade’s worth of real-world evidence showing that advocacy polls of this sort have failed to dissipate the deep partisan conflict measured by various straightforward items relating to global warming.

Indeed, that is the key point: items that show “an overwhelming majority of Americans” believe or support one thing or another relating to climate change are necessarily not measuring the same thing as items that cohere with ISM. The question, then, is simply which items—ones that cohere with one another and with ISM and that attest to polarization over climate change, or ones that do not cohere with anything in particular and that report a deep bipartisan consensus in favor of “taking action”—are more meaningfully tracking the real-world phenomena of interest. Unless one is prepared to conclude that the latent disposition that causes coherent responses to political outlook items and to the various global warming “belief” and risk-perception items is irrelevant for making sense of public opinion over climate change in the United States, it follows that the survey questions that do not cohere with those ones are the irrelevant ones.

Serious opinion scholars know that when public-policy survey items are administered to a general population sample, it is a mistake to treat the responses as valid and reliable measures of the particular positions or arguments those items express. One can never be sure that an item is being understood as intended. In addition, if, as is so for most concrete policy issues, the items relate to an issue that members of the general population have not heard of or formed opinions on, then the responses are not modeling anything that people in the general population are thinking in their everyday world; rather, they are modeling only how such people would respond in the strange, artificial environment they are transported into when a pollster asks them to express positions not meaningfully connected to their lives (Bishop 2005; Shuman 1998).

Of course many public policy issues are ones on which people have reflected and adopted stances of meaning and consequence to them.  But even in that case, responses to survey items relating to those issues are not equivalent to statements or arguments being asserted by a participant in political debate.  The items were drafted by someone else and thrust in front of the survey participants; their responses consist of manifestations of a pro- or con- attitude, registered on a coarse, contrived metric.

Because the response to any particular item is at best only a noisy indicator of that attitude, the appropriate way to confirm that an item is genuinely measuring anything, and to draw inferences about what that is, is to show that responses to it cohere with other things (responses to other items, behavior, performance on objective tests, and so forth) the meaning of which is already reasonably understood. Whatever that item does measure, moreover, can be measured more precisely when that item is appropriately combined into a scale with others that measure that same thing (Bishop 2005; Zaller 1992; Berinsky & Druckman 2007; Gliem & Gliem 2003).

The striking convergence of items measuring perceptions of global warming risk and like facts, on the one hand, and ones measuring political outlooks, on the other, suggests they are all indicators of a single latent variable. The established status of political outlooks as indicators of cultural identity supports the inference that that is exactly what that latent variable is. Indeed, the inference can be made even stronger by replacing or fortifying political outlooks with even more discerning cultural identity indicators, such as cultural worldviews and their interaction with demographic characteristics such as race and gender: the resulting latent measures of identity will be even more strongly correlated with climate change risk perceptions and related attitudes (McCright & Dunlap 2012; Kahan et al. 2012). In sum, whether people “believe in” climate change, like whether they “believe in” evolution, expresses who they are.

Part 2

Part 3


Berinsky, A.J. & Druckman, J.N. The Polls—Review: Public Opinion Research and Support for the Iraq War. Public Opin Quart 71, 126-141 (2007).

Bishop, G.F. The Illusion of Public Opinion : Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

DeVellis, R.F. Scale development : theory and applications (SAGE, Thousand Oaks, Calif., 2012).

Dohmen, T., Falk, A., Huffman, D., Sunde, U., Schupp, J. & Wagner, G.G. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011).

Ganzach, Y., Ellis, S., Pazy, A. & Ricci-Siag, T. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008).

Gliem, J.A. & Gliem, R.R. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. (Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, The Ohio State University, Columbus, OH, 2003).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kohut, A. Views on climate change: What the polls show. N.Y. Times A22 (June 13, 2010).

McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, 1-16 (2012).

Shuman, H. Interpreting the Poll Results Better. Public Perspective 1, 87-88 (1998).

Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002).

Zaller, J.R. The Nature and Origins of Mass Opinion (Cambridge Univ. Press, Cambridge, England, 1992).
