
Recent blog entries
Monday
Jul 28, 2014

Undertheorized and unvalidated: Stocklmayer & Bryant vs. NSF Indicators “Science literacy” scale part I

The paper isn’t exactly hot off the press, but someone recently lowered my entropy by sending me a copy of Stocklmayer, S. M., & Bryant, C. Science and the Public—What should people know?, International Journal of Science Education, Part B, 2(1), 81-101 (2012)

Cool article!

The piece critiques the NSF’s Science Indicators “factual knowledge” questions.

As is well known to the 9.8 billion readers of this blog (we’re down another couple billion this month; the usual summer-holiday lull, I’m sure), the Indicators battery is pretty much the standard measure for public “science literacy.”

The NSF items figure prominently in the scholarly risk perception/science communication literature. 

With modest additions and variations, they also furnish a benchmark for various governmental and other official and semi-official assessments of “science literacy” across nations and within particular ones over time.

I myself don’t think the Indicators battery is invalid or worthless or anything like that.

But like pretty much everyone I know who uses empirical methods to study public science comprehension, I do find the scale unsatisfying. 

What exactly a public science comprehension scale should measure is itself a difficult and interesting question. But whatever answer one chooses, there is little reason to think the Indicators’ battery could be getting at that.

The Indicators battery seems to reduce “science literacy” to a sort of catechistic assimilation of propositions and principles: “The earth goes around the sun, not the other way ’round” [check]; “electrons are smaller than atoms” [check]; “antibiotics don’t kill viruses—they kill bacteria!” [check!].

We might expect that an individual equipped to reliably engage scientific knowledge in making personal life decisions, in carrying out responsibilities inside of a business or as part of a profession, in participating in democratic deliberations, or in enjoying contemplation of the astonishing discoveries human beings have made about the workings of nature will have become familiar with all or most of these propositions.

But simply being familiar with all of them doesn’t in itself furnish assurance that she’ll be able to do any of these things.

What does is a capacity—one consisting of the combination of knowledge, analytical skills, and intellectual dispositions necessary to acquire, recognize, and use pertinent scientific or empirical information in specified contexts.  It’s hardly obvious that a high score on the NSF’s “science literacy” test (the mean number of correct responses in a general population sample is about 6 of 9) reliably measures any such capacity—and indeed no one to my knowledge has ever compiled evidence suggesting that it does.

This—with a lot more texture, nuance, and reflection blended in—is the basic thrust of the S&B paper.

The first part of S&B consists of a very detailed and engaging account of the pedigree and career of the Indicators’ factual-knowledge items (along with various closely related ones used to supplement them in large-scale recurring public data collections like the Eurobarometer).

What’s evident is how painfully innocent of psychometric and basic test theory this process has been.

The items, at least on S&B’s telling, seem to have been selected casually, more or less on the basis of the gut feelings and discussions of small groups of scientists and science authorities.

Aside from anodyne pronouncements on the importance of “public understanding of science” to “national prosperity,” “the quality of public and private decision-making,” and “enriching the life of the individual,” they made no real effort to articulate the ends served by public “science literacy.” As a result, they offered no cogent account of the sorts of knowledge, skills, dispositions, and the like that securing the same would entail.

Necessarily, too, they failed to identify the constructs—conceptual representations of particular skills and dispositions—an appropriately designed public science comprehension scale should measure. 

Early developers of the scale reported Cronbach’s alpha and like descriptive statistics, and even performed factor analysis that lent support to the inference that the NSF “science literacy” scale was indeed measuring something.
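(For readers who haven’t run into these statistics: here is a minimal sketch, in Python with made-up responses, of what Cronbach’s alpha—the internal-consistency coefficient those early developers reported—actually computes. Everything in it is purely illustrative; it is not the Indicators data or scoring code.)

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of item scores."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                          # number of items in the scale
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 true/false items (1 = correct)
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # higher = more internally consistent
```

A high alpha tells you the items hang together; as the next paragraph notes, it cannot tell you that what they hang together around is the thing you care about.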

But without any theoretical referent for what the scale was supposed to measure and why, there was necessarily no assurance that what was being measured by it was connected to even the thinly specified objectives its proponents had in mind.

So that’s the basic story of the first part of the S&B article; the last part consists in some related prescriptions.

Sensibly, S&B call for putting first things first: before developing a measure, one must thoughtfully (not breezily, superficially) address what the public needs to know and why: what elements of science comprehension are genuinely important in one or another of the contexts, to one or another of the roles and capacities, in which ordinary (nonexpert) members of the public make use of scientific information?

S&B suggest, again sensibly, that defensible answers to these questions will likely support what the Programme for International Student Assessment characterizes as an “assets-based model of knowledge” that emphasizes “the skills people bring to bear on scientific issues that they deal with in their daily lives.”  (Actually, the disconnect between the study of public science comprehension and the vast research that informs standardized testing, which reflects an awe-inspiring level of psychometric sophistication, is really odd!) 

Because no simple inventory of “factual knowledge” questions is likely to vouch for test takers’ possession of such a capacity, S&B propose simply throwing out the NSF Indicators battery rather than simply supplementing it (as has been proposed) with additional "factual knowledge" items on “topics of flight, pH, fish gills, lightning and thunder and so on.”

Frankly, I doubt that the Indicators battery will ever be scrapped. By virtue of sheer path dependence, the Indicators battery confers value as a common standard that could not easily, and certainly not quickly, be replaced. 

In addition, there is a collective action problem: the cost of generating a superior, “assets-based” science comprehension measure—including not only the toil involved in the unglamorous work of item development, but also the need to forgo participating instead in exchanges more central to the interest and attention of most scholars—would be borne entirely by those who create such a scale, while the benefits of a better measure would be enjoyed disproportionately by other scholars who’d then be able to use it.

I think it is very possible, though, that the NSF Indicators battery can be made to evolve toward a scale that would have the theoretical and practical qualities that S&B call for.

As they investigate particular issues (e.g., the relationship between science comprehension and climate change polarization), scholars will likely find it useful to enrich the NSF Indicators battery through progressive additions and supplementations, particularly with items that are known to reliably measure the reasoning skills and dispositions necessary to recognize and make use of valid empirical information in everyday decisionmaking contexts.

That, anyway, is the sort of process I see myself as trying to contribute to by tooling around with and sharing information on an “Ordinary science intelligence” instrument for use in risk perception and science communication studies.

Even that process, though, won’t happen unless scholars and others interested in public science comprehension candidly acknowledge the sorts of criticisms S&B are making of the Indicators battery; unless they have the sort of meaningful discussion S&B propose about who needs to know what about science and why; and unless scholars who use the Indicators battery in public science comprehension research explicitly address whether the battery can reasonably be understood to be measuring the forms of knowledge and types of reasoning dispositions on which their own analyses depend.

So I am really glad S&B wrote this article!

Nevertheless, “tomorrow,” I’ll talk about another part of the S&B piece—a survey they conducted of 500 scientists to whom they administered the Indicators’ “factual knowledge” items—that I think is very very cool but actually out of keeping with the central message of their paper! 

Thursday
Jul 24, 2014

How to achieve "operational validity"? Translation Science!

Never fails! By recklessly holding forth on a topic that is obviously more complicated than I am making it out to be, I have again provoked a reflective, informed response from someone who really knows something! 

Recently I dashed off a maddeningly abstract post on the “operational validity” of empirical science communication studies. A study has “high” operational validity, I suggested, if it furnishes empirical support for a science-communication practice that real-world actors can themselves apply and expect to work more or less “as is”; such a study has “low operational validity” if additional empirical studies must still be performed (likely in field rather than lab settings) before the study’s insights, as important as they might be, can be reliably brought to bear on one or another real-world science communication problem. 

I wanted to distinguish the contribution that this concept, adapted from managerial studies (Schellenberger 1974), makes to assessment of a study’s practical value from those made by assessments of the study’s “internal” and “external” validity.  

For a study to be of practical value, we must be confident from the nature of its design that its results can be attributed to the mechanisms the researcher purports to be examining and not some other ones (“internal validity”).  In addition, we must be confident that the mechanisms being investigated are ones of consequence to the real-world communication dynamics that we want to understand and influence—that the study is modeling that and not something unrelated to it (“external validity”).

But even then, the study might not tell real-world communicators exactly what to do in any particular real-world setting.  

Indeed, to be confident that she had in fact isolated the relevant mechanisms, and was genuinely observing their responsiveness to influences of interest, the researcher might well have resorted (justifiably!) to devices intended to disconnect the study from the cacophony of real-world conditions that account for our uncertainty about these things in everyday life.

In this sense, low operational validity is often built into strategies for assuring internal and external validity (particularly the former).

That’s not bad, necessarily.

It just means that even after we have gained the insight that can be attained from a study that has availed itself of the observational and inferential advantages furnished by use of a simplified “lab” model, there is still work to be done—work to determine how the dynamics observed in the lab can reliably be reproduced in any particular setting.  We need at that point to do studies of higher “operational validity” that build on what we have learned from lab studies.

How should we go about doing studies that add high operational validity to the insights gained “in the lab”?

Science communication scholar Neil Stenhouse has something to say about that!

--dmk38

How to achieve operational validity: Translation Science

Neil Stenhouse

It is very unlikely that any real organization would want to use the stimuli from a messaging study, for example, without at least a few substantial changes. They would certainly want their organization to be identified as the source of the message. These changes would influence the effect the messages had on their audience. What kind of changes would the organization want to make? How much would that change the effects of the message? How could the message be made acceptable and usable by these organizations, yet still retain the effectiveness it had in previous experiments?

Communication practitioners wanting to put social science insights to use could very well ask questions like: how do you use the insights of cultural cognition experiments to design an effective large-scale messaging campaign for the Environmental Defense Fund? Alternatively, how do you use these insights to design a town hall meeting on climate change in Winchester, VA? How could you take a short passage about geoengineering, for example, that had a depolarizing effect on hierarchs and egalitarians (Kahan et al., 2012), and design a meeting that had a similar depolarizing effect? And if you did so, how well would it work? 

I recently wrote a paper about research designed to answer questions like these (Stenhouse, 2014). It turns out that at least in one discipline, people are already doing a substantial amount of research that not only tests which kinds of interventions are effective, but also figures out the nitty-gritty of what’s needed to effectively transplant the core of the lab-tested intervention into actual operational use in the real world. It addresses an important part of Dan’s concern with making communication research “evidence-based all the way down” (Kahan, 2013).

In public health, there is a whole subdiscipline – and multiple journals – on what is known as translation science, or implementation science (Glasgow et al., 2012). Researchers in public policy and international development are beginning to address this also (Cartwright & Hardie, 2012; Woolcock, 2013).

Translation science can be summarized with an example of testing an exercise program. With traditional public health research, a research team, often from a university, would design an exercise program, implement it, and measure and carefully document the results. Who lost weight? How much? Do they intend to keep exercising? And so on.

With translation research, as well as these kinds of outcomes, there is an additional focus on recording and describing the things involved in implementing these programs in the field, at scale (Glasgow et al., 1999).

For example, the research team might take their exercise program to a sample of the kinds of organizations that would be delivering the intervention if its use actually became widespread – e.g. hospital staff, community health organizations, church recreation group organizers (Bopp et al., 2007). The researchers would aim to answer questions like: how many of the organizations we approached actually wanted to implement the intervention?

Some organizations might be against it, for cost reasons, or political reasons (e.g. perhaps a hospital’s doctors have pre-existing arrangements with the providers of another intervention).

When an organization agrees to use an intervention, do they implement it correctly? Perhaps the intervention has multiple complex steps, and busy hospital staff may occasionally make errors that cause the intervention to be ineffective.

In short, traditional tests measure whether something works in the lab, under ideal, controlled conditions. Translation science measures whether something works in the real world, under typical real-world conditions (Flay, 1986; Glasgow et al., 2003). And in addition, by measuring the things that can be expected to affect whether it works in the real world – such as whether organizations like it, or how easy it is to implement – translation science can help figure out how to make interventions more likely to work in the real world.

For example, if researchers find out that an intervention is difficult for hospital staff to implement, and find out precisely which part is most difficult to understand, then they might be able to find a way of making it simpler without compromising the efficacy of the intervention.

Translation science provides the “operational validity” Dan was talking about. It answers questions like: What does it even look like when you try to put the results of experiments into real-world practice? How do you do that? What goes wrong? How can you fix it so it works anyway?

These kinds of questions are important for anyone who wants their insights to be applied in the real world – and especially important if you want them to be applied at scale. I think many researchers on climate communication would be in the latter category. While good traditional research can help us understand a lot about human psychology and behavior, it only does part of the job in putting that knowledge to use.

One question likely to come up is: Why should social scientists do this work, as opposed to the practitioners themselves?

I argue that they should do this work for the same reasons they should do any work – their skill in recording, conceptualizing and describing social processes (Stenhouse, 2014).

If we want rigorous, generalizable, cumulative knowledge about human behavior, we need social scientists. If we want rigorous, generalizable, cumulative knowledge about how to apply social interventions, we need social scientists there too. We need people who understand both the inner workings of the intervention and the context in which it is deployed, so that they can effectively negotiate between the two in creating the optimal solution.

Questions about division of labor here are certainly open to debate. Should all social scientists doing work with an applied purpose do some translation research? Should some specialize in lab work, and others in translation science, and occasionally collaborate?

These questions, as well as questions about how to shift academic incentives to reward translation science adequately, remain to be decided.

However, I would argue that especially in areas with urgent applied purposes, people are currently not doing nearly enough of this kind of work. We want our findings to be applied in the real world. Currently there are gaps in our knowledge of how to translate our findings to the real world, and other disciplines provide practical ideas for how to fill those gaps in our knowledge. We are not doing our jobs properly if all of us refuse to try taking those steps.

Neil Stenhouse (nstenhou@gmu.edu) is a PhD candidate from the George Mason University Center for Climate Change Communication.

Bopp, M., Wilcox, S., Hooker, S. P., Butler, K., McClorin, L., Laken, M., ... & Parra-Medina, D. (2007). Using the RE-AIM framework to evaluate a physical activity intervention in churches. Preventing Chronic Disease, 4(4).

Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford University Press.

Flay, B. R. (1986). Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine, 15(5), 451-474.

Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health, 93(8), 1261-1267.

Glasgow, R. E., Vinson, C., Chambers, D., Khoury, M. J., Kaplan, R. M., & Hunter, C. (2012). National Institutes of Health approaches to dissemination and implementation science: current and future directions. American Journal of Public Health, 102(7), 1274-1281.

Glasgow, R. E., Vogt, T. M., & Boles, S. M. (1999). Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American Journal of Public Health, 89(9), 1322-1327.

Kahan, D. M. (2013). Making climate-science communication evidence-based—all the way down. In Culture, Politics and Climate Change. London: Routledge. Available at: http://papers.ssrn.com/sol3/papers.cfm.

Kahan, D. M., Jenkins-Smith, H., Tarantola, T., Silva, C. L., & Braman, D. (2012). Geoengineering and climate change polarization: Testing a two-channel model of science communication. Annals of the American Academy of Political and Social Science.

Schellenberger, R. E. (1974). Criteria for assessing model validity for managerial purposes. Decision Sciences, 5(4), 644-653. doi: 10.1111/j.1540-5915.1974.tb00643.x

Stenhouse, N. (2014). Spreading success beyond the laboratory: Applying the RE-AIM framework for effective environmental communication interventions at scale. Paper to be presented at the 2014 National Communication Association Annual Convention.

Woolcock, M. (2013). Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation, 19(3), 229-248.


Wednesday
Jul 23, 2014

Constructing an "Ordinary climate science intelligence" assessment: a fragment ...

From Climate Science Communication and the Measurement Problem, Advances in Pol. Psych. (forthcoming):

6.  Measuring what people know about climate science

What do members of the public know about scientific evidence on climate science? Asking whether they “believe in” human-caused climate change does not measure that.  But that does not mean what they know cannot be measured.

a. A disentanglement experiment: the “Ordinary Climate Science Intelligence” instrument. Just as general science comprehension can be measured with a valid instrument, so can comprehension of the science on climate change in particular. Doing so requires items the responses to which validly and reliably indicate test-takers’ climate science comprehension level.

The idea of “climate science comprehension” is hardly straightforward. If one means by it the understanding of and facility with relevant bodies of knowledge essential to doing climate science research, then any valid instrument is certain to show that the level of climate science comprehension is effectively zero in all but a very tiny fraction of the population.

But there are many settings in which the quality of non-experts’ comprehension of much more basic elements of climate science will be of practical concern. A high school science teacher, for example, might aim to impart an admittedly non-expert level of comprehension in students for the sake of equipping and motivating them to build on it in advanced studies. Likewise, without being experts themselves, ordinary members of the public can be expected to benefit from a level of comprehension that enables them reliably to recognize and give proper effect to valid climate science that bears on their decisionmaking, whether as homeowners, businesspeople, or democratic citizens.

Assume, then, that our goal is to form an “ordinary climate science intelligence” (OCSI) instrument.  Its aim would certainly not be to certify possession of the knowledge and reasoning dispositions that a climate scientist’s professional judgment comprises.  It will come closer to the sort of instrument a high school teacher might use, but even here no doubt fall short of delivering a sufficiently complete and discerning measure of the elements of comprehension he or she is properly concerned to instill in students.  What the OCSI should adequately measure—at least this would be the aspiration for it—is a form of competence in grasping and making use of climate science that an ordinary person would benefit from in the course of participating in ordinary decisionmaking, individual and collective.

There are two challenges in constructing such an instrument.  The first and most obvious is the relationship between climate change risk perceptions and individuals’ cultural identities.  To be valid, the items that the assessment comprises must be constructed to measure what people know about climate science and not who they are.

A second, related problem is the potential for confounding climate science comprehension with an affective orientation toward global warming risk.  Perceptions of societal risk generally are indicators of a general affective orientation. The feelings that a putative risk source evokes are more likely to shape than be shaped by individuals’ assessments of all manner of factual information pertaining to it (Loewenstein et al. 2001; Slovic et al. 2004).  There is an ambiguity, then, as to whether items that elicit affirmation or rejection of factual propositions relating to climate change are measuring genuine comprehension or instead only the correspondence between the propositions in question and the valence of respondents’ affective orientations toward global warming. Existing studies have found, for example, that individuals disposed to affirm accurate propositions relating to climate change—that burning fossil fuels contributes to global warming, for example—are highly likely to affirm many inaccurate ones—e.g., that atmospheric emissions of sulfur do as well—if those statements evince concern over environmental risks generally (Tobler, Visschers & Siegrist 2012; Reynolds et al. 2010).

Two steps were taken to address these challenges in constructing an OCSI instrument, which was then administered to the same survey participants whose general science comprehension was measured with the OSI scale.  The first was to rely on an array of items the correct responses to which were reasonably balanced between opposing affective orientations toward the risk of global warming.   The multiple-choice item “[w]hat gas do most scientists believe causes temperatures in the atmosphere to rise” (“Carbon”) and the true-false one “human-caused global warming will result in flooding of many coastal regions” (“Floods”) evince concern over global warming and thus could be expected to be answered correctly by respondents affectively predisposed to perceive climate change risks as high. The same affective orientation, however, could be expected to incline respondents to give the incorrect answer to items such as “human-caused global warming will increase the risk of skin cancer in human beings” (“Cancer”) and “the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will reduce photosynthesis by plants” (“Photosynthesis”). By the same token, those respondents affectively disposed to be skeptical of climate change risks could be expected to supply the correct answer to Cancer and Photosynthesis but the wrong ones to Carbon and Floods. The only respondents one would expect to be likely to answer all four correctly are ones who know the correct answers and are disposed to give them independent of their affective orientations.
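(A toy sketch of that balancing logic, in Python: the item labels come from the passage above, but the coding scheme and the three simulated respondents are hypothetical, offered only to make the arithmetic of the design concrete.)

```python
# For each item, is the "climate-concerned-sounding" answer also the scientifically
# correct one? (Per the passage: yes for Carbon and Floods, no for Cancer and Photosynthesis.)
concern_is_correct = {"Carbon": True, "Floods": True,
                      "Cancer": False, "Photosynthesis": False}

def score(gave_concern_consistent_answer):
    """Count correct responses, given whether each answer was concern-consistent."""
    return sum(ans == concern_is_correct[item]
               for item, ans in gave_concern_consistent_answer.items())

# Three hypothetical response strategies:
concerned = {item: True for item in concern_is_correct}    # pure "high-risk" affect
skeptical = {item: False for item in concern_is_correct}   # pure skeptical affect
knowledgeable = dict(concern_is_correct)                   # answers track the science

for label, resp in [("concerned affect only", concerned),
                    ("skeptical affect only", skeptical),
                    ("knowledge-based", knowledgeable)]:
    print(f"{label}: {score(resp)}/4 correct")
# Either purely affective strategy tops out at 2 of 4; only knowledge yields 4 of 4.
```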

The aim of disentangling (unconfounding) affective orientation and knowledge was complemented by a more general assessment-construction tenet, which counsels use of items that feature incorrect responses that are likely to seem correct to those who do not genuinely possess the knowledge or aptitude being assessed (Osterlind 1998). Because the recent hurricanes Sandy and Irene both provoked considerable media discussion of the impact of climate change, the true-false item “[h]uman-caused global warming has increased the number and severity of hurricanes around the world in recent decades” was expected to elicit an incorrect response from many climate-concerned respondents of low or modest comprehension (who presumably would be unaware of the information the IPCC 5th Assessment (2013, I: TS p. 73) relied upon in expressing “low confidence” in “attributions of changes in tropical cyclone activity to human influence” to date, based on “low level of agreement between studies”).  Similarly, the attention furnished in the media to the genuine decrease in the rate at which global temperatures increased in the last 15 years was expected to tempt respondents, particularly ones affectively disposed toward climate-change skepticism, to give the incorrect response to the true-false item “globally averaged surface air temperatures were higher for the first decade of the twenty-first century (2000-2009) than for the last decade of the twentieth century (1990-1999).”

The second step taken to address the distinctive challenge of constructing a valid OCSI assessment was to introduce the majority of items with the clause “Climate scientists believe that  . . . .” The goal was to reproduce the effect of the clause “According to the theory of evolution . . .” in eliminating the response differential among religious and nonreligious individuals to the NSF Indicators’ Evolution item.  It is plausible to attribute this result to the clause’s removal of the conflict relatively religious respondents experience between offering a response that expresses their identity and one that signifies their familiarity with a prevailing or consensus position in science.  It was anticipated that using the “Climate scientists believe” clause (and similar formulations in other items) would enable respondents whose identity is expressed by disbelief in human-caused global warming to answer  OCSI items based instead on their understanding of the state of the best currently available scientific evidence.

To be sure, this device created the possibility that respondents who disagree with climate scientists’ assessment of the best available evidence could nevertheless affirm propositions that presuppose human-caused climate change.  One reason not to expect such a result is that public opinion studies consistently find that members of the public on both sides of the climate debate  don’t think their side’s position is contrary to scientific consensus (Kahan et al. 2011).

It might well be the case, however, that what such studies are measuring is not ordinary citizens’ knowledge of the state of scientific opinion but their commitment to expressing who they are when addressing questions equivalent to “belief in” global warming. If their OCSI responses show that individuals whose cultural identity is expressed by denying the existence of human-caused global warming nevertheless do know what scientists believe about climate change, then this would be evidence that it is the “who are you, whose side are you on” question and not the “what do you know” one that they are answering when they address the issue of global warming in political settings.

Ultimately, the value of the information yielded by the OCSI responses does not depend on whether citizens “believe” what they say they know “climate scientists believe.” Whether they do or not, their answers would necessarily remain valid measures of what such respondents understand to be scientists’ view of the best available evidence. Correct perception of the weight of scientific opinion is itself a critical form of science comprehension, particularly for individuals in their capacity as democratic citizens.  Items that successfully unconfound who are you, whose side are you on from what do you know enable a valid measure of this form of climate science comprehension.

Achieving this sort of decoupling was, it is important to reiterate, the overriding motivation behind construction of the OCSI measure.  The OCSI measure is at best only a proto-assessment instrument. A fully satisfactory “climate science comprehension” instrument would need to be simultaneously broader—encompassing more knowledge domains—and more focused—more calibrated to one or another of the settings or roles in which such knowledge is useful.

But validly assessing climate-science comprehension in any setting will require disentangling knowledge and identity.  The construction of the OCSI instrument was thus in the nature of an experiment—the construction of a model of a real-world assessment instrument—aimed at testing whether it is possible to measure what people know about climate change without exciting the cultural meanings that force them to pick sides in a cultural status conflict.

References

 IPCC. Climate Change 2013: The Physical Science Basis, Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, England, 2013).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as Feelings. Psychological Bulletin 127, 267-287 (2001).

Osterlind, S.J. Constructing test items : multiple-choice, constructed-response, performance, and other formats (Kluwer Academic Publishers, Boston, 1998).

Reynolds, T. W., Bostrom, A., Read, D., & Morgan, M. G. (2010). Now What Do People Know About Global Climate Change? Survey Studies of Educated Laypeople. Risk Analysis, 30(10), 1520-1538. doi: 10.1111/j.1539-6924.2010.01448.x

Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

Tobler, C., Visschers, V.H.M. & Siegrist, M. Addressing climate change: Determinants of consumers' willingness to act and to support policy measures. Journal of Environmental Psychology 32, 197-207 (2012).

Wednesday
Jul 16, 2014

Measuring "ordinary science intelligence": a look under the hood of OSI_2.0

As the 12 billion readers of this blog (we are down 2 billion, apparently because we’ve been blocked in the Netherlands Antilles & Macao. . .) know, I have been working on & reporting various analyses involving an “ordinary science intelligence” (OSI) science-comprehension measure.

Indeed, one post describing how it relates to political outlooks triggered some really weird events—more than once in fact!

But in any case, I’ve now assembled a set of analyses and put them into one document, which you can download if you like here.

The document briefly describes the history of the scale, which for now I’m calling OSI_2.0 to signify that it is the successor of the science comprehension instrument (henceforward “OSI_1.0”) featured in “The polarizing impact of science literacy and numeracy on perceived climate change risks,” Nature Climate Change 2, 732-735 (2012).

Like OSI_1.0, _2.0 is a synthesis of existing science literacy and critical reasoning scales.  But as explained in the technical notes, OSI_2.0 combines items that were drawn from a wider array of sources and selected on the basis of a more systematic assessment of their contribution to the scale’s performance.

The goal of OSI_2.0 is to assess the  capacity of individuals to recognize and give proper effect to valid scientific evidence relevant to their “ordinary” or everyday decisions—whether as consumers or business owners, parents or citizens. 

A measure of that sort of facility with science—rather than, say, the one a trained scientist or even a college or high school science student has—best fits the mission of OSI_2.0, which is to enable “empirical investigation of how individual differences in science comprehension contribute to variance in public perceptions of risk and like facts.”

Here are some of the things you, as a regular reader of this blog who has already been exposed to one or another feature of OSI_2.0, can learn from the document:

1. The items and their derivation.  The current scale consists of 18 items drawn from the NSF Indicators, the Pew Science & Technology battery, the Lipkus/Peters Numeracy scale, and Frederick’s Cognitive Reflection Test.  My next goal is to create a short-form version that performs comparably well; 8 items would be great, and even 10 would be a big improvement. . . . But in any case, the current 18 and their sources are specifically identified.

2. The psychometric properties of the scale.  The covariance structure, including dimensionality and reliability, is set forth, of course.  But the cool thing here, in my view, is the grounding of the scale in Item Response Theory.

There are lots of valid ways to combine or aggregate individual items, conceived of as observable or manifest “indicators,” into a scale conceived of as measuring some unobserved or latent disposition or trait.

The distinctive thing about IRT is the emphasis it puts on assessing how each item contributes to the scale’s measurement precision along the range of the disposition treated as a continuous variable.  This is a nice property, in particular, when one is designing some sort of knowledge or aptitude assessment instrument, where one would like to be confident not only that one is reliably relating variance in the disposition as a whole to some outcome variable of interest but also that one is reliably assessing individual differences in levels of the disposition within the range of interest (usually the entire range).

IRT is a great scale development tool because it helps to inform decisions not only about whether items are valid indicators but how much relative value they are contributing.

One thing you can see with IRT is that, as it is measured by the OSI_2.0 scale at least, the sorts of “basic fact” items (“Electrons are smaller than atoms—true or false?”; “Does the Earth go around the Sun, or does the Sun go around the Earth?”) contribute mainly to measurement discrimination at low levels of “ordinary science intelligence” (see the illustrative sketch at the end of this item).

One gets credit for those, certainly, but not as much as for correctly responding to the sorts of quantitative and critical reasoning items that come from the Numeracy scale and the Cognitive Reflection Test.

That’s as it should be in my view: a person who has the capacity to recognize and make use of valid science will no doubt have used it to acquire knowledge of a variety of basic propositions relating to the physical and biological sciences; but what we care about—what we want to certify and measure—is her ability to enlarge that stock of knowledge and use it appropriately to advance her ends.
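(Here is a minimal sketch of the IRT logic behind those item information curves, using the standard two-parameter logistic model with hypothetical item parameters—not the actual OSI_2.0 estimates—just to show why an easy “basic fact” item adds precision mainly at the low end of the trait while a harder CRT-style item adds it higher up.)

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function: probability of a
    correct response at ability theta, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information a 2PL item contributes at ability level theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)   # range of the latent "ordinary science intelligence"
items = {
    "easy 'basic fact' item": dict(a=1.5, b=-1.5),   # hypothetical parameters
    "harder CRT-style item":  dict(a=1.5, b=1.0),
}

for name, params in items.items():
    info = item_information(theta, **params)
    print(f"{name}: information peaks near theta = {theta[np.argmax(info)]:+.2f}")
# The easy item is most informative at low theta; the harder item, higher up --
# which is why the "basic fact" items discriminate mainly at low OSI levels.
```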

3. External validity. The technical notes report analyses that show that OSI_2.0 is, unsurprisingly, correlated with education and with open-mindedness (as measured by Baron’s Actively Open-minded Thinking scale) but doesn’t reduce to them and in fact more accurately predicts performance on tasks that demand or display a distinctive science-comprehension capacity (like covariance detection).

4. Other covariates.  There are correlations with race and gender but they are actually pretty small.  None with political outlooks (but note: I didn’t even check for a correlation with belonging to the Tea Party—I’ve learned my lesson!  Actually, I can probably be coaxed into checking & reporting this; what “identity with the Tea Party” measures is a pretty interesting question! But I’ll do it in a post in the middle of the night & written in pig latin to be sure to avoid a repeat of the sad spectacle that occurred the last time.).

5. The science-comprehension invalidity of “belief in” questions relating to evolution and global warming.  The notes illustrate the analytical/practical utility of OSI_2.0 by showing how the scale can be used to assess whether variance in responses to standard survey items on evolution and global warming reflects differences in science comprehension.  It doesn’t!

That, of course, is the conclusion of my new paper Climate Science Communication and the Measurement Problem, which uses OSI_2.0 to measure science comprehension.

But the data in the notes present a compact rehearsal of the findings discussed there and also add additional factor analyses, which reinforce the conclusion that “belief in” evolution and “belief in” global warming items are in fact indicators of latent “group identity” variables that feature religiosity and right-left political outlooks, respectively, and not indicators of the latent “ordinary science intelligence” capacity measured by the OSI_2.0 scale.

The analyses were informed by interesting feedback on a post I did on factor analysis and scale dimensionality—maybe the commentators on that one will benefit me with additional feedback! (A toy simulation below illustrates the sort of covariance structure at issue.)
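(The sketch that follows is a purely illustrative simulation—random data with two uncorrelated latent traits, not the CCP dataset or the actual factor-analytic procedure—meant only to show the kind of pattern the notes describe: a “belief in global warming” item that covaries with identity indicators rather than with the science-comprehension items.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two uncorrelated latent traits (hypothetical, for illustration only):
identity = rng.normal(size=n)   # latent "group identity" (politics/religiosity)
osi = rng.normal(size=n)        # latent "ordinary science intelligence"

def indicator(latent, loading):
    """Observed item = loading * latent trait + noise (unit variance overall)."""
    return loading * latent + np.sqrt(1 - loading ** 2) * rng.normal(size=n)

osi_items = np.column_stack([indicator(osi, 0.7) for _ in range(5)])
belief_gw = indicator(identity, 0.7)          # "belief in" global warming item
political_outlook = indicator(identity, 0.7)  # right-left outlook item

def r(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print(f"belief item vs. political outlook item: {r(belief_gw, political_outlook):+.2f}")
print(f"belief item vs. OSI composite score:    {r(belief_gw, osi_items.mean(axis=1)):+.2f}")
# The belief item correlates substantially with the identity indicator and is
# essentially uncorrelated with the science-comprehension composite -- the
# pattern the factor analyses are described as revealing.
```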


Tuesday
Jul 15, 2014

"Bounded rationality": the Grigori Rasputin of explanations for public perceptions of climate change risk

Another excerpt from Climate Science Communication and the Measurement Problem. 

4.  Is identity-protective cognition irrational?

The idea that “disbelief” in global warming is attributable to low “science literacy” is not the only explanation for public conflict over climate change that fails to survive an encounter with actual evidence. The same is true for the proposition that such controversy is a consequence of “bounded rationality.”

Indeed, the “bounded rationality thesis” (BRT) is probably the most popular explanation for public controversy over climate change.  Members of the public, BRT stresses, rely on “simplifying heuristics” that reflect the emotional vividness or intensity of their reactions to putative risk sources (Marx, Weber, Orlove, Leiserowitz, Krantz, Roncoli & Phillips 2007) but that often have “little correspondence to more objective measures of risk” (Weber 2006).  Those more objective measures, which “quantify either the statistical unpredictability of outcomes or the magnitude or likelihood of adverse consequences” (id.), are the ones that scientists employ. Using them demands an alternative “analytical processing” style that is acquired through scientific training and that “counteract[s] the emotionally comforting desire for confirmation of one’s beliefs” (Weber & Stern 2011).

BRT is very plausible, because it reflects a genuine and genuinely important body of work on the role that overreliance on heuristic (or “System 1”) reasoning as opposed to conscious, analytic (“System 2”) reasoning plays in all manner of cognitive bias (Frederick 2005; Kahneman 2003). But many more surmises about how the world works are plausible than are true (Watts 2011).  That is why it makes sense for science communication researchers, when they are offering advice to science communicators, to clearly identify accounts like BRT as “conjectures” in need of empirical testing rather than as tested “explanations.”

BRT generates a straightforward hypothesis about perception of climate change risks.  If the reason ordinary citizens are less concerned about climate change than they should be is that they over-rely on heuristic, System 1 forms of reasoning, then one would expect climate concern to be higher among the individuals most able and disposed to use analytical, System 2 forms of reasoning.  In addition, because these conscious, effortful forms of analytical reasoning are posited to “counteract the emotionally comforting desire for confirmation of one’s beliefs” (Weber & Stern 2011), one would also predict that polarization ought to dissipate among culturally diverse individuals whose proficiency in System 2 reasoning is comparably high.

This manifestly does not occur.  Multiple studies, using a variety of cognitive proficiency measures, have shown that individuals disposed to be skeptical of climate change become more so as their proficiency and disposition to use the forms of reasoning associated with System 2 increase (Hamilton, Cutler & Schaefer 2012; Kahan, Peters et al. 2012; Hamilton 2011).  In part for this reason—and in part because those who are culturally predisposed to be worried about climate change do become more alarmed as they become more proficient in analytical reasoning—polarization is in fact higher among individuals who are disposed to make use of System 2, analytic reasoning than it is among those disposed to rely on System 1, heuristic reasoning (Kahan, Peters et al. 2012).  This is the result observed among individuals who are highest in OSI, which in fact includes Numeracy and Cognitive Reflection Test items shown to predict resistance to System 1 cognitive biases (Figure 6).

The source of the public conflict over climate change is not too little rationality but in a sense too much. Ordinary members of the public are too good at extracting from information the significance it has in their everyday lives. What an ordinary person does—as consumer, voter, or participant in public discussions—is too inconsequential to affect either the climate or climate-change policymaking. Accordingly, if her actions in one of those capacities reflect a misunderstanding of the basic facts on global warming, neither she nor anyone she cares about will face any greater risk. But because positions on climate change have become such a readily identifiable indicator of one’s cultural commitments, adopting a stance toward climate change that deviates from the one that prevails among her closest associates could have devastating consequences, psychic and material.  Thus, it is perfectly rational—perfectly in line with using information appropriately to achieve an important personal end—for that individual to attend to information on it in a manner that more reliably connects her beliefs about climate change to the ones that predominate among her peers than to the best available scientific evidence (Kahan, 2012).

If that person happens to enjoy greater proficiency in the skills and dispositions necessary to make sense of such evidence, then she can simply use those capacities to do an even better job at forming identity-protective beliefs.  That people high in numeracy, cognitive reflection, and like dispositions use these abilities to find and credit evidence supportive of the position that predominates in their cultural group and to explain away the rest has been demonstrated experimentally (Kahan, Peters, Dawson & Slovic 2013; Kahan 2013b).   Proficiency in the sort of reasoning that is indeed indispensable for genuine science comprehension does not bring the beliefs of individuals on climate change into greater conformity with those of scientists; it merely makes those beliefs even more reliable indicators or measures of the identities of those who hold them—of who they are and whose side they are on.

When “what do you believe” about a societal risk validly measures “who are you?,” or “whose side are you on?,” identity-protective cognition is not a breakdown in individual reason but a form of it. Without question, this style of reasoning is collectively disastrous: the more proficiently it is exercised by the citizens of a culturally diverse democratic society, the less likely they are to converge on scientific evidence essential to protecting them from harm. But the predictable tragedy of this outcome does not counteract the incentive individuals face to use their reason for identity protection.  Only changing what that question measures—and what answers to it express about people—can. 

References 

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Hamilton, L.C. Education, politics and opinions about climate change evidence for interaction effects. Climatic Change 104, 231-242 (2011).

Hamilton, L.C., Cutler, M.J. & Schaefer, A. Public knowledge and concern about polar-region warming. Polar Geography 35, 155-168 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013b).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. Am Econ Rev 93, 1449-1475 (2003).

Marx, S.M., Weber, E.U., Orlove, B.S., Leiserowitz, A., Krantz, D.H., Roncoli, C. & Phillips, J. Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ Chang 17, 47-58 (2007).

Weber, E. Experience-Based and Description-Based Perceptions of Long-Term Risk: Why Global Warming does not Scare us (Yet). Climatic Change 77, 103-120 (2006).

Weber, E.U. & Stern, P.C. Public Understanding of Climate Change in the United States. Am. Psychologist 66, 315-328 (2011).

Monday
Jul 7, 2014

Five theses on climate science communication (lecture summary & slides)

The following is the outline of a lecture that I gave at the super awesome Royal Canadian Institute for the Advancement of Science on June 25, 2014 (slides here). The audience comprised a large group of people united by their curiosity and love of science but otherwise as diverse as the pluralistic democracy in which they live; it was an honor to be able to engage them in conversation. My paper Climate science communication and the measurement problem elaborates on the themes and presents additional data. 

I

What ordinary members of the public “believe” about climate change doesn’t reflect what they know; it expresses who they are.  

Responses to survey questions on “belief in” evolution have no correlation with understanding of evolution or with comprehension of science generally.  Instead, they indicate a cultural identity that features religiosity.

The same goes for survey questions on “belief in” human-caused climate change. Responses to them are interchangeable with responses to survey items used to measure political and cultural outlooks, and they have no correlation with either understanding of climate science or science comprehension generally.

II

Public confusion over climate is not a consequence of defects in rationality; it is a consequence of the rational effect people give to information when they live in a world in which competing positions on disputed risks express membership in opposing cultural groups.

“Bounded rationality”—or limitations in the capacity of most people to give appropriate effect to scientific information on risk—is the most popular explanation for persistent public confusion over climate change.  But the durability of this claim itself reflects a form of persistent inattention to empirical evidence, which shows that political polarization over global warming is most intense among those segments of the population whose critical reasoning proficiencies make them the least prone to cognitive bias.

The BR hypothesis misunderstands what ordinary people are doing when they engage information on climate change and other culturally disputed risk issues.  They can’t plausibly be understood to be trying to minimize their exposure to the danger those risk sources pose, since their personal beliefs and actions are too inconsequential to have any impact. 

The positions they take will be understood, however, to signify their membership in and loyalty to one or another competing cultural group. To protect their standing in such a group—membership in which is vital to their emotional & material well-being—individuals can be expected to give to information the effect that aligns them most reliably with their group.  The more acute their powers of reasoning, moreover, the better a job they will do in this regard.

The problem is not too little rationality but too much in a world in which positions on risks and other policy-relevant facts have become entangled in cultural status competition.

III

Communicating valid science about climate change (or about the expert consensus of climate scientists) won’t dispel public conflict; only dissolving the connection between positions on the issue and membership in competing cultural groups will.


If individuals are using their reason to fit information to the positions that reinforce their connection to identity-defining groups, then bombarding them with more and more information won’t diminish polarization. Indeed, studies show that individuals selectively credit and discredit all manner of evidence—including scientific-consensus “messaging” campaigns—in patterns that enable them to persist in identity-defining beliefs.

Because that form of reasoning is rational—because it promotes individuals’ well-being at a personal level—the only way to prevent it is to change the relationship that holding positions on global warming has with the identities of culturally diverse citizens.

IV

Ordinary members of the public already know everything they need to about climate science; the only thing they don’t know (yet) is that the people they recognize as competent and informed use climate science in making important decisions.

Survey items that assess “belief in” human-caused global warming don’t measure what people know about climate change, but that doesn’t mean nothing can.

As is the case for assessing knowledge relating to evolution, it is possible to design a “climate science literacy” instrument that disentangles expressions of knowledge from group identity.

The administration of such a test to a nationally representative sample shows that in fact there is little meaningful difference among culturally diverse citizens, who uniformly understand climate change to be a serious risk.

That shared understanding does not lead to popular political support for policies to mitigate climate change, however, because the question “climate change” poses as a political issue is the same one posed by the survey measures of what people “believe” about it: not what do you know but who are you, whose side are you on?

People recognize and make use of all manner of decision-relevant science not by “understanding” it but by aligning their own behavior consistently with that of people they trust and recognize as socially competent.

The actors that members of diverse groups look to in fact are already making extensive use of climate science in their individual and collective decisionmaking.

Climate science communicators ought to be making it easier for members of all groups to see that.  Instead, they are trapped in forms of advocacy—including perpetual, carnival-like “debates”—that fill the science communication environment with toxic forms of cultural animosity.

V

What needs to be communicated to ordinary decisionmakers is normal climate science; what needs to be communicated to ordinary people is that using climate science is as normal for people like them as using the myriad other kinds of science they rely on to make their lives go well.

Practical decisionmakers of all sorts eagerly seek and use information about climate science.  The scientists who furnish that information to them (e.g., those at NCAR and the ones in the Department of Agriculture) do an outstanding job.

But what ordinary people, in their capacity as citizens, need to know is not “normal climate science”; it is the normality of climate science.  They need to be shown that those whom they trust and recognize as competent already are using climate science in their practical decisionmaking.

That is the form of information that ordinary members of the public ordinarily rely on to align themselves with the best available scientific evidence.

It is also the only signal that can be expected to break through and dispel the noise of cultural antagonism that is now preventing constructive public engagement with climate science.

There are small enclaves in which enlightened democratic leaders are enabling ordinary people to communicate the normality of climate science to one another.

The rest of us should follow their example.

Wednesday
Jul 2, 2014

3 kinds of validity: Internal, external & operational

Some of the thoughtful things people said in connection with my 3-part series on the “external validity” of science-communication studies made me realize that it would be  helpful to say a bit more about that concept and its connection to doing evidence-based science communication.

In the posts, I described “internal validity” as referring to qualities of the design that support drawing inferences about what is happening in the study, and “external validity” as referring to qualities of the design that support drawing inferences from the study to the real-world dynamics it is supposed to be modeling.

I’m going to stick with that.

But what makes me want to elaborate is that I noticed some people understood me to be referring to “external validity” more broadly as the amenability of a science-communication study to immediate or direct application.  I was thought to be saying “be careful: you can’t just take the stimulus of a ‘framing’ experiment or whathaveyou, send it to people in the mail or wave it around, etc., and expect to see the results from the lab reproduced in the world.”

I would (often) say that!

But I’d say it about many studies that are externally valid.

That is, these studies are modeling something of consequence in the world, and telling us things about how those dynamics work that it is important to know.  But they aren’t always telling us what to do to make effective use of that knowledge in the world.

That’s usually a separate question, requiring separate study. 

This is the very point I stress in my paper, “Making Climate Science Communication Evidence-based—All the Way Down.” There I say there must be no story-telling anywhere in an evidence-based system of science communication.

It’s a mistake—an abuse of decision-science—for someone (anyone, including a social scientist) to reach into the grab-bag of mechanisms, pull out a few, fabricate a recommendation for some complicated phenomenon, and sell it to people as “empirically grounded” etc.

Because there are in fact so many real mechanisms of cognition that play a role in one or another aspect of risk perception and the like, there will always be more plausible accounts of some problem—like the persistence of public conflict over climate change—than are true!

Such accounts are thus conjectures or hypotheses that warrant study, and should be clearly designated as such.

The hypotheses have to be tested—with internally and externally valid methods—designed to generate evidence that warrants treating one or another conjecture as more worthy of being credited than another.

Very very important!

But almost never enough. 

The kinds of studies that help to decide between competing plausible mechanisms in science communication typically use simplified models of the real-world problem in question.  The models deliberately abstract away from the cacophony of influences in real-world settings that make it impossible to be sure what’s going on. 

An internally valid study is one that has successfully isolated competing mechanisms from these confounding effects and generated observations that give us more reason to credit one, and less reason to credit the other, than we otherwise would have had.

(Yes, one can test “one” mechanism against the “null” but then one is in effect testing that mechanism against all others. Such designs frequently founder on the shoals of internal validity precisely because, when they “reject the null,” they fail to rule out that some other plausible mechanism could have produced the same effect. I’ll elaborate on why it makes more sense to use designs that examine the relative strength of competing mechanisms instead “tomorrow.”)
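To make the point concrete, here is a toy sketch (simulated data only—my illustration, not drawn from any actual study) of what a design pitting two mechanisms against each other amounts to analytically: a “knowledge deficit” mechanism predicts a main effect of supplying information, while an “identity-protective” mechanism predicts an information × identity interaction, so estimating both in one model shows which term carries the signal.

```python
# Toy sketch (simulated data only): give each candidate mechanism a distinct
# observable implication, then see which term carries the signal.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
identity = rng.choice([-1, 1], size=n)   # two opposing cultural groups
info = rng.choice([0, 1], size=n)        # randomly assigned information treatment

# simulate a world in which identity-protective cognition, not knowledge
# deficit, is the mechanism actually at work
risk = 0.8 * info * identity + rng.normal(0, 1, size=n)
df = pd.DataFrame({"risk": risk, "info": info, "identity": identity})

m = smf.ols("risk ~ info * identity", data=df).fit()
print(m.params)   # "knowledge deficit" predicts a big `info` coefficient;
                  # "identity protection" predicts a big `info:identity` one
```

The point isn’t the particular model; it’s that the design gives each candidate mechanism a distinct observable implication, rather than lumping everything it isn’t testing into an undifferentiated “null.”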

Such a study is useful, of course, only if the mechanisms that are being tested really are of consequence in the real-world, and only if the simplifying model hasn’t abstracted away from influences of consequence for the operation of those mechanisms.

That’s the focus of external validity.

But once someone has done all that—guess what?

Such a study won’t (or at least almost never will) tell a real-world communicator “what to do.”

How could it? The researcher, in order to be confident that she is observing the influence of the mechanisms of interest and that they are behaving in ways responsive to whatever experimental manipulation she performed, has deliberately created a model that abstracts away from all the myriad influences that apply in any particular real-world setting.

If the study succeeds, it helps to identify what plausible mechanisms of consequence a real-world communicator should be addressing and—just as importantly—which plausibly consequential ones he should in fact ignore.

But there will be more plausible ways to engage that mechanism—ways that might reproduce in the world the results the experimenter observed in the lab—than are true, too!

The only way to connect the insight generated by the lab study to the real-world is to do in the real-world exactly what was done in the lab to sort through the surplus of plausible conjectures: that is, by constructing internally and externally valid field studies that give real-world communicators more reason to believe than they had before that one plausible conjecture about how to engage the communication mechanism of consequence is more likely correct than another one.

In other words, evidence-based science communication practice must be evidence-based all the way down.

No story telling in lieu of internally and externally valid studies of the mechanisms of cognition that one might surmise are at work.

And no story telling about how a lab study supports one or another real-world strategy for communication.

Researchers who carry on as if their lab studies support concrete prescriptions in particular real-world settings are being irresponsible.  They should instead be telling real-world communicators exactly what I’m saying here—that field testing, informed by the judgment of those who have experience in the relevant domain, is necessary.

And if they have the time, inclination, and patience, they should then offer to help carry out such studies.

This is the m.o. of the Southeast Florida Evidence-based Science Communication Initiative that the Cultural Cognition Project, with very generous and much appreciated funding from the Skoll Global Threats Fund, is carrying out in support of the science-communication efforts of the Southeast Florida Climate Compact.

But now, getting back to the “external validity” concept, it should be easier to see that when I say a study is "externally invalid," I’m not saying that it doesn’t generate an immediately operational communication strategy in the field.  

It won't.

But the same can be said for almost all externally valid lab studies.

When I  say that a study isn’t “externally valid,” I’m saying it is in fact not modeling the real-world dynamics of consequence.  Accordingly, I mean to be asserting that it furnishes no reliable guidance at all.

So to be clear about all this, let’s add a new term to the discussion: operational validity.

“Operational validity,” a term I’m adapting from Schellenberger (1974), refers to that quality of a study design that supports the inference that doing what was done in the study will itself generate in the real-world the effects observed in the study.

A study has “high operational validity” if in fact it tests a communication-related technique that real-world actors can themselves apply and expect to work.  For the most part, those will be field-based studies.

A study that is internally and externally valid has “low operational validity” if, in order for it to contribute to science communication in the real-world, additional empirical studies connecting that study’s insights to one or another real-world communication setting will still need to be performed. 

A study with “low operational validity” can still be quite useful.

Indeed, there is often no realistic way to get to the point where one can conduct studies with high operational validity without first doing the sort of stripped-down, pristine “low operational validity” lab studies suited to winnowing down the class of cognitive mechanisms plausibly responsible for any science-communication problem.

But the fact is that when researchers have generated these sorts of studies, more empirical work must still be done before a responsible science-communication advisor can purport to answer the “what do I do?” question (or answer it other than by saying "you tell me!  & I'll measure ...").

So:

Three distinct concepts: internal validity; external validity; operational validity.

All three matter.

This is, admittedly, too abstract a discussion.  I should illustrate.  But I’ve spent enough time on this post (about 25 mins; 30 mins is the limit).

If there is interesting discussion, then maybe I’ll do another post calling attention to examples suggested by others or crafted by me.

 Reference

Schellenberger, R.E. Criteria for Assessing Model Validity for Managerial Purposes. Decision Sciences 5, 644-653 (1974).

 

Tuesday
Jul012014

Climate science literacy, critical reasoning, and independent thinking ...

Who you are, not what you know...

My paper “Climate Science Communication and the Measurement Problem” features a “climate science literacy” (CSL) test. 

I’ve posted bits & pieces of the paper & described some of the data it contains.  But I really haven’t discussed in the blog what I regard as most important thing about the CSL results. 

This has to do with the relationship between the CSL scores, critical reasoning, and independent or non-conformist thinking.  I’ll say something—I doubt the last thing—about that now!

1. The point of the exercise: disentangling knowledge from identity. I’ll start with the basic point of the CSL—or really the basic point of the study that featured it and the Measurement Problem paper.

Obviously (to whom? the 14 billion regular readers of this blog!), I am not persuaded that conflict over culturally disputed risks in general and climate change in particular originates in public misunderstandings of the science or the weight of scientific opinion on those issues. 

That gets things completely backwards, in fact: It is precisely because there is cultural conflict that there is so much public confusion about what the best available evidence is on the small (it is small) class of issues that display this weird, pathological profile.

Given the stake they have in protecting their status in these groups, people can be expected to attend to evidence—including evidence about the “weight of scientific opinion” (“scientific consensus”)—in a manner that reliably connects their beliefs to the position that prevails in their identity-defining groups.

But there are two ways (at least) to understand the effect of this sort of identity-protective reasoning.  In one, the motivated assimilation of information to the positions that predominate in their affinity groups generates widespread confusion over what “position” is supported by the best available scientific evidence.

Call this the “unitary conception” of the science communication problem.

Under the alternative “dualist conception,” “positions” on societal risk issues become bifurcated.  They are known to be both badges of group membership and matters open to scientific investigation.

Applying their reason, individuals will form accurate comprehensions of both positions.  

Which they will act on or express, however, depends on what sort of “knowledge transaction” they are in.  If individuals are in a transaction where their success depends on forming and acting on the position that accurately expresses who they are, then that “position” is the one that will govern the manner in which they process and use information.

If, in contrast, they are in a “knowledge transaction” where their success depends on forming and acting on the positions that are supported by the best available evidence, then that is the “position” that will orient their reasoning.

For most people, most of the time, getting the “identity-expressive position” right will matter most. Whereas people have a tremendous stake in their standing in cultural affinity groups, their personal behavior has no meaningful impact on the danger that climate change or other societal risks pose to them or others they care about.

But still, every one of them does have an entirely separate understanding of the “best-available-evidence” position.  We don’t see that—we see only cultural polarization on an issue like climate change—because politics confronts them with “identity-expressive” knowledge transactions only.

So too do valid methods of public opinion study (observational and experimental) geared to modeling the dynamics of cultural conflict over climate science.

Politics and valid studies both assess citizens' climate-science knowledge with questions that measure who they are, whose side they are on.

But if we could form a reliable and valid measure that disentangles what people know from who they are, we would then see that these are entirely different things, entirely independent objects of their reasoning.

Or so says the "dualist" view of the science communication problem.

The aim of the “climate science literacy” or CSL measure that I constructed was to see if it was possible to achieve exactly this kind of disentanglement of knowledge and identity on climate change.

I refer to the CSL measure, in the paper and in this blog, as a “proto-” climate-science literacy instrument.  That’s because it's only a step toward developing a fully satisfactory instrument for measuring what people know about climate science. 

Indeed, the idea that there could be a single, all-purpose instrument of that sort is absurd. There would have to be a variety, geared to assessing the sort of knowledge that individuals in various settings and roles (“high school student,” “business decisionmaker,” “policymaker,” “citizen” etc.) have to have.

But if the “dualist” conception of the science communication problem is correct, then in any such setting, a CSL, to be valid, would have to be designed to measure what people know and not who they are.

Seeing whether that could be done was the mission of my CSL measure. In that respect, there is nothing “proto-” about it.   

2. The strategy

The strategy I followed to construct a CSL of this sort is discussed, of course, in the paper.  But that strategy consisted of basically two things.

The first was an effort to create a set of items that would avoid equating “climate science literacy” with an affective orientation toward climate change. 

For the most part, that’s what perceptions of societal risks are: feelings with a particular valence and intensity.  As such, these affective orientations are more likely to shape understandings of information than be shaped by them.

The affective orientation toward climate change expresses who people are as members of opposing cultural groups engaged in a persistent and ugly form of status competition.  If we ask “climate science literacy” questions the answers to which clearly correspond to the ones people use to express their group identities, their answers will tell us who they are—and not necessarily what they know.

To avoid this confound, I tried to select a set of items the correct responses to which were balanced with respect to the affective attitudes of “concern” and “skepticism.”  Scoring high on the test, then, would be possible only for those whose answers were not “entangled” in the sort of affective reaction that defines who they are, culturally speaking.
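A minimal sketch of the kind of check this balancing strategy implies (my own illustration, with made-up data; the variable names are hypothetical, not the study’s): a respondent’s total score on the climate-science-literacy items should turn out to be roughly uncorrelated with her affective orientation toward climate change.

```python
# Minimal balance check (illustrative; made-up data): the total score on the
# climate-science-literacy items should be roughly uncorrelated with affect.
import numpy as np

def check_balance(item_responses, affect):
    """item_responses: (n_respondents, n_items) array of 0/1-scored answers;
    affect: length-n array, e.g. -1 = skeptical, +1 = concerned."""
    scores = item_responses.sum(axis=1)
    return np.corrcoef(scores, affect)[0, 1]   # near 0 => not just re-measuring affect

rng = np.random.default_rng(1)
fake_items = rng.integers(0, 2, size=(500, 9))   # 9 items, coin-flip responses
fake_affect = rng.choice([-1, 1], size=500)
print(check_balance(fake_items, fake_affect))
```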

Second, I used a semantic device that has proven successful in disentangling identity and knowledge in measuring people’s positions on evolution.

As I’ve discussed in this blog (and as I illustrate with data in the paper), the true-false question “humans evolved from another species of animal” doesn’t measure understanding of evolution or science comprehension generally.  Rather it measures a form of identity indicated by religiosity.

But if one simply prefaces the statement with “According to the theory of evolution,” the question elicits responses that don’t vary based on respondents’ religiosity. Because it doesn’t force them to renounce who they are, the reworded question makes it possible for religious respondents to indicate what they know about the position of science.  (The question is then revealed, too, to be far too easy to tell us anything interesting about how well the person answering it comprehends science.)

I thus used this same device in constructing the CSL items. I either prefaced true-false ones with the phrase “Climate scientists believe . . .” or used some other form of wording that clearly separated “knowledge” from “belief.”

3. The “results”

The results strongly supported the “dualistic” position—i.e., that what people know about climate change is unrelated to their “belief in” human-caused climate change.  Their position on that question measures who they are, in the same manner as items involving their political outlooks generally do.

In this way, it becomes possible to see that the cultural polarization that attends climate change is also not a consequence of the effect that cultural cognition has on people’s comprehension of climate science.

It is a consequence of the question that “climate change” poses to ordinary citizens.

Democratic politics is one of the “knowledge transactions” that measures who one is, whose side one is on, not what one knows about the weight of the best scientific evidence.

People on both sides of the issue, it turns out, don’t know very much at all about climate science.

But if democratic politics were asking them “what they know,” the answer would be a bipartisan chorus of, “We are in deep shit.”

So climate communicators should be working on changing the meaning of the question—on creating conditions that, like the reworded evolution question and related classroom instructional techniques in that setting, make it possible for citizens to express what they know without renouncing who they are.

If you want to see how that's done, book yourself a flight down to SE Florida.  Right now.

4. The “holy shit!” part: the vindication of reason as a source of independent thinking

Now, finally, I get to what for me is the most gratifying part: the vindication of critical reasoning.

The CSL measure featured in the paper is positively correlated with science comprehension in both “liberal Democrats” and “conservative Republicans”!

Why is this so amazing?

As the 14 billion regular readers of this blog know, a signature of the pathology that has infected public discourse on climate change is the impact of science comprehension in magnifying polarization.

The individuals whose science comprehension and critical reasoning dispositions are most acute are the most polarized.

What you *know*-- not who you are!

Experiments show that individuals high in the dispositions measured by science literacy batteries, the Cognitive Reflection Test, the Numeracy scale and the like use their reasoning proficiency to selectively conform their assessment of evidence to the position that predominates in their group.

Polarization over climate change is not a sign that people in our society lack science comprehension.

It is proof of how hostile the putrid spectacle of cultural status competition is to the value our society should be getting from the science intelligence it manages to impart to its citizens.

As the 14 billion regular readers know, too, this doesn’t amuse me.  On the contrary, it fills me with despair.

I was heartened in a simple “methods” sense that the CSL had the indicated relationship with science comprehension.  That the two rise in tandem helps to validate the CSL as a measure of what people know, and to corroborate the conclusion that “what do you believe about climate change?,” on which polarization increases as people become more science comprehending, measures nothing other than who they are, what side they are on.
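For anyone who wants to see the shape of that validation step, here is a toy sketch (simulated data, hypothetical variable names—not the study’s actual code or results): correlate CSL scores with general science comprehension separately within each political group, and check that “belief” splits by identity while the knowledge–comprehension link holds in both camps.

```python
# Toy sketch of the within-group validation check (simulated data; the names
# "csl", "osi", "party", and "belief" are hypothetical stand-ins).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
osi = rng.normal(size=n)                                   # general science comprehension
party = rng.choice(["liberal Dem", "conservative Rep"], size=n)
csl = 0.5 * osi + rng.normal(size=n)                       # knowledge tracks comprehension...
belief = (party == "liberal Dem").astype(int)              # ...while "belief" tracks identity

df = pd.DataFrame({"osi": osi, "party": party, "csl": csl, "belief": belief})

for grp, g in df.groupby("party"):
    print(grp, round(g["csl"].corr(g["osi"]), 2))          # positive in *both* groups here
print(df.groupby("party")["belief"].mean())                # "belief" splits cleanly by identity
```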

But on an emotional level, I was much more than simply heartened.

I was elated to see the vitality of reason and critical thinking as a source of independent thinking and open-mindedness—to be assured that in fact this aspect of our intelligence hadn’t been annihilated by the sickness of cultural status competition, if it ever existed in the first place.

Remember, the CSL was deliberately designed to disentangle knowledge from identity. 

One of the central devices used to achieve this effect was to balance the items so that respondents’ affective orientation toward climate change—concern or skepticism—would be uncorrelated with their CSL scores.

Thus, to do well on the CSL, individuals had to answer the questions independently of their affective orientations, and hence with the source of them: their cultural identities.

The people who did that the most successfully were those who scored the highest in science comprehension, a disposition that features critical reasoning skills like cognitive reflection and  numeracy, as well as substantive science knowledge.

More later on this, but look: here are your Ludwicks!

This is what happens when one measures what people know.

But this is how it can be, too, in our political life.

If we can just make democratic politics into the sort of “knowledge-assessment transaction” that doesn’t  force people to choose between expressing what they know and expressing who they are.

Monday
Jun302014

Are judges biased? Or is everyone else? Some conjectures

I had some email correspondence with John Callender & it seemed like reproducing it would be a fitting way to mark the end of the U.S. Supreme Court's "OT '13" Term.

John writes:

I may be misinterpreting your views or applying them incorrectly. But I've been struck by your recent writings on the pernicious role of cultural meanings in individuals' attempts to assess expert knowledge when evaluating risk, and as a result I end up seeing that phenomenon in lots of other places.

Most recently, I saw what looked like a similar effect at work in something blogger Kevin Drum wrote about: the tendency of US Supreme Court justices to agree with each other:

Drum was commenting on this item from the New York Times' "The Upshot":

Drum wrote:

When it comes to high-profile cases, you get a lot of 5-4 decisions. But on the majority of less celebrated cases, when the political spotlight is turned off, there's a surprising amount of consensus on what the law means.

That makes me wonder if something similar to the effect that pollutes the cognitive process of those assessing expert opinion on things like climate change and GMOs might be at work in the case of Supreme Court justices. Granted, they are themselves experts reaching judgments about the thing they're expert in (the law). But it seems possible that the same bias that negatively affects people's ability to accurately assess expert opinion when cultural identity and status-maintenance gets tangled up in the question could be a factor in justices' tendency to disagree more in some cases than in others.

My response:

Thanks for sharing these reflections w/ me; they are super interesting & important.

Let's say that it's true that we see more "agreement" among judges than we'd expect relative to divided views of non-judges.  There would be two (at least) plausible hypotheses: (a) the judges are converging as a result of some conformity mechanism that biases their assessment of the issues vs. (b) the *people reacting* to the judges' decisions are  being influenced by cultural cognition or some like mechanism that generates systematic differences of opinion among them, but the judges are converging on decisions that are "correct" or "valid" in relation to professional judgment that is distinctive of judges or lawyers.

I elaborate -- only a bit! -- in this blog post:

The idea that public division over controversial supreme court opinions might reflect a kind of "neutrality communication" failure akin to "validity communication" failure that generates division on issues like climate change etc. (hypothesis b) also figures pretty prominently in

Also in our experimental paper

Hypothesis (b) is in my view primarily an alternative to the prevailing view that (c) judges are "political" -- they vote on the basis of ideology etc.  The frequency of unanimous decisions -- or even ideologically charged ones that divide Justices but not strictly on ideological lines -- challenges that view. Hypotheses (a) & (b) both try to make sense of that, then!

We need more empirical study -- both because it would be interesting to know which hypothesis -- (a), (b) or (c) -- is closer to the truth & because the answer to that question has important normative & prescriptive implications.

 

Saturday
Jun282014

Weekend update: Debate heats up on impact of melting north pole on sea level rise!

Holy smokes!

NOAA has been prevailed upon to reverse its previous position on whether the melting of the  North Pole icecap will affect sea levels!

Until yesterday, the position of the agency, at least as reflected in its "Arctic Theme Page FAQ," was, "No":

 

Well, after I suggested that any one of the 14 billion regular readers of this blog who disagreed w/ me that sea levels wouldn't rise should "take it up with NOAA," someone apparently did -- & got the agency to change its view:

Cool!

But what's the explanation? You won't figure that out from the new FAQ...

It isn't disputed (except by 5 people who wrote me emails...) that a piece of floating ice (an ice cube, say, in a glass of water) displaces a volume of water equal to the volume of liquid water it turns into when melted.

Also it isn't disputed that the North Pole ice cap is simply floating on the arctic sea (although I did hear from a couple of people who said it isn't right to call the "ice cap" on the North Pole an "ice cap"; they should take it up with NOAA too!).

Apparently, though, there is reason to think that "little" is the right answer to the question.

The floating ice at the North Pole is frozen fresh water (not quite but close!), while the body of water in which it sits -- the Arctic Sea -- is salt water.  Fresh water is less dense than salt water, and apparently this means that the volume of salt water displaced by a floating piece of frozen fresh water is slightly smaller than the volume of fresh water added to the sea when that ice melts.

Or so says the source -- a Nature Climate Change blog -- that I'm told was brought to NOAA's attention.

Summarizing an article from Geophysical Res. Letters, the blog states, "[r]etreating Arctic sea ice and thinning ice shelves around Antarctica contribute about 50 micrometers, or half a hairbreadth, to the 3 millimeter or so annual rise in global sea level..."

Presumably the amount contributed by the melting North Pole ice cap is smaller, since the GRL paper states that over 99% of the world's floating ice is in the Antarctic.

But even 1% of 1/2 a hairsbreadth still is something!
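If it helps to see the fresh-vs.-salt logic in numbers, here's a back-of-the-envelope sketch (my arithmetic, using approximate textbook densities -- these are not NOAA's or GRL's figures):

```python
# Back-of-the-envelope check with approximate densities (my numbers):
rho_fresh = 1000.0   # kg/m^3, fresh meltwater
rho_sea   = 1025.0   # kg/m^3, typical seawater

mass = 1.0                           # 1 kg of floating fresh-water ice
displaced = mass / rho_sea           # volume of seawater it displaces while afloat
melted    = mass / rho_fresh         # volume of the meltwater it becomes

excess = melted - displaced
print(f"net added volume per kg of melted floating ice: {excess * 1e6:.1f} cm^3")
print(f"as a share of the displaced volume: {100 * excess / displaced:.1f}%")
```

A couple of percent of the displaced volume shows up as "new" water -- small per unit of ice, but not the flat "no effect" of the old FAQ.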

Another blogger who noticed this article stated:

Melting sea ice or ice shelves can indeed change sea level. It turns out that I was probably the first person to compute by how much the sea ice can do so, and there's a story for tomorrow about why I wasn't the person to publish this in the scientific literature even though I had the answer more than a decade before the next person to look at the problem.

I'm not sure if that day ever came -- it'd be interesting to hear the story.

But look: good enough for NOAA & the Geophysical Res. Letters, double good enough for me!  

I have to say though that even if the old NOAA FAQ was poorly worded (as climate scientist Michael Mann stated yesterday on twitter when he kindly responded to my plea for help in sorting through all this) the new NOAA FAQ still strikes me as below the agency's usually stratospheric standards.  

The old answer at least made sense.  The new one doesn't -- b/c it doesn't furnish any explanation for how melting floating ice will raise sea level a "little."

Indeed, the "so" in the new NOAA FAQ--"Ice on the ocean is already floating, so if the North Pole ice cap melts it has little effect"-- strikes me as a true non-sequitur.  

How much the melting fresh-water ice raises sea level depends on the density difference between fresh and salt water and on how much floating ice there is relative to the body of salt water it sits in. So if the melting ice at the North Pole will raise sea levels but only a "little" -- as seems to be true -- the explanation is that the fresh/salt difference and the volume of floating ice are both comparatively small, not simply that the ice is "floating," a fact that by itself would imply its melting will have "no effect," just as the old NOAA answer stated.

If the goal is to help people comprehend, then it is necessary to give them a cogent explanation, not just a "true"/"false" answer for them to memorize.

But hey, I'm really glad that my "climate science literacy" test apparently helped to get this straightened out!  Indeed, I feel smarter now!

Still, I'm worried I might have opened up a can of frozen worms here...

There are lots & lots of additional credible science authorities on-line that draw the distinction the old NOAA FAQ did between the melting sea-ice floating in the Arctic or North Pole region ("no effect-- like a floating ice cube!") and melting ice sitting on land masses at the Antarctic or South Pole region & elsewhere. 

Consider:

 

Moreover, the internet is teeming with helpful sources that show middle-school and high-school science teachers how to create an instructive science exercise based on the difference between the floating North Pole ice cap and the land-situated South Pole one:

Indeed, I suspect skilled teaching could explain an interesting feature of the results I obtained when I administered my proto- science-literacy instrument to a national sample.

A full 86% of respondents classified as "true" the statement "Scientists believe that if  the north pole icecap melted as a result of human-caused global warming, global sea levels would rise."

But the 14% that answered "false" clearly had a much better grasp of the nature and consequences of climate change.

E.g., two other true-false items stated:

Climate scientists believe that human-caused global warming will increase the risk of skin cancer in human beings;

and 

Climate scientists believe that the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will reduce photosynthesis by plants.

On each question, over 2/3 of the respondents got the wrong answer.  

That's not good.

It suggests that most of the 75% of the respondents who correctly selected "CO2" as the gas scientists believe increases global temperatures still don't really know very much about climate change.

If they did, they wouldn't think that "skin cancer" will increase. That's suggestive of the long-standing tendency to confuse climate change with the hole in the ozone layer.

Also, if they really got that CO2 is causing climate change b/c it is a "greenhouse" gas, they wouldn't believe that climate change is going to starve the plants that live in greenhouses....

But here is the critical point: if a study participant answered "true" to "North Pole" -- that is, if that person indicated he or she thinks climate scientists believe that a melting North Pole ice cap will raise sea levels--then there was a 67% chance he or she would get both the "skin cancer" and "photosynthesis" items wrong!

In contrast, there was less than a 25% chance that someone who answered "false" to North Pole would answer those questions incorrectly.

It might be the "wrong" answer, but actually, "false" in response to "North Pole" is a better predictor of whether someone is likely to show comprehension of other fundamental facts about climate change.

And in fact, that is what a test item on an instrument like this is supposed to do.

The function of a good science-comprehension instrument isn't just to certify that someone has absorbed or memorized the "right" responses to a discrete set of questions.

It is to measure a latent (i.e., not directly observable) quality of the test-taker -- her possession of the dispositions and skills that contribute to acquiring additional knowledge and giving it proper effect.

The proto- climate-literacy instrument in fact measures that capacity more accurately when one scores "false" as the "correct" response!
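Here's a rough sketch of the item-analysis logic behind that claim (toy simulated data, not the survey responses): key the "North Pole" item both ways and see which keying correlates better with the score on the remaining items.

```python
# Rough item-analysis sketch (toy data, not the survey): key "North Pole" both
# ways and see which keying better tracks the rest of the test.
import numpy as np

def discrimination(item, rest_score):
    """Point-biserial correlation of a 0/1 item with the rest-of-test score."""
    return np.corrcoef(item, rest_score)[0, 1]

rng = np.random.default_rng(2)
n = 1000
ability = rng.normal(size=n)                                  # latent comprehension
rest_score = (ability[:, None] > rng.normal(size=(n, 8))).sum(axis=1)   # other 8 items
answered_true = (rng.normal(size=n) - 0.5 * ability) > 0      # better reasoners say "false"

print(discrimination(answered_true.astype(int), rest_score))     # "true" keyed as correct
print(discrimination((~answered_true).astype(int), rest_score))  # "false" keyed as correct
```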

Actually, maybe "false" is the right answer to "North Pole"? Perhaps, as the NOAA FAQ prior to yesterday and all the myriad other science sources on-line reflect, most "climate scientists" do mistakenly "believe" that a melting North Pole won't raise sea levels?

No idea!

For sure, though, I'd revise this item in refining and improving this proto- climate-literacy instrument.

But one last point: in relation to my purpose in constructing the instrument, none of this matters!

My goal was to test the hypothesis that survey items that ask respondents whether they "believe in" human-caused climate change don't actually measure what they know; instead they measure who they are as members of competing cultural groups for whom positions on this issue have become badges of membership & loyalty.

Using techniques that have proven effective in determining whether "belief in evolution" measures science comprehension (it doesn't), I devised climate-literacy items designed to disentangle indications of knowledge from expressions of identity.

The results suggested that there is indeed no relationship between what people say they "believe" about human-caused climate change and what they know about climate science.

Because there is clearly no meaningful relationship between getting the "right" or "wrong" answers on the proto- climate-science literacy test and either people's cultural identities or their beliefs about climate change, it doesn't matter which answer one treats as right and which as wrong in that respect. That is, there is still no relationship.

To me, that's really really good news.  

It means that the ugly, mindless illiberal status competition associated with competing positions on "belief" in human-caused climate change doesn't impel people to form polarized understandings of what science knows.

However confusing it might be to figure out what "scientists believe" about the impact of a melting North Pole on sea levels, people-- of all cultural & political outlooks -- have already gotten the memo that climate scientists believe human-caused climate change is creating a series of challenges that need to be addressed through collective action.

Now it is time for those who are trying to promote constructive public engagement with climate science to get the memo that disentangling climate science from culturally assaultive forms of political action is the single most important objective they all confront.

Friday
Jun272014

Scientific dissensus on effect of melting "north pole ice cap"? 

Apparently, some people think the answer to the climate science literacy item "Climate scientists believe that if the North Pole icecap melted as a result of human-caused global warming, global sea levels would rise"  is "true" & not "false."

Take it up with NOAA:

But here's another thing: it just doesn't matter.

The "Ordinary Climate Science Intelligence" assessment was designed to test the hypothesis that the question "do you believe in human-caused climate change?" measures something different from climate-science knowledge questions that are worded to avoid threatening survey respondents' cultural identities.

The evidence that would support that hypothesis would be the lack of any meaningful correlation between responses to the "knowledge" questions & (a) measures of political identity & (b) measures of "belief in climate change."

That's what the "test" results showed.

Go ahead & change the scoring on "North Pole." Or on any other item.  Or all of them!  

That conclusion won't change.

 

Friday
Jun272014

What SE Florida can teach us about the *political* science of climate change

This is another section from my new paper, "Climate Science Communication and the Measurement Problem."
 

That paper, which I posted yesterday, presents data showing that "conservative Republicans" know just as much as "liberal Democrats" about climate science (a very modest amount) and more importantly are just as likely to be motivated to see scientific evidence of climate change as supporting the conclusion that we face huge risks.

They adopt in politics the "skeptical" stance that is measured by survey items on "belief in" human-caused global warming as a rational response to the hostile cultural meanings that the climate-change issue has become entangled with in our politics.

This section of the paper shows how local politicians in SE Florida are disentangling what citizens know from who they are -- and the breakthrough they are achieving politically on climate as a result.

7.  Disentanglement

a.  We could all use a good high school teacher.  * * *

b.  Don’t ignore the denominator. * * *

c.  The “normality” of climate science in Southeast Florida. Southeast Florida is not Berkeley, California, or Cambridge, Massachusetts.  Southeast Florida’s political climate, for one thing, differs at least as much from the one that Berkeley and Cambridge share as the region’s natural climate does from each of theirs. Unlike these homogenously left-leaning communities, Miami-Dade, Broward, Palm Beach, and Monroe counties are politically conventional and diverse, with federal congressional delegations, county commissions, and city governments occupied by comparable proportions of Republicans and Democrats. 

Indeed, by one measure of “who they are,” the residents of these four counties look a lot like the United States as a whole. There is the same tight connection between how people identify themselves politically and their “beliefs” about global warming—and hence the same deep polarization on that issue.  Just as in the rest of the U.S., moreover, the degree of polarization is highest among the residents who display the highest level of science comprehension (Figure 19).

But like Berkeley and Cambridge—and unlike most other places in the U.S.—these four counties have formally adopted climate action plans. Or more precisely, they have each ratified a joint plan as members of the Southeast Florida Regional Climate Change Compact.  Unlike the largely hortatory declarations enacted by one or another university town, the Compact’s Regional Climate Action Plan sets out 110 substantive “action items” to be implemented over a multi-year period.[1] 

Many of these, understandably, are geared to protecting the region from anticipated threats. The Plan goals include construction of protective barriers for hospitals, power-generating facilities, and other key elements of infrastructure threatened by rising sea levels and storm surges; the enactment of building codes to assure that existing and new structures are fortified against severe weather; measures to protect water sources essential both for residential use and for agriculture and other local businesses.

But included too are a variety of measures designed to mitigate the contribution  the four counties make to climate change.  The Plan thus calls for increased availability of public transportation, the  implementation of energy-efficiency standards, and the adoption of a “green rating” system to constrain carbon emissions associated with construction and other public works.

The effects will be admittedly modest—indeed, wholly immaterial in relation to the dynamics at work in global climate change. 

But they mean something; they are part of the package of collective initiatives identified as worthy of being pursued by the city planners, business groups, and resident associations—by the conservation groups, civic organizations, and religious groups—who all participated in the public and highly participatory process that generated the Plan.

That process has been (will no doubt continue to be) lively and filled with debate but at no point has it featured the polarizing cultural status competition that has marked (marred) national political engagement with climate science.  Members of the groups divided on the ugly question that struggle poses—which group’s members are competent, enlightened, and virtuous, and which foolish, benighted, and corrupt—have from the start taken for granted that the well-being of all of them demands making appropriate use of the best available scientific evidence on climate. 

The Compact Plan carries out a 2011 legislative mandate—enacted by the state’s Republican-controlled legislature and signed by its Tea Party Republican Governor—that all municipal subdivisions update their Comprehensive Plans to protect public health and resources from “impacts of rising sea levels,” including “coastal flooding due to extreme high tides and storm surge.”  The individual county commissioners who took the lead in forming the compact included Republicans and Democrats. Nor was there partisan division in the approval process for the Compact Action Plan.

What makes Southeast Florida so different from the rest of the country? 

Indeed, what makes Southeast Florida, when it addresses climate change inside the Compact's decisionmaking process, so different from the Southeast Florida that, like the rest of the country, is polarized on climate change?

The explanation is that the Compact process poses a different question from the one posed in the national climate change debate.  The latter forces Southeast Floridians, like everyone else, to express “who they are, whose side they are on.” In contrast, the decisionmaking of the Compact is effectively, and insistently, testing what they know about how to live in a region that faces a serious climate problem. 

The region has always had a climate problem.  The models and plans that local government planners use today  to protect the region’s freshwater aquifers from saltwater intrusion are updated versions of ones their predecessors used in the 1960s. The state has made tremendous investments in its universities to acquire a level of scientific expertise on sea-level and related climate dynamics unsurpassed in any other part of the Nation.

People in Florida know that the region’s well-being depends on using what its scientists know.  The same ones who are politically divided on the question of whether they “believe in” human-caused global warming overwhelmingly agree that “local and state officials should be involved in identifying steps that local communities can take to reduce the risk posed by rising sea levels”; that “local communities should take steps to combat the threat that storm surge poses to drinking water supplies”; and that their “land use planners should identify, assess, and revise existing laws to assure that they reflect the risks posed by rising sea level and extreme weather” (Figure 20).

That’s normal.  It’s what government is supposed to do in Southeast Florida. And it better be sure to pick up the garbage every Wednesday, too, their citizens (Republican and Democrat) would add.

Public mtg on climate action in SE Fla

The Compact effectively informed its citizens of the appropriateness of using the best available science for these ends but not through a “messaging” campaign focused on “scientific consensus” or anything else. 

The Compact’s “communication strategy” was its process.  The dozens of open meetings and forums, convened not just by the Compact governments but by business, residential, and other groups in civil society filled the region’s science communication environment with exactly the information that ordinary people rationally rely on to discern what’s known to science: the conspicuous example of people they trust and recognize as socially competent supporting the use of science in decisionmaking directly bearing on their lives.

No polluting the science communication environment with partisan meanings!

Indeed, far from evoking the toxic aura of tribal contempt that pervades “messaging” campaigns (“what? are you stupid? What part of ‘97% AGREE!’ don’t you understand?!”), Compact officials aggressively, instinctively repel it whenever it threatens to contaminate the region’s deliberations.  One of those occasions occurred during a heavily attended “town meeting,” conducted in connection with the Compact’s 2013 “Regional Climate Leadership Summit,” a two-day series of presentations and workshops involving both government officials and representatives of key public stakeholder groups. 

The moderator for the town meeting (a public radio personality who had just moved to Southeast Florida from Chicago) persistently tried to inject the stock themes of the national climate change debate into the discussion as the public officials on stage took turns answering questions from the audience.  What do Republicans in Washington have against science? And what “about the level of evidence that’s being accepted by private industry”—how come it’s doing so little to address climate change?

After an awkward pause, Broward County’s Democratic Mayor Kristin Jacobs replied.  “I think it’s important to note,” she said, gesturing to a banner adorned by a variety of corporate logos, “that one of the sponsors of this Summit today is the Broward Workshop. The Broward Workshop represents 100 of the largest businesses in Broward County.” The owners of these businesses, she continued, were “not only sponsoring this Summit,” but actively participating in it, and had organized their own working groups “addressing the impacts of water and climate change.”  “They know what’s happening here,” she said to the moderator, who at this point was averting her gaze and fumbling with his notes.

Town Hall mtg, 11/7/13. Mayor Jacobs, far right

“I would also point out,” Jacobs persisted, “when you look across this region at the Summit partners, the Summit Counties, there are three Mayors that are Republican and one that’s Democrat, and we’re working on these issues across party lines.” Pause, silence.  “So I don’t think it is about party,” she concluded. “I think it is about understanding what the problems are and fixing them and addressing them.”

Five of the lead chapter authors of the National Climate Assessment were affiliated with Florida universities or government institutions. As more regions of the country start to confront climate threats comparable to ones Florida has long dealt with, Florida will share the knowledge it has invested to acquire about how to do so and thrive while doing it.

But there is more Florida can teach.  If we study how the Compact Counties created a political process that enables its diverse citizens to respond to the question “so what should we do about climate change?” with an answer that reflects what they all know, we are likely to learn important lessons about how to protect enlightened self-government from the threat posed by the science of science communication’s measurement problem.

 



[1] I am a member of the research team associated with the Southeast Florida Evidence-based Science Communication Initiative, which supplies evidence-based science-communication support for the Compact.

Thursday
Jun262014

New paper: "Climate Science Communication and the Measurement Problem"

As all 14 billion readers of this blog know, when I say "tomorrow" or "this week," I really mean "tomorrow" or "next week."  Nevertheless, as I said I'd do earlier this week, I really am posting this week--as in today--a new paper.

It's the one from which the " 'external validity' ruminations" (parts one, two, & three) were drawn.

The coolest part about it (from my perspective at least!) is the data it presents from the administration of two "science comprehension" tests, one general and the other specifically on climate change, to a large nationally representative sample.  

The results surprised me in many important respects. I've spent a good amount of time trying to figure out how to revise my understanding of science communication & cultural conflict in light of them (and talked with a good number of people who were kind enough to listen to me explain how excited & disoriented the results made me feel). I think I get these results now. But I still have the unsettled, unsettling feeling that I might now be standing stationary in the middle of a wide, smooth sheet of ice!

I'll post more information, more reflections on the data. But to get the benefit of hearing what those with the motivation & time to read the paper think about the argument it presents, I'll just post it for now.

8. Solving the science of science communication’s measurement problem—by annihilating it

My goal in this lecture has been to identify the science of science communication’s “measurement problem.”  I’ve tried to demonstrate the value of understanding this problem by showing how it contributes to the failure of the communication of climate science in the U.S.

At the most prosaic level, the “measurement problem” in that setting is that many data collectors do not fully grasp what they are measuring when they investigate the sources of public polarization on climate change. As a result, many of their conclusions are wrong. Those who rely on those conclusions in formulating real-world communication strategies fail to make progress—and sometimes end up acting in a self-defeating manner.

But more fundamentally, the science of science communication’s measurement problem describes a set of social and psychological dynamics. Like the “measurement problem” of quantum mechanics, it describes a vexing feature of the phenomena that are being observed and not merely a limitation in the precision of the methods available for studying them.

There is, in the science of science communication, an analog to the dual “wave-like” and “particle-like” nature of light (or of elementary particles generally). It is the dual nature of human reasoners as collective-knowledge acquirers and cultural-identity protectors.  Just as individual photons in the double-slit experiment pass through “both slits at once” when unobserved, so each individual person uses her reason simultaneously to apprehend what is collectively known and to be a member of a particular cultural community defined by a highly distinctive set of commitments.  

Moreover, in the science of science communication as in quantum physics, assessment perturbs this dualism.  The antagonistic cultural meanings that pervade the social interactions in which we engage individuals on contested science issues force them to be only one of their reasoning selves.  We can through these interactions measure what they know, or measure who they are, but we cannot do both at once.

This is the difficulty that has persistently defeated effective communication of climate science.  By reinforcing the association of opposing positions with membership in competing cultural groups, the antagonistic meanings relentlessly conveyed by high-profile “communicators” on both sides effectively force individuals to use their reason to selectively construe all manner of evidence—from what “most scientists believe” (Corner, Whitmarsh & Dimitrios 2012; Kahan, Jenkins-Smith & Braman 2011) to what the weather has been like in their community in recent years (Goebbert, Jenkins-Smith, Klockow, Nowlin & Silva 2012)—in patterns that reflect the positions that prevail in their communities.  We thus observe citizens only as identity-protective reasoners.  We consistently fail to engage their formidable capacity as collective-knowledge acquirers to recognize and give effect to the best available scientific evidence on climate change.

There is nothing inevitable or necessary about this outcome.  In other domains, most noticeably the teaching of evolutionary theory, the use of valid empirical methods has identified means of disentangling the question of what do you know? from the question who are you; whose side are you on?, thereby making it possible for individuals of diverse cultural identities to use their reason to participate in the insights of science.  Climate-science communicators need to learn how to do this too, not only in the classroom but in the public spaces in which we engage climate science as citizens.

Indeed, the results of the “climate science comprehension” study I’ve described support the conclusion that ordinary citizens of all political outlooks already know the core insights of climate science.  If they can be freed of the ugly, illiberal dynamics that force them to choose between exploiting what they know and expressing who they are, there is every reason to believe that they will demand that democratically accountable representatives use the best available evidence to promote their collective well-being.  Indeed, this is happening, although on a regrettably tiny scale, in regions like Southeast Florida.

Though I’ve used the “measurement problem” framework to extract insight from empirical evidence—of both real-world and laboratory varieties—nothing in fact depends on accepting the framework.  Like “collapsing wave functions,” “superposition,” and similar devices in one particular rendering of quantum physics, the various elements of the science of science communication measurement problem (“dualistic reasoners,” “communicative interference,” “disentanglement,” etc.) are not being held forth as “real things” that are “happening” somewhere. 

They are a set of pictures intended to help us visualize processes that cannot be observed and likely do not even admit of being truly seen. The value of the pictures lies in whether they are useful to us, at least for a time, in forming a reliable mental apprehension of how those dynamics affect our world, in predicting what is likely to happen to us as we interact with them, and in empowering us to do things that make our world better.

I think the “science of science communication measurement problem” can serve that function, and do so much better than myriad other theories (“bounded rationality,” “terror management,” “system justification,” etc.) that also can be appraised only for their explanatory, predictive, and prescriptive utility.  But what matters is the imperative to make sense of—and stop ignoring—observable, consequential features of our experience.  If there are better frameworks, or simply equivalent but different ones, that help to achieve this goal, then they should be embraced.

But there is one final important element of the theoretical framework I have proposed that would need to be represented by an appropriate counterpart in any alternative.  It is a part of the framework that emphasizes not a parallel in the “measurement problems” of the science of science communication and quantum physics but a critical difference between them.

The insolubility of quantum mechanics’ “measurement problem” is fundamental to the work that this construct and all the ones related to it (“the uncertainty principle,” “quantum entanglement,” and the like) do in that theory.  To dispel quantum mechanics’ measurement problem (by, say, identifying the “hidden variables” that determine which of the two slits the photon passes through, whether we are watching or not) would demonstrate the inadequacy (or “incompleteness”) of quantum mechanics.

But the measurement problem that confronts the science of science communication, while connected to real-world dynamics of consequence and not merely the imperfect methods used to study them, can be overcome.  The dynamics that this measurement problem comprises are ones generated by the behavior of conscious, reasoning, acting human beings.  They can choose to act differently, if they can figure out how.

The utility of recognizing the “science of science communication measurement problem”  thus depends on the contribution that using that theory can ultimately make to its own destruction. 

Tuesday
Jun242014

What’s that hiding behind the poll? Perceiving public perceptions of biotechnology

Hey look! Here's something you won't find on any of those other blogs addressing how cultural cognition shapes perceptions of risk: guest posts from experts who actually know something! This one, by Jason Delborne, addresses two of my favorite topics: first, the meaning (or lack thereof) of surveys purporting to characterize public attitudes on matters ordinary people haven't thought about; & second, GM foods (including delectable mosquitoes!  MMMMM MMM!)

Jason Delborne:

Whether motivated by a democratic impulse or a desire to tune corporate marketing, a fair amount of research has focused on measuring public perceptions of emerging technologies. In many cases, results are reported in an opinion-poll format: X percentage of the surveyed population support the development of Y technology (see examples on GM food, food safety,  and stem cells/cloning).

But what is behind such numbers, and what are they meant to communicate?

From a democratic perspective, perhaps we are supposed to take comfort in majority support of a technology that seems to be coming our way. 51% or more support would seem to suggest that our government “by and for the people” is somehow functioning, whereas we are supposed to feel concerned if our regulators permit a technology to move forward that has 49% support or less. A more nuanced view might interpret all but the most lopsided results as indicative of a need for greater deliberation, public dialogue, and perhaps political compromise.

From a marketing perspective, the polling number offers both an indicator of commercial potential and a barometer of success or failure in the shaping of public perceptions. An opponent of a given technology will interpret high approval numbers as a call to arms – “Clearly we have to get the word out about this technology to warn people of how bad it is!” And they will know they are succeeding if the next polling study shows diminished support.

Below the headline, however, lie two aspects of complexity that may disrupt the interpretations described above. First, survey methodologies vary in their validity and reliability, and in the strategic choices that construct "the public" in particular ways. Much has been written on this point (e.g., The Illusion of Public Opinion and The Opinion Makers), and it's worth a critical look. A second concern, however, is whether such measures of support provide any meaningful insight into the "public perception" of a technology.

Several of my colleagues recently conducted a survey in Key West, FL, where the Mosquito Control Board has proposed the use of Oxitec's genetically modified mosquitoes as a strategy to reduce the spread of dengue fever (see "Are Mutant Mosquitos the Answer in Key West?"). My colleagues have not yet published their research, but they kindly shared some of their results with me and gave me permission to discuss it in limited fashion at the 2014 Biotechnology Symposium and in this blog post. They were thoughtful in their development of a survey instrument and in their strategic choices for defining a relevant public. They also brought a reflexive stance to their research design that nicely illustrates the potential disconnect between measures of public perception and the complexity of public perception.

Reporting from a door-to-door survey, Elizabeth Pitts and Michael Cobb (unpublished manuscript) asked whether residents supported the public release of GM mosquitos. The results would seem to comfort those who support the technology:

With a clear majority of support, and opposition under 25% of survey respondents, we might assume that little needs to be done – either by the company developing the mosquito or the state agency that wishes to try it. Only the anti-GM campaigners have a lot of work to do – or maybe such numbers suggest that they should just give up and focus on something else.

But the story does not and should not end there. The survey protocol also asked respondents to describe the benefits and risks of GM mosquitos – enabling the coding of their open-ended responses as follows in the next two tables.

These tables do not exactly offer rock-solid pillars to support the apparently straightforward "polling numbers". First, despite having just been told a short version of how GM mosquitos would work to control the spread of dengue fever, very few respondents seemed to have internalized or understood this key point. In fact, we should not even take solace in the fact that 40% of respondents mentioned "mosquito control" as a benefit – the GM mosquito is designed to reduce the population only of the species of mosquito that transmits dengue fever, which may have little impact on residents' experience of mosquitos (of all species) as blood-sucking pests. Second, nearly one-third of respondents had no response at all to either the benefits or hazards questions – suggesting a lack of engagement with and/or knowledge of the topic. Third, nearly 40% of respondents expressed one or more concerns, many of which are at least superficially reasonable (e.g., questions about ecological consequences or unintended impacts on human health). While the survey data do not tell us how concerned residents were, such concerns have the potential to torpedo the 60% support figure, depending on subsequent dissemination of information and opinions.

To me, these data reveal the superficiality of the “approval rating” as a measure of public perception; yet, those are the data that are easiest to measure and most tempting for our media to report and pundits to interpret. It is a lovely sound bite to sum up a technology assessment in a poll measuring support or approval.

As someone who has practiced and studied public engagement (for example, see 2013a, 2013b, 2012, 2011a, 2011b, 2011c, 2011d), I would argue that if we truly care about how non-experts perceive an emerging technology – whether for democratic or commercial purposes – we need to focus on more messy forms of measurement and engagement. These might be more expensive, less clear-cut, and perhaps somewhat internally inconsistent, but they will give us more insight. We also must at least entertain the idea that opinion formation may reflect an evaluative process that does not rely only upon “the facts.” My hope would be that such practices would promote further engagement rather than quick numbers to either reassure or provoke existing fields of partisans. 

Jason Delborne is an Associate Professor of Science, Policy, and Society in the Department of Forestry and Environmental Resources, and an affiliated faculty member in the Center on Genetic Engineering and Society, at North Carolina State University.

 

Monday
Jun232014

They've already gotten the memo! What the public (Rs & Ds) think "climate scientists believe"

I’ve explained in a couple of posts why I think experimental evidence in support of “messaging” scientific consensus is externally invalid and why real-world instances of this “messaging” strategy can be expected to reinforce polarization.

But here is some new evidence (from a new paper, which I'll post this week) that critically examines the premise of the “message 97%” strategy: namely, that political polarization over climate change is caused by a misapprehension of the weight of opinion among climate scientists.

It isn't.

Consider:

That’s what members of the U.S. general public, defined in terms of their political outlooks (based on their score in relation to the mean on a continuous scale running "left" to "right"), “believe” about human-caused global warming.

Old news.

But here are a set of items that indicate what they think “climate scientists believe” (each statement except the first was preceded with that clause):

 

Got it?

Overwhelming majorities of both Republicans and Democrats are convinced that "climate scientists believe" that CO2 emissions cause the temperature of the atmosphere to go up—probably the most basic scientific proposition about climate change.

In addition, overwhelming majorities of both Republicans and Democrats think that "climate scientists believe" that human-caused climate change poses all manner of danger to people and the environment.

Thus, they correctly think that “climate scientists believe” that “human-caused global warming will result in flooding of many coastal regions.” 

But they also incorrectly think that "climate scientists believe" that the melting of the North Pole ice cap will cause flooding. 

Healthy majorities of both Republicans and Democrats correctly think that “climate scientists believe” that global warming increased in the first decade of this century—but mistakenly think that “climate scientists believe” that human-caused climate change “will increase the risk of skin cancer” as well.

Again, these are the responses of the same nationally representative sample of respondents who were highly polarized on the question whether human-caused climate change is happening.

Here’s what’s going on:

1.  Items measuring “belief in human caused global warming” & the equivalent do not measure perceptions of “what people know,” including what they think “climate scientists believe.”

“Belief in human-caused global  warming” items measure “who one is, what side one is on” in an ugly and highly illiberal form of cultural status competition, one being fueled by the idioms of contempt that the most conspicuous spokespeople on both sides use.

As I’ve explained, the responses that individuals give to such items in surveys are as strong an indicator of their political identity as items that solicit self-reported liberal-conservative ideology and political-party self-identification.

What individuals know—or think they know—about climate science is a different matter.  To measure it, one has to figure out how to ask a question that is not understood by survey respondents as “who are you, whose side are you on.”  

Consider, in this regard, the parallel with “belief” in evolution.  When asked whether they believe in evolution, members of the US general population split 50-50, based not on understanding of evolution or science comprehension generally but on the centrality of religion to their cultural identities.

But when one frames the question as what scientists understand the evidence to be on evolution, then the division disappears.  A question worded that way enables relatively religious individuals to indicate what they know about science without having to express a position that denigrates their identities.

Same here: ask "what do climate scientists believe?" of the parties who polarize on the identity-expressive question "do you believe in global warming? do you? do you?", and you can see that there is in fact bipartisan agreement about what climate scientists think!

2.  Different impressions of what “climate scientists believe” clearly aren’t the cause of polarization on global warming.

The difference between Republicans and Democrats on "what climate scientists believe" is trivial.  It doesn't come close to explaining the magnitude and depth of the division on "human-caused global warming."

Otherwise, the debate between Democrats and Republicans would be only over how much to spend to develop new nanotechnology sun screens to protect Americans from the epidemic of skin cancer that all recognize is looming.

Why did anyone ever think otherwise -- that the problem was simply not enough people had been told yet that there is scientific consensus on human-caused climate change?

Because it was plausible to believe that, for a while, given the correlation between responses to items asking survey respondents "do you believe in human-caused climate change" and ones asking them whether they believed "scientific consensus" was consistent or inconsistent with the position they held.

There was always a competing explanation: that survey items on “scientific consensus”—because they are not constructed to disentangle knowledge and identity—were in fact measuring the same thing as the “what do you believe about global warming” questions: namely, who are you, whose side are you on.

A decade’s worth of real-world evidence on the impact of “messaging” consensus has now rendered the former position wholly untenable.

And now here’s some new survey evidence—items constructed to separate the “who are you, whose side are you on” question from “what do you know” question—that is much more consistent with the alternative hypothesis, and with the real world and experimental data that support that explanation.

Climate scientists update their models when ten years of evidence suggests one or another parameter was not right.

Climate science communicators must be willing to do the same—or else they are not genuinely being guided by science in their craft.

3.  Members of the public already get that climate scientists think that we face a huge problem.

The data I’ve presented obviously don't suggest that members of the public know very much about what scientists believe.  They are in fact as likely to be wrong about that as right.

However encouraging it is to see that they understand  CO2 is a “greenhouse gas,” it is painful to realize that they think  CO2 will kill the plants inside a greenhouse.

But the mistakes are all in the same direction: in favor of the answer that “climate scientists believe” global warming poses a huge risk for the environment and human beings in particular.

Basically, items like these are indicators of a latent (unobserved) disposition to attribute to climate scientists the position “we are screwed if we don’t do something.”

That might not be a nuanced and discerning enough view to get you an “A” on a high school “climate science” exam.

But if civic knowledge consists in recognizing the policy significance of what science knows (melting polar ice causes sea level rise) as opposed to various technical details (e.g., that the North Pole ice cap is a big ice cube floating in the Arctic sea & thus won't displace ocean water when it melts), then there is already more than enough civic understanding to motivate political responsiveness.

The problem—what’s blocking this civic knowledge from being translated into action—is something else.  That’s what science communicators and others need to work on.

4. Consensus “messaging” campaigns don’t address the problem—except to the extent that predictably partisan forms of them make things worse.

If there is already a strong, bipartisan disposition to view climate science as saying "we are in deep shit trouble, folks," then such "messaging" doesn't tell people anything they don't already know.

The reason that ordinary citizens are polarized on doing something about climate change is that such policies have become infused with cultural meanings that signify each group’s contempt for the other. 

Climate change, as Al Gore says, is a "struggle for the soul of America"—and as long as it remains so, people will resist an outcome that says they and people they look up to are "stupid and evil."

Disentangling climate science from cultural status conflict must be the key objective.

“Messaging” scientific consensus doesn’t do that. On the contrary, it just adds another assaultive idiom – “97% AGREEEEEEE, MORON!!!” –to the already abundant stock of tropes one side uses to express how much contempt it has for its opponent in an ugly, senseless cultural status competition.

5.  Is there any alternative interpretation of these data?

Sure!

Someone could say, reasonably, that asking people what they think "climate scientists believe" is different from measuring whether those people themselves believe what climate scientists have concluded.

I don’t think that's a convincing explanation for the discrepancy between the bipartisan consensus on the “what do climate scientists believe?” items and the “do you believe in human-caused global warming?" items.

As I've explained, I think the two are measuring different things, and, sadly, the question that is posed by the "climate change debate" is measuring what the latter items do: who you are, what side are you on?

We need to change the way politics frames the question -- so that it measures what we know, including what we collectively are fully capable of recognizing as science's best understanding of the evidence.

But the point is that even if someone thinks the best explanation for the data is that "Republicans distrust scientists"--another issue that depends on making valid measurements of public opinion--then obviously "messaging" consensus is not a responsive strategy.

Of course, the even bigger point is this: climate-science communicators will get nowhere if they accept interpretations of bits and pieces of evidence that are manifestly inconsistent with the evidence as a whole.           

Saturday
Jun212014

Authors: to assure no one can read your articles, publish in a Taylor & Francis journal!

They obviously have some exceptionally horrendous licensing policy, since even major university libraries do not have on-line access to T&F periodicals for 1 yr after article publication.

For sure, $226 for the whole issue of Human & Ecological Risk Assessment is a great deal, too!

 

 

Friday
Jun202014

Response: An “externally-valid” approach to consensus messaging

John Cook, science communication scholar and co-author of Quantifying the consensus on anthropogenic global warming in the scientific literature, Environmental Research Letters 8, 024024 (2013), has supplied this thoughtful response to the first of my posts on "messaging consensus." --dmk38

Over the last decade, public opinion about human-caused global warming has shown little change. Why? Dan Kahan suggests cultural cognition is the answer: 

When people are shown evidence relating to what scientists believe about a culturally disputed policy-relevant fact ... they selectively credit or dismiss that evidence depending on whether it is consistent with or inconsistent with their cultural group’s position. 

It’s certainly the case that cultural values influence attitudes towards climate. In fact, not only do cultural values play a large part in our existing beliefs, they also influence how we process new evidence about climate change. But this view is based on lab experiments. Does Kahan’s view that cultural cognition is the whole story work out in the real world? Is that view “externally valid”?

The evidence says no. A 2012 Pew survey of the general public found that even among liberals, there is low perception of the scientific consensus on human-caused global warming. When Democrats are asked "Do scientists agree earth is getting warmer because of human activity?", only 58% said yes. There's a significant "consensus gap" even for those whose cultural values predispose them towards accepting the scientific consensus. A "liberal consensus gap".

My own data, measuring climate perceptions amongst US representative samples, confirms the liberal consensus gap. The figure below shows what people said in 2013 when asked how many climate scientists agree that humans are causing global warming. The x-axis is a measure of political ideology (specifically, support for free markets). For people on the political right (e.g., more politically conservative), perception of scientific consensus decreases, just as cultural cognition predicts. However, the most relevant feature for this discussion is the perceived consensus on the left.

At the left of the political spectrum, perceived consensus is below 70%. Even those at the far left are not close to correctly perceiving the 97% consensus. Obviously cultural cognition cannot explain the liberal consensus gap. So what can? There are two prime suspects. Information deficit and/or misinformation surplus. 

Kahan suggests that misinformation casting doubt on the consensus is ineffective on liberals. I tend to agree. Data I’ve collected in randomized experiments supports this view. If this is the case, then it would seem information deficit is the driving force behind the liberal consensus gap. It further follows that providing information about the consensus is necessary to close this gap. 

So cultural values and information deficit both contribute to the consensus gap. Kahan himself suggests that science communicators should consider two channels: information content and cultural meaning. Arguing that one must choose between the information deficit model or cultural cognition is a false dichotomy. Both are factors. Ignoring one or the other neglects the full picture. 

But how can there be an information deficit about the consensus? We’ve been communicating the consensus message for years! Experimental research by Stephan Lewandowsky, a recent study by George Mason University and my own research have found that presenting consensus information has a strong effect on perceived consensus. If you bring a participant into the lab, show them the 97% consensus then have them fill out a survey asking what the scientific consensus is, then lo and behold, perception of consensus shoots up dramatically. 

How does this “internally valid” lab research gel with the real-world observation that perceived consensus hasn’t shifted much over the last decade? A clue to the answer lies with a seasoned communicator whose focus is solely on “externally valid” approaches to messaging. To put past efforts at consensus messaging into perspective, reflect on these words of wisdom from Republican strategist and messaging expert Frank Luntz on how to successfully communicate a message: 

“You say it again, and you say it again, and you say it again, and you say it again, and you say it again, and then again and again and again and again, and about the time that you're absolutely sick of saying it is about the time that your target audience has heard it for the first time. And it is so hard, but you've just got to keep repeating, because we hear so many different things -- the noises from outside, the sounds, all the things that are coming into our head, the 200 cable channels and the satellite versus cable, and what we hear from our friends.” 

When it comes to disciplined, persistent messaging, scientists aren’t in the same league as strategists like Frank Luntz. And when it comes to consensus, this is a problem. Frank Luntz is also the guy who said: 

“Voters believe that there is no consensus about global warming in the scientific community.  Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly.  Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate, and defer to scientists and other experts in the field.” 

Luntz advocated casting doubt on the consensus for one simple reason. When people understand that scientists agree that humans are causing global warming, then they’re more likely to support policies to mitigate climate change. Confuse people about consensus, and you delay climate action. 

This finding has subsequently been confirmed by studies in 2011 and 2013. But a decade before social scientists figured it out, Luntz was already putting into place strategies to drum home the “no consensus” myth, with the purpose of reducing public support for climate action. 

Reflecting on the disinformation campaign and the social science research into consensus messaging, Ed Maibach at George Mason University incorporates both the “internally valid” social science research and the “externally valid” approach of Frank Luntz:

We urge scientific organizations to patiently, yet assertively inform the public that, based on the evidence, more than 97% of climate experts are convinced that human-caused climate change is happening. Some scientific organizations may argue that they have already done this through official statements. We applaud them for their efforts to date, yet survey data clearly demonstrate that the message has not yet reached or engaged most Americans. Occasional statements and press releases about the reality of human-caused climate change are unfortunately not enough to cut through the fog—it will take a concerted, ongoing effort to inform Americans about the scientific consensus regarding the realities of climate change.

How do we achieve this? Maibach suggests climate scientists should team up with social scientists and communication professionals. What should scientists be telling the public? Maibach advises:

In media interviews, public presentations, and even neighborhood and family gatherings, climate scientists should remember that many people do not currently understand that there is an overwhelming scientific consensus about human-caused climate change. Tell them, and give them the numbers.

The book Made To Stick looks at "sticky" messages that have caught the public's attention. It runs through many real-world case studies (e.g., externally valid examples) to demonstrate that sticky ideas are simple, concrete, unexpected and tell a story. For a general public who think there is a 50:50 debate among climate scientists, learning that 97% of climate scientists agree that humans are causing global warming ticks many of the sticky boxes.

 

Wednesday
Jun182014

WSMD? JA! How confident should we be that what one "believes" about global warming, on 1 hand, and political outlooks, on other, measure the same *one* thing?

This is the 983rd--I think; it could also be 613th--episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

@DaneGWendell, snickering at a bar graph (I pretty much agree: bar graphs almost always are a yucky way to graphically report interesting data!)

A couple days ago I posted something on "what belief in global warming measures." The answer, I said, was one's group-based sense of self-identity.

To support that basic point, I stated that (1) the Industrial Strength Measure of global warming risk perceptions, (2) a standard "belief in" human-caused global warming item, (3) the standard 5-point "liberal-conservative" ideology measure, and (4) the standard 7-point partisan self-identification measure display the psychometric properties of observable indicators of a single latent variable.

A “latent variable” is something that can’t be observed directly. “Indicators” are things one can observe that correlate with the latent variable, typically because they are caused by it (that’s not strictly necessary; one can model a latent variable as being caused by indicators, or both indicators and latent variables as being caused by some other exogenous variable, etc.).

We can thus use the indicators as a substitute for the latent variable in modeling how the latent variable relates to other quantities of interest. When the indicators are aggregated appropriately, their “noise”—the parts of them that vary independently of their causal connection to the latent variable—cancel out, making the resulting scale or index an even more discerning measure of the latent variable (DeVellis 2012).
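For concreteness, here is a minimal sketch (in Python, with made-up responses, not the CCP dataset) of what "aggregating indicators into a scale" amounts to in practice: standardize each item so that different response formats are comparable, then average them.

```python
import numpy as np

# Hypothetical responses (rows = respondents, columns = indicators):
# col 0: 0-10 global-warming risk ISM, col 1: 0/1 "belief in" AGW item,
# col 2: 1-5 liberal-conservative ideology, col 3: 1-7 party self-ID.
rng = np.random.default_rng(0)
items = np.column_stack([
    rng.integers(0, 11, size=500),
    rng.integers(0, 2, size=500),
    rng.integers(1, 6, size=500),
    rng.integers(1, 8, size=500),
]).astype(float)

# z-score each indicator, then average across indicators; the item-specific
# "noise" tends to cancel out, leaving a sharper measure of the latent variable.
z_scores = (items - items.mean(axis=0)) / items.std(axis=0)
identity_scale = z_scores.mean(axis=1)
```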

But before one can do that, one has to be confident the putative indicators really do have the properties one would expect of variables that are measuring the same thing.

I noted that the scale formed by combining the global-warming risk ISM, the “belief” in climate change item, and the two right-left political outlook ones displays a high “Cronbach’s α,” an inter-item correlation statistic that is conventionally understood to measure how reliably the aggregated items (the indicators) can be taken to be measuring any latent variable.

But a curious & reflective guy named @DaneGWendell correctly noted—on twitter—that a high α doesn't by itself guarantee that the aggregated items are measuring a single latent variable.

Particularly where one has a large number of items, a scale formed by summing item responses can display a reasonably high α when in fact they are measuring two or maybe even more correlated latent variables.
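For readers who want to see the statistic at work, here is a hedged sketch of the standard Cronbach's α formula (illustrative code only; the item matrix is assumed to hold scored survey responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix of scored responses.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Note: a high alpha shows the items covary, not that they form a single
    factor -- two tightly correlated item clusters can still yield a high alpha.
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)
```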

Linear factor analysis is one of the conventional ways to assess the “dimensionality” of a scale. Conceptually, factor analysis estimates how much variance in the responses to the items can be accounted for by positing a single factor or latent variable, how much of the remaining variance can then be accounted for by positing a second, and so forth.

@DaneGWendell was interested in what a factor analysis of the global warming ISM, global warming belief, and political outlook measures would reveal.

Good question & worthy of a WSMD, JA!

To start, here's the item "correlation matrix."  The coefficients express polychoric correlation, which is more appropriate than Pearson correlation where, as here, one wants to do a factor analysis of "mixed" data (the ISM is a multi-point rating scale, the political outlook measures are multi-point Likert items, and the "belief in" measure a dichotomous item).

 

Here is the factor analysis of that correlation matrix: 

 

There are a variety of conventional “rules of thumb” used to assess factor structure, all of which suggest that the four items here are appropriately treated as forming a “unidimensional” (i.e., one latent variable) scale.

E.g., the ratio of the “eigenvalues” of the first factor (which explains 90% of the variance in the items) and of the second (which explains almost all the rest) is “greater than 3.”

In addition, the eigenvalue for the second factor is “less than 1.”

Or if we look at a “scree plot,” which plots the eigenvalue of successive factors, there is an “elbow” at 2.
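To make those rules of thumb concrete, here is a rough sketch of how one might compute them from an item correlation matrix. It uses a plain eigendecomposition (a principal-components-style approximation, not the exact factor-extraction method behind the output above), and R stands in for whatever correlation matrix (here, polychoric) one has in hand.

```python
import numpy as np

def retention_checks(R: np.ndarray) -> None:
    """R: square item correlation matrix (e.g., the polychoric matrix above)."""
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # largest first
    share = eigvals / eigvals.sum()                  # approximate variance shares
    print("eigenvalues:        ", np.round(eigvals, 2))
    print("variance shares:    ", np.round(share, 2))
    print("1st/2nd eigenvalue: ", round(eigvals[0] / eigvals[1], 2))  # "> 3"?
    print("2nd eigenvalue < 1? ", bool(eigvals[1] < 1))
    # A scree plot is just these eigenvalues plotted against rank 1, 2, 3, ...;
    # the "elbow" is where the curve flattens out.
```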

Maybe you can tell, but I find this way of proceeding, which is exactly what you'll see in most articles or textbooks, pretty mechanical and unmotivated. 

Call me silly, but I think it makes more sense to use judgment in assessing the covariance structure to determine whether the items can plausibly be understood to be measuring only one latent variable.

Actually, it's been shown by people who are actually thinking about what they are doing and why that treating a two-dimensional scale as one-dimensional often has no adverse effect on the accuracy of that scale as a measure of a single latent variable if the two factors are very closely correlated (e.g., Bolt 1999).

Also, the various statistical techniques and rules of thumb (pragmatic fit indexes etc.) that researchers typically use to investigate scale "dimensionality" have been described as essentially "completely worthless" (Embretson & Reise 2000, p. 228).

But in fact, that's an unfair appraisal.  They are useful-- but not if used mechanically, as if (to quote Chris Hedges), "the answer to the question" whether a group of items can be treated as observable indicators of a single latent variable were the same as asking, “I mean, what exact buttons do I have to hit?”

"There is utility" (to paraphrase Chris Hedges), in these techniques "in that they may provide supporting evidence that a data set is reasonably dominated by a single common factor" (Embretson & Reise 2000, p. 228).

Or in other words, factor analysis, Cronbach's α, and various related statistical measures are tools one can use to equip judgment to do a more reliable job in helping to form valid inferences.

But treated as substitutes for judgment, they are "completely worthless" (Hedges, of course, 1999, 2000, 2006, 2012, 2014, 2014).

So applying some judgment, what am I trying to say here, and how confident should I be about that given this particular set of observations?

Basically, I’m saying that the 4 items are all measuring the “same thing”—a latent disposition to form coherent stances on matters political. The responses to the “climate change” items are expressions of that disposition—are caused by it—in the same way as responses to the liberal-conservative ideology and party self-identification measures.

The factor analysis is consistent with that. 

But wouldn’t it be more satisfying if I showed this interpretation was more convincing than some alternative plausible hypothesis?

One might think—very reasonably!—that expressions of risk toward environmental hazards reflect a latent disposition, one correlated with but in fact distinct from the sense of identity that one might think political outlooks measure. 

A good alternative hypothesis, then, would be that “climate change” risk perceptions and related factual beliefs are better understood as indicators of some “environmental concern” disposition that is connected to but actually not the "same thing" as the "self-identity" disposition indicated by liberal-conservative ideology and party self-identification.

That alternative hypothesis would have been supported, for sure, if variance in these items had turned out to be more convincingly explained by two discrete factors, one comprising the political outlook items and the other the climate-change items.

But an even more convincing test would be to add some additional “environmental risk concern” items to the “mix,” and then see what happens.

Here is a correlation matrix that adds to the four items in question ISMs for "artificial food colorings," "use of artificial sweeteners in diet soft drinks," and "genetically modified food."

The signs of the correlations are consistent with what one might expect if one believed both that environmental risk perceptions will cohere with each other and that political outlooks will correlate with environmental risk perceptions.

But the correlations between the artificial food coloring, artificial sweetener, and GM food ISMs, on the one hand, and the climate-change items, on the other, are much smaller than the correlations between the climate-change items and the political outlook ones!

That makes me think it's less likely that global warming items are measuring the "same thing" as those other risk items than it is that the global warming items are indeed measuring the "same thing" as the political outlook items.

Now consider the factor analysis of these 7 items:

The relative proportions of variance explained by the first two factors—0.6 and 0.3—is much closer than was the case for the two factors in the first analysis (0.9 and 0.1).

By the same token, the rule-of-thumb criteria—ratio of eigenvalues (about 2), the absolute size of the second factor’s eigenvalue (> 1), and the scree plot (“elbow” at 3 rather than 2) all support treating the items as measuring two discrete factors.

More importantly, in my judgmental opinion, if we look at the "factor loadings"—essentially the correlations between the factors and the indicated items—we can see that the covariance structure looks as you might expect if there were 2 latent variables being measured here rather than 1.

The first consists of the global warming ISM, the "belief in" climate change item, the liberal-conservative ideology item, and the partisan self-identification item.

That's a discrete factor corresponding to the hypothesized latent disposition for which those four variables are all indicators.

The second factor loads much less heavily on those four items and much more so on the food coloring, artificial sweetener, and GM food risk ISMs.
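Here is a hedged sketch of how unrotated loadings can be pulled out of a correlation matrix (again a principal-components-style approximation with illustrative names, not the rotated solution reported above):

```python
import numpy as np

def principal_loadings(R: np.ndarray, n_factors: int = 2) -> np.ndarray:
    """Return an items x n_factors matrix of unrotated loadings.

    Loadings are the leading eigenvectors of the correlation matrix R, each
    scaled by the square root of its eigenvalue, so each column approximates
    the correlation between an item and that factor.
    """
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:n_factors]   # indices of the largest ones
    return eigvecs[:, top] * np.sqrt(eigvals[top])
```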

We might, then, want to treat the latter three variables as a scale that measures a concern with environmental risks, or maybe with “food risks” in particular.

The Cronbach’s α for a scale that aggregates those three items would be 0.76.  Usually 0.70 is considered “good.”

The Cronbach’s α for a scale formed by aggregating the climate-change and political outlook items that form the first factor would be 0.85. 

I'm happy about that, though, less b/c I cleared some arbitrary statistical threshold than b/c it just is the case that w/ a "low" Cronbach's α, one won't be able to connect variance in the scale to variance in other quantities of interest.

There is a very modest positive correlation between the scales of 0.15 (p < 0.01).  In other words, the identity disposition explains some of the variance in this “food risk” disposition, but not much (that's kind of interesting, don't you think? but the 14 billion readers of this blog are among the select few who already know that it's not true that GM foods divide the US general public along political lines).
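As a final sketch, here is how one might estimate the correlation between the two sub-scale scores; the data below are simulated to have roughly the 0.15 correlation reported above, not drawn from the actual survey.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1000

# Simulated standardized scale scores standing in for the "identity" scale
# (climate + political outlook items) and the "food risk" scale (food
# coloring, artificial sweetener, GM food ISMs).
identity = rng.standard_normal(n)
food_risk = 0.15 * identity + np.sqrt(1 - 0.15 ** 2) * rng.standard_normal(n)

r, p = stats.pearsonr(identity, food_risk)
print(f"r = {r:.2f}, p = {p:.3f}")
```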

Well there you go!

I’m even more confident than I would have been had I not done these analyses, or had I just done a recipe-book factor analysis of the four items I hypothesized form a single latent “identity” variable and stopped there.

But that’s all I am: more confident than I’d be otherwise.

Also, not as confident as I could be if I were to do even more things that admit of meaningful assessment than the still too recipe-bookish application of factor analysis I just performed.

And for sure not so confident that I wouldn't change my mind if I were shown meaningful evidence that seemed to support a different conclusion the factor analyses notwithstanding.

The idea that one can perform some set of tests in a mechanical, judgment-free fashion and get “the answer” on questions about how elements of cognition work is commonplace, but wrong.

References

Bolt, D.M. Evaluating the Effects of Multidimensionality on IRT True-Score Equating. Applied Measurement in Education 12, 383-407 (1999).

DeVellis, R.F. Scale development : theory and applications (SAGE, Thousand Oaks, Calif., 2012).

Embretson, S.E. & Reise, S.P. Item response theory for psychologists (L. Erlbaum Associates, Mahwah, N.J., 2000).

 

Wednesday
Jun182014

What is the *message* of real-world "scientific consensus" messaging? Ruminations on the external validity of climate-science-communication studies, part 3

This is part 3 of a series on external validity problems with climate-science-communication studies. The problem, in sum, is that far too many researchers are modeling dynamics different from the ones that occur in the real world, and far too many communicators are being induced to rely on these bad models.

In my first post, I described the confusion that occurs when pollsters assert that responses to survey items that don't reliably or validly measure anything show there's "overwhelming bipartisan support" for something having to do with climate change.

In the second, I described the mistake of treating a laboratory "messaging" experiment as better evidence than 10 yrs of real-world evidence on what happens when communicators expend huge amounts of resources on a "scientific consensus" messaging campaign.

This post extends the last by showing how different real-world scientific-consensus "messaging" campaigns are from anything that is being tested in lab experiments.

All of these are excerpts from a paper I'll post soon -- one that has original empirical data relating to what measures what in the study of climate-change science communication.

* * *

5. “Messaging” scientific consensus


a. The “external validity” question.
 * * *

b.  What is the “message” of “97%”?  “External invalidity” is not an incorrect explanation of why “scientific consensus” lab experiments produce results divorced from the observable impact of real-world scientific-consensus “messaging” campaigns. But it is incomplete. 

We can learn more by treating the lab experiments and the real-world campaigns as studies of how people react to entirely different types of messages.  If we do, there is no conflict in their results.  They both show individuals rationally extracting from “messages” the information that is being communicated.

Consider what the “97% scientific consensus” message looks like outside the lab.  There people are likely to "receive" it in the form it takes in videos produced by the advocacy group Organizing for Action.  Entitled “X is a climate change denier,” the videos consist of a common template with a variable montage of images and quotes from “X,” one of two dozen Republican members of Congress (“Speaker Boehner,” “Senator Marco Rubio,” “Senator Ted Cruz”). Communicators are expected to select “X” based on the location in which they plan to disseminate the video. 

The video begins with an angry, perspiring, shirt-sleeved President Obama delivering a speech: “Ninety-seven percent of scientists,” he intones, shaking his fist.  After he completes his sentence, a narrator continues, “There’s not a lot of debate left in this debate: NASA and 97% of the nation’s scientists agree . . .,” a message reinforced by a  cartoon image of a laboratory beaker and the printed message “97% OF SCIENTISTS AGREE.” 

After additional cartoon footage (e.g., a snowman climbing into a refrigerator) and a bar graph ("Events with Damages Totaling $1 billion or More," the tallest column of which is labeled "Tornadoes . . ."), the video reveals that X is a "CLIMATE CHANGE DENIER."  X is then labeled "RADICAL & DANGEROUS" because he or she disputes what "NASA" and the "NATIONAL ACADEMY OF SCIENCES" and "97% of SCIENTISTS" (block letters against a background of cartoon beakers) all "AGREE" is true.

What’s the lesson?  Unless the viewer is a genuine idiot, the one thing she already knows is what “belief” or “disbelief in” global warming means. The position someone adopts on that question conveys who he is—whose side he’s on, in a hate-filled, anxiety-stoked competition for status between opposing cultural groups.  

If the viewer of "X is a climate denier" had not yet been informed that the message "97% of scientists agree" is one of the stock phrases used to signal one cultural group's contempt for the other, she has now been put on notice. It is really pretty intuitive: who wouldn't be insulted by someone screaming in her face that she and everyone she identifies with "rejects science"?

 The viewer can now incorporate the “97% consensus” trope into her own “arguments” if she finds it useful or enjoyable to demonstrate convincingly that she belongs to the tribe that “believes in” global warming.  Or if she is part of the other one, she can now more readily discern who isn’t by their use of this tagline to heap ridicule on the people she respects.

The video’s relentless use of cartoons and out-of-proportion, all-cap messages invests it with a “do you get it yet, moron?!” motif. That theme reaches its climax near the end of the video when a multiple choice “Pop Quiz!” is superimposed on the (cartoon) background of a piece of student-notebook paper.  “CLIMATE CHANGE IS,” the item reads, “A) REAL,” “B) MANMADE,” “C) DANGEROUS,” or as indicated instantly by a red check mark, “D) ALL OF THE ABOVE.”

The viewer of “X is a climate denier" is almost certainly an expert—not in any particular form of science but in recognizing what is known by science. As parent, health-care consumer, workplace decisionmaker, and usually as citizen, too, she adroitly discerns and uses to her advantage all manner of scientific insight, the validity and significance of which she can comprehend fully without the need to understand it in the way a scientist would.  If one administers a “what do scientists believe?” test after making visible to her the signs and cues that ordinary members of the public use to recognize what science knows, she will get an “A.” 

Similarly, if one performs an experiment that models that sort of reasoning, the hypothesis that this recognition faculty is pervasive and reliably steers the members of culturally diverse groups into convergence on the best available evidence will be confirmed.

But the viewer’s response to the “97% consensus” video is measuring something else.

The video has in fact forced her to become another version of herself. After watching it, she will now deploy her formidable reason and associated powers of recognition to correctly identify the stance to adopt toward the "97% consensus" message that accurately expresses who she is in a world in which the answer to "whose side are you on?" has a much bigger impact on her life than her answer to the question "what do you know?"

 

 

Tuesday
Jun172014

"Messaging" scientific consensus: ruminations on the external validity of climate-science-communication studies, part 2

This is the second installment of a set on "external validity" problems in climate-science communication studies.

"Internal validity" refers to qualities of the design that support drawing inferences about what is happening in the study. "External vality" refers to qualities of the design that support drawing inferences from the study to the real-world dynamics it is supposed to be modeling.

The external validity problems I want to highlight don't affect only the quality of studies. They affect the quality of the practice of climate-science communication, too, because communicators are relying on externally invalid studies for guidance.

The last entry concerned the use of surveys to measure public opinion on climate change.

This one addresses experimental and other evidence used to ground "social marketing campaigns" that feature scientific consensus.  It is also only the first of two on "messaging" scientific consensus; the next, which I'll post "tomorrow," will examine real-world "messaging" that purports to implement these study findings.

This post, like the last, is from a paper that I'm working on and will post soon (one with some interesting new data, of course!)

* * *

5. “Messaging” scientific consensus

a. The “external validity” question. On May 16, 2013, the journal Environmental Research Letters published an article entitled “Quantifying the consensus on anthropogenic global warming in the scientific literature.” In it, the authors reported that they had reviewed the abstracts of 12,000 articles published in peer-reviewed science journals between 1991 and 2011 and found that “among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming” (Cook et al. 2013).

“This is significant,” the lead author was quoted as saying in a press statement issued by his university, “because when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it.” “Making the results of our paper more widely-known,” he continued, “is an important step toward closing the consensus gap”—between scientists who agree with one another about global warming and ordinary citizens who don’t—“and increasing public support for meaningful climate action” (Univ. Queensland 2013).

The proposition that disseminating the results of the ERL study would reduce public conflict over climate change was an empirical claim not itself tested by the authors of the ERL paper.  What sorts of evidence might one use (or have used) to assess it?

Opinion surveys are certainly relevant.  They show, to start, that members of the U.S. general public—Republican and Democrat, religious and nonreligious, white and black, rich and poor—express strongly pro-science attitudes and hold scientists in high regard (National Science Foundation 2014, ch. 7; Pew 2009). In addition, no recognizable cultural or political group of consequence in American political life professes to disagree with, or otherwise dismiss the significance of, what scientists have to say about policy-relevant facts. On the contrary, on myriad disputed policy issues—from the safety of nuclear power to the effectiveness of gun control—members of the public in the U.S. (and other liberal democratic nations, too) indicate that the position that predominates in their political or cultural group is the one consistent with scientific consensus (Kahan, Jenkins-Smith & Braman 2011; Lewandowsky, Gignac & Vaughan 2012).

Same thing for climate change. As the ERL authors noted, surveys show a substantial proportion of the U.S. general public rejects the proposition that there is “scientific consensus” on the existence and causes of climate change. Indeed, the proportion that believes there is no such consensus consists of exactly the same proportion that says it does not “believe in” human-caused global warming (Kahan et al. 2011).

So, the logic goes, all one has to do is correct the misimpression of that portion of the public. Members of the public very sensibly treat as the best available evidence what science understands to be the best available evidence on facts of policy significance. Thus, “when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it” (Univ. Queensland 2013).

But there is still more evidence, of a type that any conscientious adviser to climate-science communicators would want them to consider carefully. That evidence bears directly on the public-opinion impact of “[m]aking the results” of studies like the ERL one “more widely-known” (Univ. Queensland 2013).

The ERL study was not the first one to “[q]uantify[]the consensus on anthropogenic global warming”; it was at least the sixth, the first one of which was published in Science in 2004 (Oreskes 2004; Lichter 2008; Doran & Zimmerman 2009; Anderegg et al. 2010; Powell 2012).  Appearing on average once every 18 months thereafter, these studies, using a variety of methodologies, all reached conclusions equivalent to the one reported in ERL paper.

Like the ERL paper, moreover, each of these earlier studies was accompanied by a high degree of media attention. 

Indeed, the “scientific consensus” message figured prominently in the $300 million social marketing campaign by Alliance for Climate Protection, the advocacy group headed by former Vice President Al Gore, whose “Inconvenient Truth” documentary film and book both prominently featured the 2004 “97% consensus” study published in Science (which was characterized by Gore as finding that "0%" of peer-reviewed climate science articles disputed the human contribution to global warming). 

An electronic search of major news sources finds over 6,000 references to "scientific consensus" and "global warming" or "climate change" in the period from 2005 to May 1, 2013.

There is thus a straightforward way to assess the prediction that "[m]aking the results" of the ERL study "more widely-known" can be expected to influence public opinion.  It is to examine how opinion varied in relation to efforts to publicize these earlier "scientific consensus" studies.

Figure 9 plots the proportion of the U.S. general public who selected "human activities" as opposed to "natural changes in the environment" as the main cause of "increases in the Earth's temperature over the last century" over the period 2003 to 2013 (in this Gallup item, there is no option to indicate rejection of the premise that the earth's temperature has increased, a position a majority or near majority of Republicans tend to select when it is available). The year in which each "scientific consensus" study appeared is indicated on the x-axis, as is the year in which "Inconvenient Truth" was released.


Nothing happened.

Or, in truth, a lot happened.  Many additional important scientific studies corroborating human-caused global warming were published during this time.  Many syntheses of the data were issued by high-profile institutions in the scientific community, including the U.S. National Academy of Sciences, the Royal Society, and the IPCC, all of which concluded that human activity is heating the planet. High-profile, and massively funded campaigns to dispute and discredit these sources were conducted too.  People endured devastating heat waves, wild fires, and hurricanes, punctuated by long periods of weather normality.  The Boston Red Sox won their first World Series title in over eight decades.

It would surely be impossible to disentangle all of these and myriad other potential influences on U.S. public opinion on global warming.  But one doesn’t need to do that to see that whatever the earlier scientific-consensus "messaging" campaigns added did not “clos[e] the consensus gap” (Univ. Queensland 2013). 

Why, then, would any reflective, realistic person counsel communicators to spend millions of dollars to repeat exactly that sort of “messaging” campaign? 

The answer could be laboratory studies. One (Lewandowsky et al. 2012), published in Nature Climate Change, reported that the mean level of agreement with the proposition "CO2 emissions cause climate change" was higher among subjects exposed to a "97% scientific consensus" message than among subjects in a control condition (4.4 vs. 4.0 on a 5-point Likert scale).  After being advised that "97% of scientists" accept that CO2 emissions increase global temperatures, those subjects also formed a higher estimate of the proportion of scientists who believe that (88% vs. 67%).
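For readers curious how such a treatment-vs.-control contrast is typically assessed, here is a hedged sketch with simulated Likert responses (the sample sizes and spread are guesses for illustration, not the NCC study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated 1-5 Likert agreement scores for a "97% consensus" message
# condition vs. a no-message control condition.
treated = np.clip(np.round(rng.normal(4.4, 0.7, size=45)), 1, 5)
control = np.clip(np.round(rng.normal(4.0, 0.7, size=45)), 1, 5)

t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
print(f"mean difference = {treated.mean() - control.mean():.2f}, p = {p_value:.3f}")
```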

Is it possible to reconcile this result with the real-world data on the failure of previous “scientific consensus” messaging campaigns to influence U.S. public opinion?  The most straightforward explanation would be that the NCC experiment was not externally valid—i.e., it didn’t realistically model the real-world dynamics of opinion-formation relevant to the climate change dispute. 

The problem is not the sample (90 individuals interviewed face-to-face in Perth, Australia). If researchers were to replicate this result using a U.S. general population sample, the inference of external invalidity would be exactly the same. 

For “97% consensus” messaging experiments to justify a social marketing campaign featuring studies like the ERL one, it would have to be reasonable to believe that what investigators are observing in laboratory conditions—ones created specifically for the purpose of measuring opinion—tell us what is likely to happen when communicators emphasize the “97% consensus” message in the real world. 

Such a strategy has already been tried in the real world.  It didn’t work.

There are, to be sure, many more things going on in the world, including counter-messaging, than are going on in a "97% consensus" messaging experiment.  But if those additional things account for the difference in the results, then that is exactly why that form of experiment must be regarded as externally invalid: it is omitting real-world dynamics that we have reason to believe, based on real-world evidence, actually matter in the real world.

On this account, the question to be investigated is not whether a "97% consensus" messaging campaign will influence public opinion but why it hasn't over a 10-year trial.  The answer, presumably, is not that members of the public are divided on whether they should give weight to the conclusions scientists have reached in studying risks and other policy-relevant facts. Those on both sides of the climate change debate believe that the other side's position is the one inconsistent with scientific consensus.

The ERL authors’ own recommendation to publicize their study results presupposes public consensus in the U.S. in support of using the best available scientific evidence in policymaking.  The advice of those who continue to champion “97% consensus” social marketing campaigns does, too. 

So why have all the previous highly funded efforts to make “people understand that scientists agree on global warming” so manifestly failed to “close the consensus gap” (Univ. Queensland 2013)?

There are studies that seek to answer exactly that question as well.  They find that culturally biased assimilation—the tendency of people to fit their perceptions of disputed facts to ones that predominate in their cultural group—applies to their assessment of evidence of scientific consensus just as it does to their assessment of all other manner of evidence relating to climate change (Corner, Whitmarsh & Xenias 2012; Kahan et al. 2011).

When people are shown evidence relating to what scientists believe about a culturally disputed policy-relevant fact (e.g., is the earth heating up? is it safe to store nuclear wastes deep underground? does allowing people to carry hand guns in public increase the risk of crime—or decrease it?), they selectively credit or dismiss that evidence depending on whether it is consistent with or inconsistent with their cultural group’s position. As a result, they form polarized perceptions of scientific consensus even when they rely on the same sources of evidence.

These studies imply misinformation is not a decisive source of public controversy over climate change.  People in these studies are misinforming themselves by opportunistically adjusting the weight they give to evidence based on what they are already committed to believing.  This form of motivated reasoning occurs, this work suggests, not just in the climate change debate but in numerous others in which these same cultural groups trade places being out of line with the National Academy of Sciences’ assessments of what “expert consensus” is.

To accept that this dynamic explains persistent public disagreement over scientific consensus on climate change, one has to be confident that these experimental studies are externally valid.  Real world communicators should definitely think carefully about that.  But because these experiments are testing alternative explanations for something we clearly observe in the real world (deep public division on climate change), they don’t suffer from the obvious defects of studies that predict we should already live in world we don’t see.

Part 3

References

Anderegg, W.R., Prall, J.W., Harold, J. & Schneider, S.H. Expert credibility in climate change. Proceedings of the National Academy of Sciences 107, 12107-12109 (2010).

Cook, J., Nuccitelli, D., Green, S.A., Richardson, M., Winkler, B., Painting, R., Way, R., Jacobs, P. & Skuce, A. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters 8, 024024 (2013).

Corner, A., Whitmarsh, L. & Xenias, D. Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Climatic Change 114, 463-478 (2012).

Doran, P.T. & Zimmerman, M.K. Examining the Scientific Consensus on Climate Change. Eos, Transactions American Geophysical Union 90, 22-23 (2009).

Farnsworth, S.J. & Lichter, S.R. Scientific assessments of climate change information in news and entertainment media. Science Communication 34, 435-459 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Lewandowsky, S., Gignac, G.E. & Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change 3, 399-404 (2012).

Lichter, S. Robert. Climate Scientists Agree on Warming, Disagree on Dangers, and Don't Trust the Media's Coverage of Climate Change. Statistical Assessment Service, George Mason University (2008).

National Science Foundation. Science and Engineering Indicators (Wash. D.C. 2014), available at http://www.nsf.gov/statistics/seind14/index.cfm/chapter-7/c7s3.htm.

Oreskes, N. The scientific consensus on climate change. Science 306, 1686-1686 (2004).

Pew Research Center for the People & the Press. Public praises science; scientists fault public, media (Pew Research Center, Washington D.C., 2009).

Powell, J. Why Climate Deniers Have No Scientific Credibility - In One Pie Chart. DESMOGBLOG.com (2012).

Univ. Queensland. Study shows scientists agree humans cause global-warming (2013). Available at http://www.uq.edu.au/news/article/2013/05/study-shows-scientists-agree-humans-cause-global-warming.