Friday, April 18, 2014

Want to improve climate-science communication (I mean really, seriously)? Stop telling just-so stories & conducting "messaging" experiments on MTurk workers & female NYU undergraduates & use genuine evidence-based methods in field settings instead

From Kahan, D., "Making Climate Science Communication Evidence-based—All the Way Down," in Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow, pp. 203-21 (Routledge Press, 2014):

a. Methods. In my view, both making use of and enlarging our knowledge of climate science communication requires making a transition from lab models to field experiments. The research that I adverted to on strategies for counteracting motivated reasoning consists of simplified and stylized experiments administered face-to-face or on-line to general population samples. The best studies build explicitly on previous research—much of it also consisting of stylized experiments—that has generated information about the nature of the motivating group dispositions and the specific cognitive mechanisms through which they operate. They then formulate and test conjectures about how devices already familiar to decision science—including message framing, in-group information sources, identity-affirmation, and narrative—might be adapted to avoid triggering these mechanisms when communicating with these groups.[1]

But such studies do not in themselves generate usable communication materials. They are only models of how materials that reflect their essential characteristics might work. Experimental models of this type play a critical role in the advancement of science communication knowledge: by silencing the cacophony of real-world influences that operate independently of anyone's control, they make it possible for researchers to isolate and manipulate mechanisms of interest, and thus draw confident inferences about their significance, or lack thereof. They are thus ideally suited to narrowing the class of merely plausible strategies to those that communicators are empirically justified in believing will have an impact. But one can't then take the stimulus materials used in such experiments and send them to people in the mail or show them on television and imagine that they will have an effect.

Communicators are relying on a bad model if they expect lab researchers to supply them with a bounty of ready-to-use strategies. The researchers have furnished them something else: a reliable map of where to look for them. Such a map will (it is hoped) spare the communicators from wasting their time searching for nonexistent buried treasure. But the communicators will still have to dig, making and acting on informed judgments about what sorts of real materials they believe might reproduce these effects outside the lab in the real-world contexts in which they are working.

The communicators, moreover, are the only ones who can competently direct this reproduction effort. The science communication researchers who constructed the models can’t just tell them what to do because they don’t know enough about the critical details of the communication environment: who the relevant players are, what their stakes and interests might be, how they talk to each other, and whom they listen to. If researchers nevertheless accept the invitation to give “how to” advice, the best they will be able to manage are banalities—“Know your audience!”; “Grab the audience’s attention!”—along with Goldilocks admonitions such as, “Use vivid images, because people engage information with their emotions. . . but beware of appealing too much to emotion, because people become numb and shut down when they are overwhelmed with alarming images!”

Communicators possess knowledge of all the messy particulars that researchers not only didn't need to understand but were obliged to abstract away from in constructing their models. Indeed, like all smart and practical people, the communicators are filled with many plausible ideas about how to proceed—more than they have the time and resources to implement, and many of which are not compatible with one another anyway. What experimental models—if constructed appropriately—can tell them is which of their surmises rest on empirically sound presuppositions and which do not. Exposure to the information such modeling yields will activate experience-informed imagination on the communicators' part, and enable them to make evidence-based judgments about which strategies they believe are most likely to work for their particular problem.

At that point, it is time for the scientist of science communication to step back in—or to join alongside the communicator. The communicator's informed conjecture is now a hypothesis to be tested. In advising field communicators, science of science communication researchers should treat what the communicators do as experiments. They should work with the communicators to structure their communication strategies in a manner that yields valid observations that can be measured and analyzed.

Indeed, communicators, with or without the advice of science of science communication researchers, should not just go on blind instinct. They shouldn't just read a few studies, translate them into plausible-sounding plans of action, and then wing it. Their plausible surmises about what will work will be more plausible, more likely to work, than any that the laboratory researchers, indulging their own experience-free imaginations, concoct. But they will still be only plausible surmises. Still only hypotheses. Without evidence, we will not learn whether policies based on such surmises did or didn't work. If we don't learn that, we won't learn how to do even better.

Genuinely evidence-based science communication must be based on evidence all the way down. Communicators should make themselves aware of the existing empirical information that science communication researchers have generated (and steer clear of the myriad stories that department-store consumers of decision science work tell) about why the public is divided on climate science. They should formulate strategies that seek to reproduce in the world effects that have been shown to help counter the dynamics of motivated reasoning responsible for such division. Then, working with empirical researchers, they should observe and measure. They should collect appropriate forms of pretest or preliminary data to try to corroborate that the basis for expecting a strategy to work is sound and to calibrate and refine its elements to maximize its expected effect. They should also collect and analyze data on the actual impact of their strategies once they've been deployed.
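To give the "observe and measure" step some concreteness, here is a minimal sketch (illustrative only, not from the chapter) of the sort of analysis a communicator-researcher team might run on a deployed intervention: simulated pre/post opinion scores for treated and control communities, with the intervention's impact estimated by a difference-in-differences regression. The data, variable names, and effect size are all invented for illustration.

```python
# Illustrative sketch only: hypothetical data for a field intervention,
# analyzed with a difference-in-differences regression. All names and
# numbers are assumptions, not results from any actual study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical respondents per community per wave

rows = []
for treated in (0, 1):          # 1 = community that received the campaign
    for post in (0, 1):         # 1 = survey wave after deployment
        # Assume the campaign shifts post-deployment scores by +0.5.
        effect = 0.5 * treated * post
        scores = rng.normal(loc=3.0 + effect, scale=1.0, size=n)
        rows.append(pd.DataFrame({"score": scores,
                                  "treated": treated,
                                  "post": post}))
df = pd.concat(rows, ignore_index=True)

# The treated:post coefficient estimates the intervention's impact,
# net of baseline differences between communities and secular trends.
fit = smf.ols("score ~ treated * post", data=df).fit()
print(fit.summary().tables[1])
```

In a real field study the simulated scores would be replaced by survey measures collected before and after deployment; the treated:post interaction is the quantity of interest because it nets out pre-existing differences between communities and any trend affecting both.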

Finally, they should make the information that they have generated at every step of this process available to others so that they can learn from it too. Every exercise in evidence-based science communication itself generates knowledge. Every such exercise itself furnishes an instructive model of how that knowledge can be intelligently used. The failure to extract and share the intelligence latent in doing science communication perpetuates the dissipation of collective knowledge that it is the mission of the science of science communication to staunch.

 


[1] Unrepresentative convenience samples are unlikely to generate valid insights on how to counteract motivated reasoning. Samples of college undergraduates are perfectly valid when there is reason to believe the cognitive dynamics involved operate uniformly across the population. But the mechanisms through which motivated reasoning generates polarization on climate change don't operate uniformly; they interact with diverse characteristics—worldviews and values, but also gender, race, religiosity, and even region of residence. It is known, for example, that white males who are highly hierarchical and individualistic in their worldviews or conservative in their political ideologies, and who are likely to live in the South and Far West, tend to react dismissively to information about climate change (McCright & Dunlap 2013, 2012, 2011; Kahan, Braman, Gastil, Slovic & Mertz 2007). Are they likely to respond to a "framing" strategy in the same way that a sample of predominantly female undergraduates attending a school in New York City does (Feygina, Jost & Goldsmith 2010)? If not, that's a good reason to avoid using such a sample in a framing study, and not to base practical decisions on any study that did.
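The footnote's concern can be checked directly in data. Below is a hedged sketch (illustrative only; the subgroup indicator, variable names, and effect sizes are invented) of how one might test whether a framing effect is uniform across audience segments before trusting a convenience-sample result: regress acceptance on the treatment, the subgroup indicator, and their interaction.

```python
# Illustrative sketch only: simulated data showing why a framing effect
# measured on a convenience sample may not generalize. The subgroup
# indicator and effect sizes are invented assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000

framed = rng.integers(0, 2, n)         # 1 = saw the framing treatment
dismissive = rng.integers(0, 2, n)     # 1 = hypothetical dismissive subgroup

# Assume framing helps most respondents (+0.6) but backfires (-0.8)
# within the dismissive subgroup; that interaction is invisible in any
# sample containing few or no members of that subgroup.
accept = (3.0 + 0.6 * framed
          - 0.8 * framed * dismissive
          + rng.normal(0.0, 1.0, n))

df = pd.DataFrame({"accept": accept,
                   "framed": framed,
                   "dismissive": dismissive})

# A significant framed:dismissive coefficient means the effect is not
# uniform across the population.
fit = smf.ols("accept ~ framed * dismissive", data=df).fit()
print(fit.summary().tables[1])
```

A significant interaction term would mean the framing effect differs across segments, in which case results from a sample lacking the relevant segment (say, predominantly female undergraduates) cannot be assumed to transfer to the population of interest.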


Reader Comments (3)

Like a broken record:

Not that you should care about what I would like, but I would like to see the results of a field experiment that tests the influence of explicit, guided exploration of the processes of motivated reasoning on (pre- and post-) opinions related to climate change, nuclear energy, gun control, whether Obamacare encourages personal responsibility or reflects governmental tyranny, etc. Meta-cognition-based strategies, it seems to me, are the best methodology for getting people to be self-reflective about how to move past their preconceptions and subjective presuppositions. That is, IMO, a basic truth that underlies effective pedagogy, and indeed, effective communication. Hence, if you want to get people to be less constrained by the "motivations" of their reasoning, get them to be more meta-cognitive about those motivations and their influence. IMO, no "top-down" methodology of communication will significantly neutralize the influence of motivated reasoning; an effective approach, IMO, should incorporate a bottom-up paradigm where people deconstruct the influence of their own motivations.


Also - from a theoretical perspective, why do you distinguish "science-based" communication from any other form of communication? What does "science-based communication" mean? What are the factors that differentiate "science-based" communication - as something other than just a (more or less) arbitrary selection of communication topic? I feel like I've asked you that question before. Maybe you've answered it but I didn't understand your answer?

April 18, 2014 | Unregistered CommenterJoshua

@Joshua:

I care. :)

Not " 'science-based' communication"; "evidence-based science communication": formulate a hypothesis based on best available evidence (not "take heuristics & biases literature; add water & stir" instant decision science story telling) for field based intervention, the design & impact of which are amenable to meaningful empirical assessment

By "meta cognition," do you mean making people aware of cultural cognition & like processes? There's little reason to think that will help; in general people can't consciously control unconscious biases, & certainly there's plenty of motivated reasoning among people who understand & talk about it (I myself don't feel particularly immune)

April 18, 2014 | Registered CommenterDan Kahan

Sorry - I should have said "evidence based science communication"....but I still have basically the same question. What differentiates evidence-based science communication from any other brand of evidence-based communication - say evidence-based communication about gun control or the merits/demerits of Obamacare? Why do you single out evidence-based science communication? Merely because that is your point of interest, or because you believe it is somehow a different species than other domains of evidence-based communication?

---------------

I think that there is solid evidence that awareness and knowledge about cognitive processes (meta-cognition) can absolutely help someone learn and evaluate evidence, particularly in a specific domain if not in a generalized sense.

Motivated reasoning is not, IMO, simply an unconscious process of bias, but a more complex cognitive dynamic that can be understood as such. Knowledge about motivated reasoning is not sufficient to eliminate its influence, but it is a precondition, I would say, for at least partially controlling for its biases. For example, if you didn't know about cultural cognition and about its lack of political discrimination, perhaps you would be more likely to agree with Krugman about an asymmetry in reasoning across the political divide. That doesn't mean that you're free from motivated reasoning, but that you have some increased measure of control over its influence - in particular in specific domains where you have studied its impact on how people reason.

What I am suggesting is that your approach to science communication (and I would say communication in other domains) can certainly be informed by epistemology and the science of education - and understanding metacognition is an important component of learning theory, IMO. For example, to the extent that someone knows about the pervasive influence of motivated reasoning, one can at least begin exercising "executive control" (a key concept related to metacognition) in developing strategies to account for its influence and in evaluating the effectiveness of those strategies. I would argue that your work is actually fundamentally rooted in an implicit belief in the power of meta-cognition, otherwise why would you spend so much time researching motivated reasoning and disseminating your findings? Don't you believe that increasing people's awareness of motivated reasoning, and self-awareness of how they are biased by their "motivations," will help mitigate its impact?

"& certainly there's plenty of motivated reasoning among people who understand & talk about it (I myself don't feel particularly immune)"

One does not need to eliminate the effects of motivated reasoning completely across the board to reduce their magnitude - particularly within a limited context.

Oh, and thanks for caring. :-)

April 18, 2014 | Unregistered CommenterJoshua
