
Evidence based science communication ... a fragment

From something I'm working on . . . 

 I. EBSC: the basic idea. “EBSC” is a response to a deficient conception of how empirical information can be used to improve the communication of decision-relevant science.

Social psychology, behavioral economics, and other disciplines have documented the contribution that a wide variety of cognitive and social mechanisms make to the assessment of information about risk and related facts. Treated as a grab-bag of story-telling templates (“fast thinking and slow”; “finite worry pool”; “narrative”; “source credibility”; “cognitive dissonance”; “hyperbolic discounting”; “vividness . . . availability”; “probability neglect”), these mechanisms allow any imaginative person to fabricate a plausible-sounding argument about “why the public fails to understand x” and declare it “scientifically established.”

The number of “merely plausible” accounts of any interesting social phenomenon, however, inevitably exceeds the number that are genuinely true. Empirical testing is necessary to extract the latter from the vast sea of the former and so save us from drowning in just-so storytelling.

The science of science communication has made considerable progress in figuring out which plausible conjectures about the nature of public conflict over climate change and other disputed risk issues are sound—and which ones aren’t.  Ignoring that work and carrying on as if every story were created equal is a sign of intellectual charlatanism.

The mistake that EBSC is primarily concerned with, though, is really something else. It is the mistake of thinking that valid empirical work on mechanisms of consequence in itself generates reliable guidance on how to communicate decision-relevant science.

In order to identify mechanisms of consequence, the valid studies I am describing (there are many invalid ones, by the way) have used “laboratory” methods—ones designed, appropriately, to silence the cacophony of potential influences that exist in any real-world communication setting so that the researcher can manipulate discrete mechanisms of interest and confidently observe their effects. But precisely because such studies have shorn away the myriad particular influences that characterize all manner of diverse, real-world communication settings, they don’t yield determinate, reliable guidance in any concrete one of them.

What such studies do—and what makes them genuinely valuable—is model science communication dynamics in a manner that can help science communicators to be more confident that the source of the difficulties they face reflects this mechanism as opposed to that one. But even when the model in question generates that sort of insight by showing how manipulation of one or another mechanism can improve engagement with and comprehension of a particular body of decision-relevant science, the researchers still haven’t shown what to do in any particular real-world setting. That will inevitably depend on the interaction of communication strategies with conditions that are richer and more complicated than the ones that existed in the researcher’s deliberately stripped-down model.

The researchers’ model has performed a great service for the science communicator (again, if the researchers’ study design was valid) by showing her the sorts of processes she should be trying to activate (and which sorts it will truly be a waste of her time to pursue). But just as there were more “merely plausible” accounts than could be true about the mechanisms that account for a particular science communication problem, there will be more merely plausible accounts of how to reproduce the effects that researchers observed in their lab than will truly reproduce them in the field. The only way to extract the genuinely effective evidence-informed science communication strategies from the vast sea of the merely plausible ones is, again, by use of disciplined empirical observation and inference in the real-world settings in which such strategies are to be used.

Too many social science researchers either don’t get this or don’t care.  They engage in ad hoc story-telling, deriving from abstract lab studies prescriptions that are in fact only conjectures—and that are in fact often completely banal ("know your audience") and self-contradictory ("use vivid images of the consequences of climate change -- but be careful not to use overly vivid images because that will numb people") because of their high degree of generality.

This is the defect in the science of science communication that EBSC is aimed at remedying.  EBSC insists that science communication be evidence based all the way down—from the use of lab models geared to identifying mechanisms of consequence to the use of field-based methods geared to identifying what sorts of real-world strategies actually work in harnessing and channeling those mechanisms in a manner that promotes constructive public engagement with decision-relevant science.

* * * 

IV.  On “measurement”: the importance of what & why. Merely doing things that admit of measurement and measuring them doesn’t make science communication “evidence based.”  

“Science communication” is in fact not a single thing, but all of the things that are forms of science communication have identifiable goals.  The point of using evidence-based methods to promote science communication, then, is to improve the prospect that such goals will be attained. The use of empirical methods to “test” dynamics of public opinion that cannot be defensibly, intelligently connected to those goals is pointless. Indeed, it is worse than pointless, since it diverts attention and resources away from activities, including the use of empirical methods, that can be defensibly, intelligently understood to promote the relevant science communication goals.

This theme figures prominently and persuasively in the provocative critique of the climate change movement contained in the January 2013 report of Harvard sociologist Theda Skocpol. Skocpol noted the excessive reliance of climate change advocacy groups on “messaging campaigns” aimed at increasing the percentage of the general population answering “yes” when asked whether they “believe” in global warming.  These strategies, which were financed to the tune of $300 million in one case, in fact had no measurable effect.

But more importantly, they were completely divorced from any meaningful, realistic theory of why the objective being pursued mattered.  As Skocpol notes, climate-change policymaking at the national level is for the time being decisively constrained by entrenched political economy dynamics. "Moving the needle" on public opinion--particularly where the sentiment being measured is diffusely distributed over large segments of the population for whom the issue of climate change is much less important than myriad other things--won't uproot these political economy barriers, a lesson underscored by the persistent rebuff of gun control and campaign-finance laws, measures that enjoy "opinion poll" popularity that climate change can only dream of.

So what is the point of EBSC? What theory of which sorts of communication improve public engagement with climate science (or other forms of decision-relevant science), and how, should inform it? Those who don't have good answers to these questions can measure & measure & measure -- but they won't be helping anyone.


Reader Comments (4)

My first impression was "wordy". (I recognise it as one of my own major faults. ;-) )
I think it could be shortened a lot without losing much meaning.

For example:
Many plausible theories have been proposed about why science communication does or does not work, but most of these are not evidence-based. Even those that are often extrapolate techniques and advice beyond the research findings, being more speculation or hypothesis than proven recipe; although such theories may still be useful for guiding the general direction of further research, eliminating unproductive paths. Evidence-based science communication aims to make the subject evidence-based throughout - from lab models of basic mechanisms to real-world application.

The approach taken must also consider how its immediate aims fit into a wider political strategy. Climate change advocacy groups have spent hundreds of millions of dollars on campaigns to shift public opinion on such simplistic questions as 'belief' in global warming. But the block is not public opinion, but the dynamics of the political economy. Such diffuse sentiments cannot uproot the practical day-to-day economic barriers in the way of decision-making. Even if every aspect of the science is accurately measured and quantified, evidence-based science communication will not do anyone any good if it does not take the practicalities needed for effectiveness into account.


Is that a fair summary? What did I miss?

I realise that I've probably chopped too much out - I did that deliberately. The idea is to make you think before putting it back in. :-)

November 11, 2013 | Unregistered CommenterNiV


Ironic since I was hoping readers might fill in *omitted* material.

But I'm sure your version is superior.

November 14, 2013 | Registered CommenterDan Kahan

Well, a more compact and concise argument makes it easier to see what's missing... :-)

As I've said before, and as your bit on Skocpol's critique of the climate change advocacy groups alludes to, the problem with all this stuff is that it seeks to measure the degree of persuasion rather than scientific understanding. It seems to write off genuine scientific understanding and informed decision-making by the masses as an impossible dream, and instead tries to find ways of achieving the same effect by getting people to listen to and trust the people who supposedly do understand.

The experiments and surveys all measure what people believe before and after different interventions, but they don't seem to make any attempt to measure what they understand, or to ask people why they believe what they do. (Or in many cases, exactly what people think the questions they're being asked actually mean.) As a political exercise, such an emphasis is perfectly understandable, but as a scientific exercise it seems to me like rather a big gap. I would regard "science communication" as the task of teaching people enough scientific method and understanding to investigate and see the answer for themselves, and to understand the role of debate and challenge in that process. So I remain bemused that the question seems to be of such little research interest.

Another thing that's missing is anything on the specifics of the climate debate. Advice is couched in generalities, but it isn't clear how it should be applied to such questions as the validity of climate models, the availability and adjustment of raw data, the uncertainties and unknowns, the attention appropriate to quality control, and what should be done about the errors, insufficiencies, and outright deceptions observed to have been practised in climate science. How is people's understanding of the issues impacted by, for example, reading the 'Harry read me' or Montford's book on paleoclimate reconstructions? Or by their opposite numbers? Can people say better what the debate is about, what the competing claims are, and what evidence they depend upon? What specific types of arguments do they come up with to support or criticise their positions? Can they better outline the extent and limits of our knowledge, and comment in an informed way on what reasonably accessible further evidence could resolve some of those uncertainties?

What, for example, should a 'climate communicator' do about the Climategate revelations? Ignore them? Explore, explain and seek the evidence to answer them? Accept them? Summarily dismiss them? Deny them? What approach most enhances scientific understanding? Is that the same approach as the one most enhancing trust and belief?

I understand that you don't want to get into the details of the climate debate - and I accept your reasoning - but it still seems odd to be discussing the climate debate without ever discussing any of its contents. Rather as if one were to discuss the biology of the giraffe without ever once mentioning its long legs and neck.

However, I understand that our views differ on that, and that it probably wouldn't be a useful conversation. The most useful thing I could think of regarding your essay, to move the conversation forward, was to simplify it. Reading it, I kept losing the thread of the argument and having to skip back. Even to the extent that I agree with it, it didn't result in that clarity that makes a conclusion seem 'obvious', so it didn't feel like it would be persuasive. And even if we disagree, I still think the interests of debate are best served by your argument being as clear and persuasive as possible. I probably ought to dig out that quote from Mill again, but perhaps you remember it? I still find it amazing how entirely relevant today that essay of his still is.

November 15, 2013 | Unregistered CommenterNiV

The Last Tree

Now we're talking about the environment.
We're arguing about it. …
We weren't doin that 50 years ago.
There was only one way to think about it, and that was:

Pollute to make money is the best way to do everything!

And that's changed.
It's changed because people started askin questions.
It doesn't matter whether they're on the Right, the Left, the this or that …
There needs to be people in every group who ask that group questions that an outsider could never ask.
So it's important to make friends in all the different groups, in all the different cultures, in all the different places.
And so that you support each other by having at least that attitude. …

(Arlo Guthrie, Big Ideas, ABC Radio National, 15 August, 2013)

December 8, 2013 | Unregistered Commenterpeaceandlonglife
