Friday, Sep 21, 2012

Followup exchange on Sunstein op-ed & science communication

I got a thoughtful email from a natural scientist who said that he and some colleagues had been discussing the Sunstein NYT op-ed, as well as the reflections I posted on it the other day, and had some questions:

I apologize in advance if my questions are too basic or clumsy, but I'm a little out of my comfort zone as a physical scientist. What you've described in various places as to what's driving polarization makes perfect sense to me; however, the primary question I have is whether this is somehow a uniquely American phenomenon. The reason I ask is that, with exceptions of course, the rest of the world, as I've been told and witnessed at times, does not experience the "questioning of the science" to the degree we seem to enjoy. I'm approaching it through the lens of climate, but it may be true of other contentious scientific issues as well. In your various studies, have the samples been international or just American? So is this effect somehow tied to our current American societal system, or is it more general for all of humanity? And has this effect been increasing or becoming more pronounced and moving into new spheres of science over time? I know some of the history of previous "debates" such as evolution and cigarette smoke, but is it becoming more pervasive? This is really a fascinating, yet crucially important topic for me. And frankly it's been humbling as a scientist that my word is not sacrosanct and that a small business owner or a minister may actually be a more effective communicator of the science than I am.

Here is my response:

Nothing at all simplistic about your questions! I'll try my best to answer...

1. The science of science communication is large, diverse, growing, and provisional. The first point to realize is that there is a pretty decent-sized literature on science communication & public risk perception. It's impossible -- in a good way -- to advance concrete points w/o making judgments about which findings strike one as the most supported or the most pertinent to the issue at hand. I'll do that in responding to your inquiries. But I don't want, in doing that, to give the impression that "this is all there is to say" or "any other response has got to be wrong," etc. Actually, it's clear to me that you are already familiar with good portions of this work, so this sort of boilerplate proviso is likely completely unnecessary here; but I do feel it's important to recognize both that there are lots of live conjectures & hypotheses in play, and also lots of hard-working, smart empirical researchers whose work is well worth consulting!

2. The climate-change conflict is not a singular phenomenon. Ok... It's understandable, when viewing the phenomenon "through the lens of climate," to form the impression that the sort of conflict we see over climate change is singular in all kinds of ways -- that it applies only to that issue, e.g., or is a "strange US thing," or reflects "new & emerging skepticism about science." I actually don't think any of these things is true, and that's why it is important to widen the lens, as it were.

3. The emergence of the study of public risk perceptions and science communication -- over three decades ago!  The study of disconnects between public opinion and science on environmental and technological risks has been around for at least 35 yrs. Moreover, the early impetus for it was the public's resistance to the predominant -- I think it's fair to say "consensus" -- view among scientists that nuclear power (particularly storage in deep geologic isolation) involved low risks fully amenable to effective regulation. Paul Slovic, Baruch Fischhoff & others formulated the "psychometric theory" of risk, which emphasized various dynamics neglected by the then-prevailing frameworks in decision science -- from cognitive biases of one sort or another, to distinctive qualitative valuations of risk that are independent of the sorts of things that figure in policymaking "cost-benefit" analysis. E.g., Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S. & Combs, B. How Safe Is Safe Enough? A Psychometric Study of Attitudes Toward Technological Risks and Benefits. Policy Sci. 9, 127 (1978); Slovic, P., Fischhoff, B. & Lichtenstein, S. Facts versus Fears, in Judgment Under Uncertainty: Heuristics and Biases (eds. D. Kahneman, P. Slovic & A. Tversky), pp. 163-78; Slovic, P. Perception of Risk. Science 236, 280-285 (1987). This work also looked at public concerns over risks involving food additives, water pollution, air pollution, and the like.

4. Cultural theory and cultural cognition.  The cultural theory of risk, associated with Mary Douglas & Aaron Wildavsky (Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers (University of California Press, Berkeley; 1982)), dates from the nuclear and clean-air debates, too. It was at that time an alternative to the psychometric theory. But "cultural cognition theory," with which Slovic has been prominently involved, essentially marries the two. See Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012). The idea is that the mechanisms featured in the psychometric theory can help fill in why there are the sorts of relationships that Douglas posits between cultural outlooks and risk perceptions; in addition, Douglas's framework, which emphasizes systematic differences in perceptions of risk between groups and conflict over them, furnishes a basis for understanding how one and the same set of mechanisms from the psychometric theory can produce division and controversy in public debates.

5. Cross-cultural cultural cognition. These dynamics are *not* confined to the U.S. There have been plenty of studies using methods associated with the cultural theory of risk to examine conflicts over risk perception in Europe. Recently, the Cultural Cognition Project research group has been using its measures to examine conflicts over climate change in other countries, including Australia and the UK. There is also recent work emerging in Canada using measures similar to ours (I'm going to post a blog essay on this soon). 

6. We aren't culturally divided over the value of what scientists have to say; we are divided over what scientists are saying.  It is also a mistake, in my view, to associate any of these dynamics with skepticism about or hostility toward science. In the US in particular -- as you likely know -- there is widespread public confidence and trust in scientists. Members of the public who are culturally divided over risks like climate change & nuclear power are not divided over whether scientific consensus should be normative for risk regulation and policymaking; they are divided over what scientific consensus is. This happens because determining what "most scientists believe" is no more amenable to direct observation by ordinary people than melting glaciers are; people have to get the information by observing who is saying what in public discussion, and in that process, all the mechanisms that push groups apart are going to skew impressions of what the truth is about the state of scientific opinion.

CCP has studied this very issue, finding that groups divided over climate change process evidence of scientific consensus on that issue and various others in biased ways and thus form systematically opposed, and very unreliable, perceptions of what the state of scientific consensus is. See Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011). In my view, the perception that "climate skeptics are anti-science" is itself a product of culturally motivated reasoning, and the persistence of this view distracts us from addressing the real issue and likely even magnifies the problem by needlessly insulting a large segment of the public.

7. The existing science of science communication doesn't tell us what to do; rather, it furnishes us with models and methods that we can and must use to figure that out. On "what to do": the literature is filled with potentially helpful strategies. But I really think it's a mistake to believe it's useful to just sift through the literature & boil it down into lists of "dos & don'ts," e.g., "use ideologically/culturally congenial communicators!"; "know your audience!"; "use vivid images to get attention, but beware vivid images b/c they scare people & numb them!"  This is a mistake, first, because these sorts of admonitions can easily cause people to blunder -- e.g., to make a ham-handed effort to line up some sock-puppet advocate whose appearance in the debate is such an obvious setup that it drives people the wrong way.

It's also a mistake b/c the "dos & don'ts," even when they are exactly right, are just too general to be of real use. They reflect conclusions drawn from studies that are highly stylized and aimed at figuring out the real mechanisms of communication. That sort of work is really important -- b/c if you don't start out with the mechanisms of consequence, you'll get nowhere. But the studies don't in themselves tell you what to do in any particular situation b/c they are too general, too (deliberately) remote from the details of particular communication environments.

In other words, they are *models* that those who *are* involved in communication, and who know all about the particulars of the situation they are involved in, should be guided by as they come up with strategies fitted to their communication needs. And when they do that -- when they try to reproduce in their real-world settings the effects that social scientists captured in their laboratory models -- the social scientists should be there to help them test their conjectures by observing, measuring, and collecting information. They should also collect information on how that particular field experiment in science communication worked, memorialize it, and share it w/ other communicators -- for whom it will be another, even richer model of what to try in their own particular situations (where, again, they should use evidence-based approaches)...

You see what I'm getting at, I'm sure.  What I've just described, btw, is the process by which medicine made the transition from an experience-based craft to a science- and evidence-based one. That's got to be the next step in the science of science communication. I really urge scientists, science communicators, and scholars of science communication to take this step; and I'm happy to contribute in any way I can!
