« "How did this happen in the first place?" | Main | More "class discussion" »
Wednesday, March 13, 2013

I'm happy to be *proven* wrong -- all I care about is (a) proof & (b) getting it right

Here is another thoughtful comment from my friend & regular correspondent Mark McCaffrey, this time in response to reading "Making Climate-Science Communication Evidence-based—All the Way Down." Some connection, actually, with issues raised in "Science of Science Communication course, session 4." 

As we've discussed before, a missing piece of your equation in my opinion is climate literacy gained through formal education. Your studies have looked at science literacy and numeracy broadly defined, not examining whether or not people understand the basic science involved.

If you want to go "all the way down" (and I'm not clear exactly what you mean by that), then clearly we must include education. There's ample evidence in educational research that understanding the science does make a difference in people's level of concern about climate change -- see the Taking Stock report by Skoll Global Threats, which summarizes recent literature showing that women, younger people, and more educated people are more concerned about climate change. Michael Ranney and colleagues at UC Berkeley have also been doing some interesting research (in review) on the role that understanding the mechanisms of the greenhouse effect, in particular, plays in people's attitudes.

Yes, cultural cognition is important, but it's only one piece of the puzzle. Currently, fewer than one in five teens feel they have a good handle on climate change and more than two thirds say they don't really learn much in school. Surely this plays a role in the continued climate of confusion, aided and abetted by those who deliberately manufacture doubt and want to shirk responsibility.

My response:

I'm not against educating anyone, as you know.

But I do think the evidence fails to support the hypothesis that the reason there's so much conflict over climate change is that people don't know the science.

They don't; but that's true for millions of issues on which there isn't any conflict as well. Ever hear of the federal Formaldehyde and Pressed Wood Act of 2010? If you said "no," that's my point. (If you said "yes," I won't be that surprised; you are 3 S.D.'s from the national mean when it comes to knowing things relating to public policy & science.) The Act is a good piece of legislation that didn't generate any controversy. The reason is not that people would do better on a "pressed wood emissions" literacy test than a climate-science literacy test. It's that the issue the legislation addresses didn't become bound up with toxic partisan meanings that make rational engagement politically impossible.

(I could make this same point about dozens of other issues; do you think people have a better handle on pasteurization of milk than climate? Do they have a better understanding of the HBV vaccine than the HPV vaccine?)

But none of this has anything to do with this particular paper. This paper makes a case for using evidence-based methods "all the way down": that is, not only at the top, where you, as a public-spirited communicator, read studies like the ones you are discussing as well as mine & form a judgment about what to do (that's all very appropriate of course); but also at the intermediate point where you design real-world communication materials through a process that involves collecting & analyzing evidence on effectiveness; and then finally, "on the ground" where the materials so designed are tested by collection of evidence to see if they in fact worked.

That's the way to address the sorts of issues we are debating -- not by debating them but by figuring out what sort of evidence will help us to figure out what works.

So good if you disagree w/ me about what inference to draw from studies that assess the contribution that lack of exposure to scientific information has made to the creation of the climate-change conflict. Design materials based on the studies you find compelling; use evidence to tweak & calibrate them. Then measure what effect they have.

Some other real-world communicator who draws a different inference -- who concludes the problem isn't lack of information but rather the pollution of the science communication environment with toxic meanings -- will try something else. But she too will use the same evidence-based protocols & methods I'm describing.

Then you & she can both share your results w/ others, who can treat what each of you did as an experimental test of both the nature of the communication problem here & the effectiveness of a strategy for counteracting it!

Indeed, I should say, I'd be more than happy to work with you, this other communicator, or both! Another point of the paper is that social scientists shouldn't offer up banal generalities on "how to communicate" based on stylized lab experiments. Instead, they should collaborate with communicators, who should combine the insights from the lab studies with their own experience and judgment and formulate hypotheses about how to reproduce lab results in the field through use of evidence-based methods -- which the social scientist can help design, administer & analyze.

There are more plausible conjectures than are true -- & that's why we need to do tests & not just engage in storytelling. Anytime someone does a valid test of a plausible conjecture, we learn something of value whatever the outcome!

Of course, it is also a mistake not to recognize when evidence suggests that plausible accounts are incorrect -- and to keep asserting the accounts as if they were still plausible. I'm sure we don't disagree about that.

But I'm not "on the side of" any theory. I'm on the side of figuring out what's true; I'm on the side of figuring out what to do. Theories are tools toward those ends. They'll help us, though, only if we test them with "evidence-based methods all the way down."

Anyone else have thoughts on how to think about these issues?


Reader Comments (13)

IMHO, the correct answer here is that it is Cultural Cognition all the way down.

Science-informed policy does not imply some monolithic concept of how the available science is to be interpreted and used in applications. While the science is the science, the framework in which the science is to be evaluated may be quite different depending on one's selection of educational choices. An industrial chemist is likely to have a different context than an academic chemist or a chemical engineer, or a business major or an environmental science major or an education major or a law school graduate, or a psychologist.

It seems to me that a "proper" (non-academic) lawsuit-chasing lawyer would never have made this statement: "Ever hear of the federal Formaldehyde and Pressed Wood Act of 2010? If you said "no," that's my point." -- since this could be a gold mine of a topic: http://voices.yahoo.com/fema-trailer-formaldehyde-cases-scheduled-trial-2794732.html. Maybe average members of the public haven't heard yet, but most mobile home owners are being informed, frequently by such lawyers. As are many other consumers with connections to the environmental movement. What they learn is going to be shaped by which "expert" they hear from. A lawyer eager to sue, as above? A scientist wondering about detection limits, within a regulatory agency? http://www.atsdr.cdc.gov/toxprofiles/tp111-c6.pdf An engineer with a "fix"? Engineers who are industry representatives? http://www.apawood.org/level_b.cfm?content=srv_env_form An environmental science major working for, say, the Environmental Defense Fund? http://notaguineapig.org/2011/03/08/week-1-formaldehyde/ An educator with a consumer education program for decisions prior to sale and home ventilation techniques afterwards? http://www.cpsc.gov/en/Safety-Education/Safety-Guides/Home-Appliances-Maintenance-and-Structure/The-Inside-Story-A-Guide-to-Indoor-Air-Quality/

Those links are fairly randomly acquired Google search results, but clearly each has its own culturally cognitive biases, and each would be expected to elicit different results with otherwise uninformed parties. Most people encountering such information might have overlapping feelings about the environment, governmental control, lawyers and lawsuits, or business operations. Thus who they encounter and deem credible first might have significant impact.

March 13, 2013 | Unregistered CommenterGaythia Weis

"There's ample evidence in educational research that understanding the science does make a difference in people's level of concern about climate change"

I'd be interested to know what that research is, because it doesn't fit with my experience. Almost nobody fully understands the science, on either side of the debate. There are only degrees of understanding. And I've seen high degrees of understanding from some of the most sceptical. Adherents to the minority view tend to have to learn more, to justify their opinions in the face of majority opposition.

What this *sounds* like is the sort of research in which they define 'climate science literacy' as agreement with various orthodox statements taught in formal education, and then observe that people who 'know' those statements are more climate-concerned. That's begging the question.

My own test of whether somebody understands the science is as follows:
1. Materials that are transparent to visible sunlight but opaque to thermal infrared are called 'greenhouse materials'. Sunlight can shine through them and be absorbed by surfaces below, but when these re-radiate the energy it is absorbed and partly radiated back.
2. Liquid water is transparent to visible sunlight, but opaque to thermal IR - it is all absorbed within about a millimetre.
Using this information, calculate the approximate magnitude of the greenhouse effect in a black-bottomed pool of water a metre deep.

Could you answer this question? If not, then you don't know the basic science on the greenhouse effect.
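
(For concreteness, here is one rough way the estimate might go -- a back-of-envelope sketch under my own simplifying assumptions, with illustrative numbers, not offered as the definitive answer:)

```python
# Back-of-envelope sketch for the pool question (illustrative numbers; the
# flux value is my assumption): sunlight is absorbed at the black bottom,
# and because water is opaque to thermal IR, that heat cannot radiate
# straight back to the sky -- it must cross the 1 m column by conduction
# or convection.

q = 200.0   # absorbed solar flux, W/m^2 (rough time-average; an assumption)
d = 1.0     # pool depth, m
k = 0.6     # thermal conductivity of water, W/(m*K)

# If conduction were the only transport, the bottom-to-surface temperature
# difference needed to carry the flux would be:
dT_conduction = q * d / k
print(f"conduction-only dT: {dT_conduction:.0f} K")  # ~330 K -- plainly unphysical

# In reality, heating from below drives free convection. With textbook
# natural-convection coefficients for water (~100-1000 W/(m^2*K)):
for h in (100.0, 1000.0):
    print(f"h = {h:4.0f} W/(m^2*K): convective dT ~ {q / h:.1f} K")
```

The absurd conduction-only figure is the clue: it is convective mixing, not radiative trapping, that sets the answer here.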

That means that you come to your conclusion on the matter by some other means than understanding it. And it means that, since only the tiniest fraction of the general public could answer such a question (or many others like it), understanding of the science can play no part in explaining the different conclusions people reach.

People can, it is true, legitimately reach conclusions based on a partial understanding, but it depends which parts. If you know one subset you may be more likely to reach one conclusion; if you know a different subset you may reach another. Again, it's not simply the *amount* of understanding - the detail matters. You would naturally find that better understanding led to a particular conclusion if you only measured one subset, which naturally enough is what most people who only know one subset do. It would take a clever experimental design to test this properly and impartially.

So I'd be very interested to know if somebody has managed it, but I'm currently sceptical that they have.

March 13, 2013 | Unregistered CommenterNiV

@Gaythia:

I am sure you are right that there will be people engaged in misinforming on formaldehyde, not to mention any & every other potential risk that is regulated in a sensible manner along w/ any & every one that isn't. Lawyers will certainly get a piece of the action. Indeed, sensible legislation often is enacted at the federal level to *preempt* state law, including tort standards that are the basis of lawsuits. That's typically how good legislation of this sort gets passed: to escape from being hammered by a patchwork of state regulations, industry assents to federal legislation that authorizes the EPA to do something sensible. Of course, the battles continue thereafter.

But they continue free of the distorting/contorting pressure of cultural polarization.

The goal, as far as I'm concerned, *isn't* to make citizens generally know every jot & tittle of information relevant to risks like those posed by formaldehyde. That would be insane, actually. The public deserves to have a political process they can be confident is attentive to public safety risks; they deserve to have the knowledge that it's working as it should; they deserve to know too when it's not. When those conditions are met, people can generally go happily about their business, which may or may not require them to learn a good deal about some issue involving science.

Call this the "normal state of affairs." It's far from perfect. For sure it is not free of reasonable forms of contestation among people & groups who have different values & interests; not free either of a variety of problems any democratic system of lawmaking has to endure (ones having to do, in particular, with public choice dynamics that enable rent seeking behavior).

But it's still a pretty good state to be in. It has produced a lot of good law & avoided a lot of bad. The things that you are adverting to & that trouble you are troubling & annoying, but they can be handled.

What's *much much* worse & really not something that can be handled is the pathological state of affairs that characterizes climate change. There, the science communication environment has been contaminated with toxic cultural meanings that make enlightened regulatory responses impossible.

One goal of the science of science communication is to protect the science communication environment from this sort of pollution, so that democracy can get the degree of enlightened regulation associated with the normal state of affairs.

Protecting the science communication environment has relatively little to do with educating citizens on science. Contamination happens wholly independent of how educated people are; and when it happens, the most science literate are often the most polarized.

None of this is to say that science education *isn't* good. It is. For lots of things, and as an end in itself. But it is a mistake, I'm convinced, to think that the problem that climate change represents is a consequence of inadequate education, or that it can be fixed or prevented from recurring w/ more.

March 13, 2013 | Unregistered Commenterdmk38

From the top of the post:

"...As we've discussed before, a missing piece of your equation in my opinion is climate literacy gained through formal education. Your studies have looked at science literacy and numeracy broadly defined, not examining whether or not people understand the basic science involved..."

Using a quote from your paper:

"...If this position is correct, one would also expect concern with climate change to be positively cor-
related with numeracy. Numeracy refers to the capacity of individuals to comprehend and make use of
quantitative information11. More numerate people are more disposed to use accuracy-enhancing forms of
System 2 reasoning and to be less vulnerable to the cognitive errors associated with System 111-12. Hence,
they should, on this view, form perceptions of climate-change risk less biased toward underestimation.
These predictions were unsupported (Fig. 1). As respondents science-literacy scores increased,
concern with climate change decreased (r = -0.05, p = 0.05). There was also a negative correlation be-
tween numeracy and climate-change risk (r = -0.09, p < 0.01). The differences were small, but neverthe-
less inconsistent with SCT, which predicts effects with the opposite signs..."
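
(As an aside: those correlations are tiny, and they reach the quoted significance levels only because the sample was large. A quick sketch -- my own illustration, not code or data from the paper -- of the two-sided p-value implied by r = -0.05 at various sample sizes:)

```python
# Illustration (not from the paper): p-value of a Pearson correlation via the
# standard t-transform, t = r * sqrt((n - 2) / (1 - r^2)), with df = n - 2.
import numpy as np
from scipy import stats

def pearson_p(r, n):
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

for n in (500, 1000, 1500, 2000):
    print(f"n = {n}: r = -0.05 -> p = {pearson_p(-0.05, n):.3f}")
# r = -0.05 reaches p = 0.05 only with n on the order of 1,500: a real but
# very small effect, detectable only in a large sample.
```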

Kind of lends credence to the idea that engineers and weathermen are more likely as a group to have a low score on climate change risk.

Sensitivity is where the fight over CC is. With a likely climate sensitivity of about 1°C per doubling of CO2, there is little to get worked up about: we still have a long way to go to reach the first doubling, and it is VERY unlikely CO2 will double yet again.

OT but fun....FOIA released the password for an additional 200,000 or so Climategate emails. Time to buy futures in popcorn :-)

March 13, 2013 | Unregistered CommenterEd Forbes

@Ed: I actually don't think the parts of our study you cite are contrary to Mark's point. He is noting that general measures of science literacy & numeracy are different from measures of, say, climate science literacy. I agree with him there, but that only begs the question: why isn't "climate science literacy" predicted by science literacy & numeracy? The answer, I'd say, is that cultural cognition motivates people to use their science comprehension to reinforce their attachment to the views that predominate in their group. If that's so, "education" isn't going to get anyone very far. What needs to change is not how much information is available or how much effort is made to educate, but rather the conditions that pressure people not to use the powers of science comprehension (and recognition) they have in the way that usually leads them (as it usually does) to the best evidence.

March 13, 2013 | Unregistered Commenterdmk38

"So good if you disagree w/ me about what inference to draw from studies that assess the contribution that lack of exposure to scientific information has made to the creation of the climate-change conflict. Design materials based on the studies you find compelling; use evidence to tweak & calibrate them. Then measure what effect they have."

I am surprised by this. Why wouldn't you be conducting that sort of experiment as we speak? Wouldn't an obvious test of your theory be to educate a group of people on climate change and then see whether there is any change in the degree of association between their views and their cultural/ideological/psychological/social predispositions? In such a manner, you could help control for many influencing variables that might be in play by assessing level of knowledge about climate change (what caused some people to know more than others -- perhaps a confounding causal influence; how well does testing numeracy serve as a proxy for scientific understanding of climate change; etc.?).

Personally, I believe that the best way to address motivated reasoning is to inform people with examples of the phenomenon, through real-world experimentation. I have done that by presenting students with materials on controversies that show how cultural/ethnic/national identification subjectively predicts orientation towards right and wrong, moral or immoral, justice or injustice, etc. For example, I ask students to review materials on subjects such as ME terrorism ("freedom fighter or terrorist?"), views on the O. J. Simpson trial, the ethics of euthanasia, stem cell research, the role of medical professionals in informing patients about end-of-life issues, the role of women in society, etc. And along with that instructional process, we examine, quite specifically, the association between views on these issues and ideological/cultural/ethnic/racial/political orientation.

I just fly by the seat of my pants when I do that kind of work. I have attempted to do informal pre-test/post-test assessments of the effectiveness of addressing the phenomenon of motivated reasoning head on – but not comprehensively, and I can’t say that I have evidence that it works.

But I certainly know that unless I have empirical evidence, I can't be sure that my work had any impact other than to have students gain more information on the subjects we covered. I have no idea whether the information they are exposed to (provided within an organizing framework that explicitly discusses the relationship between how people align on these issues and their personal identification) increases their understanding of motivated reasoning -- let alone helps to control for their tendency towards cultural cognition. But I do that work because I find it fun. Certainly, if I were prescribing my methodology -- within an academic framework -- as a way for others to address cultural cognition, I would feel that a pre-test/post-test kind of analysis would be necessary. Not only would I seek a testing scenario to positively show the effect I was looking for, but also a testing scenario to disprove other possible explanatory mechanisms. I would certainly need to show that it wasn't simply the act of informing my students about the issues that caused any reduction in evidence of motivated reasoning that may have been manifest.

March 13, 2013 | Unregistered CommenterJoshua

"..general measures of science literacy & numeracy are different from measures of, say, climate science literacy..."

One uses science literacy & numeracy to review what the professed activists in "climate science" produce.

When one uses "Mike's Nature Trick" to sell an obvious distortion, it tends to cause other issues. Science literacy & numeracy brought the distortion by the "climate science" professionals to light.

March 13, 2013 | Unregistered CommenterEd Forbes

@Joshua:

I agree that that would be an interesting study. It would be akin, I take it, to the ones that test the hypothesis that teaching the "understanding" of evolution does/doesn't promote "belief" in evolution (answer: doesn't).

I'd certainly be willing to collaborate/consult on such a project. Some qualifications/concerns/reservations:

a. I do think it would be challenging to design a study here that didn't conflate the predictor & outcome variables (i.e., equate "learning science" with "accepting/believing climate change," making the hypothesis into an unhelpful tautology).

b. At this point, I myself am more interested in doing applied field-based experiments & affirmatively uninterested in lab studies that do not test mechanisms of "upstream" relevance to real-world communication -- i.e., ones that, if established, would be of use to communicators, who could then try to reproduce in the field the effects observed in lab experiments. So if someone like Mark had a version of this study that was "field based," I'd be interested; I'd be less interested in a lab experiment that didn't involve a real-world communicator. (That's just me, of course. Scholars should pursue whatever they are interested in -- as long as they don't misrepresent or overstate/oversell relevance.)

c. There are only 26 hrs in a day & 374 days in a yr ... ! I've got more than enough going on (30 hrs/day & 425 days in yr's worth) so I'd have to be seduced/enticed.

If you'd like to do a guest post on your experience w/ motivated reasoning & education -- perfectly fine that it is exploratory in nature! good way to figure out what is worth doing in a more controlled way (which then anticipates going back again into the uncontrollable world) -- let me know!

March 14, 2013 | Registered CommenterDan Kahan

There is one question I would love to see you directly address here. It's the one that most keeps me up at night. We all know that these misconceptions about climate science don't happen in a vacuum. They happen in the midst of a very successful, well-funded effort to create confusion, inspire debate where there is agreement, and foster mistrust in general in the scientific process. Given that reality, can you help me to understand what it is about those techniques that makes them work so well?

March 14, 2013 | Unregistered CommenterJacob Tanenbaum

@Jacob: I started to answer & then realized the question was important enough to warrant a blog post!

March 14, 2013 | Registered CommenterDan Kahan

Thanks for the offer of a guest post, Dan. Have to think about whether I could put something together worthy of a post - particularly since the next couple of months will be crazy busy for me (and not the least because my hard drive just died). I will let you know.

"a. I do think it would be challenging to design a study here that didn't conflate the predictor & outcome variables (i.e., equate 'learning science' with 'accepting/believing climate change,' making the hypothesis into an unhelpful tautology)."

FWIW, a study I would personally be interested in (in case it isn't clear from what I wrote earlier) would be something on the order of a study that examines the impact of a metacognition-based approach to science communication. The aspect of informing an audience about the science content on (both sides of) an issue would be more or less a vehicle or by-product of an attempt to shine a spotlight on the phenomenon of motivated reasoning. I guess that would make it more of a "lab experiment" than a field-based experiment...

The outcome to measure would not be the amount of science learned or even changes in beliefs per se (certainly when measured as a binary "are concerned/are not concerned about climate change") -- but rather how strongly beliefs correlate with culturally based variables. Maybe the measure would be a pre/post analysis of the correlation between opinions on a topic and specific cultural markers. Maybe the measure would be a pre/post analysis of the ability to accurately articulate the arguments on both sides of the debate. Perhaps the outcome measure would be assessing the subjects' ability to identify indicators of cultural cognition or confirmation bias, or to identify the existence of motivated reasoning or confirmation bias within one's own "tribe."

As far as I am concerned, a prerequisite for progress on motivated reasoning is for individuals to: (1) recognize the power and pervasiveness of the phenomenon of motivated reasoning -- within their own tribe as well as in others, (2) realize that it is unrealistic for an individual to think that he/she is not affected by motivated reasoning, and (3) be willing to engage in good-faith dialog about the boundaries of his/her own motivated reasoning. Unfortunately, I think that w/o reaching that prerequisite, progress in addressing the impact of cultural cognition on science communication will necessarily be quite circumscribed. There is no way to remove the political/ideological pollutants that infuse these debates. They are hard-wired into the topics themselves and into each of us as humans who reason. The best we can do is openly work to control for the influence of motivated reasoning. I am a huge believer in the importance and power and necessity of metacognitive instruction for addressing/affecting the manifestation of motivated reasoning.

I imagine two groups. One would be a science communicator instructing a randomly selected audience (with relevant attributes measured) about the science of climate change (or HPV, or GMOs, or stem cell research, or evolution versus intelligent design, or even "non-scientific" issues such as the validity of jury nullification for young black marijuana dealers or the ethics of women's status in Muslim society). Necessarily, the instruction would include material from all sides of the debate, presented as objectively as possible. This would be the control group.

The other group would use the same materials within a similar instructional paradigm (similar type of instructor, or perhaps necessarily even the same instructor, and similar audience demographics) delivered in the same fashion, but with the addition of metacognitive instruction focused on the topic of motivated reasoning.

An assessment of degrees of change in pre- and post-test levels of motivated reasoning would be the targeted outcome measure.
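
(To make that outcome measure concrete, here is a minimal sketch with simulated data -- the coupling strengths, sample size, and variable names are my own inventions, purely to illustrate the analysis, not a prediction of what such an experiment would find:)

```python
# Sketch of the pre/post outcome measure described above: does the treatment
# (metacognitive instruction) shrink the correlation between cultural outlook
# and issue beliefs? All data simulated; coupling strengths are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # participants per group (an assumption)

def simulate_group(post_coupling):
    worldview = rng.normal(size=n)              # cultural-marker score
    pre = 0.6 * worldview + rng.normal(size=n)  # beliefs before instruction
    post = post_coupling * worldview + rng.normal(size=n)  # beliefs after
    return worldview, pre, post

# The control group keeps the worldview-belief coupling; the treatment group
# (hypothetically) weakens it. The quantity of interest is the drop in the
# correlation, not any shift in mean belief.
for label, coupling in [("control", 0.6), ("metacognitive", 0.3)]:
    w, pre, post = simulate_group(coupling)
    r_pre = stats.pearsonr(w, pre)[0]
    r_post = stats.pearsonr(w, post)[0]
    print(f"{label}: r_pre = {r_pre:.2f}, r_post = {r_post:.2f}")
```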

March 14, 2013 | Unregistered CommenterJoshua

Dan,
I would be interested in working with you to design a test in response to your point-counterpoint with Mark McCaffrey. And I may have an interesting "laboratory" that would lend itself to the experiment. Is there a way I can contact you directly to discuss?

September 3, 2013 | Unregistered CommenterDavid Herring

Sure -- send me an email! I'll send you one

September 3, 2013 | Unregistered Commenterdmk38
