
Weekend update: Science of Science Communication 2.0: on CRED "how to" manual & more on 97% messaging

Here are some contributions to class discussion. The first is from Tamar Wilner, who offers reactions to the Session 7 readings. I've posted just the beginning of her essay and linked to her page for the continuation. The second is from Kevin Tobia (by the way, I also recommend his great study on the quality of "lay" and "expert" moral intuitions), who addresses "97% messaging," a topic initially addressed in Session 6 but now brought into sharper focus by a recently published study that I commented on in my last post.

Health, ingenuity and 'the American way of life': how should we talk about climate?

Tamar Wilner

Imagine you were

  1. President Obama about to make a speech to the Nation in support of your proposal for a carbon tax;
  2. a zoning board member in Ft. Lauderdale, Florida, preparing to give a presentation at an open meeting (at which members of the public would be briefed and then allowed to give comments) defending a proposed set of guidelines on climate-impact “vulnerability reduction measures for all new construction, redevelopment and infrastructure such as additional hardening, higher floor elevations or incorporation of natural infrastructure for increased resilience”;
  3. a climate scientist invited to give a lecture on climate change to the local chapter of the Kiwanis in Springfield, Tennessee; or
  4. a “communications consultant” hired by a billionaire, to create a television advertisement, to be run during the Superbowl, that will promote constructive public engagement with the science on and issues posed by climate change.

Would the CRED manual be useful to you? Would the studies conducted by Feygina, et al., Meyers et al., or Kahan et al. be? How would you advise any one of these actors to proceed?

First, some thoughts on these four readings.

The CRED Manual: well-intentioned, but flawed

Source material: Center for Research on Environmental Decisions, Columbia University. "The Psychology of Climate Change Communication: A guide for scientists, journalists, educators, political aides, and the interested public."

When I first read the CRED manual, it chimed well with my sensibilities. My initial reaction was that this was a valuable, well-prepared document. But on closer inspection, I have misgivings. I think a lot of that “chiming” comes from the manual’s references to well-known psychological phenomena that science communicators and the media have tossed around as potential culprits for climate change denialism. But for a lot of these psychological processes, there isn’t much empirical basis showing their relevance to climate change communication.

Of course, the CRED staff undoubtedly know the literature better than I do, so they may well know of empirical support that I'm not aware of. But the manual's authors often don't support their contentions with research citations. That's a shame, because much of the advice given is too surface-level for communications practitioners to apply directly to their work, and the missing citations would have helped practitioners look more deeply into, and understand, particular tactics.

continue reading


“Gateway beliefs” and what effect 97% consensus messaging should have

Kevin Tobia

A new paper reports the effect of “consensus messaging” on beliefs about climate change. The media have begun covering both this new study and some recent criticism. While I agree with much of the critique, I want to focus on a different aspect of the paper, the idea of a “gateway belief.” Thinking closely about this suggests an interpretation of the paper that raises important questions for future research – and may offer a small vindication of the value of 97% scientific consensus reporting. You be the judge.

First, what is a "gateway" belief? In this context, what is it for dis/belief in scientific consensus on climate change to be a "gateway" belief? You might think this means that some perceived level of scientific consensus is necessary in order to hold certain other beliefs: belief that climate change is happening, worry about climate change, belief in human causation, and support for public action. You can't have these beliefs unless you go – or have gone – "through the gate" of perceived scientific consensus.

But here’s what the researchers actually have to say about the “gateway” model prediction:

We posit that belief or disbelief in the scientific consensus on human-caused climate change plays an important role in the formation of public opinion on the issue. This is consistent with prior research, which has found that highlighting scientific consensus increases belief in human-caused climate change. More specifically, we posit perceived scientific agreement as a “gateway belief” that either supports or undermines other key beliefs about climate change, which in turn, influence support for public action. ... Specifically, we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue (H1). In turn, a change in these key beliefs is subsequently expected to lead to a change in respondents’ support for societal action on climate change (H2). Thus, while the model predicts that the perceived level of scientific agreement acts as a key psychological motivator, its effect on support for action is assumed to be fully mediated by key beliefs about climate change (H3). 

There are a couple of things worth noting here. First, if the ultimate aim is increasing support for public action, you don't have to go "through the gate" of the gateway belief model at all. That is, what is really doing the work is the set of (i) beliefs that climate change is happening, (ii) beliefs that it is human-caused, and (iii) how much people worry about it. If we could find some other way to increase these, we could affect public support without needing to change the perceived level of scientific consensus (though whether that change might itself affect the perceived level of scientific consensus is another interesting and open question).

Second – and more importantly – it is not the case that there is some "gateway belief" in perceived scientific agreement that is required in order to hold certain beliefs about climate change or human causation, worry, or support for action. The model merely predicts that all of these are affected by changes to the perceived level of scientific agreement. Thus, it is incorrect to conclude from this research that the "gateway belief" of perceived scientific consensus is (as the analogy might suggest) some necessary or required belief that must be held in order to believe in human-caused climate change, worry about it, or support action. Instead, the gateway belief is something like a belief that tends to have, or normally has, some relation to these other climate beliefs.

When we put it this way, the “gateway belief” prediction might start to seem less interesting: there’s one belief about climate change (perceived consensus among scientists) that is a good indicator of other beliefs about climate change. But, what is more intriguing is the further claim the researchers make: this gateway belief isn’t just a good indicator of others, but increasing the gateway belief will cause greater belief in the others. Perceiving scientific non-consensus is the gateway drug to climate change denial!

This conception of the gateway belief illuminates a subtle feature of the researchers’ prediction. Recall:

we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue ....

The hypothesis here is that changing the level of perceived consensus causes changes in these other climate beliefs.

Some might worry about the relatively small effects found in the study. But if we recognize the full extent of the researchers’ prediction (that increased perceived consensus will raise the other beliefs AND that decreased perceived consensus will lower the other beliefs), one possibility is that some participants with very high pre-test consensus estimates (e.g. 99% or 100%) actually reduced their estimate in light of the consensus messaging – and that this affected their other beliefs. It might seem unlikely that many participants held an initial estimate of consensus upwards of 97% (even with a pre-test mean estimate of 66.98), but data on this would be useful.

There is a more plausible consideration that also weakens the worry about small effect size: would we really want participants (or people in the world) to change their beliefs about science in exact proportion to the most recent information they receive about expert consensus? To put it another way, the small effect size might not be evidence of the weakness of consensus messaging, but might rather be evidence of the measured fashion in which people weigh new evidence and update their beliefs.
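The "measured updating" point can be made concrete with a toy calculation. This is purely illustrative, not taken from the study: the `update_belief` helper and the 0.1 weight are invented assumptions chosen to contrast wholesale with measured revision; only the 66.98 pre-test mean comes from the discussion above.

```python
# Toy comparison (hypothetical): wholesale vs. measured belief updating
# toward a "97% of scientists agree" message.

def update_belief(prior, message, weight):
    """Move a belief estimate toward a message by a fractional weight in [0, 1]."""
    return prior + weight * (message - prior)

pre_test_mean = 66.98  # pre-test mean consensus estimate mentioned above
message = 97.0         # the consensus figure in the message

wholesale = update_belief(pre_test_mean, message, 1.0)  # belief replaced outright
measured = update_belief(pre_test_mean, message, 0.1)   # a small, cautious step

print(wholesale)  # 97.0
print(measured)   # about 69.98 -- a ~3-point shift, i.e. a "small effect"
```

On this toy reading, a roughly 3-point shift is exactly what a cautious updater who gives a single new message modest weight would produce, so a small mean effect is consistent with reasonable updating rather than with weak messaging.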

What would be helpful is data on the number (or percent) of participants whose beliefs increased from pre- to post-test. This would help distinguish between two quite different possibilities:

  1. very few participants greatly increased their beliefs in climate change after reading the consensus message; or
  2. many participants moderately increased their beliefs in climate change after reading the consensus message.

There is no good way to distinguish between these from the data provided so far, but which of these is true is quite important. If (1) is true, consensus messaging can only be offered as a method to appeal to the idiosyncratic few. And we might worry that the few people responding in this extreme way are, in some sense, overreacting to the single piece of evidence they just received. If (2) is true, this might provide a small redemption of the "consensus messaging campaign." A little consensus messaging increases people's beliefs a little bit (and, since this is from just one message, the small belief change is quite reasonable).
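The ambiguity between these two possibilities can be sketched numerically. The numbers below are made up for illustration, not drawn from the study; the point is simply that a mean shift alone cannot separate the cases, while the share of participants who moved at all can.

```python
# Hypothetical: two samples of 100 participants with the SAME mean belief
# increase (+2 points), produced in very different ways.

few_large = [40] * 5 + [0] * 95   # (1) a few idiosyncratic big movers
many_moderate = [2] * 100         # (2) many small, reasonable updates

mean_1 = sum(few_large) / len(few_large)
mean_2 = sum(many_moderate) / len(many_moderate)
print(mean_1, mean_2)  # 2.0 2.0 -- the mean cannot tell the cases apart

# The fraction of participants who moved at all does distinguish them:
share_1 = sum(x > 0 for x in few_large) / len(few_large)          # 0.05
share_2 = sum(x > 0 for x in many_moderate) / len(many_moderate)  # 1.0
```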

Of course, even if (2) is true, much more research will be required before deciding to launch an expensive messaging campaign. For instance, suppose we discover that the climate beliefs of people with very high initial beliefs in consensus ("97+% initial believers") are actually weakened when they are exposed to 97% messaging. Even if "sub-97% initial believers" change their beliefs in light of 97% messaging, we should ask whether this trade-off is beneficial. There are a number of relevant considerations here, but one thought is that perhaps reducing someone's climate change belief from 100% to 99% is not worth an equivalent gain elsewhere from, say, 3% to 4%. For one, behaviors and dispositions may increase or decrease non-proportionally: a change from 100 to 99 percent belief might result in the loss of an ardent climate action supporter, while a change from 3 to 4 percent might have little practical consequence. These are all open questions!
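The 100-to-99 versus 3-to-4 trade-off is easy to formalize under an assumed non-linearity. The threshold below (an "ardent supporter" requires near-certain belief) is an invented assumption for illustration, not anything measured in the study:

```python
# Hypothetical: if disposition to act depends non-linearly on belief
# (here a simple threshold), equal one-point belief changes at different
# points of the scale have very unequal practical effects.

ARDENT_THRESHOLD = 99.5  # assumed cutoff for "ardent climate action supporter"

def ardent_supporter(belief):
    return belief >= ARDENT_THRESHOLD

print(ardent_supporter(100.0), ardent_supporter(99.0))  # True False: the 1-point drop loses a supporter
print(ardent_supporter(4.0), ardent_supporter(3.0))     # False False: the 1-point gain changes nothing
```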


Reader Comments (6)

Kevin -

Nice read!

A comment related to this part:

==> " Perceiving scientific non-consensus is the gateway drug to climate change denial!"

I think that is an aspect of the discussion that is interesting, but that I've rarely, if ever, seen discussed.

It could well be that consensus-messaging doesn't have a particularly strong and observable effect of increasing "belief" that humans are changing the climate, but still has an effect of reducing the impact of "anti-consensus" messaging. That second effect could be hidden in the small observed change in increased "belief" in climate change.

Your post reminds me of just how flawed so much of this research is, in that it is not reality-based or longitudinal - something you allude to with your point about the need for closer examination of pre- and post-experiment changes among the participants.

March 1, 2015 | Unregistered CommenterJoshua

"There is a more plausible consideration that also weakens the worry about small effect size: would we really want participants (or people in the world) to change their beliefs about science in exact proportion to the most recent information they receive about expert consensus?"

Would you really want participants / people in the world to be so easily influenced by such messaging?

Suppose somebody runs a big campaign to say "97% of scientists have concluded that climate change is not going to destroy the world." "97% of scientists have concluded that climate change is not going to cause massive flooding of coastal areas." "97% of scientists have concluded that the number of severe hurricanes is reducing." "97% of scientists have concluded that as the climate changes and CO2 levels rise, grain yields are going up." "97% of scientists have concluded that climate change is not going to melt the polar ice caps any time soon." "97% of scientists say that crop plants grow better in warm greenhouses with additional CO2 pumped into them." "97% of scientists have concluded that the polar bears are doing OK - we have to cull 5% of them annually just to keep the population from expanding out of control!" and so on.

Do you think it would be good if such messages worked, and every one reduced public support for action by a few points? How many would we need to reduce it to zero?

And where is the research on the effect of such questions? (Or the questions on such research?!) Is this impartial science, or partisan political marketing you're doing? ;-)

March 2, 2015 | Unregistered CommenterNiV

Joshua - nice points! I think, among other things, they underscore important worries about taking the 97% study as support for a real-world consensus-messaging campaign. Before putting up lots of money we would want to know much more about what (real-world) factors contribute to perceived anti-consensus and climate skepticism. We'd also want to know how those "anti-consensus" vehicles interact with the proposed "consensus-messaging."

For instance, a real-world "anti-consensus" vehicle might be relatively immune from consensus messaging. Even worse is the possibility that (outside of the lab) the proposed consensus messaging might activate some of these previously more dormant anti-consensus mechanisms: If I'm with my friends watching football at a bar and the TV commercial tells us "97% of climate scientists have concluded that climate change is happening!", it is possible that my level of perceived consensus will increase, perhaps even resulting in a modest increase in my other climate change beliefs. But I can also imagine (in certain contexts) a friend here might interject, "That climate change stuff is a bunch of ... !" or "Oh, I don't believe that! My uncle is a scientist and he doesn't think there's anything to worry about!"

The strength and effects of this kind of real-world "anti-consensus" message is unclear, but these possibilities seem plausible enough to require much more research before funding a large consensus messaging campaign.

NiV - I like your ideas! I suspect that even if every one of the messages you proposed INDIVIDUALLY reduced public support by some small amount, there would be diminishing returns from such messaging and we probably couldn't expect them to COLLECTIVELY drive public support to 0 (of course, this is just a hunch, someone could try to test this and show otherwise :)

What you say is a very interesting suggestion about the "gateway" belief claim. Perceived consensus may have this small effect (as an average across the large sample), but perhaps there are other messages that would have larger/smaller effects, in various contexts, for various individuals. My own hunch is that there are at least two relevant considerations here: (1) the magnitude of discrepancy between someone's credence in the consensus and the actual scientific consensus (e.g., if 97% of scientists believe X, I'd guess - all else equal - there is a bigger effect for someone with an initial belief in 30% consensus than for someone at 95%), and (2) how important the claim is to the individual (e.g., I'd expect the message that 97% of scientists believe X to have a greater effect - all else equal - on someone who cared a lot about X than on someone who did not; consider people with varying levels of care about severe hurricanes, flooding coastal areas, grain yields, etc.).

March 2, 2015 | Unregistered CommenterKevin

"I suspect that even if every one of the messages you proposed INDIVIDUALLY reduced public support by some small amount, there would be diminishing returns from such messaging and we probably couldn't expect them to COLLECTIVELY drive public support to 0"

I agree. I don't think it's any sort of a linear combination of influences - just adding up all the pro-consensus messages, subtracting all the anti-consensus messages, to get the net public opinion.

I would expect it to depend critically on whether a person has any preexisting reason to believe the conclusion X is wrong (casting doubt on the credibility of 97% of scientists) or to believe that the scientists are unreliable on this topic. I think most people would in the absence of any other information tend to credit what other people report scientists say. But people do already have information, in differing amounts, and with differing slants, depending on their cultural network.

The more they already know, the less moveable their preexisting beliefs will be. The less they know, the more easily they can be influenced.

It also depends on how consistent the information they get is. If all sources say the same thing, each tends to reinforce confidence in the others. But if different sources say different things, then the topic is identified as 'controversial' and becomes harder to influence. Since it is known that at least one side must be wrong, the credibility of sources is necessarily in question. People will make more effort to assess either the credibility of the sources or the information, most commonly by checking whether the expert is right about other things that the person already 'knows' the answer to, and whether the information being provided conflicts with preexisting beliefs. (Thus, of course, somebody 'wrong' in their political views is less credible as an expert on other topics.)

And there is a hysteresis effect - information supporting one side weighs against the credibility of the other side, which down-weights any further evidence that side provides. The result may vary depending on what order the information is provided - some people tend to stick with whatever they were first told and dogmatically dismiss later information that conflicts with it (conflict reduces the credibility of the new information), other people believe whatever they were last told (conflict reduces the credibility of the preexisting belief).

All in all, I'd expect the effect to be very complicated, and depend on the current belief state of the participants, which may vary over time. I think trying to deduce what's going on inside the black box by looking at means or simple linear regressions is naive. I could be wrong, but I find it very hard to fit the simple 'additive influence' picture this study seems to assume with what I know of the climate debate.

"But I can also imagine (in certain contexts) a friend here might interject,..."

If they're anything like my friends, somebody will interject that the statistic is factually wrong and be able to point to the peer-reviewed papers the number supposedly comes from to prove it. (Relatively few people have that depth of knowledge, but a lot of people know somebody who knows somebody with that depth of knowledge. Word gets around.) The strength of this sort of anti-consensus message on the credibility of any future pro-consensus messaging can be severe, as I can attest from direct observation. That's anecdote, not data, but I'd call it at least plausible.

So one of the major influences on the effectiveness of any potential anti-consensus messaging is whether your pro-consensus message is true. Messages that are not true are more vulnerable to being counteracted, which not only cancels the effect of the message, but wipes out the credibility of the messenger. Even if all the subsequent messages are true, they'll not be believed. Another non-linear effect.

"'d expect the message that 97% of scientists believe in X to have a greater effect - all else equal - on someone who cared a lot about X than someone who did not."

Agreed. But is all else equal? Doesn't a person who cares passionately about an issue not already have a body of preexisting knowledge/belief about it?

For example, people who care about polar bears already know that they are endangered, that polar biologists have issued dire warnings, that environmentalist campaign groups have raised money with pitiful pictures of the cutest l'il baby polar bears clinging to floating blocks of melting ice. Now, it's possible that in this body of knowledge they will know that polar bears are hunted, although the numbers that can be taken are strictly limited, and they might have heard the statistic that roughly a thousand are killed each year. It's rather better known that there are about 20,000-25,000 polar bears in total. If they do, they can divide one number by the other to get 4-5%. (It's then a matter of simple logic that if we stopped culling them, they'd expand at around 4-5% a year, at least initially, which doesn't sound exactly like a species in danger of extinction.) But if they don't know that, they'll almost certainly reject the information as 'ridiculous nonsense' and discredit either the 97% of scientists saying it, or more likely, the person making the claim that that's what scientists believe. Because they've heard scientists talking about it on TV and in Greenpeace fund-raising adverts, and they've been saying the exact opposite!

Whereas somebody who neither knows nor cares anything about polar bears and hasn't seen the adverts may well take my statement on trust, knowing no reason to doubt it.

So whether my polar bear messaging campaign works depends on which bits of previous polar bear campaigns stuck in how many people's minds, potentially yielding opposing effects in different subsets. It's complicated.

And I think this sort of complexity requires some hard science to sort out, not some simplistic marketing survey. Researchers need to measure preexisting beliefs in detail, apply their intervention, measure the effect, and ask the subjects for their reasoning. Because while asking subjects is not entirely reliable, it's a heck of a lot more reliable than making up plausible-sounding (but as explained above, highly over-simplified and unrealistic) speculations based on what the experimenter guesses might be going on in people's heads.

And in particular, it requires dropping the ulterior motive underlying all these studies of persuading the public to one's political point of view, and instead looking at interventions in both directions, and how they interact, without constantly making judgements about which are the 'right' or 'desirable' outcomes. The moral judgements are distorting and limiting what hypotheses researchers are willing to consider.

And I'd definitely recommend that you build research teams containing a mixture of sceptics and believers, so that you don't get such blind spots, and each can bounce ideas and hypotheses off the other. After all, the science should be equally applicable to either side (Argument from Authority and Argumentum ad Populum being content-independent fallacies), and the paper ought to be written so that it is impossible to tell what the authors' opinions actually are on the topic. After all - you're not experts on atmospheric physics, so as scientists you ought to be suspending judgement. "No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves", as one scientist famously said in somewhat heated circumstances. It's wise advice.

I don't expect that to actually happen, of course. Political polarisation affects scientists too - and if we extrapolate Dan's result about people with higher scientific literacy to those people with the highest scientific literacy, it should affect them worse. But at least we can say this is what ought to happen.

March 2, 2015 | Unregistered CommenterNiV

I would like to see a Conservative-Liberal breakdown on knowledge of the proposed amelioration plans.

Surveys my own students have conducted indicate a strong correlation for recognizing the existence of the problem of AGW among the cohorts self-identifying as Democrat.

But we also found that the cohort that self-identifies as Republican has a much better understanding of the policy proposals being advanced to ameliorate AGW.

In terms of advancing cognition on the issue, we seem to be focusing on only one side and one half of the bias.

March 3, 2015 | Unregistered CommenterKN

If somebody told me that 97% of scientists agreed on something, that may make me think, OK, there is a strong agreement..

my question would then be (and I suspect anybody's would be)

What exactly (specifically) do they agree on?

This is where Skeptical Science style messaging falls down.. as they start to get very evasive.. and/or circular: 97% of scientists agree that there is a consensus.. LOL

Pinning down specifically what they agree on is nigh on impossible -

For anybody that has looked at the leaked Consensus Project forum, they were all very keen to market the 97% message of their new paper (before they had done any research..)

Then they spent rather a long time trying to actually define what the consensus was! All had different opinions.

With Dana Nuccitelli tying the Skeptical Science group in knots, as he didn't want a definition of what the 97% of scientists actually agreed on that would include the climate sceptics, because that would weaken the message!!

Bad faith.

March 10, 2015 | Unregistered CommenterBarry Woods
