Tuesday, April 19, 2016

New "strongest evidence yet" on consensus messaging!

Yanking me from the jaws of entropy just before they snapped permanently shut on my understanding of the continuing empirical investigation of "consensus messaging," a friend directed my attention to a couple of cool recent studies I’d missed.

For the 2 members of this blog's list of 14 billion regular subscribers who don't know, "consensus messaging" refers to a social-marketing device that involves telling people over & over & over that "97% of scientists" accept human-caused global warming. The proponents of this "strategy" believe that it's the public's unawareness of the existence of such consensus that accounts for persistent political polarization on this issue.

The first new study that critically examines this position is Cook, J. & Lewandowsky, S., Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks, Topics in Cognitive Science 8, 160-179 (2016).

Lewandowsky was one of the authors of an important early study (Lewandowsky, S., Gignac, G.E. & Vaughan, S., The pivotal role of perceived scientific consensus in acceptance of science, Nature Climate Change 3, 399-404 (2013)), which found that exposing people to a "97% consensus" message increased their level of acceptance of human-caused climate change.

It was a very decent study, but it relied on a convenience sample of Australians, even the most skeptical of whom were already convinced that human activity was responsible for global warming.

Cook & Lewandowsky used representative samples of Australians and Americans. Because climate change is a culturally polarizing issue, their focus, appropriately, was on how consensus messaging affects individuals of opposing cultural predispositions toward global warming.

Take a look at C&L's data. Nice graphic reporting! They report (p. 172) that "while consensus information partially neutralized worldview [effects] in Australia, in replication of Lewandowsky, Gignac, et al. (2013), it had a polarizing effect in the United States."

“Consensus information,” they show, “activated further distrust of scientists among Americans with high free-market support” (p. 172). 

There was a similar "worldview backfire effect" (p. 161) on the belief that global warming is happening and caused by humans among Americans with strong conservative (free-market) values, although not among Australians (pp. 173-75).

Veeeery interesting.

The other study is Deryugina, T. & Shurchkov, O, The Effect of Information Provision on Public Consensus about Climate Change. PLOS ONE 11, e0151469 (2016).

D&S did two really cool things.

First, they did an experiment to assess how a large (N = 1300) sample of subjects responded to a "consensus" message.

They found that exposure to such a message increased subjects’ estimate of the percentage of scientists who accept human-caused global warming.

However, they also found that [the vast majority of] subjects did not view the information as credible. [See follow-up below.]

  “Almost two-thirds (65%) of the treated group did not think the information from the scientist survey was accurately representing the views of all scientists who were knowledgeable about climate change,” they report.

This finding matches one from a CCP/Annenberg Public Policy Center experiment, results of which I featured a while back, which shows that the willingness of individuals to believe "97% consensus" messages is highly correlated with their existing beliefs about climate change.

In addition, D&S find that relative to a control group, the message-exposed subjects did not increase their level of support for climate mitigation policies.  

Innovatively, D&S measured this effect not only attitudinally, but behaviorally: subjects in the study were able to indicate whether they were willing to donate whatever money they were eligible to win in a lottery to an environmental group dedicated to “prevent[ing] the onset of climate change through promoting energy efficiency.”

In this regard, D&S report “we find no evidence that providing information about the scientific consensus affects policy preferences or raises willingness to pay to combat climate change” (p. 7).

Subjects exposed to the study’s consensus message were not significantly more likely—in a statistical or practical sense—to revise their support for mitigation policies, as measured by either the attitudinal or behavioral measures featured in the D&S design.

“This is consistent with a model where people look to climate scientists for objective scientific information but not public policy recommendations, which also require economic (i.e., cost-benefit) and ethical considerations,” D&S report (p. 7).

Second, D&S did a follow-up survey: they re-surveyed the subjects who had received the consensus message six months after the initial message exposure.

Still no impact on the willingness of message-exposed subjects to support mitigation policies (indeed, all the results were negative, Tbl. 7, albeit “ns”).

In addition, whereas immediately after message exposure, subjects had reported higher responses on 0-100 measures of their perceptions of the likelihood of temperature increases by 2050, D&S report that they “no longer f[ound] a significant effect of information”—at least for the most part. 

Actually, there was a significant increase in responses to items soliciting belief that temperatures would increase by more than 2.5 degrees Celsius by that time -- and that they would decrease by that amount.

D&S state they are “unable to make definitive conclusions about the long-run persistence of informational effects” (p. 12).  But to the extent that there weren’t any “immediate” ones on support for mitigation policies, I’d say that the absence of any in the six-month follow up as well rules out the possibility that the effect of the message just sort of percolates in subjects' psyches, blossoming at some point down the road into full-blown support for aggressive policy actions on climate change.

In my view, none of this implies that nothing can be done to promote support for collective action on climate change. Only that one has to do something other -- something much more meaningful -- than march around incanting "97% of scientists!"

But the point is, these are really nice studies, with commendably clear and complete reporting of their results. The scholars who carried them out offer their own interpretations of their data-- as they should-- but demonstrate genuine commitment to making it possible for readers to see their data and draw their own inferences. (One can download the D&S data, too, since they followed PLOS ONE policy to make them available upon publication.)

Do these studies supply what is now the “strongest evidence to date” on the impact of consensus-messaging? 

Sure, I’d say so-- although in fact I think there's nothing in the previous "strongest evidence to date" that would have made these findings at all unexpected.

What do you think?


Reader Comments (18)

Nice to see SOMEONE, finally, focus on longitudinal data. So much of the discussion on these issues fallaciously suggests extrapolating longitudinally from cross-sectional data. I don't get how so many smart, knowledgeable people can make such a fundamental error.

Oh, wait, yes I do. Motivated reasoning/cultural cognition/identity protective cognition/confirmation bias.

April 19, 2016 | Unregistered CommenterJoshua

Next up, let's see someone use real world context for evaluating these questions, where the effect of messaging as it actually exists in polarized communication fora is being measured. Combine that with longitudinal data, and then we'll actually have something meaningful to talk about rather than just the sameosameo.

April 19, 2016 | Unregistered CommenterJoshua

I find your assessment somewhat biased. It appears you only like studies that seem to agree with your personal opinion of consensus messaging.

For example, for the Cook & Lewandowsky study you mention, consensus messaging seemed to neutralize polarizing worldviews for Australian citizens and for most Americans as well.

In fact, look at Figure 5 (panel B). Consensus messaging directly increased acceptance of AGW for almost ALL Americans in the sample except for a tiny minority at the extreme end of the free-market ideology range. Would you really expect that one message will convince everyone the same, even the most committed minorities? I think that's an unrealistic expectation to set for a single exposure to a message. In light of this, I would think that this work supports the prior Lewandowsky paper in an even stronger sense.

2. For the other study you mention, they actually found that exposure to the consensus message significantly increased acceptance of AGW, with no significant difference across ideology. Doesn't this contradict your hypothesis about the cultural cognition of scientific consensus?

The authors call this the "hard info" condition (the 97% number), which had a significant impact on AGW (Table 1).

They also say that the 6-month follow-up had too much error associated with the estimates to make a good judgment, but that, possibly, 50% of the original effect on personal beliefs could be expected to carry over. This is quite positive news for a single exposure, no?

And yet, you mention none of this to the reader?

3. The gateway belief model seems to predict that consensus would NOT directly impact policy-support, but only perceived consensus, which in turn drives personal beliefs, which in turn, influence policy support. I think it is a mischaracterization to imply the authors have suggested otherwise.

So all in all, I would look at these results and conclude that there is pretty good and converging evidence for the efficacy of consensus messaging, indeed?

But I suppose different cultural cognitions would shape different conclusions based on the same evidence.....

April 19, 2016 | Unregistered CommenterMotivated reasoning

MR -

Interesting, thanks for the comment. Looks like more reading is required to reconcile your comment with Dan's post.

April 19, 2016 | Unregistered CommenterJoshua

@MR:

There were effects in D&S on 0-100 "belief" in temperature increases--ones that after six months changed sign or disappeared; I noted that.

They also said that there was no effect either immediately or 6 mos later on support for mitigation.

Public support for mitigation is the "outcome" variable in the van der Linden et al "gateway belief" model you refer to. That's what "perceived scientific consensus" is a "gateway" to:


Specifically, we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue (H1). In turn, a change in these key beliefs is subsequently expected to lead to a change in respondents’ support for societal action on climate change (H2).

If you, in contrast, want to treat increasing subjects' immediate estimates of the proportion of scientists who accept AGW, or their responses on 0-100 measures of predicted changes in temperature, as a sufficient basis for social-marketing campaigns on scientific consensus *independently* of the effect of doing so on support for mitigation, though, go right ahead!

Similarly, if you think that values to the right of the midpoint on the "free mkt values" scale ("look at" panels d, f, h, j, which show the experimental effect of the consensus msg in relation to worldview) are "extreme," that's fine too. All that matters is that the researchers allow readers to see the data -- & decide for themselves.

C&L say their result "stands in contrast" to van der Linden et al. They state:

Similar to the present study, Kahan, Jenkins, et al. (2011) found that consensus information was potentially polarizing, with hierarchical individualists (i.e., mainly people who endorse free markets) attributing less expertise to climate scientists relative to egalitarian communitarians (who believe in regulated markets). The worldview-neutralizing effect on Australians that was observed here replicates existing work involving an Australian sample by Lewandowsky, Gignac, et al. (2013).

Understanding why scientific messages lack efficacy or indeed may backfire among certain groups is of importance to scientists and science communicators, given the known role of perceived consensus as a gateway belief influencing a range of other climate attitudes (Ding et al., 2011; van der Linden et al., 2015; McCright & Dunlap, 2011; Stenhouse et al., 2013).

But people should for sure read & make up their own minds. Good for C&L for reporting their data in a manner that makes that possible.


April 19, 2016 | Registered CommenterDan Kahan

Hmm. Interesting paper. Despite it being by Cook and Lewandowsky, who don't exactly have good reputations as credible researchers, I decided to ignore that as 'argument from authority' and read it anyway. It's not too bad.

They do keep on citing the 97% result as if the papers they cite supported the statement (!!), which is most amusing, but it's expected behaviour and easy to ignore. The Bayesian model fit is kinda obvious but interesting to see someone in the field demonstrating one explicitly. There are lots of other BBNs that could achieve the same sort of fit, but as a proof of principle that belief polarisation can 'rationally' be explained it does the job.

I think, in itself, it doesn't get us much further forward in understanding the causes. Their BBN is just one hypothesis of many. But if it can get other social scientists considering a wider range of more complex models of belief updating, it could result in a useful paradigm shift.

April 19, 2016 | Unregistered CommenterNiV

@ Dan

Fair enough. But in the gateway model, all of the variables are technically "dependent" outcomes (they all have lines going to them) but I agree that policy-support is the ultimate outcome variable. Nonetheless, what the authors are describing is a process by which people may change their beliefs. If people change their beliefs about AGW they may in turn become more likely to support climate policy, or not, but I still think it is a valuable outcome nonetheless, independently of policy-support and we can agree to disagree on that.

However, changing someone's perception of what scientists believe seems like a non-identity-threatening belief to change, right? You don't have to change your worldview, just your perception of what scientists think, but if that then leads people to incrementally change their own personal beliefs, even minimally, it seems like a nice way to get your foot in the door, especially with skeptics. So why critique this line of research?

By the way, if you want to quote C&L, they state:

"We consider it remarkable that this subtle manipulation had a statistically detectable effect, however small" (p. 175). I would agree with that.

On the worldview backfire effect, they literally conclude:

"the present study finds evidence for belief polarization with a small number of conservatives exhibiting contrary updating" (p. 177). Perhaps what's important here is what works for the majority.

The data behind the C&L paper doesn't seem to be available online though, neither does the data for your key paper on cultural cognition of scientific consensus. Seems easy to hold other researchers to higher standards than you hold yourself ....

April 19, 2016 | Unregistered CommenterMotivated reasoning

@Motivated Reasoning:

Draw your own inferences from their data; they give you all you need to do so. They also give all the information necessary for anyone to draw inferences different from the ones they themselves do when they try to make sense of what they have found. So I encourage others to read & reflect for themselves.

BTW, that's what I'm commending the authors -- of *both* papers -- for doing: for reporting their results fully & accurately.

That's a standard I do certainly hold myself to -- both when I publish papers & when I report less formally on data I've collected. If you have someone in mind who doesn't do that -- someone who fails to report data completely enough for readers to make critical appraisals of what they say their studies show or who are in fact mischaracterizing what they find -- then go ahead & tell us who you are thinking of so we can all avoid being deceived by such people.

You say you can't find the C&L data. That's b/c the journal they published in doesn't require them to post them. But write & ask for them; I predict they'll happily share them w/ you.

I do think *all* journals should have the policy that PLOS ONE has of "open data" -- that is, of *requiring* authors to post data & fully usable codebooks & analysis guides -- for any article they publish.

I also think that even if journals don't have such a policy, researchers should share their data w/ other scholars after they've published them (before then I think those who collect the data for sure are entitled to the reward of being able to be the first to publish a paper w/ them).

Preparing one's data set for others to use is a pain in the ass, so it's not surprising that most scholars don't do it unless someone asks. That's certainly my excuse, although putting more of our published studies up is on my "to do" list. But the best thing would be if journals simply required it universally.

But reporting data clearly & fully -- in a way that enables critical engagement w/ them by readers & doesn't mislead them -- is a duty every researcher has any time he or she reports any data, one that doesn't depend on whether or when or with whom they have an obligation to share their data.

Looking back now, I think I likely led you into error about the availability of the C&L data. I did say that they demonstrated "commitment to making it possible for readers to see their data ..." but I meant by that that the authors reported the data in a form that made it possible for readers to understand what the collected data actually showed -- rather than leaping immediately into multivariate analyses the appropriateness of which can't be assessed w/o more info, or, worse, just telling readers that they had done analyses that aren't reported. I pointed out, too, that the D&S data could be downloaded thanks to PLOS ONE's policy -- these are both good things but *different* good things!

As for "if-you-change-your-view-of-what-climate-scientists-think-you'll-change-your-policy-views," I think the inference you are drawing about how a change in the pct estimate of scientific consensus will blossom into a change in policy views is contrary to the data that D&S present.

It's certainly not my view, either.

I think it's super complicated to make sense of how people integrate "what climate scientists believe" into their thinking -- but they clearly already understand that climate scientists believe humans are causing global warming and that bad things are going to happen; but we have a fucked up political culture that makes the questions "do you believe in climate change?" & "is there scientific consensus on climate change?" *mean* something completely different from "what do you know about what scientists know & what should we do w/ their knowledge?" We should fix that -- & here's *not* how to do that.

I'll also stick my neck out & hypothesize that if someone does a study like they did again -- one w/ a 6 mo follow up-- and measures *pct of scientists who believe in AGW*--they'll find *that* effect disappeared too.

If you tell subjects that “97% of scientists have concluded that human-caused climate change is happening” and then ask them “to the best of your knowledge, what percentage of climate scientists have concluded that human-caused climate change is happening,” a huge proportion will dutifully repeat back to you what you just said. That's called a command effect. They'll also predictably pick a higher value on *anything* w/ a score of 0-100 that you ask them after that. That's called anchoring. The design of studies like that is invalid -- pure & simple. No surprise that people who revise their estimates upward on those bogus measures don't agree afterward to donate money to charity or even say they have changed their view on human-caused climate change (yes or no; not on a scale of 0-100, a ridiculous measure) if you simply ask them.

April 20, 2016 | Registered CommenterDan Kahan

@NiV-- I'm still trying to figure out what the Bayesian stuff adds, if anything, to C&L...

I think in fact it is likely impossible to figure out anything from an experiment like this about whether anyone is engaged in a Bayesian form of information processing as opposed to identity-protective reasoning or anything else.

But I'm still processing that part of the paper & will update my views as soon as I figure out what the LR is...

April 20, 2016 | Registered CommenterDan Kahan

"You don't have to change your worldview, just your perception of what scientists' think, but if that then leads people to incrementally change their own personal beliefs, even minimally, it seems like a nice way to get your foot in the door, especially with skeptics."

The problem is that, if this really worked, the skeptics would just change it right back. If saying "97% of scientists believe in climate change" really induces people to believe it more, then saying "97% of scientists *don't* believe in climate change" will by the same logic cause them to believe it less. (Would it cause *you* to believe it less?)

In any case, it's a highly dangerous tactic, because it puts the credibility of the entire campaign at risk. To counter it, all a skeptic has to do is say "now just go read those papers he just cited". Anyone doing so can easily discover that the papers don't support the claim. All those people you just persuaded will now tend to apply the "falsus in uno, falsus in omnibus" principle, and discredit everything else the campaigners say as well. Even if it works temporarily, it creates a critical vulnerability that can be exploited. You're basically relying on people assuming you wouldn't be so silly as to make a claim so easily proved to be false, so they can deduce it's likely true. But evidence generally trumps assumptions, unless you're ideologically committed.

"You say you can't find the C&L data. That's b/c the journal they published in doesn't require them to post them. But write & ask for them; I predict they'll happily share them w/ you."

They didn't last time. The fight skeptics had getting the data out of them for their last few papers was epic!

"I'm still trying to figure out what the Bayesian stuff adds, if anyting, to C&L... I think in fact it is likely impossible to figure out anything from an experiment like this about whether anyone is engaged in Bayesian form of information processing as opposed to identity-protective rasoning or anyting else."

It's not possible to determine from this data whether people are using Bayesian processing. (Other studies in the AI literature on expert systems indicate that humans don't use Bayesian reasoning - their assessments are inconsistent with probability axioms.)

All they've shown is that there exists at least one Bayesian system that can model behaviour similar to what is observed. There are infinitely many more. So they can't tell which one, if any, is actually used, nor is this a demonstration that a Bayesian approach actually *is* used. All it says is that it's possible.

The significance is that it shows that the effect is not necessarily irrational or paradoxical, but can be explained by plausible and reasonable mechanisms. This therefore allows social scientists to start searching for them, instead of writing it off as a paradox that doesn't have to make any logical sense.

For what it's worth, I described essentially the same system on here months ago, when I pointed out that the Bayesian prescription does not only take as inputs the priors and the experimental result, but also the statistical model of how likely each outcome is under each hypothesis. Since people can disagree on the models they use as well, it's perfectly possible for people with the same priors seeing the same experimental result to nevertheless come to different conclusions; both following a Bayesian prescription.
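To make that concrete, here's a minimal numerical sketch in Python (toy numbers of my own, not anything taken from C&L's Bayesian-network model): two agents share the same prior on AGW but hold different likelihood models for the "97% consensus" message, so the same evidence moves their posteriors in opposite directions.

def posterior(prior, p_msg_if_true, p_msg_if_false):
    # Bayes' rule for a binary hypothesis and a single piece of evidence.
    joint_true = prior * p_msg_if_true
    joint_false = (1.0 - prior) * p_msg_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.5  # both agents start out undecided about AGW

# Agent A trusts climate scientists: the "97%" message is far more likely
# to be observed if AGW is real than if it isn't.
post_a = posterior(prior, p_msg_if_true=0.9, p_msg_if_false=0.2)

# Agent B suspects an orchestrated campaign: the message is *more* likely
# to be observed if AGW is false ("they would say that anyway").
post_b = posterior(prior, p_msg_if_true=0.5, p_msg_if_false=0.8)

print(f"Agent A posterior: {post_a:.2f}")  # ~0.82 -- moves toward acceptance
print(f"Agent B posterior: {post_b:.2f}")  # ~0.38 -- moves away: same prior, same evidence, polarization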

It's not a particularly novel insight, nor a particularly sophisticated analysis. But it's not the sort of thinking I'd normally expect to fit well into the C&L worldview (who more usually explain climate skeptics as being subject to psychiatric disorders and conspiracy theories) so it's kind of a hopeful sign. That's probably wishful thinking on my part, but I try to maintain an optimistic outlook.

April 20, 2016 | Unregistered CommenterNiV

@Dan & NiV, thanks for the reflective comments. I agree that making data publicly available is a pain in the ass for researchers, but the reporting of results is necessarily selective (my point was that it seems strange to suggest that one team of authors is less transparent than another; the reporting of results is usually fitted to the research questions posed in the paper, and I don't think it pays to offer the reader a data dump of all descriptive statistics).

I actually think the reporting of the results in the D&S paper is pretty bad. Also, I'm not suggesting that *open data* is universally the best policy. I actually think this whole idea of "transparency" about science and uncertainty is not very well thought-out on a larger level. Data dumps provide uncredentialed trolls with the opportunity to run their own bogus analyses on authors' data and spread more misinformation and confusion amongst the unsuspecting public. It's a risk-risk trade-off.

I agree with you that it is complicated to make sense of how people integrate information about what scientists believe into their thinking, but the gateway process is certainly one explanation; I just find it notable you keep posting about it. Also, I am not convinced consensus messaging is fully the result of anchoring or priming, but it is certainly an empirical question worth exploring. @Joshua, totally agree with you that more longitudinal and real-world field studies will ultimately be informative in determining the *practical* value of this approach.

With regard to the usefulness of the Bayesian analysis, I think C&L are suggesting that a Bayesian view allows people to update their beliefs after exposure to consensus information, but in a more mediated manner (i.e., one's cultural worldview or political ideology might attenuate some of the processing); whatever updating does take place is still *rational* within this framework. No doubt about the fact that we have a f***ed up political culture though.

April 20, 2016 | Unregistered CommenterMotivated reasoning

@NiV--

I don't disagree w/ you on the Bayesianism point. What you are calling the "model" is what I refer to as the "truth-seeking criteria" that informs the LR. I agree w/ you that that apparatus is something on which Bayes's Theorem is indeed silent-- it presupposes the LR as an input. (I vaguely recall saying that back in 1974, when you posted the comment you refer to.)

But do you see how w/ an appropriate design the "model" in your view that varies among partisans who are assigning opposing LRs to the same information -- and thus not converging and possibly even polarizing -- can be experimentally probed and determined to be based on "identity protective" rather than "truth-seeking" criteria?

That's the $1 million question, you see... B/c in that case, we could say that identity-protective reasoning is either not Bayesian or at least not a form of Bayesian reasoning normatively suited to the ends of those who want to figure out the truth about anything.

I don't *think* the C&L design has what I view as the essential elements of a design that can ferret out identity-protective cognition. The results could well be ones that one would get not only w/ Bayesian reasoning but w/ Bayesian reasoning that is unproblematically informed by "truth-seeking" approaches to determining the LR.

I don't think that's what's going on. But I don't think the result furnishes anyone any reason to revise their own priors on that.

Or at least that's my provisional view; that part of their paper is complicated, so I'm working through it slowly (obviously, the experimental results don't depend on that -- those are just cool extra things to help us try to draw inferences about the sort of reasoning that could produce the results they observed).

April 20, 2016 | Registered CommenterDan Kahan

@MR--

Well, how *much* data one has to enable readers to see in order to make it possible to form a critical appraisal of the inferences that can be drawn from a study for sure requires judgment. For sure just dumping summary statistics of all sorts doesn't help.
But some basic stuff: tell people the exact wording of your measures, e.g. If you test a hypothesis, then give enough information about the raw data for readers to know whether they look the way that the hypothesis would imply. When you get to the point of fitting a model, specify it in a way that genuinely tests the hypothesis. And show people the results of the model -- don't just tell them you ran an analysis and it did or didn't show something.

And of course not leave out results that are contrary to inferences, etc.

But you are reading a lot into this post. All I did was say that I found it gratifying that authors who themselves were clearly expecting results different from the ones they got (either different in sign, strength, significance, etc) put it all out there for people to see.

Tell me what you don't like about D&S. It can't be that they dumped too much on you. What's left out?

The main thing I don't like is those 101-point measures of things. I'm glad C&L didn't use them. But that goes to the validity of the design -- not to reporting.

I'm also curious why D&S didn't try to figure out if the impact on estimated scientific consensus persisted -- for reasons I stated. I haven't looked at their data -- I'm assuming that they didn't ask that question again. But if they did, then we should have a look!

Yes, "gateway" is plausible. But many many many many more things are plausible than true. That's what empirical research is for: helping us sort out the "true" (or close enough to true) from the (definitely) false.

But the enterprise can't achieve that end unless people do empirical work in a way that makes what data tell us about the relative plausibility of competing conjectures open to critical assessment.

April 20, 2016 | Registered CommenterDan Kahan

@Dan. I completely agree with you regarding reporting standards, and providing enough info for readers to be able to make up their own mind (and I think C&L do a good job at that in their paper).

I don't like the D&S reporting for exactly some of the basic standards you mention. For one, they don't specify what kind of regressions they are running for any of the models reported. The econometric specification seems rather odd for a randomized experiment, but I suppose the reader could *guess* they are running an OLS regression of some sort. Also, their measures do not seem to be described anywhere in the text, which makes it difficult to understand what the average scores mean without a reference scale. For example, in Table 2 they talk about "prob donate," which makes me think this is the result of a logit or probit, but it is not discussed anywhere (or perhaps it's a linear probability model) -- who knows? They are not very transparent here.

Further, if it's supposed to be a randomized experiment, there would be no need to control for a whole range of socio-demographic factors (given that the groups are supposed to be balanced on these characteristics). Yet they report all of the results in only one format: adjusted for 7 covariates. It's not clear at all to the reader what the simple result of the experiment was. For all I know, they include these covariates because they help produce the regression results they were hoping for. They run a bunch of individual regressions with correlated experimental conditions and 7 covariates; I would say that this is pretty far from reporting things clearly for the reader. It'd be more efficient to simply conduct an omnibus test for all of the measures simultaneously with experimental condition as a factor, and then show the results adjusted for covariates (separately). They also clutter the results section with discussion, which distracts the reader from interpreting the results as they are.
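For what it's worth, here's a rough sketch of the kind of side-by-side reporting I'd have preferred -- the file name and column names are purely hypothetical (this is not D&S's actual code or their variable naming), just an illustration of reporting the raw randomized contrast first and the covariate-adjusted estimate second:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("deryugina_shurchkov.csv")

# (1) Unadjusted experimental contrast: difference in mean policy support
#     between the consensus-message group and the control group.
unadjusted = smf.ols("policy_support ~ treated", data=df).fit()

# (2) Covariate-adjusted version (the only format reported in the paper,
#     per the complaint above): same contrast plus socio-demographic controls.
adjusted = smf.ols(
    "policy_support ~ treated + age + female + education + income + party_id + region",
    data=df,
).fit()

# Report both treatment-effect estimates and their standard errors side by side.
print(unadjusted.params["treated"], unadjusted.bse["treated"])
print(adjusted.params["treated"], adjusted.bse["treated"])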

In short, I'm not a fan of their reporting style. Although I agree that the 0-100 scales are not optimal, they seem necessary at least for measuring people's perception of the scientific consensus; I find it odd D&S didn't measure people's perception of the scientific consensus (it seems), which is the only variable that supposedly has a theoretical relationship with giving people info on the consensus (i.e., measures that have the same level of specificity). On the upside, their data are publicly available, but so are the gateway data, for the skeptical reader.

April 20, 2016 | Unregistered CommenterMotivated reasoning

p.s. I also agree with you that it is interesting that the papers reported results that might run somewhat contrary to initial expectations, but my reading of these papers was actually quite positive in terms of the efficacy of consensus messaging. We talked about this, but I personally don't expect that a subtle message like this can or should be expected to radically change an individual's support for climate action -- a bit unfair to set this as the bar against which to evaluate the research, especially in light of the fact that so many other messages are polarizing and don't seem to work at all. The fact that consensus messaging can boost acceptance of AGW, either directly or indirectly, is interesting in itself, I think.

April 20, 2016 | Unregistered CommenterMotivated reasoning

"Also, I'm not suggesting that *open data* is universally the best policy. I actually think this whole idea of "transparency" about science and uncertainty is not very well thought-out on a larger level. Data dumps provide uncredentialed trolls with the opportunity to run their own bogus analyses on authors' data and spread more misinformation and confusion amongst the unsuspecting public. It's a risk-risk trade-off."

The original purpose of publishing scientific papers is to enable the rest of the scientific community to check the work - to refute, confirm, or extend it. Only by surviving this checking process do results gain scientific credibility - simply getting published is not enough. Therefore, it is essential to meet the requirements of the scientific method that enough data is published for anyone else to be able to find any flaws or holes, if there are any. Any practice that doesn't do this is not scientific.

"Uncredentialed trolls" are a part of that process. A scientific result has to be able to withstand attacks from *anyone*, not merely the approved, like-minded professionals who can be trusted not to break another man's rice bowl. If the only people attacking a theory are unqualified, then that is indeed a problem, but only because it means the attack will be less effective. If attacked by ignorant trolls, the correct thing to do is to educate them so that their future attacks may be more likely to find genuine errors. People willing to spend a lot of time and effort criticising your theories are a valuable scientific resource, and ought to be encouraged. Since it is only by surviving attacks that your work will gain credibility, the more persistent, motivated, and expert the attackers, the faster your credibility rises. (Assuming your theory survives the attacks, of course.)

Anyway, quite a lot of us *do* have credentials. :-)

"But do you see how w/ an appropriate design the "model" in your view that varies among partisans who are assigning opposing LRs to the same information -- and thus not converging and possibly even polarizing -- can be experimentally probed and determined to be based on "identity protective" rather than "truth-seeking" criteria?"

"Identity protective" is a hypothesis about mechanisms - say rather "identity-correlated".

In the particular case of the nuclear question, I think it did happen to be identity-protective (sort of), but I don't think this implies that it will be so for every such question. There are a wide range of potential mechanisms that could lead to the same sort of effect.

April 22, 2016 | Unregistered CommenterNiV

I am a high school teacher intrigued by the interaction between identity-protective reasoning and consensus messaging. Is the failure of consensus messaging about climate change rooted in the fact that it leaves no meaningful other route for group meaning for those whose identities rely upon maintaining the status quo in terms of the economy and energy? You suggest in one post that there are ways of bringing skeptics (CWMs?) around: perhaps you have already written about this? Usually educators are more like guides than persuaders, yet I would be very curious to know what you think would be more successful.

As a teacher, I am wondering about the relationship between consensus messaging and what is often called “school culture.” High school students want to belong to meaningful communities, and so having peers tell them, “We recycle here—this is what we do,” seems powerful. Because young people are in the process of defining group identities, might this mean that they are less susceptible to identity-protective reasoning?

If I accept the fact that increasing understanding of (or curiosity about) the science of climate change might do nothing to reduce polarization of my students’ outlooks, and if my goal is for them to be agents in combatting climate change (or racism or homophobia for that matter—now perhaps I am a persuader), then what is my best way forward? As strange as it sounds, would it be to try to change, as you write, “who they are”? This doesn’t sound appealing when I imagine myself being the only adult in a room full of adolescents.

But the question remains for me: is the goal of a science teacher increasing her students’ knowledge about climate change? Their curiosity? Their faith in the scientific method?

Relatedly, in the teaching of history, where does your research point when we are trying to teach white students that, say, thinking of the United States as “the greatest country in the world” doesn’t help us understand centuries of slavery? Or that racism didn’t disappear with the Civil Rights Act. I wonder if deploying more data is the way forward—or might this simply encourage students to find new, non-threatening arguments (that racism in recent decades is merely the outlook of a few bad eggs, for example)?

April 25, 2016 | Unregistered CommenterPatrick Walsh

@Patrick-- I think you are right, of course, that young people & all people are influenced by "peers" in that way. Think of campaigns to shape behavior like binge drinking, date rape, etc. Sometimes those who promote "97% consensus" messaging invoke this, but of course members of the public don't think of climate scientists as their "peers" in this sense; they aren't part of a shared reputational community.

The reputational community students *are* a part of is in fact likely to put the same pressure on them to fit their beliefs to the ones predominant in their social groups as it does on their parents. I'm sure you have seen this? It is a widely understood & studied problem in teaching evolution, & now education researchers are exploring the challenges it poses for teaching climate science.

You might find this interesting. I think it is inspired & inspiring. I read it as being about a teacher's experiment to try to create a climate in which peer influences, rather than being harnessed to try to change who students are, were disentangled from the project of engaging knowledge and issues on climate change.

I think hs teachers are going to teach us a lot here -- b/c they care, for one thing; b/c they are committed to experimentation and to sharing what they know, for another; and b/c the insights that they generate in their project to dissolve the conflict people face in using their reason both to be who they are and to learn what science knows will, in this context, have lots of very readily apparent relevance in many others.

April 28, 2016 | Registered CommenterDan Kahan
