Friday, May 20, 2016

Can you spot which "study" result supports the "gateway belief model" and which doesn't? Not if you use a misspecified structural equation model . . .

As promised “yesterday”: a statistical simulation of the defect in the path analysis that van der Linden, Leiserowitz, Feinberg & Maibach (2015) present to support their “gateway belief model.”

VLFM report finding that a consensus message “increased” experiment subjects’ “key beliefs about climate change” and “in turn” their “support for public action” to mitigate it. In support of this claim, they present this structural equation model analysis of their study results:

[Figure: VLFM's reported structural equation model of their study results]

As explained in my paper reanalyzing their results, VLFM’s data don’t support their claims. They nowhere compare the responses of subjects “treated” with a consensus message with those of subjects furnished only a “placebo” news story on a Star Wars cartoon series. In fact, there was no statistically or practically significant difference between the two groups’ “before and after” responses on either belief in climate change or support for global-warming mitigation.

The VLFM structural equation model obscures this result.  The model is misspecified (or less technically, really messed up) because it contains no variables for examining the impact of the experimental treatment—exposure to a consensus message—on any study outcome variable besides subjects’ estimates of the percentage of climate scientists who adhere to the consensus position on human-caused global warming.

To illustrate how this misspecification masked the failure of the VLFM data to support their announced conclusions, I simulated two studies designed in the same way as VLFM’s. They generated these SEMs:

As can be seen, all the path parameters in the SEMs are positive and significant—just as was true in the VLFM path analysis.  That was the basis of VLFM’s announced conclusion that “all [their] stated hypotheses were confirmed.”

"Study No. 1" -- is this the one that supports the "gateway model"?But by design, only one of the simulated study results supports the VLFM hypotheses.  The other does not; the consensus message changes the subjects’ estimates of the percentage of scientists who subscribe to the consensus position on human-caused climate change, but doesn’t significantly affect (statistically or practically) their beliefs in climate change or support for mitigation--the same thing that happened in the actual VLFM study.

The path analysis presented in the VLFM paper can’t tell which is which.

Can you?  If you want to try, you can download the simulated data sets here.

To get the right answer, one has to examine whether the experimental treatment affected the study outcome variable (“mitigation”) and the posited mediators (“belief” and “gwrisk”) (Muller, Judd & Yzerbyt 2005). That’s what VLFM’s path analysis neglects to do, and it’s the defect that my reanalysis of the VLFM data remedies.
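Using the toy data from the sketch above, that check takes only a few lines: regress each posited mediator and the outcome on the treatment indicator and see what, if anything, the message actually moved. (Again, a sketch only; statsmodels' OLS is just one convenient way to run the regressions.)

```python
import statsmodels.formula.api as smf


def treatment_effects(df):
    """Report the estimated effect of the consensus message on each measure."""
    for outcome in ["consensus", "belief", "gwrisk", "mitigation"]:
        fit = smf.ols(f"{outcome} ~ treat", data=df).fit()
        print(f"{outcome:>10}:  b = {fit.params['treat']:6.2f}   p = {fit.pvalues['treat']:.3f}")


treatment_effects(sim_effect)     # message moves belief, gwrisk & mitigation as well
treatment_effects(sim_no_effect)  # message moves only the consensus estimate
```

That is, in substance, the comparison the VLFM path analysis never makes.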

[Figure: Or is it "Study No. 2"? Download the data & see; it's not hard to figure out if you don't use a misspecified SEM]

For details, check out the “appendix” added to the VLFM data reanalysis.

Have fun—and think critically when you read empirical studies.

References

Muller, D., Judd, C.M. & Yzerbyt, V.Y. When moderation is mediated and mediation is moderated. Journal of Personality and Social Psychology 89, 852 (2005).

van der Linden, S.L., Leiserowitz, A.A., Feinberg, G.D. & Maibach, E.W. The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence. PLoS ONE 10(2): e0118489 (2015), doi:10.1371/journal.pone.0118489.

 


Reader Comments (15)

This is not the first time a climate psychology paper has come to wrong conclusions through a misspecified SEM: see Conspiracist Ideation as a Predictor of Climate-Science Rejection: An Alternative Analysis.

May 20, 2016 | Unregistered Commenter Jonathan Jones

@Jonathan-- thanks! I'll take a look.

One thing is that it's really kind of odd to use SEM for a study like VLFM. All the variables were observed, not latent; there was only one outcome measure--not multiple ones, etc. None of the things that might make one use SEM rather than a conventional linear regression analysis. Not to mention that SEM is primarily aimed at figuring out how to parse the covariances of observational data to find grounds for causal inferences; figuring out what happened in a psychology experiment on framing is pretty darn simple... I'm sure there *are* times when an SEM could be cogently defended for experimental data, but there is also good reason to be on guard against one being used to disguise the absence of statistically or practically significant effects.

May 20, 2016 | Registered Commenter Dan Kahan

The mildly cynical might suspect that some researchers have got so used to using SEM that they forget that there are other simpler alternatives, which might be more robust to breakdowns in statistical assumptions. The very cynical might suggest that SEM has the huge advantage of hiding the actual data from view, making it hard for the reader to assess whether the analysis was really appropriate.

Personally I prefer the simpler methods advocated by Asheley Landrum in your post Raw data: the best safeguard against empirical bull shit!. Indeed the key graph in our piece (linked above in comment 1) looks awfully like his graphs.

May 22, 2016 | Unregistered Commenter Jonathan Jones

@Jonathan-- Actually, I think using SEM is really, really unusual for an experiment like this & should be a tipoff that something is wrong--that likely something is being obscured in the way you said.

But for sure, *the* most fundamental point is that one has to show the raw data. This isn't like, oh, quantum physics, where one can't *see* the effects w/o the aid of some kind of statistical artifice ;)

Here one first *looks* to see. Then one uses the statistics to discipline & refine inferences.

May 22, 2016 | Registered Commenter Dan Kahan

Fyi. Methinks Asheley is a her

May 23, 2016 | Unregistered Commenter Joshua

@Dan, great posts, this and the last one.

'...should be a tipoff that something is wrong--that likely something is being obscured in the way you said.'

I think you will struggle with combating the steamroller of consensus messaging. Being right doesn't always mean making much headway; you face much inertia. The strong bias implied by the above ultimately springs from the fact that the climate consensus is not a scientific one, but an enforced social one. Though I know you won't agree with this causation, the very purpose of the consensus is to maintain itself, as is the case with any culture. Given that the mechanisms are largely subconscious, purported strength is a meme that generally rises to the top.

btw albeit atrociously late, I recently left a comment on your science curiosity thread of March 7.

May 24, 2016 | Unregistered Commenter Andy West

@Andy-- thanks.

I don't see myself as "struggling with or combatting a steamroller of consensus messaging," though.

I see myself as contributing to the public good of protecting the science of science communication from the degradation it suffers when misreported study findings are not identified and corrected.

will get to your science curiosity msg presently -- thanks for alerting me

May 24, 2016 | Registered Commenter Dan Kahan

I like your method of checking this method. Maybe this is standard, but I haven't seen it before (I work in computer programming, not scientific research). It almost seems like it would be a good idea to do simulated research first, before picking an analysis method or even conducting the study. If you want to distinguish between different possibilities, simulating them and seeing if your statistical test can tell the difference seems like an excellent idea. It reminds me of the software method of writing tests first, then doing the actual coding against those tests.

May 24, 2016 | Unregistered Commenter Ross Hartshorn

@Dan,

on this occasion, I think those two views are both valid and amount to the same thing ;)

May 25, 2016 | Unregistered Commenter Andy West

@Ross--

It's pretty common in the social sciences to use simulated data to test the validity & robustness of particular statistical methods. E.g., people will often use simulated data to gauge how well a particular technique for measuring a latent or unobserved variable is working -- obviously (as it were) if you can observe the variable b/c you created it, then you can test a technique that tries to tell you what something you can't see looks like. Another setting in which this strategy is used is to figure out how robust a statistical test is -- e.g., how well a technique will work if some assumption it makes about the character of the data (normality of distribution, say, or dimensionality of covariance structure) is "violated" (the answer is often "enh, not a big deal").

For a really mind-blowing example of how statistical simulation was used *after the fact* to help show that a particular analytical strategy was the *wrong* one for analyzing experimental data, consider Miller & Sanjurjo's paper on the " 'hot hand fallacy' fallacy." I posted a data-simulator for people to play with to "see with their own eyes" that M&S were right & the authors of the famous "hot hand fallacy" papers wrong about how to test whether patterns of performance by basketball players are "streakier" than one would expect them to be if their performances were random variables. It's really hard to believe until you see it for yourself--even though M&S prove their case analytically. I'm not sure what M&S would say, but I think the only reliable way to figure out what the "null" is for a study like that is via simulation.

Simulation, while not so common as to be standard, also is a very straightforward way to determine how big a sample one will need to assure adequate statistical power. I think it's a sign of insanity for people to try to calculate power analytically--it can be done but it's really intricate & easy to screw up--so just simulate, for crying out loud! That's what super-fast processors are for: to make otherwise laborious analytical tasks tractable & fast. (A bare-bones sketch of what I mean appears right after this comment.)

All that said, in *this* case it should have been obvious to the researchers -- VLFM -- that the analytical method they were using was *not* valid. Similarly, it's not necessary to do a simulation to understand why what they did was wrong -- but it helps to demonstrate the point.

May 25, 2016 | Registered Commenter Dan Kahan
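A bare-bones sketch of the kind of simulation-based power calculation described in the comment above, assuming a simple two-group comparison analyzed with a t-test; the effect size, alpha level, and candidate sample sizes are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)


def power_by_simulation(n_per_group, effect_size, sims=2000, alpha=0.05):
    """Estimate power for a two-group design by brute force: simulate the
    experiment many times and count how often the test comes out significant."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / sims


# how big a sample is needed to detect a 0.3-SD effect about 80% of the time?
for n in (100, 175, 250):
    print(n, power_by_simulation(n, effect_size=0.3))
```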

@AndyWest-- in the science of science communication, at least, no serious scholar would say there is "consensus" on much of anything. That's one reason why it's such an interesting field.

May 25, 2016 | Registered Commenter Dan Kahan

@Dan,

I didn't imply there was a consensus in the field of science communication, but that, despite good advice from yourself and others, there is nevertheless much pressure in the domain at large to strongly communicate a consensus ('the 97%') on calamitous AGW. And much actual communication of same. This is the inertia you face. No scientific field should have a heavily policed and pushed consensus; where this occurs it demonstrates that the consensus is not a scientific one, but a social one.

May 25, 2016 | Unregistered Commenter Andy West

@Dan, on second read I may have misinterpreted slightly, although I think my above still stands. But if you mean that scholars of science communication (should) know better than to strongly emphasize / push consensus, then I'd agree that they ought to. However, with or without their backing regarding each particular such scholar, it is manifestly happening big-time anyhow.

May 25, 2016 | Unregistered Commenter Andy West

@AndyWest--

I think consensus-messaging is for sure a mistake. But there's no "consensus" on that among scholars studying the question.

The only thing there needs to be consensus on in order to learn from evidence is that scholars ought to report empirical findings fully & accurately & engage reflectively & critically w/ all the empirical data that exist.

May 25, 2016 | Registered Commenter Dan Kahan

'I think consensus-messaging is for sure a mistake. But there's no "consensus" on that among scholars studying the question.'

Absolutely there isn't. I did not imply so. However, there is enormous consensus messaging, and bias towards consensus messaging, which likely has led to the problematic study you highlight here. Your attempt to work for the public good in this respect is working against the inertia of that bias.

May 25, 2016 | Unregistered Commenter Andy West
