
Weekend update: Non-replication of "asymmetry thesis" experiment

A while back I did a couple of posts (here & here) on Nam, H.H., Jost, J.T. & Van Bavel, J.J., “Not for All the Tea in China!” Political Ideology and the Avoidance of Dissonance, PLoS ONE 8(4): e59837, doi:10.1371/journal.pone.0059837 (2013).

NJV-B requested subjects (Mechanical Turk workers; more on that presently) to write “counter-attitudinal essays”—ones that conflicted with the positions associated with subjects’ self-reported ideologies—on the relative effectiveness of Democratic and Republican Presidents. They found that Democrats were "significantly" more likely to agree to write an essay comparing Bush II favorably to Obama or Reagan favorably to Clinton than Republicans were to write one comparing Obama favorably to Bush II or Clinton favorably to Reagan.

NJV-B interpreted this result as furnishing support for the "asymmetry thesis," the proposition that ideologically motivated reasoning is disproportionately associated with a right-leaning or conservative ideology. The stronger aversion of Republicans to writing counter-attitudinal essays, they reasoned, implied greater resistance on their part to reflecting on and engaging evidence uncongenial to their ideological predispositions.

I wrote a post explaining why I thought the design was a weak one.

Well, now Mark Brandt & Jarret Crawford have released a neat working paper that reports a replication study.

They failed to replicate the NJV-B result. That is, they found that the subjects' willingness to write a counter-attitudinal essay was not correlated with their ideological dispositions.

That's interesting enough, but the paper also has some great stuff in it on other potential dispositional influences on the subjects' assent to write counter-attitudinal essays.

They found, e.g., that the subjects' score on a "confidence in science" measure did predict their willingness to write counter-attitudinal essays.  

They also found that "need for closure"-- a self-report measure of cognitive style that consists of agree-disagree items such as "When thinking about a problem, I consider as many different opinions on the issue as possible" -- did not predict any lesser or greater willingness to advocate for the superiority of the "other side's" Presidents.

These additional findings are relevant to the discussion we've been having about dispositions that might counteract the "conformity" effects associated with cultural cognition & like forms of motivated reasoning.

One shortcoming -- easily remedied -- relates to Brandt & Crawford's (BC's) reporting of their results.  There are some cacophonous bar charts that one can inspect to see the impact (or lack thereof) of ideology on the subjects' willingness to write counter-attitudinal essays.

But the magnitudes of the other reported effects are not readily discernible.  In the case of the "confidence in science" result, the authors report only a logit coefficient for an interaction term (in a regression model the full output for which is not reported).  Even people who know what a logit coefficient is won't be able to gauge the practical significance of a result reported in this fashion (& what a shame to relate one's findings exclusively in a metric only those who "read regression" can understand, for they comprise only a tiny fraction of the world's curious and intelligent people).
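For readers who want to see what the remedy looks like, here is a minimal sketch (the coefficient and baseline rates below are made up purely for illustration; they are not values from the BC paper) of how one can translate a logit coefficient into changes in predicted probability. It also shows why the same coefficient implies different probability changes at different baseline rates -- which is exactly what a graph of predicted probabilities would make visible:

```python
import math

def logistic(x):
    """Inverse logit: map log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

b = 0.5  # hypothetical logit coefficient for a one-unit change in the predictor

# The same coefficient shifts the predicted probability by different
# amounts depending on where the baseline probability sits.
for p0 in (0.1, 0.5, 0.9):
    logit0 = math.log(p0 / (1 - p0))  # baseline log-odds
    p1 = logistic(logit0 + b)         # probability after the shift
    print(f"baseline {p0:.1f} -> {p1:.2f} (change {p1 - p0:+.2f})")
```

A plot of such predicted probabilities across the observed range of the predictor would let any curious reader -- not just those who "read regression" -- see the practical size of the effect.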

For the need-for-closure result, the authors don't report anything except that the relevant interaction term in an unreported regression model was non-significant.  It is thus not possible to determine whether the effect of "need for closure" might have been meaningfully associated with aversion to engaging dissonant evidence & failed to achieve "statistical significance" due to lack of an adequately large sample.

These sorts of reporting problems are endemic to social psychology, where papers typically obsess over p-values & related test statistics & forgo graphic or other reporting strategies that make transparent the nature and strength of the inferences that the data support.  But I've seen worse, and I don't think the reporting here is hiding some flaw in the BC study-- on the contrary, it is concealing the insight that one might derive from it!

The last thing I can think of to say -- others should chime in -- is that it is super unfortunate that BC, like NJV-B, relied on a Mechanical Turk "workforce" sample.

As I've written previously, selection bias, repeat exposure to cognitive style measures, and misrepresentations of nationality make MT samples an unreliable (invalid, I'd say) basis for testing hypotheses about the interaction of cognition and political predispositions.

Brandt and Crawford have done several super cool studies on the "asymmetry thesis" (here, here & here, e.g.).  They are sharp cookies.

So they should definitely not waste their time -- and their ingenuity -- on junky MT samples.


Reader Comments (4)

Thanks for posting. The data and full results are available here: . Anyone can wade through them if they would like.

The main effect of need for closure (Note: *not* need for cognition as you used a few times above) was (in log form) b = -.19, SE = .16, 95% CI [-.49, .12], p = .23. The *direction* of this effect means that people with a higher need for closure are less likely to write either the consistent or the inconsistent essay. The interaction between need for closure and essay type was even smaller: b = -.054, SE = .15, 95% CI [-.35, .24], p = .72. Personally, while I would agree that the need for closure result might be reliably negative with a larger sample, I do not think it varies by whether the essay is an attitude-consistent or inconsistent essay,* suggesting that in our study the need for closure might not reflect a closure to inconsistent views and ideas. This is pretty speculative, but it is noteworthy that the measure wasn't "behaving" as one might expect.

We used an MTurk sample - despite the limitations you and others have identified - because it was the same subject pool used by the original authors. That makes it more difficult to argue that our non-sig results were due to a different sampling frame.

That being said, I want to highlight that this preliminary replication study doesn't put the question to rest. Perhaps with better samples that aren't a professionalized survey-completion workforce the effect would emerge. The alternative "compliance perspective" we tested also predicts an ideological asymmetry effect, but for different reasons; however, we didn't really find support for this idea either. This suggests that the paradigm used, the induced-compliance-over-MTurk paradigm, may be the problem. Future work will tell.

*The original study only used inconsistent essays. We added consistent essays because the logic of the original results depended on the effects only being present for inconsistent essays. We found that this additional factor did not make much of a difference in terms of ideology.

January 5, 2014 | Unregistered CommenterMark B


Thanks! You &/or C should write a guest post on *all* your cool studies & what you think they signify for the "asymmetry" issue, the condition of humanity, the prospects for a Red Sox repeat, etc.

You are saying that the impact of "need for closure" (p. 14!) on willingness to write an ideologically non-congenial essay was b = -.054, SE = .15, right? I agree that that's so close to zero that it seems unlikely that the problem was power.

But I'm "predisposed" to think that self-report measures of cognitive style -- ones that assume a researcher can just *ask* someone if he or she tends to "think things through"-- are suspect. Someone who likes these measures will point out that you have very small n's, especially given that you are looking at interaction of two kinds of individual differences. Did you do a power analysis?

Likely I am misreading the paper or your comment or both, but in the paper you state "the main effect of the need for cognitive closure was marginally significant (b = -.28, SE = .15, Wald = 3.25, p = .07) suggesting that people with a higher need for closure were marginally less likely to write an essay (whether consistent or inconsistent) than were people with a low need for closure." Aren't you saying something different now? "The main effect of need for closure (Note: *not* need for cognition as you used a few times above) was (in log form) b = -.19, SE = .16, 95% CI [-.49, .12], p = .23. The *direction* of this effect means that people with a higher need for closure are less likely to write either the consistent or the inconsistent essay."? Different "main effect"? Sorry for my confusion!

The business on "log form" -- seriously, why not make this more accessible? Am I right that b = -.19 is about a 5% change in the probability of writing an essay? Why not do a graph w/ predicted probabilities to help people -- those who do & those who don't understand logistic regression -- see how much impact some meaningful change in the "trust in science" measure had on the probability that a subject would write a counterattitudinal essay? Don't make smart people "wade" through "logit" coefficients!
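(For the curious, here's the back-of-envelope behind that 5% figure -- assuming, purely for illustration, a 50/50 baseline probability of agreeing to write the essay:)

```python
import math

def logistic(x):
    """Inverse logit: map log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# assume a 50/50 baseline -- purely illustrative, not a value from the paper
p0 = logistic(0.0)           # baseline probability: 0.50
p1 = logistic(0.0 - 0.19)    # apply the reported b = -.19 (log-odds)
print(round(p1 - p0, 2))     # prints -0.05: roughly a 5-point drop
```

At baselines far from 50%, the same coefficient would translate into a smaller probability change -- another reason a predicted-probability graph beats a bare logit coefficient.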

I think it's really exciting you are adding measures of cognitive style to your designs. Got plans for more along these lines?

January 5, 2014 | Registered CommenterDan Kahan


A response out of order...

The discrepancy you identified is from different models. Basically we were trying to cover our bases with a variety of plausible models that include different "packages" of interaction terms etc. The "marginal" effect on page 14 is from an additional exploratory model that included additional possible interactions. The more clearly non-sig effect I report in my comment is based on confirmatory tests of the possible models.

We did not do a formal power analysis because the models were quite complex. We basically tried to make sure that our sample size for our primary comparison was about 2.5 times the size used in the original study. This is based on some heuristic reasons from Uri Simonsohn and we discuss it briefly here:

I agree about the self-report cognitive style measures. They might be helpful in the long run, but it's hard to say. Interestingly, the ideological asymmetry effects on more behavioral measures of cognitive style are weaker than self-report measures and show some evidence of being contaminated by selective reporting: And then there is other evidence that when you ask about specific issues you get clear extremity/symmetry effects:

And hey, I'm all for better looking graphs, but I'm less inclined to put it all together for something like this. Is that a good excuse? Probably not. I recently got a nice suggestion for additional analyses on this dataset. When I sit down to do that, I'll try to make some slightly more intuitive graphs and pass them along.

January 5, 2014 | Unregistered CommenterMark B

@Mark-- thx, & excellent! Definitely keep moving forward!

Also-- really really admire you guys for registering the design etc. before carrying out the study. I think people should do this less to deter post hoc prodding than to impose ex ante discipline on design. You guys had cool hypotheses and mapped out a valid strategy for testing at the outset. It's painfully obvious that many researchers haven't really thought these things through before they collect data....

January 5, 2014 | Unregistered Commenterdmk38
