Thursday, Aug 27, 2015

Are people more conservative when “primed for reflection” or when “primed for intuition”? Apparently both . . . . (or CRT & identity-protective reasoning Part 2^8)

1.  The obvious reason people disagree with me is because they just can’t think clearly! Right? Right?? Well, I don’t think so, but I could be wrong
 

As the 14 billion readers of this blog know, I’m interested in the relationship between cognition and political outlooks. Is there a connection between critical reasoning dispositions and left-right ideology? Does higher cognitive proficiency of one sort or another counteract the tendency of people to construe empirical data in a politically biased way?

The answer to both of these questions, the data I’ve collected persuades me, is no.

But as I explained just the other day, if one gets how empirical proof works, then one understands that any conclusion one comes to is always provisional. What one “believes” about some matter that admits of empirical inquiry is just the position one judges to be most supported by the best available evidence now at hand.

2.  New evidence that liberals are in fact “more reflective” than conservatives?


So I was excited to see the paper “Reflective liberals and intuitive conservatives: A look at the Cognitive Reflection Test and ideology,” Judgment and Decision Making, July 2015, pp. 314–331, by Deppe, Gonzalez, Neiman, Jacobs, Pahlke, Smith & Hibbing.

Deppe et al. report the results of a number of studies on critical reasoning and political ideology. The one that got my attention reported “moderately sized negative correlations between CRT scores and conservative issue preferences” in a “nationally representative” sample (pp. 316, 320).

As explained 9,233 times on this blog, the CRT is the standard assessment instrument used to measure the disposition of individuals to engage in effortful, conscious “System 2” information processing as opposed to the intuitive, heuristic “System 1” sort associated with myriad cognitive biases (Frederick 2005).

It was really really important, Deppe et al. recognized, to use a stratified general population sample recruited by valid means to test the relationship between political outlooks and CRT. 

Various other studies, they noted, had relied on samples that don’t support valid inferences about the relationship between cognitive style and political outlooks. These included M Turk workers, whose scores on the CRT are unrealistically high (likely b/c they’ve been repeatedly exposed to it); who underrepresent conservatives, and thus necessarily include atypical ones; and who often turn out to be non-Americans disguising their identities (Chandler, Mueller, & Paolacci 2014; Krupnikov & Levine 2014; Shapiro, Chandler, & Mueller 2013).

Other scholars, Deppe et al. noted, have constructed samples from “visitors to a web site” on cognition and moral values who were expressly solicited to participate in studies in exchange for finding out about the relationship between the two in themselves. As a reflective colleague pointed out, this not-particularly-reflective sampling method is akin to polling ESPN.com visitors to figure out how common “liking football” is among different groups in the general population.

The one study Deppe et al. could find that used a valid general population sample to examine the correlation between CRT scores and right-left political outlooks was one I had done (Kahan 2013).  And mine, they noted, had found no meaningful correlation.

Deppe et al. attributed the likely difference in our results to the way in which they & I measured political orientations.  I used a composite measure that combined responses to standard, multi-point conservative-liberal ideology and party self-identification measures.  But  “self-reported ideology,” they observed, “is well-known to be a highly imperfect indicator of individual issue preferences.”

Nixon reacts w/ shock to Deppe et al. study finding that conservs are unreflective

So instead they measured such preferences directly, soliciting their subjects’ responses to a variety of specific policies, including gay marriage, torture of terrorist suspects, government health insurance, and government price controls (an oldie but goody; “liberal” Richard Nixon was the last US President to resort to this policy).

On the basis of these responses they formed separate “Economic,” “Moral,” and “Punishment” “conservative policy-preference” scales.  The latter two, but not the first, had a negative correlation with CRT, as did a respectably reliable scale (α = 0.69) that aggregated all of these positions.
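
For the benefit of readers who want to see what that kind of scale construction and reliability check involves, here is a minimal sketch in Python. Everything in it (the item names, the 1-5 scoring, the sample size) is a made-up stand-in, not Deppe et al.'s actual data or code:

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items scored in the same direction."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# purely hypothetical 5-item battery, each item scored 1-5, higher = more conservative
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(500, 5)),
                     columns=[f"item{i}" for i in range(1, 6)])

print(f"alpha = {cronbach_alpha(items):.2f}")   # random items, so alpha will be ~0;
                                                # real survey items would covary
scale = items.mean(axis=1)                      # aggregated "policy preference" scale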

Having collected data from a Knowledge Networks sample “to determine if the findings” they obtained with M Turk workers “held up in a more representative sample” (p. 319), they heralded this result as  “offer[ing] clear and consistent support to the idea that liberals are more likely to be reflective compared to conservatives.”

That’s pretty interesting! 

So I decided I should for sure take the study into account in my own perpetual weighing of the evidence on how critical reasoning relates to political outlooks and comparable indicators of cultural identity.

I downloaded their data from the JDM website with the intention of looking it over and then seeing if I could replicate their findings with nationally representative datasets of my own that had liberal and conservative policy positions and CRT scores.

Well, I was in fact able to replicate the results in the Deppe et al. data. 

However, what I ended up replicating were results materially different from what Deppe et al. had  actually reported. . . .

3.  Unreported data from a failed “priming” experiment: System 2 reasoners get more conservative when primed to be “reflective” and when primed to be “intuitive”!


Deppe et al. had collected their CRT and political-position data as part of a “priming” experiment.  The idea was to see if subjects’ political outlooks became more or less conservative when induced or “primed” to rely either on “reflection,” of the sort associated with System 2 reasoning, or on “intuition,” of the sort associated with System 1.

Full results from TESS/Knowledge Networks sample (study 2)--very strange indeed!

They thus assigned 2/3 of their subjects randomly to distinct “reflection” and “intuition” conditions. Both were given word-unscrambling puzzles that involved dropping one of five words and using the other four to form a sentence.  The sentences that a person could construct in the “reflection” condition emphasized use of reflective reasoning (e.g., “analyze the numbers carefully”; “I think all day”), while those in the “intuition” condition emphasized the use of “intuitive” reasoning (e.g., “Go with your gut”; “she used her instinct”).

The remaining 1/3 of the sample got a “neutral prime”: a puzzle that consisted of dropping and unscrambling words to form statements having nothing to do with either reflection or intuition (e.g., “the sky is blue”; “he rode the train”).

Deppe et al.’s hypothesis was that “subjects receiving an intuitive prime w[ould] report more conservative attitudes” and those “receiving a reflective prime . . . more liberal attitudes,” relative to those receiving a “neutral prime.”

Well, the experiment didn’t exactly come out as planned.  Statistical analyses, they reported  (p. 320),

show[ed] no differences in the number of correct CRT answers provided by the subjects between any group, indicating that the priming protocol manipulation . . . failed to induce any higher or lower amounts of reflection. With no differences in thinking style, again unsurprisingly, there were no statistically significant differences between the groups on self-reported ideology  or issue attitudes.

But I discovered that the results were actually way more interesting than that!

There may have been “no differences” in the CRT scores and “conservative issue preferences” of subjects assigned to different conditions, but it’s not true there were no differences in the correlation between these two variables in the various conditions: in both the “reflection” and “intuition” conditions, subjects scoring higher on the CRT adopted “significantly” more conservative policy stances than their counterparts in the “neutral priming” condition! By the same token, subjects scoring lower in CRT necessarily became more liberal in their policy stances in the "reflection" & "intuition" conditions.

Wow!  That’s really weird!

If one took the experimental effect seriously, one would have to conclude that priming individuals for “reflection” makes those who are the most capable and motivated to use System 2 reasoning (the conscious, effortful, analytic type) become more conservative--and that priming these same persons for “intuition” makes them more conservative too!
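
Here, roughly, is how anyone can check the condition-by-condition correlations in the posted data. This is a sketch, not the authors' code or mine: the column names ("condition," "crt," "conservatism") and the file name are hypothetical stand-ins for however one loads the JDM dataset.

import pandas as pd
from scipy.stats import pearsonr

def crt_conservatism_by_condition(df: pd.DataFrame) -> None:
    """Correlation between CRT and conservative issue preferences,
    computed within each priming condition and then pooled."""
    for cond, grp in df.groupby("condition"):
        r, p = pearsonr(grp["crt"], grp["conservatism"])
        print(f"{cond:<12} r = {r:+.2f}  p = {p:.3f}  n = {len(grp)}")
    r_all, p_all = pearsonr(df["crt"], df["conservatism"])
    print(f"{'pooled':<12} r = {r_all:+.2f}  p = {p_all:.3f}  n = {len(df)}")

# e.g., crt_conservatism_by_condition(pd.read_csv("deppe_study2.csv"))  # hypothetical file name

The "pooled" line is the estimate that, as explained below, is the right one to report.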

4.  True result in Deppe et al.: “more representative sample” fails to “replicate” negative correlation between conservative policy positions and CRT!


Deppe et al. don’t report this result.  Likely they concluded, quite reasonably, that this whacky, atheoretical outcome was just noise, and that the only thing that mattered was that the priming experiment just didn’t work (same for the ones they attempted on M Turk workers, and same for a whole bunch of “replications” of classic studies in this genre).

But here’s the rub.

The “moderately sized negative correlation[] between CRT scores and conservative issue preferences overall” that Deppe et al. report finding in their "nationally representative" sample (p. 319) was based only on subjects in the “neutral prime” condition.

As I just explained, relative to the “neutral priming” condition, there was a positive relationship "between CRT scores and conservative issue preferences overall" in both the “reflection” and “intuition priming” conditions.

If Deppe et al. had included the subjects from the latter two conditions in their analysis of the results of study 2, they wouldn’t have detected any meaningful correlation—positive or negative—“between CRT scores and conservative issue preferences overall” in their critical “more representative sample.”

It doesn’t take a ton of reflection to see why, under these circumstances, it is simply wrong to characterize the results in study 2 as furnishing “correlational evidence to support the hypothesis that higher CRT scores are associated with being liberal.”

For purposes of assessing how CRT and conservatism relate to one another, being assigned to the "neutral priming" condition was no more or less a "treatment" than being assigned to the “intuition" and "reflection" conditions.  The subjects in the "neutral prime" condition did a word puzzle—just as the subjects in the other treatments did.  Insofar as the experimental assignment didn't generate "differences in the number of correct CRT answers" or in "issue attitudes" between the conditions (p. 320), then either no one was treated for practical purposes or everyone was but in the same way: by being assigned to do a word puzzle that had no effect on ideology or CRT scores.

That's more like it, says Tricky Dick!

Of course, the correlations between conservative policy positions and CRT did differ between conditions.  As I pointed out, Deppe et al. understandably chose not to report that their “priming” experiment had "caused" individuals high in System 2 reasoning capacity to become more conservative (and those low in System 2 reasoning correspondingly more liberal) both when “primed” for “reflection” and when “primed” for “intuition.”  The more sensible interpretation of their weird data was that the priming manipulation had no meaningful effect on either conservatism or CRT scores.

But if one takes that very reasonable view, then it is unreasonable to treat the CRT-conservatism relationship in the “neutral priming” condition as if it alone were the “untreated” or “true” one.

If the effects of experimental assignments are viewed simply as noise—as I agree they should be!—then the correct way to assess the relationship between CRT & conservatism in study 2 is to consider the responses of subjects from all three conditions.
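
If you want to convince yourself that swings of this size across conditions are exactly what noise looks like, here's a small simulation sketch. Every number in it (the sample size, the zero true correlation, the equal three-way split, the .15 threshold) is stipulated purely for illustration; none is taken from the study itself.

import numpy as np

rng = np.random.default_rng(1)
n, sims = 525, 5000          # roughly the size of a single survey sample
big_spreads = 0
for _ in range(sims):
    crt = rng.normal(size=n)
    cons = rng.normal(size=n)            # true CRT-conservatism correlation = 0
    cond = rng.integers(0, 3, size=n)    # random assignment to 3 conditions
    rs = [np.corrcoef(crt[cond == k], cons[cond == k])[0, 1] for k in range(3)]
    big_spreads += (max(rs) - min(rs)) >= 0.15

# share of simulated "experiments" in which the largest and smallest
# condition-specific correlations differ by .15 or more, despite a true r of 0
print(big_spreads / sims)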

An alternative that would be weird but at least fully transparent would be to say that “in 2 out of 3 ‘subsamples,’ ” the “more representative sample” failed to “replicate” the negative conservative-CRT correlation observed in their M Turk samples.

But the one thing that surely isn’t justifiable is to divide the sample into 3 & then report the data from the one subsample that happens to support the authors' hypothesis -- that conservatism & CRT are negatively correlated -- while simply ignoring the contrary results in the other two.

I’m 100% sure this wasn’t Deppe et al.’s intent, but by only partially reporting the data from their "nationally representative sample" they have unquestionably created a misimpression.  There's just no chance any reader would ever have guessed that the data looked like this given their description of the results—and no way a reader apprised of the real results would ever agree that their "more representative sample" had "replicated" their M Turk sample finding of a “negative correlation[] between CRT scores and conservative issue preferences overall” (p. 320).

5. Replicating Deppe et al.

As I said, I was intrigued by Deppe et al.’s claim that they had found a negative correlation between conservative policy positions and CRT scores and wanted to see if I could replicate their finding in my own data set.

It turns out their study didn’t find the negative correlation they reported, though, when one includes responses of the 2/3 of the subjects unjustifiably omitted from their analysis of the relationship between CRT scores and conservative policy positions.

Well, I didn’t find any such correlation either when I performed a comparable data analysis on a large (N = 1600) nationally representative CCP (YouGov) study sample from 2012—one in which subjects hadn’t been assigned to do any sort of word-unscrambling puzzle before taking the CRT.

In my sample, subjects responded to this “issues positions” battery:

The responses formed two distinct factors, one suggesting a disposition to support or oppose legalization of prostitution and legalization of marijuana, and the other a disposition to support or oppose liberal policy positions on the remaining issues except for resumption of the draft, which loaded on neither factor.

Reversing the signs of the factor scores, I suppose one could characterize these as “social” and “economic_plus” conservatism, respectively.
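
For the curious, here's a bare-bones sketch of that kind of two-factor extraction in Python (scikit-learn). The data frame of issue items is a hypothetical stand-in, and the code is a simplified illustration of the general technique, not the actual analysis I ran:

import pandas as pd
from sklearn.decomposition import FactorAnalysis

def two_factor_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Fit a two-factor model to a battery of issue-position items and
    return per-subject factor scores."""
    fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
    loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                            columns=["factor1", "factor2"])
    print(loadings.round(2))   # inspect which items load on which factor
    scores = pd.DataFrame(fa.transform(items), columns=["factor1", "factor2"])
    # after inspecting the loadings, one would relabel the factors (e.g., "social",
    # "economic_plus") and reverse their signs so that higher = more conservative
    return scores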

Both had very very small but “significant” correlations with CRT. 

Bivariate correlations between CRT and "conservative overall" and subdomains in nationally representative CCP/YouGov sample. Z_conservrepub is a composite scale comprising liberal-conservative ideology and partisan self-id (α = 0.82).

But the signs were in opposing directions:  Economic_plus: r = 0.06, p < 0.05; and Social: r = -0.14, p < 0.01.

Not surprisingly, then, these two canceled each other out (r = -0.01, p = 0.80) when one examined “conservative policy positions overall”—i.e., all the policy positions aggregated into a single scale (α = 0.80).
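
The arithmetic of that canceling out is easy to see in a toy simulation. The coefficients and weights below are invented to mimic the pattern just described; they aren't estimates from either dataset:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 1600
crt = rng.normal(size=n)
social = -0.14 * crt + rng.normal(size=n)     # modest negative relation to CRT
economic = 0.06 * crt + rng.normal(size=n)    # modest positive relation to CRT
overall = 0.3 * social + 0.7 * economic       # aggregate, weighted (hypothetically)
                                              # by the number of items per subscale

for name, scale in [("social", social), ("economic_plus", economic), ("overall", overall)]:
    r, p = pearsonr(crt, scale)
    print(f"{name:<14} r = {r:+.3f}  p = {p:.3f}")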

That is exactly what I found, too, when I included the 2/3 of the subjects that Deppe et al. excluded from their report of the correlation between CRT and conservative policy positions in Study 2.  That is, if one takes their conservative subdomain scales as Deppe et al. formed them, there is a small negative correlation between CRT and “Punishment” conservatism (r = -0.13, p < 0.01) but a small positive one (r = 0.17, p < 0.01) between CRT and “Economic” conservatism.

There is another, even smaller negative correlation between CRT and the “Moral” conservative policy position scale (r = - 0.08, p = 0.08).

Bivariate correlations in Deppe et al. TESS/Knowledge Networks sample overall.

Overall, these tiny correlations all wash out (“conservative issue preferences overall”: r = -0.01, p = 0.76).

That—and not any deficiency in conventional left-right ideology measures (ones routinely used by the “neo-authoritarian personality” scholars (Jost et al. 2003), whose work Deppe et al. cite their own study as supporting)—also explains why there is zero correlation between CRT and liberal-conservative ideology and partisan self-identification.

In any event, when one  simply looks at all the data in a fair-minded way, one is left with nothing—and hence nothing that supplies anyone with any reason to revise his or her views on the relationship between political outlooks and critical reasoning capacities.

6. Yucky NHT--again

One last point, again on the vices of “null hypothesis testing.”

Because they were so focused on their priming experiment non-result, I’m sure it just didn’t occur to Deppe et al. that it made no sense for them to exclude 2/3 of their sample when computing the relationship between conservativism and CRT scores in Study 2.

But here’s something I think they really should have thought a bit more about. . . . Even if the results in their study were exactly as they reported, the correlations were so trivially small that they could not, in my view, reasonably support a conclusion so strong (not to mention so clearly demeaning for 50% of the U.S. population!) as

We find a consistent pattern showing that those more likely to engage in reflection are more likely to have liberal political attitudes while those less likely to do so are more likely to have conservative attitudes....

...The results of the studies reported above offer clear and consistent support to the idea that liberals are more likely to be reflective compared to conservatives....
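
To put some numbers on what "trivially small" means here: with samples of this size, a correlation can clear the conventional p < .05 bar while accounting for only a percent or two of the variance. A back-of-the-envelope sketch (the r and N values are round numbers in the neighborhood of those reported, not exact figures from the paper):

from math import sqrt
from scipy.stats import t

def r_to_p(r: float, n: int) -> float:
    """Two-sided p-value for a Pearson correlation r in a sample of size n."""
    tstat = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return 2 * t.sf(abs(tstat), df=n - 2)

for r, n in [(0.10, 500), (0.14, 500), (0.10, 1500)]:
    print(f"r = {r:.2f}, N = {n}:  p = {r_to_p(r, n):.4f},  "
          f"variance explained = {r ** 2:.1%}")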

I’ll say more about that “tomorrow,” when I return to a theme briefly touched on a couple of days ago: the common NHT fallacy that statistical “significance” conveys information on the weight of the evidence in relation to a study hypothesis.

Refs

Chandler, J., Mueller, P. & Paolacci, G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods 46, 112-130 (2014).

Deppe, K.D., Gonzalez, F.J., Neiman, J.L., Jacobs, C., Pahlke, J., Smith, K.B. & Hibbing, J.R. Reflective liberals and intuitive conservatives: A look at the Cognitive Reflection Test and ideology. Judgment and Decision Making 10, 314-331 (2015).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Jost, J.T., Glaser, J., Kruglanski, A.W. & Sulloway, F.J. Political Conservatism as Motivated Social Cognition. Psychological Bulletin 129, 339-375 (2003).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).


Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014). 

Shapiro, D.N., Chandler, J. & Mueller, P.A. Using Mechanical Turk to Study Clinical Populations. Clinical Psychological Science 1, 213-220 (2013).

 

"I told you -- the ball cost 5 cents!"

 


Reader Comments (7)

Insofar as the experimental assignment didn't affect CRT scores and "conservative overall" scores,...

I don't follow. Did you mean something such as "Insofar as the combined results of all three experimental assignments didn't affect CRT scores and "conservative overall" scores"?

August 31, 2015 | Unregistered CommenterCortlandt

@Cortlandt: There was no difference in the *mean* CRT or *mean* "conservative issue position" scores of individuals assigned to "intuition," "reflection," & "neutral" priming conditions.

The correlation between CRT & conservatism was stronger, though, in the "intuition" & "reflection" conditions. So ... either (A) priming for "reflection" *&* intuition makes more reflective people more conservative & intuitive more liberal; or (B) the difference in correlations between conservativism & experimental assignment was noise-- & the best estimate of the conservative-CRT relationship is the one that considers responses from all 3 conditions.

(B) strikes me as approximately 10^8 times more likely than (A). Obviously, if one selects out a subsample of a larger data set, one can see a spurious correlation that is belied by the sample as a whole (that's why it's considered inappropriate for researchers to monitor results & "stop" as soon as they get a significant result they like -- that's not what happened here, but same principle). There's no reason to think that priming for "intuition" & "reflection" would both cause people to become more conservative (in fact, there's a pretty big controversy at this point over the validity of studies showing effects from "priming" people w/ suggestive words, etc.)

But either way, it's not good to report the correlation between conservativism & CRT in the one condition in which it showed a very slight negative correlation & not report the other two, so people can sort this out for themselves.

BTW, I *am* 100% sure the authors didn't notice this anomaly & *didn't* mean to mislead anyone! I'm sure they want to figure out the truth as much as everyone else

August 31, 2015 | Registered CommenterDan Kahan

Dan,

Thanks for your answer. FYI, the phrase:
"Insofar as the experimental assignment didn't affect CRT scores and "conservative overall" scores"
caused me to think this was a reference to the correlation between the two variables.
Thus the following would have been clearer to me:
"Insofar as the experimental assignment didn't affect CRT scores and "conservative overall" scores"
or better:
"The correlation between the variables aside, experimental assignment didn't affect the mean CRT scores or mean "conservative overall" scores"

August 31, 2015 | Unregistered CommenterCortlandt

Correction. The first suggested rewrite should be:
"Insofar as the experimental assignment didn't affect CRT scores or "conservative overall" scores"

Sorry for the triple postings. I'm having some weird internet/browser glitches. I was trying to use the Preview Post button.

August 31, 2015 | Unregistered CommenterCortlandt

@Cortlandt-- thanks!

(the comment functionality on site is a bit glitchy--I recently deactivated the CAPTCHA to see if that would help reduce probs)

August 31, 2015 | Registered CommenterDan Kahan

Provocative post as usual, Dan. I think the problem here is that ideology “left” to “right” is a heuristic or cognitive map of self-position on one dimension that actually indicates structural differences in society that exist on two dimensions. So, could it be that the “traditional” vs. “secular” aspect of conservatism represents (is an abstraction of) class background?

So, what does the data say on class background and biblical literalism and CRT?

And, this is just Frederick's scale?

Gordon

September 4, 2015 | Unregistered Commenterggauchat

@Gordon

1. Agree 100% that "right-left" is a very weak representation of the sort of cultural identity that someone interested in these matters would want to consider! I see our cultural worldviews, b/c they are 2 dimensional, as better. Religiosity is also an indicator that I think is important in identifying the "cultural styles" of significance here, as are race & gender.

My philosophy, and likely yours too, is that the sorts of profiles one can form with these sorts of variables are all just noisy indirect measures of one and the same set of 'cultural styles,' and the only question is the instrumental one of which measurement strategy gets us closest to our goal of explaining, predicting and fashioning useful prescriptions.

I'm sure from discussions we've had too that we both know that it's challenging to combine diverse indicators into a valid scale, something that one has to do, since just putting all the relevant indicators in as "independent" or "right-hand side" variables in a regression does exactly the opposite of what we are trying to do: it partials out the covariance, which is in fact the best measure of the latent variable!

Scales like "left-right," "religiosity," & "cultural worldviews" are very tractable; one can easily add race & gender to any one of those through interactions. But each one of these scales is for sure imperfect from an instrumental perspective (& of course none makes any pretense to be capturing or describing the "essence" of a phenomenon like culture etc.)

It heartens me to know that you also find these questions vexing, and current practice (including the ones we ourselves resort to to do the best we can!) unsatisfying. B/c that means that there's at least one really smart person out there (I'm sure more than 1!) who has the motivation to figure out even better measurement strategies here.

2. Yes, Frederick's CRT. There are parallel issues here b/c CRT, while ingenious and tremendously valuable, clearly captures too small a portion of the relevant latent cognitive disposition. OSI 2.0 does better -- but not b/c it is a better measure of "cognitive reflection"; rather, it ends up getting a "bigger piece" of "cognitive reflection" than CRT does in the course of measuring a cognitive disposition that is related to it.

3. On biblical literalism, ... well, take a look, if you haven't already, at my post on Will Gervais's recent paper that purports -- in my view unconvincingly -- to find that differences in "cognitive reflection" as measured by CRT "predict" acceptance of evolution.

September 5, 2015 | Registered CommenterDan Kahan
