Friday, August 3, 2012

How to recognize asymmetry in motivated reasoning if/when you see it

This is the last installment of my series on “probing/prodding” the Republican Brain Hypothesis (RBH). RBH posits that conservative ideology is associated with dogmatic or unreflective reasoning styles that dispose conservative people to be dismissive of policy-relevant science on climate change and other issues. This is the basic thesis of Chris Mooney’s book The Republican Brain, which ably collects and synthesizes the social science data on which the claim rests.

As I’ve explained, I’m skeptical of RBH. Studies conducted by CCP link conflict over policy-relevant science to a form of motivated reasoning to which citizens of all cultural and ideological persuasions seem worrisomely vulnerable. The problem, I believe, isn’t that citizens with one or another set of values can’t or won’t use reason; it’s that the science communication environment -- on which the well-being of all citizens depends -- has become contaminated by antagonistic cultural meanings.

In the first installment in this series, I stated why I thought the social science work that RHB rests on is not persuasive: vulnerability to culturally or ideologically motivated reasoning is not associated with any of the low-quality reasoning styles that various studies find to be correlated with conservatives. On the contrary, there is powerful evidence that higher-quality reasoning styles characterized by systematic or reflective thought can magnify the tendency to fit evidence to ideological or cultural predispositions when particular facts (the temperature of the earth; the effectiveness of gun control; the health effects of administering the HPV vaccine for school girls) become entangled in cultural or ideological rivalries.

In the second installment, I described an original study that adds support to this understanding. In that study, I found, first, that one reliable and valid measure of reflective and open-minded reasoning, the Cognitive Reflection Test (CRT), is not meaningfully correlated with ideology; second, that conservatives and liberals display ideologically motivated reasoning when considering evidence of whether CRT is a valid predictor of open-mindedness toward scientific evidence on climate change; and third, that this tendency to credit and dismiss evidence in an ideologically slanted way gets more intense as both liberals and conservatives become more disposed to use reflective or systematic reasoning as measured by their CRT scores.

If this is what happens when people consider evidence on culturally contested issues like climate change (and this is not the only study that suggests it is), then they will end up polarized on policy-relevant science no matter what the correlation might be between their ideologies and the sorts of reasoning-style measures used in the studies collected in Republican Brain.

But there’s one last point to consider: the asymmetry thesis.

Mooney, who is scrupulously fair-minded in his collection and evaluation of the data, acknowledges that there is evidence that liberals do sometimes display motivated cognition. But he believes, on balance (and in part based on the studies correlating ideology with quality-of-reasoning measures), that a tendency to defensively resist ideologically threatening facts is greater among Republicans—i.e., that this psychological tendency is asymmetric and not symmetric with respect to ideology.

The study I conducted furnishes some relevant data there, too.

The results I reported suggest that ideologically motivated reasoning occurred in the study subjects: how likely they were to accept that the CRT is valid depended on whether they were told the test had found “more” bias in people who share the subjects’ own ideology or in people who reject it. This ideological slant got bigger, moreover, as subjects’ CRT scores increased.

But the statistical test I used to measure this effect—a multivariate regression—essentially assumed the effect was uniform or linear with respect to subjects’ political leanings. If I had plotted the result of that statistical test on a graph that had political leanings (measured by “z_conservrepub,” a scale that aggregates responses to a liberal-conservative ideology measure and a party-affiliation measure) on the x-axis and subjects’ likelihood of “agreeing” that CRT is valid on the y-axis, the results would have looked like this for subjects who score higher than average on CRT:

The tendency to “agree” or “disagree” depending on the ideological congeniality of doing so looks even for conservative Republicans and liberal Democrats. But it is constrained to do so by the statistical model. 
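For concreteness, here is a minimal sketch of what a symmetric specification of that sort could look like. Everything in it -- the variable names (agree, crt, z_conservrepub, condition), the simulated data, and the library choice -- is a hypothetical stand-in for illustration, not the study's actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (the real study data are not reproduced here).
rng = np.random.default_rng(0)
n = 900
df = pd.DataFrame({
    "crt": rng.choice([0, 1, 2, 3], size=n),            # Cognitive Reflection Test score
    "z_conservrepub": rng.normal(size=n),                # standardized left-right outlook scale
    "condition": rng.choice(
        ["control", "skeptics_biased", "nonskeptics_biased"], size=n
    ),                                                   # which group the CRT supposedly shows is biased
})
# Outcome: 1 = subject agrees the CRT is a valid measure of open-mindedness.
df["agree"] = rng.binomial(1, 0.5, size=n)

# Ordinary logit with a linear three-way interaction. Because political outlook
# enters only linearly, any ideological slant the model estimates is forced to be
# the mirror image on the left and right of the spectrum.
linear_model = smf.logit(
    "agree ~ crt * z_conservrepub * C(condition)", data=df
).fit(disp=False)
print(linear_model.summary())
```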

It is possible that the effect is in fact not even. This figure plots a hypothetical distribution of responses that is consistent with the asymmetry thesis.

 

Here people seem to adopt an ideologically opportunistic approach to assessing the validity of CRT only as they become more conservative and Republican; as they become more liberal and Democratic, in this hypothetical rendering, they are ideologically “neutral” with respect to their assessments. If one applies a linear model (or, as I did, a logistic regression model that assumes a symmetric sigmoid function), then an “asymmetry” of this sort could well escape notice!

But if one is curious whether an effect might not be linear, one can use a different statistical test. A polynomial regression fits a “curvilinear” model to the data. If the effect is not linear with respect to the explanatory variable (here, political outlook), that will show up in the model, the fit of which can be compared to the linear model.

So I fitted a polynomial model to the data from the experiment by adding an appropriate term (one that squared the effect of the interaction of CRT, ideology, and experimental condition). Lo and behold, that model fit better (see for yourself). The ideologically motivated reasoning that was generated by the experiment, and amplified by subjects’ disposition to engage in reflective information processing, really wasn’t linear!
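A hedged sketch of that step, continuing the hypothetical setup above (the squared-ideology interaction shown here is just one way to express such a term, not necessarily the exact one used in the study):

```python
from scipy import stats

# Refit with a curvilinear (squared) ideology component interacted with CRT and condition.
poly_model = smf.logit(
    "agree ~ crt * z_conservrepub * C(condition)"
    " + crt * I(z_conservrepub ** 2) * C(condition)",
    data=df,
).fit(disp=False)

# Compare the nested models with a likelihood-ratio test: a small p-value
# indicates the curvilinear model fits better than the linear one.
lr_stat = 2 * (poly_model.llf - linear_model.llf)
extra_terms = poly_model.df_model - linear_model.df_model
p_value = stats.chi2.sf(lr_stat, extra_terms)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3f}")
```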

But it wasn’t asymmetric in the sense contemplated by the ideological asymmetry thesis either! Where a “curvilinear” model fits best, one has to plot the effects of that model and see what it looks like in order to figure out what the nonlinear effect is and what it means.  This figure (which illustrates the effect captured in the polynomial model by fitting a “smoothed,” local regression line to that model’s predicted values) does that:
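One way to generate that kind of smoothed plot from a fitted model -- again only as a sketch against the simulated stand-in data above, not the figure's actual source code:

```python
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Predicted probability of "agreeing" that CRT is valid, from the polynomial model,
# traced against political outlook with a smoothed (lowess) local-regression line.
df["p_agree"] = poly_model.predict(df)
for cond, grp in df.groupby("condition"):
    smoothed = sm.nonparametric.lowess(grp["p_agree"], grp["z_conservrepub"], frac=0.6)
    plt.plot(smoothed[:, 0], smoothed[:, 1], label=cond)
plt.xlabel("z_conservrepub (more liberal Democrat -> more conservative Republican)")
plt.ylabel("Predicted probability of agreeing CRT is valid")
plt.legend()
plt.show()
```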

I guess I’d say that subjects' biased reasoning was "asymmetrical" with respect to the two experimental conditions: the intensity with which they credited or discredited ideologically congenial evidence was slightly bigger in the condition that advised subjects the results of the (fictional) CRT studies had found "nonskeptics" on climate change to be closed-minded. But that was true, it seems, for those on both sides of the ideological spectrum.

In any event, the picture of what the “curvilinear” effect looks like is not even close to the picture the “asymmetry thesis” predicts. Both liberals and conservatives are engaged in motivated reasoning, and the effect is not meaningfully different for either.

Now, why go through all this?  Well, obviously, because it’s fun! Heck, if you have actually read this post and gotten this far, you must agree.

But there’s also a take-away: One can’t tell whether a motivated reasoning effect is truly “asymmetric” unless one applies the correct statistical test.

It’s pretty much inevitable that an effect observed in any sort of social science experiment won’t be “linear.” Even in the (unlikely) event that the phenomenon one is measuring is in fact genuinely linear, data always have noise, and effects therefore always have lumps with reference to the experimental and other influences that produce them.

If the hypothesis one is testing suggests a linear effect is likely to be right or close to it, one starts with a linear test and sees if the results hold up.

If one has the hypothesis that the effect is not linear, or suspects after looking at the raw data that it might not be and is interested to find out, then one must apply an appropriate nonlinear test. If that test doesn’t corroborate that there is in fact a curvilinear effect, and that the curvilinear model fits better than the linear one, then one doesn’t have sufficient evidence to conclude the effect isn’t linear.

Sometimes when empirical researchers examine ideologically motivated reasoning the raw or summary data might make it look like the effect is “bigger” for one ideological group than the other. But that’s not enough to conclude that the effect fits the asymmetry thesis. Any researcher who wants to test the asymmetry hypothesis still has to do the right statistical test before he or she can conclude that the data really support it.

I’m not aware of anyone who has conducted a study of ideologically motivated reasoning who has reported finding a curvilinear effect that fits the logic of the asymmetry thesis.

If you know of such a study, please tell me!

Post 1 in this "series"

Post 2 in it

p.s.

I've also plotted the results in the same fashion I did last time--essentially predicting the likelihood that a "high CRT" (CRT = 1.6) "conservative Republican" (+1 SD on z_conservrepub) and a "high CRT" "liberal Democrat" (-1 SD) would view the CRT test as valid in the three experimental conditions.
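Generating that sort of comparison from a fitted model is straightforward; here is a sketch under the same hypothetical setup as the earlier snippets (CRT = 1.6, political outlook set to -1 SD and +1 SD):

```python
import pandas as pd

# Predicted probability of viewing the CRT as valid for a high-CRT "liberal Democrat"
# (-1 SD on z_conservrepub) and "conservative Republican" (+1 SD) in each condition.
profiles = pd.DataFrame(
    [(1.6, z, cond)
     for z in (-1.0, 1.0)
     for cond in ("control", "skeptics_biased", "nonskeptics_biased")],
    columns=["crt", "z_conservrepub", "condition"],
)
profiles["p_agree"] = poly_model.predict(profiles)
print(profiles)
```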

The estimates in the top graph take the curvilinear effect into account, so they can be understood to be furnishing a reliable picture of the relative magnitude of the motivated reasoning effects for people with those respective characteristics. Looks pretty uniform, I'd say.

Otherwise, while the effects might be just a tad more dramatic, they clearly aren't materially different from the ones brought into view with the ordinary logit model. No real point, I'd say, in treating the polynomial model as "better" in any interesting sense; it was just interesting to find out whether the polynomial model would both fit better and alter the interpretation suggested by the nonpolynomial model.

Reader Comments (11)

I've been loving this whole discussion. One question, though, on how your experiment tests motivated reasoning. It's a point we've discussed before and, I believe, similar to what the Guest commenter on Post 1 says: If a person has reason to believe that global warming accepters/skeptics have correctly evaluated the evidence to come to their position -- even if this belief is ill-formed or mistaken -- aren't the results of the test evidence, to that person, of whether the test accurately evaluates open-mindedness? If a test gets results that you believe to be incorrect, don't you then have reason to doubt the validity of the test? If mothers did worse on a test for being other-regarding, if professional musicians did worse on a test for pitch, if people who believe mass curves the spacetime metric did worse on a test for science-mindedness, those could be reasons to question those tests, couldn't they? Assuming people think that, after true reflection, others will come to the same conclusion as they have, how is this different? Thanks!

August 4, 2012 | Unregistered CommenterMW

Well, consider how the experiment I describe in post 2 could be seen as a model of how someone reacts to the studies in The Republican Brain.

Say a Democrat starts with a prior belief that Republicans are biased, and encounters a study that reaches that same conclusion based on a particular measure like Need for Cognition. Following the reasoning approach you describe, and consistent with the study results, she treats that study & the measure it uses as valid because it accords with what she already believes. Then she sees another study that uses another measure -- CRT -- that shows that Democrats are biased. Since she knows it's Republicans who are biased, she treats *that* measure and hence that study as invalid.

See any problem?

Here are some I see.

A. She never changes her mind on the basis of evidence from social science. That's not good if there's some chance that what she believes about Republicans -- necessarily on the basis of things other than social science evidence -- could be wrong or incomplete etc. in a way that valid studies could show her if she weren't using her existing belief about Republicans as the measuring rod of their validity.

B. In fact, she never genuinely *observes* any evidence -- at least in the form of a test of reasoning quality or style -- that bears on the correctness of her belief about Republicans. She is assuming that the belief is correct when she makes judgments about the validity or weight to be afforded such tests; thus, her assessments of the studies on ideological bias don't genuinely add anything to what she already believes.

C. If she doesn't *perceive* B, she will suffer a very sad, and likely very ugly, form of self-deception. She will treat the studies that corroborate her belief as furnishing her with proof that her belief is correct when in fact they don't give her any such proof, and as a result she'll become overconfident. Moreover, if she encounters Republicans who are using the same reasoning style and thus *rejecting* the studies that show Republicans are biased, she will treat that as evidence that Republicans are indeed biased -- they are rejecting valid studies, just like the studies say! -- when in fact she & they are both engaged in the same sort of self-confirming reasoning process...

August 5, 2012 | Registered CommenterDan Kahan

This has been a captivating discussion, and I do find your analyses compelling. Although I haven’t read Mooney’s book, I think there is a sense in which you could both be correct. That is, even if you’re correct that the propensity to engage in motivated reasoning is roughly equal among republican/conservatives and liberal/democrats, it might be the case that there are more (current) political issues with which republican/conservatives are inclined to disagree. Thus, republican/conservatives, on the whole, do engage in motivated reasoning more frequently (i.e., asymmetrically), but this is really an artifact of unequal base rates. Again, I haven’t read Mooney’s book, and I’m not claiming this is what he argues (judging by the title of the book, it is not), but it seems possible that you’re both correct. Any thoughts?

August 5, 2012 | Unregistered CommenterNick

Well, of course if she gives her priors more weight than they're worth in assessing validity, that's unreasonable. If she has (a) barely supported prior beliefs and (b) strong evidence (such as peer review, etc.) that a methodology is valid, and (c) she still finds the study 100% invalid/worthless, that's bad. But your writeup of the study says the results are "not evidence one way or the other on whether the test is valid." I disagree. I'm certainly not saying results are ever dispositive on validity. But here people have, crucially, pretty much no external evidence on whether the CRT is valid. They have a bare-bones "gotcha"-type test, and you indicate to them that there's no consensus about whether the test is valid. They're not competent to evaluate the test themselves (like the editors considering the ESP paper might have been). The subjects have little to go by except for the meager evidence you provide them: whether the test achieves the results they'd expect.

I agree (as I have before) that over time, if a person has a prior view on whether a proposition is true or false, that will bias the weight he gives evidence for or against the proposition. And that will exacerbate his bias. But if he does have reason to hold that prior, then for evaluating the validity of each individual test, it makes sense to take that prior into consideration. If the universe underwent a dramatic shift tomorrow and the speed of light changed, you'd still be right to think the person who measured c at 2.5 E 8 m/s had used an invalid methodology (although if enough people got this result, you'd begin to significantly question your priors). You might think the prior here -- accepters/deniers are more reflective -- is unreasonable. And you might think it has more potential for harm than your average unreasonable belief. But given that someone holds it, I think it's rational for that person to use it in evaluating the validity of a test.

August 5, 2012 | Unregistered CommenterMW

@Nick: What you say seems right to me -- that is, even if ideologically motivated reasoning is equally distributed across the political spectrum, the number of opportunities to experience ideologically or culturally motivated reasoning in the world might not be equal, in which case one side will display more of it. Mooney has acknowledged this, but concluded (at least provisionally; I don't think he's closed-minded, ideologically or otherwise) that it's not the explanation for what he sees as the Right's greater frequency of resistance to science. I myself have no idea of the frequency of instances in which one side or the other is "getting it wrong" in resisting science, or even how to measure such a thing. But I do think that in the face of the cultural polarization we are experiencing, the assertion by one side that the other is "more biased" will likely heighten the pressures that drive the two (or three or four, etc.) apart, and that an acknowledgment of the universal nature of the problem is more likely to motivate (consciously) them all to try to find procedural devices that minimize the risks of such distortions for all. Of course, we should advocate only what we believe (provisionally) to be true.

@MW:

1. I will do a post that addresses this issue -- how to think, analytically, about "confirmation bias" -- since I think your points are important and others deserve to reflect on them too & might help one or the other of us or both to form a better view.

2. But for now, stick with the question, "Is the experiment [in the 2d post] a valid model of what happens when people evaluate social science studies that find that people with their ideology are either more or less biased than other people?" Did the subjects in the experiment have any less basis for judging validity than do people in the real world who read about studies that purport to find that "Republicans are more biased"? Actually, I think the subjects had more basis than most of the latter. First, they actually *took* the CRT test themselves! How many people reading about studies like the ones featured in RB know what the NFC test actually consists of (numerous items like " 'Thinking is not my idea of fun' -- true or false?")? Second, the experiment wording describes the CRT as a test that "some psychologists believe" is valid and tells subjects what the psychologists' findings signify "if the test is valid"; it is thus very clear in inviting them to question whether the premise -- that CRT predicts skeptics/believers are more biased -- is well founded. That *isn't* what the studies or reports on them usually do. The question, though, is whether, in your judgment, the experiment design creates conditions close enough to those in the real world that the results give you more reason to think the world works in a particular way than you would have had otherwise. If so, assign them some LR other than 1 relative to your prior & keep your eyes open for more proof one way or the other; if not, then assign the results an LR of 1 & put the study out of mind!

3. If people in the world reason in the way that they did in the study, that's not good. I disagree with you -- if you really mean to be saying this -- that whether it's "biased" to judge the weight of new evidence by its conformity to one's priors depends on how well founded one's priors are. No matter how well founded or how likely correct one's priors are, that way of thinking *is* confirmation bias. The consequences of reasoning in this biased way are another thing; surely that will depend on how close your priors are to the truth, along with what the cost of error is. But if you decide on the whole to use your priors to assess the weight of new evidence -- because the cost of getting evidence independent of your priors to assess the new evidence is too high relative to the likely cost of potentially persisting in error -- you are deciding *not* to consider evidence of whether your priors are correct. If that's true and you don't know it, you will experience the sort of self-deception I described (one that results in over-confidence and a judgmental style that condemns others as "unreasoning" for reasoning in exactly the same way as oneself). That's the bad thing I have in mind.

4. I'm pretty sure that even if we put any disagreement we have on (2) aside, if the experiment is externally valid (point 1), then we still both have evidence that the reason people are disagreeing on ideologically charged matters like climate change, nuclear power, HPV vaccine, etc. is that those on both sides of these issues are relying on their priors (actually, I think on their predispositions, but leave that aside for now) and not updating based on new evidence. I'm happy with that characterization of the result. The question I'm asking you is: are you really happy, normatively, to say, "I'm happy to learn that this is what's going on - because *my* side is right"?

August 5, 2012 | Registered CommenterDan Kahan

Thanks for your thoughtful responses to my questions! And sorry for the delay in responding (I'm in the process of moving).

To thoroughly address your points, I want to draw a distinction we've discussed before: you can use the results of a test either (a) to update the likelihood that the proposition that was tested is true or (b) to update the likelihood that the test is valid. I think you're operating under the assumption that people will do both (a) and (b) together, and because we don't want to introduce confirmation bias into (a), we should not allow priors to influence (b). But I think (a) and (b) can be separate:

Here are the Bayesian updating equations for each of (a) and (b):

(a). (Updated likelihood of prop) = (Prior odds of prop)*(Likelihood of result if prop is true/likelihood of result if prop is false)
(b). (Updated likelihood of test validity) = (Prior odds of validity)*(Likelihood of result if test is valid/likelihood of result if test is invalid)

It seems the first equation requires an implicit judgment of test validity to update proposition likelihood, as the result will only be more/less likely, given proposition truth/falsity, if the test is valid. And the second equation requires an implicit judgment of proposition likelihood to update test validity, as the result will be much more likely, given test validity, if the result is almost certainly true. (In this way assessment of test validity will depend on conformity with one's priors and strength of those priors). But it also seems to me that these are two separate updating processes -- separate questions of "is the prop true?" and "is the test valid?" -- so the updating of each should depend on the prior odds of the other (i.e., the updated likelihood of test validity should depend on how likely the proposition was before the test results were announced; the updated likelihood of the proposition should depend on all validity-relevant facts other than the actual results. The former more obviously avoids an infinite regress, although the latter does too). I think that's the only way to avoid bias in updating.
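(For concreteness, a toy numeric version of the two updating rules, with entirely made-up odds and likelihood ratios, just to show the mechanics:)

```python
# (a) Updating the proposition, taking the test's validity as given:
prior_odds_prop = 2.0        # 2:1 prior odds the proposition is true
lr_result_given_prop = 3.0   # result is 3x more likely if the proposition is true
posterior_odds_prop = prior_odds_prop * lr_result_given_prop      # 6:1

# (b) Updating the test's validity, using the prior odds of the proposition:
prior_odds_valid = 1.0       # 1:1 prior odds the test is valid
lr_result_given_valid = 2.0  # result is 2x more likely if the test is valid
posterior_odds_valid = prior_odds_valid * lr_result_given_valid   # 2:1

print(posterior_odds_prop, posterior_odds_valid)
```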

Are you with me so far? Do you agree?

If so, then you agree that these questions should be separable. And your experiment asks your subjects question (b): given the results, is the test valid? It seems to me, then, that if people are doing this process correctly (if they agree with me and I am correct), they should allow their priors to influence their assessment of whether the test is valid, although they should not allow them to influence their updating of proposition likelihood.

But you want me to focus on how people actually think about this stuff (point 1/2), and that's harder for me to say. I doubt that people meticulously isolate their priors when they implicitly evaluate test validity in order to update proposition likelihood -- when they read Mooney's book and update the likelihood that liberals are more open to experience than conservatives, etc. But I do think that their implicit judgments -- likely to take the form of "a scientific study in a peer-reviewed journal says..." or "just one experiment out of many says..." -- might not be the same as they would be if people were asked to focus on the methodology itself. It's possible most people don't think the methodology of a published study is up for discussion, and it's only when they have some external reason to doubt it that they'll give their priors the weight they gave them in your experiment. But it's also possible people fully incorporate a biased opinion about study validity into their proposition-likelihood-updating process.

I'll admit my intuition's with yours. I think that people will see studies that "disfavor" their own group as methodologically invalid, and therefore they will fail to update their beliefs, and vice versa. But I'm not sure your study supports this over the hypothesis that people are acting rationally by both questioning the validity of these studies and updating their priors correctly. The way they're looking at CRT (is this test valid?) is sufficiently different from the way they look at Mooney's reports (does this result make me change my mind?) that I'm not sure how ecologically valid the experiment is.

I hope this was responsive! Please feel free to point out flaws or request clarification of confusing phrasings.

August 6, 2012 | Unregistered CommenterMW

Does it not depend also on the type of evidence presented?

Let's take your Democrat in the second comment. She sees a report of a test confirming Republicans are biased. The test is reporting a true result (so far as she is concerned) so that's evidence in its favour. Next she sees a report of a new test that shows Republicans are less biased than Democrats. The test conflicts with what she understands to be true, so now she has to weigh the evidence - is the test invalid or are her beliefs wrong? Her beliefs are based on long experience and many confirming instances of Republicans who refuse to see sense, while the validity of the new test is based on it simply being asserted by a source she doesn't know whether to trust. Obviously she's going to stick with her prior.

But suppose she is instead presented with lots of detailed evidence supporting the validity of the test in politically neutral topics - the new test gives strong correlations with exam success, scientific career progress, success in identifying specific fallacies and puzzle solving, etc. And *then* you report that it is also correlated with being a Republican, then there is a far weightier conflict with prior beliefs. Assertion is weak evidence.

The difference between confirmation bias and accounting for priors seems to me to be the difference between *liking* an outcome and *having prior evidence* for the outcome. A question like whether Republicans are biased thinkers is something for which Democrats will have a lot of experience to draw on. On other questions they may be able to deduce a likely answer based on what they do know. You need a question for which the subject has no prior knowledge, but a clear preference.

August 6, 2012 | Unregistered CommenterNiV

@MW: right, one can't simultaneously use priors to assess weight of evidence & use evidence weighted in that way to update priors. The experiment is designed to test whether that's what is going on when people evaluate evidence that one or another ideological perspective is biased. In the world, people read Republican Brain studies to see if the evidence supports the conjecture that low-level reasoning is concentrated on one side of the ideological spectrum. For them to treat the studies as evidence, they have to make an assessment of whether the methods are valid. But if they in fact judge studies valid conditional on the studies showing that *the other side* is more biased than *their side,* that's going to be a problem... The experiment furnishes evidence that that's what they do.

August 7, 2012 | Registered CommenterDan Kahan

@NiV: what you describe is just confirmation bias resulting in sluggish updating. To start, realize the issue *isn't* whether someone "changes his/her mind" upon seeing new evidence that contradicts existing belief; it is only whether a person treats such evidence as *valid* or entitled to weight & if so how much. Someone who has lots of "experience" that supports a proposition ("odds vs. ESP 10^31:1") will revise his or her prior estimate of the likelihood of the proposition when shown contrary evidence but might well continue to believe the proposition is true ("odds vs. ESP 10^30:1"). But if he or she says, "oh -- I know *that's* not true so I will ignore that piece of evidence, or ignore it unless I get some *super* high level of proof [i.e., not just an LR different from 1]" -- that person is going to update more slowly than he or she should and would if that person were able & willing to assess the validity & weight of the evidence independently of what that person already believes. See Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. The Quarterly Journal of Economics 114, 37-82 (1999); Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993).

Of course, if someone has only a very scanty basis for believing a test is valid, then there's no reason to update. But it can't be the case that one has more or less reason to see the test as valid based on whether it is consistent or inconsistent w/ the very proposition the truth of which is what motivates you to do or be interested in the test.

August 8, 2012 | Registered CommenterDan Kahan

I have no qualifications to contribute to this discussion except having scored very high in the past on an evaluation for "critical thinking" and having scored 3 correct on the CRT test. Also I'm a liberal, probably a 2 on your spectrum. That said, I see some major problems with your conclusion, but don't know how to test for them. Possibly your sampling method is way wrong. There are clear examples of anything from motivated cognition to severe paradigm blindness that are far more prevalent on the right than on the left. Consider only what has been termed "right wing forwards". These are e-mails that circulate on the internet, that present clearly false stories, that are believed and passed on by conservatives. There is nothing similar on the left. There are all of the theories about Obama -- that he was born in Africa, that he is a Muslim, that he hates white people, that he is going to "take away our guns", that he is a communist, etc. -- that are either preposterous or have been thoroughly debunked, or have no remote basis in fact or evidence, but conservatives choose to believe them. There is no similar aberration on the left. Liberals who find significant character failings in Romney generally base them on facts that are supportable, like his performance at Bain. Different people can interpret the facts differently, but the liberal interpretation is not unreasonable. It may be "motivated cognition" but is not much of a stretch.
There is also the spectrum from rational or realistic interpretation of events and human motivation/behavior to the well known cognition defects associated with borderline, to high performing to handicapped Aspergers. My sample is very small, but all of the borderline/high performing Aspes I have met have been extreme right wing. They hold extreme beliefs and can bend almost any data to support their beliefs. That is extreme motivated cognition. I don't find any analog among liberals. Many of the ideas/beliefs of the Tea Party are clearly severe motivated cognition. There is no Tea Party equivalent on the left. And of course there are Christian evangelicals, but very rarely liberal ones. I think that however you chose your sample and conducted your tests, you failed to represent the population for some reason.

September 17, 2012 | Unregistered CommenterMurray Duffin

This issue keeps gnawing at me, and I may have put my finger on one aspect of the problem. The propensity for motivated cognition would seem to be distributed along a spectrum (as are most such factors) from an ability to look at an issue holistically and objectively and make a balanced judgement, to some minor confirmation bias, through severe confirmation bias to moderate motivated cognition, then on to paradigm paralysis and finally to truly defective reasoning. It could be that your experiment only tests for the most moderate portion of this spectrum, and correctly finds equality on the left and right.
However consider a curve of % of target population on the y axis vs degree of motivated cognition on the x axis. To me it seems likely, based on empirical observation, that left and right will overlap for some short distance along the spectrum, and then left will begin to drop rapidly, while right will hold up longer and then drop more slowly.
Your test may only be dealing with the period of overlap and therefore reaches a misleading conclusion.
I don't have any idea how you test for a greater extent of the spectrum, but I think you have to design a valid method and redo your research before coming to even a tentative conclusion.

September 19, 2012 | Unregistered CommenterMurray Duffin