Sunstein on "biased assimilation" & ideologically credible messengers

Many thanks to all the people who sent me emails asking if I saw Cass Sunstein's op-ed on "biased assimilation" today in NYT: they made sure I didn't miss a good read!

Sunstein's basic argument is that inundating people with "balanced information" doesn't promote convergence on sound conclusions about policy because of "biased assimilation." For this, he cites (via the magic of hyperlinked text) the classic 1979 Lord, Ross & Lepper study on capital punishment.  

Sunstein's proposal for counteracting this dynamic is to recruit ideologically congenial advocates to challenge people's preexisting views: "The lesson for all those who provide information," he concludes, is "[w]hat matters most may be not what is said, but who, exactly, is saying it."

Op-ed word limits and the aversion of editors to even modest subtlety make simplification inevitable.  Given those constraints, what Sunstein manages in 800 words is a nice feat.

But being free of such constraints here, I'd say the growing "science of science communication" literature suggests a picture of public conflict over science that is simultaneously tighter and richer than the one Cass was able to present.

To begin, "biased assimilation" doesn't itself predict that identity-congruent messengers should be able to change minds. LR&L find only that people will construe information on controversial issues to reinforce what they already believe--"confirmation bias," essentially.

I believe the phenomenon at work in polarized science debates is something more general: identity-protective motivated reasoning. This refers to the tendency of people to conform their processing of information -- whether scientific evidence, policy arguments, the credibility of experts, or even what they see with their own eyes -- to conclusions that reinforce the status of, and their standing in, important social groups.

"Biased assimilation" might sometimes be involved (or appear to be involved) when identity-protective motivated reasoning is at work. But because sticking to what one believes doesn't always promote one’s status in one’s group, people will often be motivated to construe information in ways that have no relation to what they already believe.

E.g., in a study that CCP did of nanotechnology risk perceptions, we did find that individuals exposed to "balanced information" became culturally polarized relative to ones who hadn't received balanced information. But those in the "no-information" condition, most of whom knew little about nanotechnology, were not themselves culturally divided; they had priors that were random with respect to their cultural views. Thus, the subjects exposed to balanced information selectively assimilated it not to their existing beliefs but to their cultural predispositions--which were attuned to affective resonances that either threatened or affirmed their groups' way of life.

Or consider a framing experiment we did involving "geoengineering." In it, we found that individuals culturally predisposed to be dismissive toward climate-change science were much more open-minded in their assessment of such science when they were first advised that scientists were proposing research into geoengineering and not only stricter CO2 limits as a response to climate change.

Biased assimilation -- the selective crediting or discrediting of information based on one's prior beliefs -- can't explain that result, but identity-protective motivated reasoning can. The congeniality of geoengineering, which resonates with pro-technology, pro-market, pro-commerce values, reduced the psychic cost of considering information to which individuals otherwise would have attached value-threatening implications--such as restrictions on commerce, technology, and markets.

Identity-protective motivated reasoning also explains the persuasiveness of ideologically congenial advocates that Sunstein alluded to at the end of his column. The group values of the advocate are a cue about what position is predominant in a person's cultural group. If that cue is strong and credible enough, then people will go with the argument of the culturally congenial advocate even if the information he is presenting is contrary to their existing beliefs.

We examined this in a study of HPV-vaccine risk perceptions. In that experiment, we found that "balanced information" did polarize subjects along lines that reflected positions (and thus existing beliefs) predominant within their cultural groups. But when arguments were attributed to "culturally identifiable experts" – fictional public health experts to whom we knew subjects would impute particular cultural values -- individuals consistently adopted the position advocated by the expert whose values they (tacitly) sensed were most like theirs.   

This study shows not only that the influence of culturally congenial experts is distinct from, and stronger than, biased assimilation. It also helps to deepen our understanding of why.

Indeed, reliable understandings of "why"--and not merely analytical clarity--are what's at stake here. As I'm sure Cass would agree, one needs to do more than reach into the grab bag of effects and mechanisms if one wants to explain, predict, and formulate prescriptions. One has to formulate a theoretical framework that integrates the dynamics in question and supplies reliable insights into how they are likely to interact. Identity-protective cognition (of which cultural cognition is one conception or, really, operationalization) is a theory of that sort, whereas "biased assimilation" is (at most) one of the mechanisms that theory connects to others.

If I'm right (I might not be; show me the evidence that suggests an alternative view) to see identity-protective cognition as the more general and consequential dynamic in disputes about policy-relevant science, moreover, then it becomes important to identify what the operative group identities are and the means through which they affect cognition.  Sunstein suggests ideological affinity is important for the credibility of advocates. Well, sure, ideological affinity is okay if one is trying to measure identity-protective motivated reasoning. But for reasons I’ve set forth previously, I’d say cultural affinity is generally better -- if we are trying to explain, predict and formulate prescriptions that improve science communication. 

As for whether recruiting ideologically congenial advocates is the "lesson" for those trying to persuade "climate skeptics," that's a suggestion that I'm sure Cass would urge real-world communicators to consult Bob Inglis about before trying.  Or Rick Perry and Merck.

These two cases, of course, are entirely different from one another: Inglis took a brave stance based on how he read the science, whereas Perry took a payment to become a corporate sock-puppet. But both cases illustrate that deploying culturally congenial advocates to spread counter-attitudinal messages isn't a prescription that emerges from the literature in nearly as uncomplicated a manner as Sunstein might be seen to be suggesting.

The point generalizes. It's important to attend to the wider literature in the science of science communication because the lessons one might distill by picking out one or another study in social psychology risk colliding head on with opposing lessons that could be drawn from others examining alternative mechanisms.

Actually, I'm 100% positive Sunstein would agree with this. Again, one can't possibly be expected to address something as complex as reconciling offsetting cognitive mechanisms (here: "trust the guy with my values," on one hand, vs. "excommunicate the heretic" & the "Orwell effect," on the other) in the cramped confines of an op-ed.

Okay, enough of that. Going beyond the op-ed, I'm curious what Sunstein now thinks about the relationship between "biased assimilation" --and identity-protective motivated reasoning generally -- and Kahneman's "system 1/system 2" & like frameworks of dual process reasoning. 

This was something on which a number of CCP researchers including Paul Slovic, Don Braman, John Gastil & myself, debated Cass in a lively exchange in the Harvard Law Review before he took on his post in the Obama Administration. Sunstein's position then was that cultural cognition was essentially just another member of the system 1 inventory of "cognitive biases."  

But research we've done since supports the hypothesis that culturally motivated reasoning isn't an artifact of “bounded rationality,” as Sunstein puts it. On the contrary, cultural cognition recruits systematic reasoning, and as a result generates even greater polarization among people disposed to use what Kahneman calls “system 2” processing.

Indeed, in our Nature Climate Change paper, we argued that this effect reflects the contribution that identity-protective cognition makes (or can make) to individual rationality. It's in the interest of individuals to conform their positions on climate change to ones that predominate within their group: whether an individual gets the science "right" or "wrong" on climate change doesn't affect the risk that climate change poses to him or to anyone else-- nothing he does based on his beliefs has any discernable impact on the climate; but being "wrong" in relation to the view that predominates in one's group can do an individual a lot of harm, psychically, emotionally, and materially. 

The heuristic mechanisms of cultural cognition (including biased assimilation and cultural-affinity credibility judgments) steer a person into conformity with his or her cultural group and thus help to make that person's life go better. And being adept at system 2 only gives such a person an even greater capacity to "home in" on & defend the view that predominates in that person's group.

Of course, when we all do this at once, we are screwed. This is what we call the "tragedy of the risk perception commons.” Fixing the problem will require a focused effort to protect the science communication environment from the sort of toxic cultural meanings that create a conflict between perceiving what is known to science and being who we are as individuals with diverse cultural styles and commitments.

I’m glad Cass is now back from his tour of public service (and grateful to him for having taken it on), because I am eager to hear what he has to say about the issues and questions that risk-perception scholars have been debating since he’s been gone!



Dunning, D. & Balcetis, E. See What You Want to See: Motivational Influences on Visual Perception. Journal of Personality and Social Psychology 91, 612-625 (2006).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C., & Braman, D., Geoengineering and the Science Communication Environment: a Cross-cultural Study, CCP Working Paper No. 92 (Jan. 9, 2012).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, advance online publication (2012).

Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006).

Kahneman, D. Thinking, fast and slow, Edn. 1st. (Farrar, Straus and Giroux, New York; 2011).

Kunda, Z. The Case for Motivated Reasoning. Psychological Bulletin 108, 480-498 (1990).

Lessig, L. The Regulation of Social Meaning. U. Chi. L. Rev. 62, 943-1045 (1995).

Lord, C.G., Ross, L. & Lepper, M.R. Biased Assimilation and Attitude Polarization - Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology 37, 2098-2109 (1979).

Sherman, D.K. & Cohen, G.L. The Psychology of Self-defense: Self-affirmation Theory, in Advances in Experimental Social Psychology, Vol. 38. (ed. M.P. Zanna) 183-242 (2006).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).


Culturally polarized Australia: Cross-cultural cultural cognition, Part 3 (and a short diatribe about ugly regression outputs)

In a couple of previous posts (here & here), I have discussed the idea of "cross-cultural cultural cognition" (C4) in general and in connection with data collected in the U.K. in particular. In this one, I'll give a glimpse of some cultural cognition data from Australia.

The data come from a survey of a large, diverse general population sample. It was administered by a team of social scientists led by Steven Hatfield-Dodds, a researcher at the Australian National University. I consulted with the Hatfield-Dodds team on adapting the cultural cognition measures for use with Australian survey respondents.

It was a pretty easy job! Although we experimented with versions of various items from the "long form" cultural cognition battery, and with a diverse set of items distinct from those, the best performing set consisted of the two six-item sets that make up the "short form" versions of the CC scales. The items were reworded in a couple of minor ways to conform to Australian idioms.

Scale performance was pretty good. The items loaded appropriately on two distinct factors corresponding to "hierarchy-egalitarianism" and "individualism-communitarianism," which had decent scale-reliability scores. I discussed these elements of scale performance more in the first couple of posts in the C4 series.


The Hatfield-Dodds team included the CC scales in a wide-ranging survey of beliefs about and attitudes toward various aspects of climate change. Based on the results, I think it's fair to say that Australia is at least as culturally polarized as the U.S.

The complexion of the cultural division is the same there as here. People whose values are more egalitarian and communitarian tend to see the risk of climate change as high, while those whose values are more hierarchical and individualistic see it as low. This figure reflects the size of the difference as measured on a "climate change risk" scale that was formed by aggregating five separate survey items (Cronbach’s α = 0.90):
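A scale reliability like the α = 0.90 reported above is straightforward to compute: Cronbach's α depends only on the individual item variances and the variance of the summed scale. Here's a minimal sketch using simulated Likert-type responses (the data and numbers are invented for illustration, not taken from the actual survey):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of the summed scale
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Five 1-7 Likert items, all driven by a single latent risk attitude
# plus item-specific noise -- purely illustrative numbers.
rng = np.random.default_rng(0)
latent = rng.normal(size=1000)
responses = np.clip(
    np.round(4 + 1.2 * latent[:, None] + rng.normal(scale=0.8, size=(1000, 5))),
    1, 7)
print(round(cronbach_alpha(responses), 2))  # around 0.9 for these simulated items
```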

Looking at individual items helps to illustrate the meaning of this sort of division -- its magnitude, the sorts of issues it comprehends, etc.  

Asked whether they "believe in climate change," e.g., about 50% of the sample said "yes." Sounds like Australians are ambivalent, right? Well, in fact, most of them are pretty sure -- they just aren't, culturally speaking, of one mind. There's about an 80% chance that a "typical" egalitarian communitarian, e.g., will say that climate change is definitely happening; the likelihood that a hierarchical individualist will, in contrast, is closer to 20%.

There's about a 25% chance the hierarchical individualist will instead say, "NO!" in response to this same question. There's only a 1% chance that an egalitarian communitarian in Australia will give that response!

BTW, to formulate these estimates, I fit a multinomial logistic regression model to the responses for the entire sample, and then used the parameter estimates (the logit coefficients and the standard errors) to run Monte Carlo simulations for the indicated "culture types." You can think of the simulation as creating 1,000 "hierarchical individualists" and 1,000 "egalitarian communitarians" and asking them what they think. By plotting these simulated values, anyone, literally, can see the estimated means and the precision of those estimates associated with the logit model. No one -- not even someone well versed in statistics -- can see such a result in a bare regression output like this:

Yet this sort of table is exactly the kind of uninformative reporting that most social scientists (particularly economists) use, and use exclusively. There's no friggin' excuse for this, either, given that public-spirited stats geniuses like Gary King have not only been lambasting this practice for years but also producing free high-quality software like Clarify, which is what I used to run the Monte Carlo simulations here (the graphic reporting technique I used--plotting the density distributions of the simulated values to illustrate the size and precision of contrasting estimates--is something I learned from King's work too).
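The simulate-and-plot routine that Clarify automates is easy to sketch by hand: draw coefficient vectors from the model's estimated sampling distribution, push each draw through the model for a given covariate profile, and look at the resulting distribution of predicted probabilities. Here's a minimal Python version using a binary logit with invented coefficients (the actual model was multinomial, and none of these numbers come from the survey):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented binary-logit estimates for Pr("climate change is definitely
# happening"): intercept plus coefficients on two z-scored worldview scales.
beta_hat = np.array([0.0, -0.7, -0.7])   # [constant, hierarchy, individualism]
vcov = np.diag([0.01, 0.02, 0.02])       # estimated covariance of beta_hat

# 1. Simulate 1,000 coefficient vectors from the asymptotic sampling
#    distribution (multivariate normal).
betas = rng.multivariate_normal(beta_hat, vcov, size=1000)

# 2. Define the "culture types" as covariate profiles: +1 SD or -1 SD
#    on both worldview scales.
profiles = np.column_stack([
    [1.0, +1.0, +1.0],   # hierarchical individualist
    [1.0, -1.0, -1.0],   # egalitarian communitarian
])

# 3. Convert each simulated coefficient vector into a predicted probability
#    for each profile; plotting each column's density shows both the
#    estimate and its precision.
probs = 1 / (1 + np.exp(-betas @ profiles))
print(probs.mean(axis=0).round(2))   # roughly [0.2, 0.8] with these invented numbers
```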

So don't be awed the next time someone puts a mindless table like this in a paper or on a powerpoint slide; complain!

Oh .... There are tons of cool things in the Hatfield-Dodds et al. survey, and I'm sure we'll write them all up in the near future. But for now here's one more result from the Australia C4 study:

Around 20% of the survey respondents indicated that climate change was caused either "entirely" or "mainly" by "nature" rather than by "human activity."  But the likelihood that a typical hierarchical individualist would view climate change that way was around 48% (+/-, oh, 7% at 0.95 confidence, by the looks of the graphic). There's only about a 5% chance that an egalitarian communitarian would treat humans as an unimportant contributor to climate change.

You might wonder how it is that about 50% of the hierarchical individualists one might find in Australia would likely tell you that "nature" is causing climate change when less than 25% are likely to say "yes" if you ask them whether climate change is happening.

But you really shouldn't. You see, the answers people give to individual questions on a survey on climate change aren't really answers to those questions. They are just expressions of a global pro-con attitude toward the issue. Psychometrically, the answers are observable "indicators" of a "latent" variable. As I've explained before, in these situations, it's useful to ask a bunch of different questions and aggregate them-- the resulting scale (which will be one or another way of measuring the covariance of the responses) will be a more reliable (i.e., less noisy) measure of the latent attitude than any one item.  Although if you are in a pinch -- and don't want to spend a lot of money or time asking questions -- just one item, "the industrial strength risk perception measure," will work pretty well!
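A quick simulation illustrates the point about aggregation: under a simple reflective-indicator model (each answer = latent attitude + independent noise), the mean of several items tracks the latent attitude much better than any single item does. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# One latent "global attitude" per respondent; each of 8 survey answers is
# the latent value plus independent noise.
latent = rng.normal(size=5000)
items = latent[:, None] + rng.normal(size=(5000, 8))

scale = items.mean(axis=1)   # the aggregated scale

r_single = np.corrcoef(latent, items[:, 0])[0, 1]   # one noisy indicator
r_scale = np.corrcoef(latent, scale)[0, 1]          # the aggregate
print(round(r_single, 2), round(r_scale, 2))        # the scale is the less noisy measure
```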

The one thing you shouldn't do, though, is get all excited about responses to specific items or differences among them. Pollsters will do that because they don't really have much of a clue about psychometrics.

Hmmm... maybe I'll do another post on "pollster" fallacies -- and how fixation on particular questions, variations in the responses between them, and fluctuations in them over time mislead people on public opinion on climate change.


I love Bayes -- and you can too!

No truism is so elegant as, or responsible for more deep insights than, Bayes's Theorem.

I've linked to a couple of teaching tools that I use in my evidence course. One is a Bayesian calculator, which Kw Bilz at UIUC first came up with & which I've tinkered with over time.

The second is a graphic rendering of a particular Bayesian problem. I adapted it from an article by Spiegelhalter et al. in Science

In my view, the "prior odds x likelihood ratio = posterior odds" rendering of Bayes is definitely the most intuitive and tractable. It's really hard to figure out what people who use other renderings are trying to do besides frustrate their audience or make them feel dumb, at least if they are communicating with those who aren't used to manipulating abstract mathematical formulae.  As the graphic illustrates, the "odds" or "likelihood ratio" formalization, in addition to being simple, is the one that best fits with the heuristic of converting the elements of Bayes into natural frequencies, which is an empirically proven method for teaching anyone -- from elementary school children (or at least law students!) to national security intelligence analysts -- how to handle conditional probability.
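To make the "prior odds x likelihood ratio = posterior odds" rendering concrete, here's the classic screening-test example with made-up numbers (1% base rate, 80% sensitivity, 10% false-positive rate), worked both as odds arithmetic and in natural frequencies:

```python
from fractions import Fraction

prior_odds = Fraction(1, 99)           # 1% base rate expressed as odds
likelihood_ratio = Fraction(80, 10)    # P(+ | condition) / P(+ | no condition)
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

# Natural-frequency reading: of 1,000 people, 10 have the condition and 8 of
# them test positive; of the other 990, about 99 also test positive. So a
# positive test means 8 true hits out of 107 total positives.
print(posterior_odds, float(posterior_prob))   # 8/99 and about 0.075
```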

If you don't get Bayes, it's not your fault.  It's the fault of whoever was using it to communicate an idea to you.


Sedlmeier, P. & Gigerenzer, G. Teaching Bayesian reasoning in less than two hours. Journal of Experimental Psychology: General 130, 380-400 (2001).




Even more on motivated consequentialist reasoning

Wow—super great comments on the “Motivated consequentialist reasoning” post. Definitely worth checking out!

Some highlights: 

  • MW & Jason Hahn question whether I’m right to read L&D as raising doubts about Haidt & Graham’s characterization of the dispositions, particularly the “liberal” one, that generate motivated reasoning of “harms” & like consequences.
  • Peter Ditto offers a very generous and instructive response, in which he indicates that he thinks L&D is “perfectly consistent” with H&G but agrees that it “generally challenges” the equation of consequentialism with systematic reasoning in Greene’s distinctive & provocative dual-process theory of moral judgment.
  • A diabolical genius calling himself “Nick” asks whether the “likelihood ratio” I assigned to L&D on the “asymmetry thesis” has been contaminated by my “priors.” I answer him in a separate post.

I am persuaded, based on MW’s, Jason’s, and Peter’s various points, that I was simply overeager in reading the L&D results as offering any particular reason to question H&G’s characterization of “liberals.” (BTW, the reason I keep using quotes for “liberals” is that I think people who self-identify as “liberals” on the 5- or 7-point “liberal-conservative” survey measure are only imperfect Liberals, philosophically speaking; the ones who self-identify as “conservatives,” moreover, are also imperfect Liberals—they aren’t even close enough to being anti-liberals to be characterized as “imperfect” versions of that; we are all Liberals, we are all small “r” republicans—here…)

The basis of my doubt is that I find it unpersuasive to suggest that intuitive perceptions of “harm” unconsciously motivate liberals or anyone else to formulate conscious, confabulatory “harm-avoidance” arguments. I don’t get this conceptually; if it’s intuitive perceptions of harm that drive the conscious moral reasoning of liberals about harm, where is the motivated reasoning? Where does confabulation come in? I also think the evidence is weak for the idea that perceptions of “harm” (or “unfairness,” for that matter) are what explain "liberals'" positions, at least on issues like climate change & gun control & the HPV vaccination. I think “liberals” are motivated to see “harm” by unconscious commitments to some cultural, and essentially anti-Liberal, perfectionist morality. That is, they are the same as “conservatives” in this regard, except that the cultural understanding of “purity” that motivates "liberals" is different from the one that motivates “conservatives.”

But I concede, on reflection, that L&D don’t furnish any meaningful support for this view.

Here’s my consolation, however, for being publicly mistaken. Ditto directs me and others to the work of Kurt Gray, who Peter advises has advanced a more systematic version of the claim that everyone’s morality is “harm” based but also infused with motivated perceptions of one or another view of “purity” or the like (a position that would make Mary Douglas smile, or at least stop scowling for 10 or 15 seconds).

Well, as it turns out Gray himself wrote to me, too, off-line. He not only identified work that he & collaborators have done that engages H&G & also Greene in ways consistent with the position I am taking; he was also very intent on furnishing me with references to responses from scholars who take issue with him. So I plan to read up. And now you can too:

There are some 16 responses to the latter -- from the likes of Alicke; Ditto, Liu & Wojcik; Graham & Iyer; and Koleva & Haidt -- in the Psychol. Inq. issue. Sadly, those, unlike the Gray papers, are pay-walled. :(


Doc., please level with me: is my likelihood ratio infected by my priors?!

In a previous post, I acknowledged that a very excellent study by Liu & Ditto had some findings in it that were supportive of the “asymmetry thesis”—the idea that motivated reasoning and like processes more heavily skew the factual judgments of “conservatives” than “liberals.” Still, I said that “there's just [so] much more valid & compelling evidence in support of the 'symmetry' thesis—that ideologically motivated reasoning is uniform ... across ideologies—” that I saw no reason to “substantially revise my view of the likelihood” that the asymmetry position is actually correct.

An evil genius named Nick asks:

So what (~) likelihood ratio would you ascribe to this study for the hypothesis that the asymmetry thesis does not exist? And how can we be sure that you aren't using your prior to influence that assessment? ….

You acknowledge Liu & Ditto’s findings do support the asymmetry thesis, yet you state, without much explanation, that you “don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.”

… One way to think about it is that your LR for the Liu & Ditto study as it relates to the asymmetry hypothesis should be ~ equal to the LR from a person who is completely ignorant (in an E.T. Jaynes sense) about the Cultural Cognition findings that bear on the hypothesis. It is, of course, silly to think this way, and certainly no reader of this blog would be in this position, but such ignorance would provide an ‘unbiased’ estimate of the LR associated with the study. [note that this is amenable to empirical testing.]

You may have simply have been stating that your prior on the asymmetry hypothesis is so low that the LR for this study does not change your posterior very much. That is perfectly coherent but I would still be interested in what’s happening to your LR (even if its effect on the posterior is trivial).

Well, of course, readers can’t be sure that my priors (1,000:1 that the “asymmetry thesis” is false) didn’t contaminate the likelihood ratio I assigned to L&D’s finding of asymmetry in their 2nd study (0.75; resulting in revised odds that "asymmetry thesis is false" = 750:1).

Worse still, I can’t.

Obviously, to avoid confirmation bias, I must make an assessment of the LR based on grounds unrelated to my priors. That’s clear enough—although it’s surprising how often people get this wrong when they characterize instances of motivated reasoning as “perfectly consistent with Bayesianism” since a person who attaches a low prior to some hypothesis can “rationally” discount evidence to the contrary. Folks: that way of thinking is confirmation bias--of the conscious variety.

The problem is that nothing in Bayes tells me how to determine the likelihood ratio to attach to the new evidence. I have to “feed” Bayes some independent assessment of how much more consistent the new evidence is with one hypothesis than another. ("How much more consistent,” formally speaking, is “how many times more likely." In assigning an LR of 0.75 to L&D, I’m saying that it is 1.33 x more consistent with “asymmetry” than “symmetry”; and of course, I’m just picking such a number arbitrarily—I’m using Bayes heuristically here and picking numbers that help to convey my attitude about the weight of the evidence in question).
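In code, the heuristic bookkeeping from the post is just:

```python
# Odds-form update for the hypothesis "the asymmetry thesis is false,"
# using the (admittedly heuristic) numbers from the post.
prior_odds = 1000.0        # 1,000:1 that the asymmetry thesis is false
lr = 0.75                  # likelihood ratio assigned to the L&D finding
posterior_odds = prior_odds * lr

print(posterior_odds)      # 750.0 -- revised odds of 750:1
print(round(1 / lr, 2))    # 1.33 -- evidence 1.33x more consistent with "asymmetry"
```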

So even if I think I am using independent criteria to assess the new information, how do I know that I’m not unconsciously selecting a likelihood ratio that reflects my priors (the sort of confirmation bias that psychology usually worries about)? The question would be even more pointed in this instance if I had assigned L&D a likelihood ratio of 1.0—equally consistent with asymmetry and symmetry—because then I wouldn’t have had to revise my prior estimation in the direction of crediting asymmetry a tad more. But maybe I’m still assigning an LR to the study (only that one small aspect of it, btw) that is not as substantially below 1.0 as I should because it would just be too devastating a blow to my self-esteem to give up the view that the asymmetry thesis is false.

Nick proposes that I go out and find someone who is utterly innocent of the entire "asymmetry" issue and ask her to think about all this and get back to me with her own LR so I can compare. Sure, that’s a nice idea in theory. But where is the person willing to do this? And if she doesn’t have any knowledge of this entire issue, why should I think she knows enough to make a reliable estimate of the LR?

To try to protect myself from confirmation bias—and I really really should try if I care about forming beliefs that fit the best available evidence—I follow a different procedure but one that has the same spirit as evil Nick’s.

I spell out my reasoning in some public place & try to entice other thoughtful and reflective people to tell me what they think. If they tell me they think my LR has been contaminated in that way, or simply respond in a way that suggests as much, then I have reason to worry—not only that I’m wrong but that I may be biased.

Obviously this strategy depends (among other things) on my being able to recognize thoughtful and reflective people being thoughtful and reflective even when they disagree with me. I think I can.  Indeed, I make a point of trying to find thoughtful and reflective people with different priors all the time-- to be sure their judgment is not being influenced by confirmation bias when they assure me that my LR is “just right.”

Moreover, if I get people with a good enough mix of priors to weigh in, I can "simulate" the ideally "ignorant observer" that Nick conjures (that ignorant observer looks a lot like Maxwell's Demon, to me; the idea of doing Bayesian reasoning w/o priors would probably be a feat akin to violating the 2nd Law of Thermodynamics).

Nick the evil genius—and others who weighed in on the post to say I was wrong (not about this point but about another: whether L&D’s findings were at odds with Haidt & Graham’s account of the dispositions that motivate “liberals” and “conservatives”; I have relented and repented on that)—are helping me out in this respect!

But Nick points out that I didn’t say anything interesting about why I assigned such a modest LR to L&D on this particular point.  That itself, I think, made him anxious enough to tell me that he was concerned that I might be suffering from confirmation bias. That makes me anxious.

So, thank you, evil Nick! I will say more. Not because I really feel impelled to tussle about how much weight to assign L&D on the asymmetry point; I think, and suspect they'd agree, that it would be nice simply to have more evidence that speaks more directly to the point. But now that Nick is helping me out, I do want to say enough so that he (and any other friendly person out there) can tell me if they think that my prior has snuck through and inserted itself into my LR.

In the study in question, L&D report that subjects' “deontological” positions—that is, the positions they held on nonconsequentialist moral grounds—tended to correlate with their view of the consequences of various disputed policies (viz., “forceful interrogation,” “condom promotion” to limit STDs, “capital punishment,” and “stem cell research”).

They also found that this correlation—this tendency to conclude that what one values intrinsically just happens to coincide with the course of action that will produce the most desirable state of affairs—increases as one becomes more “conservative” (although they also found that the correlation was still significant even for self-described “liberals”). In other words, on the policies in question, liberals were more likely to hold positions that they were willing to concede might not have desirable consequences.

Well, that’s evidence, I agree, that is more consistent with the asymmetry thesis—that conservatives are more prone to motivated reasoning than are liberals. But here's why I say it's not super strong evidence of that.

Imagine you and I are talking, Nick, and I say, "I think it is right to execute murderers, and in addition the death penalty deters." You say, "You know, I agree that the death penalty deters, but to me it is intrinsically wrong to execute people, so I’m against it."

I then say, "For crying out loud--let's talk about something else. I think torture can be useful in extracting information, & although it is not a good thing generally, it is morally permissible in extreme situations when there is reason to think it will save many lives. Agree?"  You reply, "Nope. I do indeed accept that torture might be effective in extracting information but it's always wrong, no matter what, even in a case in which it would save an entire city or even a civilization from annihilation."  

We go on like this through every single issue studied in the L&D study.

Now, if at that point, Nick, you say to me, "You know, you are a conservative & I’m a liberal, and based on our conversation, I'd have to say that conservatives are more prone than liberals to fit the facts to their ideology," I think I’m going to be a bit puzzled (and not just b/c of the small N).

"Didn’t you just agree with me on the facts of every policy we just discussed?" I ask. "I see we have different values; but given our agreement about the facts, what evidence is there even to suspect that my view of them  is based on anything different from what your view is based on -- presumably the most defensible assessment of the evidence?"

But suppose you say to me instead, “Say, don't you find it puzzling that you never experience any sort of moral conflict -- that what's intrinsically 'good' or 'permissible' for you, ideologically speaking, always produces good consequences? Do you think it's possible that you might be fitting your empirical judgments to your values?"  Then I think I might say, "well, that's possible, I suppose. Is there an experiment we can do to test this?"

I was thinking of experiments that do show that when I said, in my post, that the balance of the evidence is more in keeping w/ symmetry than asymmetry. Those experiments show that people who think the death penalty is intrinsically wrong tend to reject evidence that it deters -- just as people who think it's "right" tend to find evidence that it doesn't deter unpersuasive. There are experiments, too, like the ones we've done ("Cultural Cognition of Scientific Consensus"; "They Saw a Protest"), in which we manipulate the valence of one and the same piece of evidence & find that people of opposing ideologies both opportunistically adjust the weight they assign that evidence. There are also many experiments connecting motivated reasoning to identity-protective cognition of all sorts (e.g., "They Saw a Game") -- and if identity-protective cognition is the source of ideologically motivated reasoning, too, it would be odd to find asymmetry.

So I think the L&D study-- an excellent study -- is relevant evidence & more consistent with asymmetry than symmetry. But it's not super strong evidence in that respect—and not strong enough to warrant “changing one’s mind” if one believes that the weight of the evidence otherwise is strongly in support of symmetry rather than asymmetry in motivated reasoning.

So tell, me, Dr. Nick—is my LR infected?


Akin's worldview

Do the (dumbass) comments of (dumbass) Todd Akin supply evidence of the antagonism between conservative ideology and science?  Predictably, they are being depicted as such all over the internet.  

In truth, it's hard to believe that anyone who makes the mistake of treating a single individual's comments as evidence of anything (or who tries to entice others to make such a mistake) really understands (or is committed to) the disciplined form of observation and measurement that is the signature of science's way of knowing.  

But if one wanted to try to explore in a defensibly empirical way how the general belief Akin expressed might be entangled in a cultural identity, one might start by considering the considerable body of evidence that social scientists have collected about who believes what and why about both abortion and date rape. It's pretty interesting.

The Republican position on abortion, this evidence suggests, might be part of a war against women, but if so, it's a civil war. As Kristin Luker shows (through masterful ethnography; definitely counts as "empirical," in my book), women occupy front-line positions on both sides of this cultural conflict. 

The social group most opposed to abortion, according to Luker's research, consists of women with  traditional, hierarchical values. Within a hierarchical way of life, women acquire status by successfully occupying domestic roles. "Motherhood" as a selfless--or essentially self-abnegating--state of commitment to the welfare of one's children reflects the highest form of female virtue.

This understanding is threatened by an alternative, egalitarian (and individualistic) outlook that measures the status of women and men in a unitary currency--viz., their success in markets, professions, and other institutions of civil society. The concept of a "right to choose" or "right to abortion" is linked -- through social practices but also through cultural meanings -- to this alternative outlook, and its alternative conception of female virtue. Hierarchical women are the ones who have the most status to lose should this outlook become dominant. Thus, Luker concludes, they are  the group most impelled to resist abortion rights. 

The same, status-protective logic, a large literature in women's studies suggests, informs the position of hierarchical women in the "no means ...?" debate in rape law. A hierarchical way of life features norms that forbid women, in particular, from engaging in casual sex, or sex outside of marriage or committed relationships. "Token resistance" -- the initial feigning of a lack of consent by a woman who in fact desires sex -- is thought to be a form of strategic behavior engaged in by women who want to defy these norms while conveying to their partners that they can still be expected to abide by hierarchical sexual mores generally (it's just that you are so irresistible!).  Hierarchical men and women take a dim view of such behavior. But the ones who resent "token resistance" the most are hierarchical women--whose status is being misappropriated by women who are trying to conceal their own lack of virtue.  

Women strongly committed to traditional, hierarchical gender norms are thus the most likely to believe that women who have acted contrary to traditional hierarchical norms--by, say, engaging in consensual sex outside of committed relationships on other occasions, or by wearing suggestive clothes, or by agreeing to be alone with a man in a room, or by drinking, etc.--really meant "yes" when they said "no." They are also the quickest, the women's studies literature suggests, to morally condemn such behavior.

These accounts are ones I've synthesized from various studies using sociological methods. But if they are right, we should expect these dynamics to generate motivated cognition. To protect their identities, women who subscribe to hierarchical norms should form factual perceptions that reflect the stake they have in opposing abortion and in conserving the law's attentiveness to "token resistance." We can test this conjecture by methods associated with social psychology.

CCP has in fact carried out studies with this goal. In one, we found that hierarchical, communitarian women were the group most disposed to see abortion as threatening to the health of women, a claim that is now one of the central justifications for a new generation of abortion restrictions in the U.S.

In another study, members of a large, diverse national sample reviewed facts from an actual rape case in which there was a dispute about whether a female college student who said "no" really meant it. Women with hierarchical values --particularly older ones -- were more likely than others to see the woman as "really" consenting despite her words. In addition to corroborating the women's studies position I described, this finding comports with the practical experience of attorneys who specialize in rape defense, and who report that the best juror in a "no means ... ?" case is likely to be a middle-aged woman with traditionalist outlooks (someone like Roy Black, who successfully defended William Kennedy Smith, wouldn't put it exactly this way; he wouldn't put it in any particular way--because he has professional situation sense, he'd just know it when he sees it.)

The cultural outlines of the dispute over "no means ...?" are very much at odds, though, with the prevailing view in legal scholarship, which depicts disputes about date rape as reflecting a conflict between men and women generally. In the study, there was no meaningful difference between men and women generally, considered apart from the interaction of cultural worldviews with gender that motivates hierarchical women to be particularly pro-defense in such date rape cases. Being a "liberal" or a "conservative," or a Democrat or Republican, also made no meaningful difference on its own.

So-- is there a connection between Akin's comments and the culturally motivated cognition of facts relating to abortion and date rape?

Again, no one who takes a scientific view of the matter would try to draw from the sociological evidence I've described, and the sort of data CCP collected, an inference about what (if anything) was going on in Akin's brain.

But anyone who actually goes to the trouble of looking at relevant empirical evidence will find in it a plausible answer to how someone who forms and expresses beliefs like Akin's might fare pretty well in democratic politics. He is the beneficiary of the resentment and anxiety of 

women who think that they have in some ways become less liberated in recent decades, not more; who think that easy abortion, easy birth control and a tawdry popular culture have degraded their stature, not elevated it. Though the women [at an Akin rally a couple days ago] here were of varying faiths and economic backgrounds, they were white and bound by a shared unease with Obama in particular and liberals in general, who seemed so often to hold them in contempt.

With their support, Akin might still win. And if you really want to know why they'd support him, the answer is much more complicated, much more interesting, and in many ways much more troubling than some kind of antagonism between "conservatism" as a personality trait and science as a way of knowing.



Abbey, A. Misperception as an Antecedent of Acquaintance Rape in Acquaintance Rape: the Hidden Crime. (eds. A. Parrot & L. Bechhofer) 96-112 (Wiley, New York; 1991).

Batchelder, J., Koski, D. & Byxbe, F. Women’s hostility toward women in rape trials: Testing the intra-female gender hostility thesis. American Journal of Criminal Justice 28, 181-200 (2004).

Burt, M.R. Cultural Myths and Supports for Rape. Journal of Personality and Social Psychology 38, 217-230 (1980).

Burt, M.R. Rape Myths and Acquaintance Rape in Acquaintance Rape: the Hidden Crime. (eds. A. Parrot & L. Bechhofer) 26-40 (Wiley, New York; 1991).

Burt, M.R. & Albin, R.S. Rape Myths, Rape Definitions, and Probability of Conviction. J Appl Soc Psychol 11, 212-230 (1981).

Calhoun, K.S. & Townsley, R.M. Attributions of Responsibility for Acquaintance Rape in Acquaintance Rape: the Hidden Crime. (eds. A. Parrot & L. Bechhofer) 57-70 (Wiley, New York; 1991).

Ellison, L. & Munro, V.E. Reacting to Rape: Exploring Mock Jurors' Assessments of Complainant Credibility. Br J Criminol 49, 202-219 (2009).

Ellison, L. & Munro, V.E. Turning Mirrors Into Windows?: Assessing the Impact of (Mock) Juror Education in Rape Trials. Br J Criminol 49, 363-383 (2009).

Estrich, S. Rape. The Yale Law Journal 95, 1087-1184 (1986).

Kahan, D.M. Culture, Cognition, and Consent: Who Perceives What, and Why, in Acquaintance Rape Cases. University of Pennsylvania Law Review, 158, 729-812 (2010).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

Kalof, L. Rape-supportive attitudes and sexual victimization experiences of sorority and nonsorority women. Sex Roles 29, 767-780 (1993). 

Monson, C.M., Langhinrichsen-Rohling, J. & Binderup, T. Does "No" Really Mean "No" After You Say "Yes"? Attributions About Date and Marital Rape. Journal of Interpersonal Violence 15, 1156-1174 (2000)

Muehlenhard, C.L. & Hollabaugh, L.C. Do Women Sometimes Say No When They Mean Yes? The Prevalence and Correlates of Women's Token Resistance to Sex. Journal of Personality & Social Psychology 54, 872-879 (1988).

Muehlenhard, C.L. "Nice Women" Don't Say Yes and "Real Men" Don't Say No. Women & Therapy 7, 95 - 108 (1988).

Muehlenhard, C.L. & McCoy, M.L. Double Standard/Double Bind. Psychol Women Quart 15, 447-461 (1991).

Sprecher, S., Hatfield, E., Cortese, A., Potapova, E. & Levitskaya, A. Token Resistance to Sexual Intercourse and Consent to Unwanted Sexual Intercourse: College Students' Dating Experiences in Three Countries. The Journal of Sex Research 31, 125-132 (1994).


Motivated consequentialist reasoning

Nice paper by Liu & Ditto just published (advance on-line) in Social Psychological and Personality Science ("What Dilemma? Moral Evaluation Shapes Factual Belief," doi: 10.1177/1948550612456045).  It presents a series of studies-- from variants of the "trolley problem" to ones involving evidence on stem cell research--supporting the hypothesis that people will conform their assessments of an action or policy's consequences to their appraisals of its intrinsic moral worth.

As Liu & Ditto acknowledge, their findings are in keeping with those of other researchers who have been studying the influence of culturally or ideologically motivated cognition.  The design of their studies, however, was specifically geared to detecting how readily disposed their subjects were to resort to consequentialist justifications for nonconsequentialist positions. In one cool experiment, e.g., they found that exposure to compelling nonconsequentialist arguments generated changes in the perceived deterrent efficacy of capital punishment! 

This feature of their paper enables the motivated-reasoning position to square off directly against two other important positions in contemporary moral psychology.

The first, associated most conspicuously with Jonathan Haidt, is that ideological or partisan conflicts over policy reflect a fundamental difference in "liberal" and "conservative" moral styles. Conservatives, Haidt argues, focus on nonconsequentialist evaluations of "purity" or "sanctity," whereas liberals focus on "harm."

But as Liu & Ditto note, conservatives, every bit as much as liberals if not more (more on that in a second!), adopt a default utilitarian perspective. What divides contemporary Americans who identify as "liberals" and "conservatives" is not the normative authority of Mill's "harm" principle. It's a set of disputed factual claims about whether forms of behavior symbolically associated with one or the other's cultural style cause harms of the sort that any Millian Liberal would agree warrant legal redress.  

That people are impelled to impute harm to behavior that denigrates their cultural norms is, of course, the nerve of Mary Douglas's work, in particular Purity and Danger. I very much agree with Douglas's view. Indeed, I think the view that public policy debate can be characterized as one between philosophical Liberals and Antiliberals -- i.e., between those who believe that law should be confined to promotion of secular ends and those who believe that law is also a proper instrument for propagating a moral orthodoxy -- is one only those who spend far too much time in university moral philosophy seminars are likely to form. 

The second position with which Liu & Ditto join issue is the dual process theory of moral psychology. I view Josh Greene as the leading exponent of this perspective. Greene is a subtle thinker; like Haidt, he is both a first-rate philosopher and an amazing psychologist. But he has not been shy about equating nonconsequentialist (or "deontological") reasoning with emotion-driven, unconscious "system 1" (in Kahneman's terms) reasoning and consequentialism with conscious, reflective "system 2."

I don't buy it. Indeed, cultural cognition -- the tendency of people to fit their assessments of risk and related facts to their group values -- is all about the distorting force that motivated reasoning exerts over consequentialist judgments.  Greene depicts "deontological" reasoning as a form of confabulation. But precisely because consequentialist frameworks so often rest on contentious behavioral conjectures and contested forms of empirical proof, they furnish a notoriously pliable set of resources for those who feel impelled to reason, as opposed to intuit, their way out of policy conclusions they find ideologically noncongenial. 

If anything, it seems like those who are adept at system 2 reasoning will be more vulnerable to motivated cognition. They will be better than those who are less reflective and more intuitive at manipulating the various bendable empirical bits and pieces out of which utilitarian arguments tend to be formed. This was the premise of our Nature Climate Change study, which presented evidence that greater science comprehension magnifies cultural cognition.

But like any other proposition (or any proposition worth discussing), the claim that consequentialist reasoning is more hospitable to motivated cognition than other sorts is open to empirical testing. I count Liu & Ditto's studies as evidence in support of that conclusion.

Now, there is one other issue to discuss.

As I said, Liu & Ditto find that conservatives, as well as liberals, resort to consequentialist reasoning. Conservatives don't naturally frame their position in nonconsequentialist terms, much less confine themselves to such justifications. Indeed, in one of the studies they feature in their paper, Liu & Ditto observe "the tendency to perceive morally distasteful acts as also being practically disadvantageous was significantly more pronounced ... for political conservatives." 

So this raises the perennial (for me, in this blog; I am getting treatment, but still can't shake my obsession) issue of the "asymmetry thesis"-- the claim (ably advanced in Chris Mooney's Republican Brain) that motivated consequentialist reasoning is more characteristic of conservatives than liberals.  Is the Liu & Ditto paper evidence in "favor" of the asymmetry thesis?

Sure. In fact, in one of their studies Liu & Ditto present a statistical analysis that shows that subjects' tendency to adopt empirical positions supportive of their intrinsic moral assessments increased as subjects became more conservative. As I've noted before, proponents of the "asymmetry thesis" usually don't try to assess whether any differences observed in the force of motivated reasoning across the ideological spectrum (or cultural spectra) are statistically, much less practically, significant. Liu & Ditto did make such an assessment.

But does that mean the asymmetry thesis is "true" after all?

It's a mistake (a sadly common one) to view scientific studies as "proving" or "disproving" claims in some binary fashion. Valid studies supply evidence that gives us more reason than we otherwise would have had to credit one hypothesis relative to some alternative one. If one wants to form a provisional judgment--and all judgments must always be viewed as provisional if one is taking a scientific attitude toward empirical proof--then one has to aggregate all the available pieces of evidence, assigning to each the weight it is due in light of how much more consistent it is with one hypothesis than another.
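In Bayesian terms, that aggregation is just multiplication: the posterior odds on a hypothesis are the prior odds times the likelihood ratio of each (independent) piece of evidence. A toy sketch in Python, with made-up LRs chosen only to illustrate the arithmetic:

```python
from functools import reduce

def aggregate_odds(prior_odds, likelihood_ratios):
    """Combine independent pieces of evidence by multiplying the
    prior odds by each piece's likelihood ratio in turn."""
    return reduce(lambda odds, lr: odds * lr, likelihood_ratios, prior_odds)

# Hypothetical numbers: three studies favoring "symmetry" (LR < 1
# for asymmetry) and one modestly favoring "asymmetry" (LR > 1).
odds = aggregate_odds(1.0, [0.5, 0.5, 0.5, 2.0])
print(odds)  # 0.25 -- the single contrary study barely moves the total
```

This is why a single study with a modest LR needn't warrant "changing one's mind": its pull is swamped by the accumulated weight of the evidence on the other side.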

There's just much more valid & compelling evidence in support of the "symmetry" thesis -- that ideologically motivated reasoning is uniform, for all practical purposes, across ideologies--than there is in support of the "asymmetry" position. I myself don't view the Liu and Ditto finding of "asymmetry" as a reason to substantially revise my view of the likelihood that that position is correct.

Indeed, I don't think Liu and Ditto themselves view their results as particularly strong proof in favor of the asymmetry thesis. They note that the "associations between moral and factual beliefs" they observed--on issues like the death penalty, promotion of condoms to fight STDs, stem cell research, and forceful interrogations--were stronger for conservatives but "still significant for ... political liberals." "[W]hile our political psychology results can be taken as consistent with the body of work associating conservatism with heuristic and motivated thinking," they conclude, "it is important to also note the modest size of these interaction effects and that significant moral-factual coordination was found across the political spectrum." 

The paper is not a "show stopper" on the "asymmetry" question. On the contrary, it is, in this respect like the others, something much better than that: a pertinent, informative, and indeed elegant addition to an ongoing scholarly conversation.



Go local, and bring empirical toolkit: Presentation to US Global Change Research Program

Gave a talk last Friday in Washington DC for members of the US Global Change Research Program.  The statutory mandate of USGCRP, an inter-agency office within the Executive Branch, is to supervise "a comprehensive and integrated United States research program which will assist the Nation and the world to understand, assess, predict, and respond to human-induced and natural processes of global change."

I was one of a series of researchers who have been invited to make presentations on the science of science communication (SSC) to USGCRP. It's heartening to see policymakers taking steps to integrate SSC into science-informed decisionmaking. This is exactly the sort of development that the National Academy of Sciences has been trying to promote with the efforts that culminated in its Science of Science Communication colloquium last spring.  Of course, it was also a personal honor to me to be one of the researchers consulted by USGCRP, as it was to be one of those invited to participate in the NAS symposium.

In my talk to USGCRP, I stressed three points:

1. What the problem isn't, and what it really is. The first was the need to conceive of the controversy that surrounds climate change (and a number of other risk issues) as rooted not in generic constraints on human rationality but rather in the species of motivated reasoning associated with cultural cognition. Ordinary members of the public react to issues of disputed fact in the national climate debate in much the same way that sports fans do to disputed officiating calls, and people who are high in cognitive reasoning ability do it all the more aggressively. 

This was the theme of my recent "World View" commentary in Nature. It is also the upshot of the empirical study we presented in our Nature Climate Change paper.

2. Go local. The second point concerned the value in exploiting local decisionmaking settings as venues for promoting open-minded engagement with scientific evidence relating to climate change. When members of a community address issues of climate-change adaptation -- ones relating to rising sea levels, increased incidence of hurricanes, and depletion of water and other natural resources -- their decisionmaking is much more consequential for their individual lives. They also talk with others (neighbors, local businesses, regional utilities and like providers) who are comparably situated, whom they know and are comfortable with, and with whom they share a common idiom.

For these reasons, the group rivalries that fuel culturally motivated reasoning when "climate change" is framed as a national issue tend to dissipate. At the local level, people are more likely to see themselves as members of the same team.

Evidence for this phenomenon consists in the rich array of state-sponsored local adaptation initiatives going on in places like Florida, Arizona, and California. The need for informed science communication strategies to guide such initiatives to constructive outcomes and steer them away from nonconstructive, conflictual ones is reflected in the contrasting experiences of Virginia (good) and North Carolina (bad) in addressing how to assess the potential impact of rising sea levels on their states.

3. Use genuinely evidence-based communication strategies.  The aim of SSC, of course, is to harness empirical observation and measurement to promote our collective interest in policies informed by the best available scientific evidence. But it's a mistake to think empirical observation and measurement end in social scientists' labs.

Surveys and stylized lab experiments are distinctively suited for identifying the general mechanisms that shape cognitive engagement with policy-relevant science. But they rarely generate meaningful, determinate instructions on what to say, to whom, and how. (Social scientists who don't acknowledge this risk lapsing into story-telling.)

Translating SSC insights of that type into useable guides for action is something that will happen in the field--at the site of actual communication. But there too the process must be evidence based. As field communicators use their judgment to adapt their efforts to the insights generated in surveys and lab experiments, they must employ the same forms of disciplined observation and measurement, both so they can calibrate their efforts to achieve maximum effect and so that the evidence their efforts generate isn't wasted but instead preserved and added to the growing stock of information available on what works, what doesn't, and why.

So go local. And bring your empirical-study toolkit with you!


I know my USGCRP talk, which was "open" to the public by telephone link and by internet simulcast of my slides, was also recorded. Maybe it will be put on-line at some point.

But for now, here are slides.


"Overcoming the cultural gap between scientists and the public"--video panel discussion

Last February I got to be part of a great panel discussion at the annual Ocean Sciences Meeting with science communication scholar Max Boykoff, science journalist Richard Harris from NPR, and oceanography scientist Jonathan Sharp. Now the event is available for viewing on the internet. The video reflects super high production values, too!




Honest, constructive & ethically approved response template for science communication researchers replying to "what do I do?" inquiries from science communicators

Dear [fill in name]:

I'd be happy to discuss this [select one: super interesting; interesting; particular] issue with you. I have to warn you, though, that I won't be able to offer you a set of "how to" instructions or guidelines about what you should say, how, or to whom.

As a matter of principle, I won't give that sort of "do's & don't's" advice to you or any other real-world communicator, b/c I think those who use empirical methods to study the general dynamics of science communication shouldn't mislead anyone about the nature of their insights. Studies aimed at identifying general mechanisms of science communication utilize surveys & lab experiments. Those forms of study involve deliberately stripped-down models that abstract from the cacophony of real-world influences that necessarily confound observation and measurement and compromise control of the particular influences of interest to the researcher.

This method is extremely valuable. It is what warrants that the insights such studies generate about mechanisms of consequence to real-world communication are real and can be relied on. The number of conjectures about how science communication works that are plausible far exceeds the number that are actually true.  Pristine models are the best method for plucking the latter out of the vast sea of the former and thus for steering the discipline of science communication toward profitable roads of engagement and away from alluring dead-ends.

Nevertheless, precisely because this method demands abstracting from the particulars of real-world communication settings, it won't produce determinate and meaningfully specific prescriptions for any real-world communication problem.  

Full realization of the utility of this critical research thus depends on field studies that test informed conjectures about how the general mechanisms identified in lab experiments and surveys can be brought to bear on particular communication problems. Design of those types of field studies, in turn, demands the participation of individuals like you, who have situation-specific knowledge relating to the field-communication task at hand.

Social scientists who specialize in acquiring general knowledge of the mechanisms of cognition that shape science communication can play a vital role in field research too because they know what is required for valid observation and measurement of the results that such studies will produce.  But for them to carry on as if the bridge of intelligent field study was unnecessary to connect the mechanisms they have observed in lab experiments and surveys to realistic, concrete, meaningful prescriptions about what to do in particular situations will at best only delay the necessary work that needs to be done, and at worst degrade their findings by making them the fodder of just-so stories--one of the signal abuses of decision science.

Bottom line, then, is that I'm happy to help you think about designing field studies informed by established mechanisms of science communication, or at least making the communication efforts you are already engaged in amenable to empirical observation & measurement. In fact, to be perfectly candid, the possibility of helping you design them & then collecting data that could be shared with others is also something that I am likely to try to sneak into our discussion.

Would this sort of advice be useful to you?  If so, perhaps we could talk [select one: right this second; later today; at your earliest convenience].


[fill in name]


Doing science is different from communicating it -- even when the science is the science of science communication

The primary fallacy that the science of science communication seeks to dispel is that no science of communicating science is necessary -- the truth certifies itself etc.

A corollary is that there is a difference -- a really big, huge one -- between doing and communicating science. Because the truth doesn't certify itself, certification necessarily involves more than producing good science.  What's more, there's no reason to think that those who are good at producing scientific insight will be good at communicating it-- the two activities are very different & thus will involve diverse sets of skills.

Of course, that doesn't mean that scientists themselves shouldn't pay attention to the science of science communication or make use of its insights. Scientists who do become involved in communicating their science to the public -- whether to contribute to the good of making what's known knowable to curious people or to help promote informed public debate -- should know what is known about the challenges of talking to those who don't share their professional way of seeing. Alan Leshner makes this point brilliantly in an editorial this week in Science.

But the doing/communicating distinction does mean that we shouldn't expect scientists to be the ones who bear the burden of communicating science, nor should we expect improving the science communication abilities of scientists to secure the important societal goals associated with science communication. For that we rely on the skill of those whose professional mission it is to communicate science and who are suited by disposition, experience, and utilization of craft-specific knowledge to carry out that mission expertly. 

Science journalists and documentary producers, e.g., are the ones who make the biggest contribution to the good of making what's known to science knowable to curious people generally (including individual scientists, since, like everyone else, they are not in a position to figure out "first hand" more than a tiny fraction of the things that are known by science).  

There are also professional policy analysts who use related skills to make what's known to science known to policymakers in a position to use that science.

All of this applies, btw, to the communication of the science of science communication. That is,  doing the science of science communication is different from communicating the science of science communication, and it's a mistake for anyone, including science communication scientists themselves, to think science communication scientists are the best ones to communicate what they do either to the curious public or to policymakers in need of knowing something about science communication.

It's fine for science communication scientists to try their hand at communicating what they do, of course, and I gave it a shot the day before yesterday in my "World View" column in Nature. But no surprise, there are things even I, on reflection, can see I didn't do very well. 

I very much regret, e.g., that the column could be read (certain sentences definitely could; I see that now that someone has shown me) as designed to help "nonskeptics" understand why "climate skeptics" aren't so "irrational" as they seem--they are being poisoned by the polluted environment, and it is impairing their perception. My point was that everyone's capacity to figure out what the best evidence is--on climate change and myriad other issues--is being compromised by the entanglement of facts that admit of scientific investigation with cultural meanings that people use to signal who they are and who they aren't. Indeed, to speak as if only one side of an issue like climate change, or one cultural constituency generally, is vulnerable to the disorienting impact of this dynamic is to participate in polluting the science communication environment, since it helps cement the pernicious association of particular positions on complex issues with particular cultural identities.

Fortunately, a genuine science communication professional took up the same theme -- that the problem we face is a polluted science communication environment and not a defect in any aspect of democratic citizens' capacity to apprehend the best available evidence -- just a few days before my "World View" column was published.  Tom Zeller is an expert at communicating science & he wrote a very insightful, accessible, and interesting column on the issue in the Huffington Post on Aug. 10.

Glad to know that more people will get the message from him than me. Those of you who got it from me--go look at what he said. That's what I meant to say.


"It's the science communication environment, stupid"-- not stupid people!

My worldview in Nature "World View" feature today.




Get ready for Snyder v. Phelps II: the "motivated reasoning" loophole in the First Amendment

Predictably, in the wake of the Supreme Court's decision the Term before last in Snyder v. Phelps, various states and now Congress have enacted new laws regulating demonstrations or picketing at military funerals.

Snyder overturned a $5 million "emotional distress" judgment against members of the Westboro Church for holding a homophobic demonstration at the funeral of a soldier killed in Iraq.  That award violated the First Amendment, the Court explained, because the "distress" experienced by the slain soldier's father (the plaintiff in the suit) "turned on the content and viewpoint of the message conveyed." Things would have been different, the Court suggested, had the Church been held liable for "interference with the funeral itself."

This ruling involved a straightforward application of the "noncommunicative harm" doctrine, which says that, for purposes of the First Amendment, harms arising from negative reactions to ideas or messages are "noncognizable" -- i.e., not a legitimate basis for regulation. The government can impose limits on political protestors and other speakers only to prevent "noncommunicative harms"--ones that can be defined independently of anyone's negative reaction to the speakers' ideas.

Well, the new laws all purport to prohibit demonstrations that do or could "interfere" with military funerals in ways unrelated to the "content and viewpoint" of demonstrators' messages. Some impose penalties for blocking or obstructing. And others, like the new federal law, create "buffer" zones that restrict the proximity of the demonstrators to the funeral as a prophylactic measure against those kinds of "noncommunicative" harms.

But will the enforcement of these laws really assure that military funeral protestors are held liable only for "noncommunicative harms" and not for expressing contentious -- and in the case of the Westboro Church, genuinely noxious -- ideas? 

Cases based on these laws will turn on facts. Courts will scrutinize the evidence either to determine whether protestors "interfered" with particular funerals or to test the soundness of the governmental determination that without "buffer zones" such interference would be nearly certain to occur. The theory of cultural cognition predicts that factfinders will be unconsciously motivated to conform their assessments of the evidence on such matters to their moral appraisals of the positions the protestors are advocating.

Turns out we've already tested this very prediction. In our paper, "They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. 851 (2012), we presented the results of an experiment in which subjects playing the role of jurors watched a videotape of a protest to determine whether the demonstrators had "pushed," "shoved," "blocked" and otherwise "interfered" with pedestrian access to a building. The answer the subjects gave -- what they saw on one and the same tape -- depended on two things: (1) what we told the subjects about the protest-- that it was one conducted by anti-abortion demonstrators outside an abortion clinic or instead one conducted by opponents of "Don't Ask, Don't Tell" outside a military-recruitment facility; and (2) the cultural outlooks of the subjects. Basically, if subjects found the protestors' message culturally disagreeable, they saw all manner of "noncommunicative harm," whereas if they concurred with the protestors' message, then they saw no such thing.

In fact, the filmed protestors weren't demonstrating against either abortion or "Don't Ask, Don't Tell." They were members of the Westboro Church, filmed at a protest that they conducted at Harvard University in 2009 (the study, too, was conducted well before the Church's case got to the Supreme Court).  Snyder v. Phelps notwithstanding, there's still plenty of room in the law to restrict the funeral protests of the Westboro Church based on the disgust people (quite legitimately) feel toward the Church members' ideas. 

The sort of censorship that sneaks through this "motivated reasoning" loophole in the First Amendment, moreover, doesn't limit itself to protestors as pathetic as the Westboro Church. From the 1960s civil rights and antiwar demonstrators to last year's "Occupy Wall Street" protestors, politically charged speakers have always generated polarized responses: not about whether it's okay to punish protestors for their ideas--there's really no dispute about that; but about whether protestors advocating controversial positions have crossed the line from speech to intimidation--something we in fact all agree they can be punished for doing.  Yet First Amendment doctrine has nothing particularly helpful to say about our predictable tendency to impute danger and harm to those who threaten our worldviews.

There's a lot of John Stuart Mill in U.S. constitutional law -- and I have no problem with that. I only wish there were a little bit more William James and Herbert Simon.


Religion, political party & cognitive reflection ... hmmmm

I posted some stuff recently (here, here & here) looking at CRT, ideology, & motivated reasoning. (Indeed, the posts wiped me out so completely that I've been lying low ever since.)

Here is one additional thing I found; I don't really know what to make of it, so I  invite comment.

It has been shown in multiple papers (here & here & here) that CRT and religiosity are negatively correlated. This finding is treated as evidence that there is a causal link of some sort between religion and the more intuitive, less reflective reasoning style associated with "system 1" in Kahneman's dual process scheme.

In my dataset (a nationally representative panel of 1700 U.S. adults), I find that same negative relationship. But it's moderated by partisan self-identification. The negative impact of religiosity (measured by a scale that combined importance of religion, importance of God, and self-reported church attendance) on CRT gets bigger as respondents' identification with the Democratic Party increases.
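The pattern described here, that religiosity's effect on CRT varies with party identification, is what a cross-product interaction term in a regression captures. Below is a minimal pure-Python sketch on simulated data; the variable names, coefficients, and noise level are all invented for illustration, not taken from the actual panel:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. Pure-Python sketch, no libraries."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] +
         [sum(row[i] * yv for row, yv in zip(X, y))] for i in range(k)]
    for col in range(k):                          # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    b = [0.0] * k
    for i in reversed(range(k)):                  # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(0)
n = 1700  # same size as the panel described in the post
relig = [random.gauss(0, 1) for _ in range(n)]  # religiosity (z-scored)
dem = [random.gauss(0, 1) for _ in range(n)]    # Democratic identification (z-scored)
# Simulate the described pattern: religiosity depresses CRT, and more so
# as Democratic identification rises (the -0.2 and -0.15 are invented)
crt = [0.6 - 0.2 * r - 0.15 * r * d + random.gauss(0, 0.5)
       for r, d in zip(relig, dem)]

# Design matrix: intercept, main effects, and the cross-product term
X = [[1.0, r, d, r * d] for r, d in zip(relig, dem)]
b_const, b_relig, b_dem, b_interact = ols(X, crt)
print(b_relig, b_interact)  # interaction < 0: religiosity's effect grows with Dem ID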

Any ideas about what's going on here? I don't really have any.

I'm also not sure what the significance of this relationship is, if any, for the studies that find religion is associated with low-level or system 1 processing. One difficulty for me in that regard is that I'm sort of puzzled by what the psychological theory is behind the religion/low-CRT finding generally (I mean, historically, plenty of really highly reflective types have been religious, right, Rev. Bayes?); it only seems harder--for me at least!--to articulate a theory if it has to incorporate the religiosity/partisan-identification interaction at the same time.

One other important thing to note is that at least a couple of the studies on religion & cognitive style also included experimental elements, in which manipulations of subjects' reliance on reflection or intuition influenced expressed indicia of religiosity or vice versa. So it's not as if everything about those studies turns on inferences from correlations. But one would still think that interactions between religiosity and other characteristics have to fit with whatever the theory is that connects religiosity to less reflective modes of cognition.

Now I could, of course, go on & try all sorts of additional combinations of demographic variables, including additional cross-product interaction terms. But frankly, I see that sort of approach as pretty mindless. The sorts of demographic variables that predict CRT will tend to co-vary; that goes not double but rather exponential for the various cross-product interactions one can form with them. When all of those get stuck indiscriminately into the regression, it starts to become very unclear what is being modeled (uh, let's see-- how might the simultaneous increase in religion and gender influence CRT holding both race and its interaction with religiosity constant at their "means...")

So if others have suggestions about tests, I'm happy to run them. But before I do, the test requester has to say why the particular combination of variables proposed (including cross-product interaction terms) makes sense. What or who does that combination of variables model, given the sorts of covariances that are being partialed out? Researchers who "over-control" in regressions--putting everything one can think of onto the right-hand side without any thought of what such a model models--really get me steamed!
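A quick simulation illustrates why indiscriminately added cross-product terms muddy a model: even when two predictors are statistically independent, a raw (uncentered) interaction term co-varies strongly with its own components. This is a minimal sketch; the N(5, 1) "demographic" scales are invented for illustration:

```python
import random

def corr(xs, ys):
    """Pearson correlation, pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
n = 5000
# Two *independent* positive-valued predictors, like raw demographic scales
x1 = [random.gauss(5, 1) for _ in range(n)]
x2 = [random.gauss(5, 1) for _ in range(n)]
x1x2 = [a * b for a, b in zip(x1, x2)]  # the cross-product interaction term

c = corr(x1x2, x1)
print(c)  # roughly 0.7: the raw interaction is highly collinear with x1
```

So a regression that stacks up several such terms is partialing heavily correlated quantities out of one another, which is exactly why "what is being modeled" becomes unclear.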


Cultural cognition and the Oregon Citizens' Initiative Review

On a weeklong visit to Salem, Oregon, I find myself reflecting on the recent postings about ideology, motivated cognition, and the ability to process information in an unbiased and reflective way. I’m not here to enjoy the Pacific Northwest but to observe an anomalous public deliberation process, the Oregon Citizens’ Initiative Review.

The recent postings on the Cultural Cognition Project blog have reaffirmed my conviction that (1) ideologically biased information processing happens in all political camps and (2) rising above that remains possible, but it may require unconventional circumstances.

The Oregon Citizens’ Initiative Review (CIR) process was piloted in 2008, made a provisional state process in 2010, then made a permanent part of Oregon elections in 2011, with its official state commission being established earlier this year. In a nutshell, the process brings together a representative random sample of 24 Oregonians to analyze a ballot measure for a full week and then write a one-page Citizens’ Statement that appears in the official state Voters’ Pamphlet.

The idea is that this small deliberative body, which gets to hear from and query issue advocates and opponents, can offer insight to the average voter that helps them make more reflective choices when they complete their ballots. I led a research effort in 2010 that found that for many of the 42% of Oregonians who learned about the CIR process, it had exactly that effect. On one issue, for instance, reading the CIR Statement moved the public from roughly 2/3 supporting to 2/3 opposing a mandatory minimums initiative. (Read full report here.)

While I write, I am watching the 2012 CIR process unfold. The panelists are in the first of their five days of deliberation, and this day is devoted to process training and an overview of the issue. That’s followed by two days of studying the issue, in the company of advocates and other witnesses, with the last two days devoted to writing the Citizens’ Statement, a process that includes regular feedback from advocates and opponents.

One of the veterans from the 2010 process testified earlier today that she found the process exceptional and believed it would work well to address any range of problems, political or otherwise. Perhaps so, but it’s not an inexpensive process. It is certainly cost-effective for a large state, in which the intensive deliberation might help a mass public make decisions that have profound implications, such as the fate of millions of dollars in state revenue/spending. Consequently, interested parties from a few other U.S. states will be observing the second round of these deliberations Aug 20-24 in Portland.

The process has won praise from many citizens and media, but there was a disheartening development this past week that underscores that the CIR represents a break from conventional ways of messaging and campaigning. At least some prominent members of the group Our Oregon chose to launch a quasi-boycott of this first week of deliberation, which studies an initiative they support (one that has implications for how the State of Oregon collects/spends corporate taxes). The critics’ public argument was that they didn’t have the time to inform the judgment of 24 people who won’t have any impact, and they cited the report I co-authored in 2010; I’ve since posted an August 7 op-ed in the Oregonian explaining why the opposite’s the case. In that earlier round of CIR panels, critics from the political right also tried to discredit the CIR, though they did so only after being willing and full participants in the weeklong process.

The points here, as they relate to this blog, are twofold:

  • There are many successful public deliberation processes, and the Oregon CIR represents a newer kind that aims to use small group deliberation to inform the discretion of a mass public. My colleagues and I will continue studying it—this year with help from the Kettering Foundation—to see how well it does this. So far, the evidence is encouraging: With enough care and resources, one can create an intensive deliberative process that appears to get lay citizens past both crude heuristics and more elaborate, but ideologically-motivated reasoning.
  • Those who work in political communication professionally are right to be concerned that processes like the CIR operate beyond their control. This year, as in 2010, I suspect we will see capable advocates and opponents make their case to the citizen panelists, but the outcome will hinge not on the balance of ideological bias (which is roughly even in Oregon) but on the quality of argument, reasoning, and evidence presented. 


How to recognize asymmetry in motivated reasoning if/when you see it

This is the last installment of my series on “probing/prodding” the Republican Brain Hypothesis (RBH).  RBH posits that conservative ideology is associated with dogmatic or unreflective reasoning styles that dispose conservative people to be dismissive of policy-relevant science on climate change and other issues. This is the basic thesis of Chris Mooney’s book The Republican Brain, which ably collects and synthesizes the social science data on which the claim rests.

As I’ve explained, I’m skeptical of RBH. Studies conducted by CCP link conflict over policy-relevant science to a form of motivated reasoning to which citizens of all cultural and ideological persuasions seem worrisomely vulnerable. The problem, I believe, isn’t that citizens with one or another set of values can’t or won’t use reason; it’s that the science communication environment—on which the well-being of all citizens depends—has become contaminated by antagonistic cultural meanings.

In the first installment in this series, I stated why I thought the social science work that RBH rests on is not persuasive: vulnerability to culturally or ideologically motivated reasoning is not associated with any of the low-quality reasoning styles that various studies find to be correlated with conservatives. On the contrary, there is powerful evidence that higher-quality reasoning styles characterized by systematic or reflective thought can magnify the tendency to fit evidence to ideological or cultural predispositions when particular facts (the temperature of the earth; the effectiveness of gun control; the health effects of administering the HPV vaccine to school girls) become entangled in cultural or ideological rivalries.

In the second installment, I described an original study that adds support to this understanding. In that study, I found, first, that one reliable and valid measure of reflective and open-minded reasoning, the Cognitive Reflection Test (CRT), is not meaningfully correlated with ideology; second, that conservatives and liberals display ideologically motivated reasoning when considering evidence of whether CRT is a valid predictor of open-mindedness toward scientific evidence on climate change; and third, that this tendency to credit and dismiss evidence in an ideologically slanted way gets more intense as both liberals and conservatives become more disposed to use reflective or systematic reasoning as measured by their CRT scores.

If this is what happens when people consider evidence on culturally contested issues like climate change (and this is not the only study that suggests it is), then they will end up polarized on policy-relevant science no matter what the correlation might be between their ideologies and the sorts of reasoning-style measures used in the studies collected in Republican Brain.

But there’s one last point to consider: the asymmetry thesis.

Mooney, who is scrupulously fair minded in his collection and evaluation of the data, acknowledges that there is evidence that liberals do sometimes display motivated cognition. But he believes, on balance (and in part based on the studies correlating ideology with quality-of-reasoning measures) that a tendency to defensively resist ideologically threatening facts is greater among Republicans—i.e., that this psychological tendency is asymmetric and not symmetric with respect to ideology.

The study I conducted furnishes some relevant data there, too.

The results I reported suggest that ideologically motivated reasoning occurred in the study subjects: how likely they were to accept that the CRT is valid depended on whether they were told the test had found “more” bias in people who share the subjects’ own ideology or reject it. This ideological slant got bigger, moreover, as subjects’ CRT scores increased.

But the statistical test I used to measure this effect—a multivariate regression—essentially assumed the effect was uniform or linear with respect to subjects’ political leanings. If I had plotted the result of that statistical test on a graph that had political leanings (measured by “z_conservrepub,” a scale that aggregates responses to a liberal-conservative ideology measure and a party-affiliation measure) on the x-axis and subjects’ likelihood of “agreeing” that CRT is valid on the y-axis, the results would have looked like this for subjects who score higher than average on CRT:

The tendency to “agree” or “disagree” depending on the ideological congeniality of doing so looks even for conservative Republicans and liberal Democrats. But it is constrained to do so by the statistical model. 

It is possible that the effect is in fact not even. This figure plots a hypothetical distribution of responses that is consistent with the asymmetry thesis.


Here people seem to adopt an ideologically opportunistic approach to assessing the validity of CRT only as they become more conservative and Republican; as they become more liberal and Democratic, in this hypothetical rendering, they are ideologically “neutral” with respect to their assessments. If one applies a linear model (or, as I did, a logistic regression model that assumes a symmetric sigmoid function), then an “asymmetry” of this sort could well escape notice!

But if one is curious whether an effect might not be linear, one can use a different statistical test. A polynomial regression fits a “curvilinear” model to the data. If the effect is not linear with respect to the explanatory variable (here, political outlook), that will show up in the model, the fit of which can be compared to the linear model.

So I fitted a polynomial model to the data from the experiment by adding an appropriate term (one that squared the effect of the interaction of CRT, ideology, and experimental condition). Lo and behold, that model fit better (see for yourself). The ideologically motivated reasoning that was generated by the experiment, and amplified by subjects’ disposition to engage in reflective information processing, really wasn’t linear!
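The linear-vs-curvilinear comparison can be sketched in a few lines. This is a toy simulation, not the study’s data, and it uses ordinary least squares as a stand-in for the logit model: it fits a straight line and then a quadratic to data with a deliberately “asymmetric” (hinge-shaped) effect, then compares residual sums of squares:

```python
import random

def ols(X, y):
    """OLS via the normal equations (X'X)b = X'y, solved with
    Gaussian elimination. Pure-Python sketch, no libraries."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] +
         [sum(row[i] * yv for row, yv in zip(X, y))] for i in range(k)]
    for col in range(k):                          # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    b = [0.0] * k
    for i in reversed(range(k)):                  # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

def rss(X, y, b):
    """Residual sum of squares for fitted coefficients b."""
    return sum((yv - sum(c * xv for c, xv in zip(b, row))) ** 2
               for row, yv in zip(X, y))

random.seed(2)
xs = [random.uniform(-2, 2) for _ in range(1000)]  # political outlook (made up)
# Hypothetical asymmetric effect: flat on one side of the spectrum,
# sloped on the other -- the hinge shape the asymmetry thesis posits
ys = [max(0.0, x) + random.gauss(0, 0.3) for x in xs]

X1 = [[1.0, x] for x in xs]          # linear model
X2 = [[1.0, x, x * x] for x in xs]   # polynomial model: adds the squared term
rss_lin = rss(X1, ys, ols(X1, ys))
rss_poly = rss(X2, ys, ols(X2, ys))
print(rss_lin, rss_poly)  # the polynomial model fits the hinge markedly better
```

In practice one would compare the two models with a likelihood-ratio or F test rather than raw residual sums, but the logic is the same: the polynomial term earns its keep only if it improves fit, and one still has to plot the fitted curve to see what shape the nonlinearity actually takes.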

But it wasn’t asymmetric in the sense contemplated by the ideological asymmetry thesis either! Where a “curvilinear” model fits best, one has to plot the effects of that model and see what it looks like in order to figure out what the nonlinear effect is and what it means.  This figure (which illustrates the effect captured in the polynomial model by fitting a “smoothed,” local regression line to that model’s predicted values) does that:

I guess I’d say that subjects' biased reasoning was "asymmetrical" with respect to the two experimental conditions: the intensity with which they credited or discredited ideologically congenial evidence was slightly bigger in the condition that advised subjects the results of the (fictional) CRT studies had found "nonskeptics" on climate change to be closed-minded. But that was true, it seems, for those on both sides of the ideological spectrum.

In any event, the picture of what the “curvilinear” effect looks like is not even close to the picture the “asymmetry thesis” predicts. Both liberals and conservatives are engaged in motivated reasoning, and the effect is not meaningfully different for either.

Now, why go through all this?  Well, obviously, because it’s fun! Heck, if you have actually read this post and gotten this far, you must agree.

But there’s also a take-away: One can’t tell whether a motivated reasoning effect is truly “asymmetric” unless one applies the correct statistical test.

It’s pretty much inevitable that an effect observed in any sort of social science experiment won’t be “linear.” Even in the (unlikely) event that the phenomenon one is measuring is in fact genuinely linear, data always have noise, and effects therefore always have lumps with reference to the experimental and other influences that produce them.

If the hypothesis one is testing suggests a linear effect is likely to be right or close to it, one starts with a linear test and sees if the result holds up.

If one has the hypothesis that the effect is not linear, or suspects after looking at the raw data that it might not be and is interested to find out, then one must apply an appropriate nonlinear test. If that test doesn’t corroborate that there is in fact a curvilinear effect, and that the curvilinear model fits better than the linear one, then one doesn’t have sufficient evidence to conclude the effect isn’t linear.

Sometimes when empirical researchers examine ideologically motivated reasoning the raw or summary data might make it look like the effect is “bigger” for one ideological group than the other. But that’s not enough to conclude that the effect fits the asymmetry thesis. Any researcher who wants to test the asymmetry hypothesis still has to do the right statistical test before he or she can conclude that the data really support it.

I’m not aware of anyone who has conducted a study of ideologically motivated reasoning who has reported finding a curvilinear effect that fits the logic of the asymmetry thesis.

If you know of such a study, please tell me!

Post 1 in this "series"

Post 2 in it


I've also plotted the results in the same fashion I did last time--essentially predicting the likelihood that a "high CRT" (CRT = 1.6) "conservative Republican" (+1 SD on z_conservrepub) and a "high CRT" "liberal Democrat" (-1 SD) would view the CRT test as valid in the three experimental conditions.

The estimates in the top graph take the curvilinear effect into account, so they can be understood to be furnishing a reliable picture of the relative magnitude of the motivated reasoning effects for people with those respective characteristics. Looks pretty uniform, I'd say.

Otherwise, while the effects might be just a tad more dramatic, they clearly aren't materially different from the ones brought into view with the ordinary logit model. No real point, I'd say, in treating the polynomial model as "better" in any interesting sense; it was just interesting to find out whether the polynomial model would both fit better and alter the interpretation suggested by the nonpolynomial model.


I agree with Chris Mooney -- on *the* most important thing

Chris Mooney offers this observation in what (I'm sure) will not be his final word on RBH (the Republican Brain Hypothesis):

The closing words of The Republican Brain are these:

I believe that I am right, but I know that I could be wrong. Truth is something that I am driven to search for. Nuance is something I can handle. And uncertainty is something I know I’ll never fully dispel.

These are not the words of someone who is certain in his beliefs—much less certain of the conclusion that Dan Kahan calls the “asymmetry thesis.”

This, in my view, masterfully conveys the correct attitude for anyone who says anything that is subject to observation & testing (I guess there are other things worth saying; put that aside). It's how a person who truly gets science's way of knowing talks (those who don't really get it march around pronouncing this & that has been "proved").

I don't think there's anything wrong, either, with being willing to advance with great conviction, strength, & urgency claims that one holds subject to this attitude. Indeed, it will often be essential to do this: recognizing the provisionality of knowledge is not a reason for failing to advocate & act on the basis of the best available evidence when failure to act could result in dire consequences.

There's a ton of spirit in Mooney but not an ounce of dogmatism.

He communicates important elements of science's way of knowing by his example as well as by his words.

For the record: he could be right -- because I could be wrong to think there is no consequential difference in how contemporary "liberals" & "conservatives" process policy-relevant science. (The problem, I think, is not with how anyone thinks; it is with a polluted communication environment that needs to be repaired and protected.)

The dialectic of conjecture & refutation is a dialog among people who agree on something much more important than anything they might disagree about.


Cognitive reasoning-style measures other than CRT *are* valid--but for what?

Happily, Chris Mooney has indicated that he is planning to take up the points I made in my post on his Republican Brain, and also the data I collected to help test surmises and hunches I formed while reflecting on his book.

I certainly want to give him his chance to present his position in full without the distraction of piecemeal qualifications, clarifications, and counterarguments.

But his first post does make me regret a part of mine, in which I conveyed low regard for what is in fact high-quality work.

The bungling occurs in the paragraph that “questions” the “validity” of self-reported reasoning-style measures and describes the evidence for their validity as “spare.”  

I do happen to believe the Cognitive Reflection Test is more predictive than self-reported measures of vulnerability to one or another form of bias associated with what Kahneman calls System 1 (unreflective, fast) reasoning. I believe this because of various recent studies, including ones in the links & references in that post. It’s also pretty well established that people who score high on all manner of reasoning-quality measures are no better than those who score low at consciously assessing their own vulnerability to bias—so it stands to reason, I think, that we should try to use objective or performance-based measures rather than self-reported ones to predict individual differences in reasoning styles.

But how best to measure reasoning styles and reasoning quality is not a settled issue--indeed, it's at the heart of a very interesting scholarly debate.

Moreover, “validity” is not what’s at stake in that debate; predictive power is.  My language was recklessly imprecise. I am truly embarrassed by that.

What I should have confined myself to saying is that these measures have not been validated as indicators of motivated reasoning. That’s the dynamic that is understood – by Chris and by many scholars, including ones whose work he cites – to be driving ideological polarization over issues that admit of scientific investigation.

Indeed, far from being understood to predict motivated cognition, the sorts of measures of dual process reasoning that came before CRT were understood *not* to. There is ample work showing that higher-level reasoning processes thought to be measured by these scales can be recruited for identity-protection and other sorts of motivated reasoning.

So why suppose that any correlation between them and ideology predicts motivated reasoning or otherwise explains conflict over policy-relevant science? I very much do want to pose a (respectful!) challenge—one aimed at enlarging our mutual understanding—to those scholars who think that disparities in systematic or reflective reasoning, however measured and on the part of any group, are the explanation for this phenomenon.

The study I conducted was meant to explore that. I used CRT as my measure of high-quality reasoning because it is in fact now at the cutting edge of dual process reasoning research, largely as a result of the emphasis that Kahneman puts on it as the best measure of the tendency to use System 2 as opposed to System 1 reasoning. I found no meaningful correlation between CRT and ideology—which seems to me to be reason to doubt that ideology correlates with the sorts of cognitive biases that quality-of-reasoning measures in general are supposed to measure.

But in assessing the thesis of Republican Brain -- that conservative ideology is associated with styles of thought responsible for political conflict over policy-relevant science -- I don’t think anything at all turns on whether CRT or any other measure is better for measuring vulnerability to cognitive biases. What matters is experimental proof of the vulnerability to motivated reasoning—and whether there’s any correlation between that and either ideology or higher-level cognition. That’s what the experiment was designed to show: those who use higher-quality reasoning are not immune from motivated reasoning.

In the study, subjects conformed their own assessment of the validity of CRT as a predictor of bias to their ideological predispositions.

Conservatives did this.  

But so did liberals: they tended to agree that the CRT is a valid test of “reflectiveness” and “open-mindedness” when they were told that people who credit evidence of climate change scored high on it. But when told that people who are skeptical in fact score higher-- well, then they were much more likely to dismiss CRT as invalid for that purpose.

What’s more, that effect was magnified by high scores on CRT: people who are more disposed to system 2 reasoning (as measured by CRT) were much more likely to fit their assessments of CRT’s validity to their ideological predispositions.

So liberals and conservatives displayed motivated reasoning. And they both did it more if they were the sorts of people inclined to use high-quality cognition as reflected in a very prominent measure of reflective, open-minded reasoning.

That’s evidence, I think, that the brains of liberals and conservatives are alike in this respect.  And it’s all the more reason to doubt that correlations between ideology and reasoning-style measures can help us to figure out why or when deliberations over policy-relevant science are prone to political polarization or what we should do to try to minimize that sad spectacle. 



NAS "Science of Science Communication" colloquium presentations on-line

I see that the excellent presentations made at the NAS's Sackler "Science of Science Communication" colloquium in May are now on line.

Here's mine:


Some experimental data on CRT, ideology, and motivated reasoning (probing Mooney's Republican Brain)

This is about my zillionth post on the so-called “asymmetry thesis”—the idea that culturally or ideologically motivated reasoning is concentrated disproportionately at one end of the political spectrum, viz., the right.

But it is also my second post commenting specifically on Chris Mooney’s Republican Brain, which very elegantly and energetically defends the asymmetry thesis. As I said in the first, I disagree with CM’s thesis, but I really really like the book. Indeed, I like it precisely because the cogency, completeness, and intellectual openness of CM’s synthesis of the social science support for the asymmetry thesis helped me to crystallize the basis of my own dissatisfaction with that position and the evidence on which it rests.

I’m not trying to be cute here.

I believe in the Popperian idea that collective knowledge advances through the perpetual dialectic of conjecture and refutation. We learn things through the constant probing and prodding of empirically grounded claims that have themselves emerged from the same sort of challenging of earlier ones.

If this is how things work, then those who succeed in formulating a compelling claim in a manner that enables productive critical engagement create conditions conducive to learning for everyone. They enable those who disagree to more clearly explain why (or show why by collecting their own evidence). And in so doing, they assure those who agree with the claim that it will not evade the sort of persistent testing that is the only basis for their continuing assent to it.

A. Recapping my concern with the existing data

In the last post, I reduced my main reservations with the evidence for the asymmetry thesis to three:

First, I voiced uneasiness with the “quality of reasoning” measures that figure in many of the studies Republican Brain relies on to show conservatives are closed minded or unreflective. Those that rely on dogmatic “personality” styles and on people’s own subjective characterization of their “open-mindedness” or amenability to reasoning are inferior, in my view, to objective, performance-based reasoning measures, particularly Numeracy and the Cognitive Reflection Test (CRT), which have recently been shown to be much better predictors of vulnerability to one or another form of cognitive bias. CRT is the measure that figures in Kahneman’s justly famous “fast/slow”-“System 1/2” dual process theory.

Second, and even more fundamentally, I noted that there’s little evidence that any sort of quality of reasoning measure helps to identify vulnerability to motivated cognition—the tendency to unconsciously fit one’s assessment of evidence to some goal or interest extrinsic to forming an accurate belief. Indeed, I pointed out that there is evidence that the people highest in CRT and numeracy are more disposed to display ideologically motivated cognition. Mooney believes—and I agree—that ideologically motivated reasoning is at the root of disputes like climate change. But if the disposition to engage in higher quality, reflective reasoning doesn’t immunize people from motivated reasoning, then one can’t infer anything about disputes like climate change from studies that correlate the disposition to engage in higher quality, reflective reasoning with ideology.

Third, we should be relying instead on experiments that test for motivated reasoning directly. I suggested that many experiments that purport to find evidence of motivated reasoning aren’t well designed. They measure only whether people furnished with arguments change their minds; that’s consistent with unbiased as well as biased assessments of the evidence at hand. To be valid proof of motivated reasoning, studies must manipulate the ideological motivation subjects have for crediting one and the same piece of evidence.  Studies that do this show that conservatives and liberals both opportunistically adjust their weighting of evidence conditional on its support for ideologically satisfying conclusions.

B. Some more data for consideration

Okay. Now I will present some evidence from a study that I designed with all three of these points—ones, again, that Mooney’s book convinced me are the nub of the matter—in mind. 

That study tests three hypotheses:

(1) that there isn’t a meaningful connection between ideology and the disposition to use higher level, systematic cognition (“System 2” reasoning, in Kahneman’s terms) or open-mindedness, as measured by CRT;

(2) that a properly designed study will show that liberals as well as conservatives are prone to motivated reasoning on one particular form of policy-relevant scientific evidence: studies purporting to find that quality-of-reasoning measures show those on one or the other side of the climate-change debate are “closed minded” and unreflective; and

 (3) that a disposition to engage in higher-level cognition (as measured by CRT) doesn’t counteract but in fact magnifies ideologically motivated cognition.

1. Relationship of CRT to ideology

This study involved a diverse national sample of U.S. adults (N = 1,750). I collected data on various demographic characteristics, including the subjects’ self-reported ideology and political-party allegiance.  And I had the subjects complete the CRT test.

I’ve actually done this before, finding only tiny and inconclusive correlations between ideology, culture, and party affiliation, on the one hand, and CRT, on the other.

The same was true this time. Consistent with the first hypothesis, there was no meaningful correlation between CRT and either liberal-conservative ideology (measured with a standard 5-point scale) or cultural individualism (measured with our CC worldview scales).

There were weak correlations between CRT and both cultural hierarchy and political party affiliation. But the direction of the effects was contrary to the Republican Brain hypothesis.

That is, both hierarchy (as measured with the CC scale) and being a Republican (as measured by a standard 7-point partisan-identification measure) predicted higher levels of reflectiveness and analytical thinking as measured by CRT.

But the effects, as I mentioned (and as in the past), were minuscule.  I’ve set to the left the results of an ordered logistic regression that predicts the likelihood that someone who identifies as a “Democrat” or a “Republican” (2 & 6 on the 7-point scale), respectively, will answer 0, 1, 2, or all 3 CRT questions correctly (you can click here to see the regression outputs). For comparison, I’ve also included such models for being religious as opposed to nonreligious and being female as opposed to male, both of which (here & here, e.g.) are known to be associated with lower CRT scores and which have bigger effects than does party affiliation.
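For readers who want to see the mechanics, here is a minimal sketch of how an ordered logistic (proportional-odds) model of this kind converts a predictor into probabilities of answering 0, 1, 2, or 3 CRT items correctly. The coefficient and cutpoints below are invented for illustration; they are not the study’s estimates.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(x, beta, cutpoints):
    """Probabilities of answering 0, 1, 2, or 3 CRT items correctly
    under a proportional-odds (ordered logistic) model:
    P(Y <= j) = logistic(cutpoint_j - beta * x)."""
    cum = [logistic(tau - beta * x) for tau in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical values (NOT the study's estimates): a tiny positive
# party-ID coefficient, consistent with the trivial effect described above.
beta = 0.05
cutpoints = [0.5, 1.6, 2.8]

dem = ordered_logit_probs(2, beta, cutpoints)  # "Democrat" = 2 on 7-pt scale
rep = ordered_logit_probs(6, beta, cutpoints)  # "Republican" = 6
```

With a coefficient this small, the two predicted distributions over 0–3 correct answers are nearly identical, which is exactly the point of the comparison in the text.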

Hard to believe that the trivial difference between Republicans and Democrats on CRT could explain much of anything, much less the intense conflicts we see over policy-relevant science in our society.

2. Ideologically motivated reasoning—relating to the asymmetry of ideologically motivated reasoning!

The study also had an experimental component.

The subjects were divided into three groups or experimental “conditions.”  In all of them, subjects indicated whether they agreed or disagreed--and how strongly (on a six-point scale)--with the statement:

I think the word-problem test I just took [i.e., the CRT test] supplies good evidence of how reflective and open-minded someone is.

But before they did, they received background information that varied between the experimental conditions.

In the “skeptics-biased” condition, subjects were advised:

Some psychologists believe the questions you have just answered measure how reflective and open-minded someone is.

In one recent study, a researcher found that people who accept evidence of climate change tend to get more answers correct than those who reject evidence of climate change. If the test is a valid way to measure open-mindedness, that finding would imply that those who believe climate change is happening are more open-minded than those who are skeptical that climate change is happening.

In contrast, in the “nonskeptics-biased” condition, subjects were advised:

Some psychologists believe the questions you have just answered measure how reflective and open-minded someone is.

In one recent study, a researcher found that people who reject evidence of climate change tend to get more answers correct than those who accept evidence of climate change. If the test is a valid way to measure open-mindedness, that finding would imply that those who are skeptical that climate change is happening are more open-minded than those who believe that climate change is happening.

Finally, in the “control” condition, subjects read simply that “[s]ome psychologists believe the questions you have just answered measure how reflective and open-minded someone is” before they indicated whether they themselves agreed that the test was a valid measure of such a disposition.

You can probably see where I’m going with this.

All the subjects are indicating whether they believe the CRT test is a valid measure of reflection and open-mindedness and all are being given the same evidence that it is—namely, that “[s]ome psychologists believe” that that’s what it does.

Two-thirds of them are also being told, of course, that people who take one position on climate change did better than the other. Why should that make any difference? That’s just a result (like the findings of correlations between ideology and quality-of-reasoning measures in the studies described in Republican Brain); it’s not evidence one way or the other on whether the test is valid.

However, this additional information does either threaten or affirm the identities of the subjects to the extent that they (like most people) have a stake in believing that people who share their values are smart, open-minded people who form the “right view” on important and contentious political issues. Identity-protection is an established basis for motivated cognition—indeed, the primary one, various studies have concluded, for disputes that seem to divide groups on political grounds.

We didn’t ask subjects whether they believed that climate change was real or a serious threat or anything.  But, again, we did measure their political ideologies and political party allegiances (their cultural worldviews, too, but I’m going to focus on political measures, since that’s what most of the researchers featured in Republican Brain focus on).

Accordingly, if people tend to agree that the CRT “supplies good evidence of how reflective and open-minded someone is” when the test is represented as showing that people who hold the position associated with their political identity are “open minded” and “reflective” but disagree when the test is represented as showing that such people are “biased,” that would be strong evidence of motivated cognition. They would then be assigning weight to one and the same piece of evidence conditional on the perceived ideological congeniality of the conclusion that it supports.

To analyze the results, I used a regression model that allowed me to assess simultaneously the influence of ideology and political party affiliation, the experimental group the subjects were in, and the subjects’ own CRT scores.

These figures (which are derived from the regression output that you can also find here) illustrate the results. On the left, you see the likelihood that someone who is either a “liberal Democrat” or a “conservative Republican” and who is “low” in CRT (someone who got 0 answers correct—as was true for 60% of the sample; most people aren’t inclined to use System 2 reasoning, so that’s what you’d expect) would “agree” the CRT is a valid test of reflective and open-minded thinking in the three conditions.

Not surprisingly, there’s not any real disagreement in the control condition. But in the “skeptic biased” condition—in which subjects were told that those who don’t accept evidence of climate change tended to score low—low CRT liberal Democrats were much more likely to “agree” than were low CRT conservative Republicans. That’s a motivated reasoning effect.

Interestingly, there was no ideological division among low CRT subjects in the “nonskeptic biased” condition—the one in which subjects were told that those who “accept” evidence of climate change do worse.

But there was plenty of ideological disagreement in the “nonskeptic biased” condition among subjects who scored higher in CRT! There was only about a 25% likelihood that a liberal Democrat who was “high” in CRT (I simulated 1.6 answers correct—“87th percentile” or + 1 SD—for graphic expositional purposes) would agree that CRT was valid if told that the test predicted “closed mindedness” among those who “accept evidence” of climate change.  There was a bit higher than 50% chance, though, that a “high” CRT conservative Republican would.

The positions of subjects like these flipped around in the “skeptic biased” condition.  That’s motivated reasoning.

It’s also motivated reasoning that gets higher as subjects become more disposed to use systematic or System 2 reasoning as measured by CRT.
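The pattern just described -- no split in the control condition, a split in the “biased” conditions, and a split that widens with CRT score -- can be sketched as a logistic model with an ideology × condition × CRT interaction. Everything below (coefficients and function names alike) is hypothetical, chosen only to reproduce the qualitative shape of the result, not the study’s actual estimates.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_agree(conserv, skeptic, nonskeptic, crt):
    """Hypothetical probability of agreeing that the CRT is a valid
    measure of open-mindedness.
    conserv: -1 = liberal Democrat, +1 = conservative Republican
    skeptic / nonskeptic: dummies for the two "biased" conditions
    crt: number of CRT items answered correctly (0-3)."""
    z = (0.3                          # baseline agreement (control condition)
         + 0.1 * crt                  # weak main effect of CRT score
         - 0.8 * conserv * skeptic    # low-CRT split when skeptics are called biased
         + 0.5 * conserv * (nonskeptic - skeptic) * crt)  # split magnified by CRT
    return logistic(z)
```

In the control condition the two types agree at identical rates; in the “skeptic biased” condition liberals agree more than conservatives even at CRT = 0; in the “nonskeptic biased” condition the split only emerges, and then grows, as CRT scores rise -- mirroring the results reported above.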

That’s evidence consistent with hypotheses two and three.

The result is also consistent with the finding from the CCP Nature Climate Change study, which found that those who are high in science literacy and numeracy (a component of which is CRT) are the most culturally polarized on both climate change and nuclear power.  The basic idea behind the hypothesis is that in a “toxic science communication climate”—one in which positions on issues of fact become symbols of group identity—everyone has a psychic incentive to fit evidence to their group commitments. Those who are high in science literacy and technical reasoning ability are able to use those skills to get an even better fit. . . .

None of this, moreover, is consistent with the sort of evidence that drives the asymmetry thesis:

(1) There’s not a meaningful correlation here between partisan identity and one super solid measure of higher level cognitive reasoning.

(2) What’s more, higher-level reasoning doesn’t mitigate motivated reasoning. On the contrary, it aggravates it. So if motivated reasoning is the source of political conflict on policy-relevant science (a proposition that is assumed, basically, by proponents of the asymmetry thesis), then whatever correlation might exist between low-level cognitive reasoning capacity and conservativism can’t be the source of such conflict.

(3) In a valid experimental design, there’s motivated reasoning all around—not just on the part of Republicans.

But is the level of motivated reasoning in this experiment genuinely “symmetrical” with respect to Democrats and Republicans? Is the effect “uniform” across the ideological spectrum?

Frankly, I’m not sure that that question matters. There’s enough motivated reasoning across the ideological spectrum (and cultural spectra)—this study and others suggest—for everyone to be troubled and worried.

But the data do still have something to say about this issue. Indeed, they enable me to say something directly about it, because there’s enough data to employ the right sorts of statistical tests (ones that involve fitting “curvilinear” or polynomial models rather than linear ones to the data).
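For the curious, here is a toy sketch of what that kind of test looks like: compare a linear fit to a quadratic (curvilinear) fit of the motivated-reasoning effect across the ideology scale. The data points below are invented purely to illustrate the mechanics; they are not the study’s numbers.

```python
import numpy as np

# Invented data (illustration only): mean ideology per group
# (1 = very liberal .. 5 = very conservative) and the size of that group's
# motivated-reasoning effect in an experiment like the one above.
ideology = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
effect = np.array([0.30, 0.18, 0.05, 0.20, 0.32])

def sse(coeffs):
    """Sum of squared residuals for a polynomial fit."""
    return float(np.sum((np.polyval(coeffs, ideology) - effect) ** 2))

linear_fit = np.polyfit(ideology, effect, 1)  # straight line: uniform effect
quad_fit = np.polyfit(ideology, effect, 2)    # curvilinear: effect varies

# If the curvilinear model fits substantially better than the linear one,
# the effect is not uniform across the spectrum -- the kind of evidence
# that bears on the "symmetry" vs. "asymmetry" question.
```

With the U-shaped toy data above, the quadratic fits far better than the line, which is what a finding of motivated reasoning concentrated at both ideological extremes would look like.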

But I’ve said enough for now, don’t you think?

I’ll discuss that another time (soon, I promise).

Post 1 & Post 3 in this "series"