popular papers

Science Curiosity and Political Information Processing

What Is the "Science of Science Communication"?

Climate-Science Communication and the Measurement Problem

Ideology, Motivated Cognition, and Cognitive Reflection: An Experimental Study

'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment

A Risky Science Communication Environment for Vaccines

Motivated Numeracy and Enlightened Self-Government

Making Climate Science Communication Evidence-based—All the Way Down 

Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law 

Cultural Cognition of Scientific Consensus

The Tragedy of the Risk-Perception Commons: Science Literacy and Climate Change

"They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction 

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

Fixing the Communications Failure

Why We Are Poles Apart on Climate Change

The Cognitively Illiberal State 

Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study

Cultural Cognition of the Risks and Benefits of Nanotechnology

Whose Eyes Are You Going to Believe? An Empirical Examination of Scott v. Harris

Cultural Cognition and Public Policy

Culture, Cognition, and Consent: Who Perceives What, and Why, in "Acquaintance Rape" Cases

Culture and Identity-Protective Cognition: Explaining the White Male Effect

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk

Cultural Cognition as a Conception of the Cultural Theory of Risk


Get ready for Snyder v. Phelps II: the "motivated reasoning" loophole in the First Amendment

Predictably, in the wake of the Supreme Court's decision the Term before last in Snyder v. Phelps, various states and now Congress have enacted new laws regulating demonstrations or picketing at military funerals.

Snyder overturned a $5 million "emotional distress" judgment against members of the Westboro Church for holding a homophobic demonstration at the funeral of a soldier killed in Iraq.  That award violated the First Amendment, the Court explained, because the "distress" experienced by the slain soldier's father (the plaintiff in the suit) "turned on the content and viewpoint of the message conveyed." Things would have been different, the Court suggested, had the Church been held liable for "interference with the funeral itself."

This ruling involved a straightforward application of the "noncommunicative harm" doctrine, which says that, for purposes of the First Amendment, harms arising from negative reactions to ideas or messages are "noncognizable" -- i.e., not a legitimate basis for regulation. The government can impose limits on political protestors and other speakers only to prevent "noncommunicative harms"--ones that can be defined independently of anyone's negative reaction to the speakers' ideas.

Well, the new laws all purport to prohibit demonstrations that do or could "interfere" with military funerals in ways unrelated to the "content and viewpoint" of demonstrators' messages. Some impose penalties for blocking or obstructing. And others, like the new federal law, create "buffer" zones that restrict the proximity of the demonstrators to the funeral as a prophylactic measure against those kinds of "noncommunicative" harms.

But will the enforcement of these laws really assure that military funeral protestors are held liable only for "noncommunicative harms" and not for expressing contentious -- and in the case of the Westboro Church, genuinely noxious -- ideas? 

Cases based on these laws will turn on facts. Courts will scrutinize the evidence either to determine whether protestors "interfered" with particular funerals or to test the soundness of the governmental determination that without "buffer zones" such interference would be nearly certain to occur. The theory of cultural cognition predicts that factfinders will be unconsciously motivated to conform their assessments of the evidence on such matters to their moral appraisals of the positions the protestors are advocating.

Turns out we've already tested this very prediction. In our paper, "They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction, 64 Stan. L. Rev. 851 (2012), we presented the results of an experiment in which subjects playing the role of jurors watched a videotape of a protest to determine whether the demonstrators had "pushed," "shoved," "blocked" and otherwise "interfered" with pedestrian access to a building. The answer the subjects gave -- what they saw on one and the same tape -- depended on two things: (1) what we told the subjects about the protest -- that it was one conducted by anti-abortion demonstrators outside an abortion clinic or instead one conducted by opponents of "Don't Ask, Don't Tell" outside a military-recruitment facility; and (2) the cultural outlooks of the subjects. Basically, if subjects found the protestors' message culturally disagreeable, they saw all manner of "noncommunicative harm," whereas if they concurred with the protestors' message, then they saw no such thing.

In fact, the filmed protestors weren't demonstrating against either abortion or "Don't Ask, Don't Tell." They were members of the Westboro Church, filmed at a protest that they conducted at Harvard University in 2009 (the study, too, was conducted well before the Church's case got to the Supreme Court).  Snyder v. Phelps notwithstanding, there's still plenty of room in the law to restrict the funeral protests of the Westboro Church based on the disgust people (quite legitimately) feel toward the Church members' ideas. 

The sort of censorship that sneaks through this "motivated reasoning" loophole in the First Amendment, moreover, doesn't limit itself to protestors as pathetic as the Westboro Church. From 1960s civil rights and antiwar demonstrators to last year's "Occupy Wall Street" protestors, politically charged speakers have always generated polarized responses: not about whether it's okay to punish protestors for their ideas -- there's really no dispute about that; but about whether protestors advocating controversial positions have crossed the line from speech to intimidation -- something we in fact all agree they can be punished for doing. Yet First Amendment doctrine has nothing particularly helpful to say about our predictable tendency to impute danger and harm to those who threaten our worldviews.

There's a lot of John Stuart Mill in U.S. constitutional law -- and I have no problem with that. I only wish there were a little bit more William James and Herbert Simon.


Religion, political party & cognitive reflection ... hmmmm

I posted some stuff recently (here, here & here) looking at CRT, ideology, & motivated reasoning. (Indeed, the posts wiped me out so completely that I've been lying low ever since.)

Here is one additional thing I found; I don't really know what to make of it, so I invite comment.

It has been shown in multiple papers (here & here & here) that CRT and religiosity are negatively correlated. This finding is treated as evidence that there is a causal link of some sort between religion and the more intuitive, less reflective reasoning style associated with "system 1" in Kahneman's dual process scheme.

In my dataset (a nationally representative panel of 1700 U.S. adults), I find that same negative relationship. But it's moderated by partisan self-identification. The negative impact of religiosity (measured by a scale that combined importance of religion, importance of God, and self-reported church attendance) on CRT gets bigger as respondents' identification with the Democratic Party increases.
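One way to see this sort of moderation is to compare the religiosity-CRT slope within each partisan subgroup. Here is a minimal sketch in Python; the data are simulated (the variable names, coding, and effect sizes are my own illustrative assumptions, not the actual panel data):

```python
import random

def ols_slope(xs, ys):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(1)

# Simulated data (NOT the CCP panel): the religiosity -> CRT slope is
# made more negative for Democratic identifiers, mimicking the reported
# religiosity x party-identification interaction.
rows = []
for _ in range(2000):
    dem = random.randint(0, 1)       # 1 = Democratic identifier, 0 = Republican
    relig = random.gauss(0, 1)       # standardized religiosity score
    slope = -0.1 - 0.3 * dem         # the interaction term at work
    crt = 1.0 + slope * relig + random.gauss(0, 0.5)
    rows.append((dem, relig, crt))

slopes = {}
for party, label in [(1, "Democrats"), (0, "Republicans")]:
    xs = [r for d, r, c in rows if d == party]
    ys = [c for d, r, c in rows if d == party]
    slopes[label] = ols_slope(xs, ys)

print({k: round(v, 2) for k, v in slopes.items()})
```

In a dataset with this structure, the recovered within-group slopes are both negative, but visibly steeper for the Democratic subgroup, which is what a significant cross-product term in the full regression is summarizing.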

Any ideas about what's going on here? I don't really have any.

I'm also not sure what the significance of this relationship is, if any, for the studies that find religion is associated with low-level or system 1 processing. One difficulty for me in that regard is that I'm sort of puzzled by what the psychological theory behind the religion/low-CRT finding is in general (I mean, historically, plenty of highly reflective types have been religious, right, Rev. Bayes?); it only seems harder, for me at least, to articulate a theory that must also incorporate the religiosity/partisan-identification interaction.

One other important thing to note is that at least a couple of the studies on religion & cognitive style also included experimental elements, in which manipulations of subjects' reliance on reflection or intuition influenced expressed indicia of religiosity or vice versa. So it's not as if everything about those studies turns on inferences from correlations. But one would still think that interactions between religiosity and other characteristics have to fit with whatever the theory is that connects religiosity to less reflective modes of cognition.

Now I could, of course, go on & try all sorts of additional combinations of demographic variables, including additional cross-product interaction terms. But frankly, I see that sort of approach as pretty mindless. The sorts of demographic variables that predict CRT will tend to co-vary, and that problem grows not just twofold but exponentially across the various cross-product interactions one can form with them. When all of those get stuck indiscriminately into the regression, it becomes very unclear what is being modeled (uh, let's see: how might a simultaneous increase in religiosity and gender influence CRT, holding both race and its interaction with religiosity constant at their means...).
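A toy illustration of why piling on cross-product terms muddies interpretation: on raw (uncentered) scales, an interaction term can correlate strongly with its own components even when those components are statistically independent, so the regression ends up partialing out overlapping variance whose meaning is hard to state. The data below are simulated, and the scales are my own assumptions:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(2)

# Two INDEPENDENT predictors on raw, uncentered scales (think of two
# survey items scored around a midpoint of 4).
x1 = [random.gauss(4, 1) for _ in range(5000)]
x2 = [random.gauss(4, 1) for _ in range(5000)]
x1x2 = [a * b for a, b in zip(x1, x2)]  # the cross-product "interaction" term

print(round(pearson(x1, x2), 2))    # near zero: the predictors are unrelated
print(round(pearson(x1, x1x2), 2))  # large: the cross-product tracks x1 anyway
```

Centering the predictors before forming the product would remove most of that artifactual overlap, which is one reason mindlessly stacking raw interaction terms makes a model hard to interpret.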

So if others have suggestions about tests, I'm happy to run them. But before I do, the test requester has to say why the particular combination of variables proposed (including cross-product interaction terms) makes sense. What or who does that combination of variables model, given the sorts of covariances that are being partialed out? Researchers who "over-control" in regressions -- putting everything they can think of onto the right-hand side without any thought of what such a model models -- really get me steamed!


Cultural cognition and the Oregon Citizens' Initiative Review

On a weeklong visit to Salem, Oregon, I find myself reflecting on the recent postings about ideology, motivated cognition, and the ability to process information in an unbiased and reflective way. I’m not here to enjoy the Pacific Northwest but to observe an anomalous public deliberation process, the Oregon Citizens’ Initiative Review.

The recent postings on the Cultural Cognition Project blog have reaffirmed my conviction that (1) ideologically biased information processing happens in all political camps and (2) rising above that remains possible, but it may require unconventional circumstances.

The Oregon Citizens’ Initiative Review (CIR) process was piloted in 2008, made a provisional state process in 2010, then made a permanent part of Oregon elections in 2011, with its official state commission being established earlier this year. In a nutshell, the process brings together a representative random sample of 24 Oregonians to analyze a ballot measure for a full week and then write a one-page Citizens’ Statement that appears in the official state Voters’ Pamphlet.

The idea is that this small deliberative body, which gets to hear from and query issue advocates and opponents, can offer insight to the average voter that helps them make more reflective choices when they complete their ballots. I led a research effort in 2010 that found that for many of the 42% of Oregonians who learned about the CIR process, it had exactly that effect. On one issue, for instance, reading the CIR Statement moved the public from roughly 2/3 supporting to 2/3 opposing a mandatory minimums initiative. (Read full report here.)

While I write, I am watching the 2012 CIR process unfold. The panelists are in the first of their five days of deliberation, and this day is devoted to process training and an overview of the issue. That’s followed by two days of studying the issue, in the company of advocates and other witnesses, with the last two days devoted to writing the Citizens’ Statement, a process that includes regular feedback from advocates and opponents.

One of the veterans from the 2010 process testified earlier today that she found the process exceptional and believed it would work well to address any range of problems, political or otherwise. Perhaps so, but it’s not an inexpensive process. It is certainly cost-effective for a large state, in which the intensive deliberation might help a mass public make decisions that have profound implications, such as the fate of millions of dollars in state revenue/spending. Consequently, interested parties from a few other U.S. states will be observing the second round of these deliberations Aug 20-24 in Portland.

The process has won praise from many citizens and media, but there was a disheartening development this past week that underscores that the CIR represents a break from conventional ways of messaging and campaigning. At least some prominent members of the group Our Oregon chose to launch a quasi-boycott of this first week of deliberation, which studies an initiative they support (one that has implications for how the State of Oregon collects/spends corporate taxes). The critics’ public argument was that they didn’t have the time to inform the judgment of 24 people who won’t have any impact, and they cited the report I co-authored in 2010; I’ve since posted an August 7 op-ed in the Oregonian explaining why the opposite’s the case. In that earlier round of CIR panels, critics from the political right also tried to discredit the CIR, though they did so only after being willing and full participants in the weeklong process.

The points here, as they relate to this blog, are twofold:

  • There are many successful public deliberation processes, and the Oregon CIR represents a newer kind that aims to use small group deliberation to inform the discretion of a mass public. My colleagues and I will continue studying it—this year with help from the Kettering Foundation—to see how well it does this. So far, the evidence is encouraging: With enough care and resources, one can create an intensive deliberative process that appears to get lay citizens past both crude heuristics and more elaborate but ideologically motivated reasoning.
  • Those who work in political communication professionally are right to be concerned that processes like the CIR operate beyond their control. This year, as in 2010, I suspect we will see capable advocates and opponents make their case to the citizen panelists, but the outcome will hinge not on the balance of ideological bias (which is roughly even in Oregon) but on the quality of argument, reasoning, and evidence presented. 


How to recognize asymmetry in motivated reasoning if/when you see it

This is the last installment of my series on “probing/prodding” the Republican Brain Hypothesis (RBH).  RBH posits that conservative ideology is associated with dogmatic or unreflective reasoning styles that dispose conservative people to be dismissive of policy-relevant science on climate change and other issues. This is the basic thesis of Chris Mooney’s book The Republican Brain, which ably collects and synthesizes the social science data on which the claim rests.

As I’ve explained, I’m skeptical of RBH. Studies conducted by CCP link conflict over policy-relevant science to a form of motivated reasoning to which citizens of all cultural and ideological persuasions seem worrisomely vulnerable. The problem, I believe, isn’t that citizens with one or another set of values can’t or won’t use reason; it’s that the science communication environment -- on which the well-being of all citizens depends -- has become contaminated by antagonistic cultural meanings.

In the first installment in this series, I stated why I thought the social science work that RBH rests on is not persuasive: vulnerability to culturally or ideologically motivated reasoning is not associated with any of the low-quality reasoning styles that various studies find to be correlated with conservatives. On the contrary, there is powerful evidence that higher-quality reasoning styles characterized by systematic or reflective thought can magnify the tendency to fit evidence to ideological or cultural predispositions when particular facts (the temperature of the earth; the effectiveness of gun control; the health effects of administering the HPV vaccine to school girls) become entangled in cultural or ideological rivalries.

In the second installment, I described an original study that adds support to this understanding. In that study, I found, first, that one reliable and valid measure of reflective and open-minded reasoning, the Cognitive Reflection Test (CRT), is not meaningfully correlated with ideology; second, that conservatives and liberals display ideologically motivated reasoning when considering evidence of whether CRT is a valid predictor of open-mindedness toward scientific evidence on climate change; and third, that this tendency to credit and dismiss evidence in an ideologically slanted way gets more intense as both liberals and conservatives become more disposed to use reflective or systematic reasoning as measured by their CRT scores.

If this is what happens when people consider evidence on culturally contested issues like climate change (and this is not the only study that suggests it is), then they will end up polarized on policy-relevant science no matter what the correlation might be between their ideologies and the sorts of reasoning-style measures used in the studies collected in Republican Brain.

But there’s one last point to consider: the asymmetry thesis.

Mooney, who is scrupulously fair minded in his collection and evaluation of the data, acknowledges that there is evidence that liberals do sometimes display motivated cognition. But he believes, on balance (and in part based on the studies correlating ideology with quality-of-reasoning measures) that a tendency to defensively resist ideologically threatening facts is greater among Republicans—i.e., that this psychological tendency is asymmetric and not symmetric with respect to ideology.

The study I conducted furnishes some relevant data there, too.

The results I reported suggest that ideologically motivated reasoning occurred in the study subjects: how likely they were to accept that the CRT is valid depended on whether they were told the test had found “more” bias in people who share the subjects’ own ideology or reject it. This ideological slant got bigger, moreover, as subjects’ CRT scores increased.

But the statistical test I used to measure this effect—a multivariate regression—essentially assumed the effect was uniform or linear with respect to subjects’ political leanings. If I had plotted the result of that statistical test on a graph that had political leanings (measured by “z_conservrepub,” a scale that aggregates responses to a liberal-conservative ideology measure and a party-affiliation measure) on the x-axis and subjects’ likelihood of “agreeing” that CRT is valid on the y-axis, the results would have looked like this for subjects who score higher than average on CRT:

The tendency to “agree” or “disagree” depending on the ideological congeniality of doing so looks even for conservative Republicans and liberal Democrats. But it is constrained to do so by the statistical model. 

It is possible that the effect is in fact not even. This figure plots a hypothetical distribution of responses that is consistent with the asymmetry thesis.


Here people seem to adopt an ideologically opportunistic approach to assessing the validity of CRT only as they become more conservative and Republican; as they become more liberal and Democratic, in this hypothetical rendering, they are ideologically “neutral” with respect to their assessments. If one applies a linear model (or, as I did, a logistic regression model that assumes a symmetric sigmoid function), then an “asymmetry” of this sort could well escape notice!

But if one is curious whether an effect might not be linear, one can use a different statistical test. A polynomial regression fits a “curvilinear” model to the data. If the effect is not linear with respect to the explanatory variable (here, political outlook), that will show up in the model, the fit of which can be compared to the linear model.

So I fitted a polynomial model to the data from the experiment by adding an appropriate term (one that squared the effect of the interaction of CRT, ideology, and experimental condition). Lo and behold, that model fit better (see for yourself). The ideologically motivated reasoning that was generated by the experiment, and amplified by subjects’ disposition to engage in reflective information processing, really wasn’t linear!
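The comparison can be sketched in miniature. The code below (pure Python, on simulated data rather than the actual study data; the coefficients, sample size, and nonlinear "truth" are all illustrative assumptions) fits a linear and a curvilinear logistic model to responses generated from a deliberately nonlinear relationship and compares their log-likelihoods:

```python
import math
import random

def fit_logit(X, y, steps=1500, lr=0.1):
    """Logistic regression via plain gradient ascent on the log-likelihood.
    Returns (weights, log-likelihood at the fitted weights)."""
    k, n = len(X[0]), len(y)
    w = [0.0] * k
    for _ in range(steps):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return w, ll

random.seed(3)

# Simulated "agree CRT is valid" responses whose true dependence on a
# political-outlook score z is deliberately NOT linear in the log-odds
# (flat on the left, rising on the right).
X_lin, X_curv, y = [], [], []
for _ in range(600):
    z = random.gauss(0, 1)
    p = 1.0 / (1.0 + math.exp(-(-0.2 + 0.9 * max(z, 0.0))))
    yi = 1 if random.random() < p else 0
    X_lin.append([1.0, z])           # linear model: intercept + z
    X_curv.append([1.0, z, z * z])   # curvilinear model: adds a z^2 term
    y.append(yi)

w_lin, ll_lin = fit_logit(X_lin, y)
w_curv, ll_curv = fit_logit(X_curv, y)
print(round(ll_lin, 1), round(ll_curv, 1))  # the curvilinear model fits better
```

Because the models are nested, the squared term can only help; the question is whether the improvement in fit is big enough (by a likelihood-ratio or similar test) to justify the extra parameter, and then what the plotted curvilinear effect actually looks like.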

But it wasn’t asymmetric in the sense contemplated by the ideological asymmetry thesis either! Where a “curvilinear” model fits best, one has to plot the effects of that model and see what it looks like in order to figure out what the nonlinear effect is and what it means.  This figure (which illustrates the effect captured in the polynomial model by fitting a “smoothed,” local regression line to that model’s predicted values) does that:

I guess I’d say that subjects' biased reasoning was "asymmetrical" with respect to the two experimental conditions: the intensity with which they credited or discredited ideologically congenial evidence was slightly bigger in the condition that advised subjects that the (fictional) CRT studies had found "nonskeptics" on climate change to be closed-minded. But that was true, it seems, for those on both sides of the ideological spectrum.

In any event, the picture of what the “curvilinear” effect looks like is not even close to the picture the “asymmetry thesis” predicts. Both liberals and conservatives are engaged in motivated reasoning, and the effect is not meaningfully different for either.

Now, why go through all this? Well, obviously, because it’s fun! Heck, if you actually read this post and have gotten this far, you must agree.

But there’s also a take-away: One can’t tell whether a motivated reasoning effect is truly “asymmetric” unless one applies the correct statistical test.

It’s pretty much inevitable that an effect observed in any sort of social science experiment won’t be perfectly “linear.” Even in the (unlikely) event that the phenomenon one is measuring is in fact genuinely linear, data always contain noise, and estimated effects will therefore always be lumpy relative to the experimental and other influences that produce them.

If the hypothesis one is testing suggests a linear effect is likely to be right or close to it, one starts with a linear test and sees whether the results hold up.

If one has the hypothesis that the effect is not linear, or suspects after looking at the raw data that it might not be and is interested to find out, then one must apply an appropriate nonlinear test. If that test doesn’t corroborate that there is in fact a curvilinear effect, and that the curvilinear model fits better than the linear one, then one doesn’t have sufficient evidence to conclude the effect isn’t linear.

Sometimes when empirical researchers examine ideologically motivated reasoning the raw or summary data might make it look like the effect is “bigger” for one ideological group than the other. But that’s not enough to conclude that the effect fits the asymmetry thesis. Any researcher who wants to test the asymmetry hypothesis still has to do the right statistical test before he or she can conclude that the data really support it.

I’m not aware of anyone who has conducted a study of ideologically motivated reasoning who has reported finding a curvilinear effect that fits the logic of the asymmetry thesis.

If you know of such a study, please tell me!

Post 1 in this "series"

Post 2 in it


I've also plotted the results in the same fashion I did last time--essentially predicting the likelihood that a "high CRT" (CRT = 1.6) "conservative Republican" (+1 SD on z_conservrepub) and a "high CRT" "liberal Democrat" (-1 SD) would view the CRT test as valid in the three experimental conditions.

The estimates in the top graph take the curvilinear effect into account, so they can be understood to be furnishing a reliable picture of the relative magnitude of the motivated reasoning effects for people with those respective characteristics. Looks pretty uniform, I'd say.

Otherwise, while the effects might be just a tad more dramatic, they clearly aren't materially different from the ones brought into view with the ordinary logit model. No real point, I'd say, in treating the polynomial model as "better" in any interesting sense; it was just interesting to find out whether the polynomial model would both fit better and alter the interpretation suggested by the nonpolynomial model.


I agree with Chris Mooney -- on *the* most important thing

Chris Mooney offers this observation in what (I'm sure) will not be his final word on RBH (the Republican Brain Hypothesis):

The closing words of The Republican Brain are these:

I believe that I am right, but I know that I could be wrong. Truth is something that I am driven to search for. Nuance is something I can handle. And uncertainty is something I know I’ll never fully dispel.

These are not the words of someone who is certain in his beliefs—much less certain of the conclusion that Dan Kahan calls the “asymmetry thesis.”

This, in my view, masterfully conveys the correct attitude for anyone who says anything that is subject to observation & testing (I guess there are other things worth saying; put that aside). It's how a person who truly gets science's way of knowing talks (those who don't really get it march around pronouncing this & that has been "proved").

I don't think there's anything wrong, either, with being willing to advance with great conviction, strength, & urgency claims that one holds subject to this attitude. Indeed, it will often be essential to do this: recognizing the provisionality of knowledge is not a reason for failing to advocate & act on the basis of the best available evidence when failure to act could result in dire consequences.

There's a ton of spirit in Mooney but not an ounce of dogmatism.

He communicates important elements of science's way of knowing by his example as well as by his words.

For the record: he could be right -- because I could be wrong to think there is no consequential difference in how contemporary "liberals" & "conservatives" process policy-relevant science. (The problem, I think, is not with how anyone thinks; it is with a polluted communication environment that needs to be repaired and protected.)

The dialectic of conjecture & refutation is a dialog among people who agree on something much more important than anything they might disagree about.


Cognitive reasoning-style measures other than CRT *are* valid--but for what?

Happily, Chris Mooney has indicated that he is planning to take up the points I made in my post on his Republican Brain, and also the data I collected to help test surmises and hunches I formed while reflecting on his book.

I certainly want to give him his chance to present his position in full without the distraction of piecemeal qualifications, clarifications, and counterarguments.

But his first post does make me regret a part of mine, in which I conveyed low regard for what is in fact high-quality work.

The bungling occurs in the paragraph that “questions” the “validity” of self-reported reasoning-style measures and describes the evidence for their validity as “spare.”  

I do happen to believe the Cognitive Reflection Test is more predictive than self-report measures of vulnerability to one or another form of bias associated with what Kahneman calls System 1 (unreflective, fast) reasoning. I think this because of various recent studies, including ones in the links & references in that post. It’s also pretty well established that people who score high on all manner of reasoning-quality measures are no better than ones who score low at consciously assessing their own vulnerability to bias—so it stands to reason, I think, that we should try to use objective or performance-based measures rather than self-report ones to predict individual differences in reasoning styles.

But how best to measure reasoning styles and reasoning quality is not a settled issue--indeed, it's at the heart of a very interesting scholarly debate.

Moreover, “validity” is not what’s at stake in that debate; predictive power is.  My language was recklessly imprecise. I am truly embarrassed by that.

What I should have confined myself to saying is that these measures have not been validated as indicators of motivated reasoning. That’s the dynamic that is understood – by Chris and by many scholars, including ones whose work he cites – to be driving ideological polarization over issues that admit of scientific investigation.

Indeed, far from being understood to predict motivated cognition, the sorts of measures of dual process reasoning that came before CRT were understood *not* to. There is ample work showing that higher-level reasoning processes thought to be measured by these scales can be recruited for identity-protection and other sorts of motivated reasoning.

So why suppose that any correlation between them and ideology predicts motivated reasoning or otherwise explains conflict over policy-relevant science? I very much do want to pose a (respectful!) challenge—one aimed at enlarging our mutual understanding—to those scholars who think that disparities in systematic or reflective reasoning, however measured and on the part of any group, are the explanation for this phenomenon.

The study I conducted was meant to explore that. I used CRT as my measure of high-quality reasoning because it is in fact now at the cutting edge of dual process reasoning research, largely as a result of the emphasis that Kahneman puts on it as the best measure of the tendency to use System 2 as opposed to System 1 reasoning. I found no meaningful correlation between CRT and ideology—which seems to me to be reason to doubt that ideology correlates with the sorts of cognitive biases that quality-of-reasoning measures in general are supposed to measure.

But in assessing the thesis of Republican Brain -- that conservative ideology is associated with styles of thought responsible for political conflict over policy-relevant science -- I don’t think anything at all turns on whether CRT or any other measure is better for measuring vulnerability to cognitive biases. What matters is experimental proof of the vulnerability to motivated reasoning—and whether there’s any correlation between that and either ideology or higher-level cognition. That’s what the experiment was designed to show: that those who use higher-quality reasoning are not immune from motivated reasoning.

In the study, subjects conformed their own assessment of the validity of CRT as a predictor of bias to their ideological predispositions.

Conservatives did this.  

But so did liberals: they tended to agree that the CRT is a valid test of “reflectiveness” and “open-mindedness” when they were told that people who credit evidence of climate change scored high on it. But when told that people who are skeptical in fact score higher-- well, then they were much more likely to dismiss CRT as invalid for that purpose.

What’s more, that effect was magnified by high scores on CRT: people who are more disposed to system 2 reasoning (as measured by CRT) were much more likely to fit their assessments of CRT’s validity to their ideological predispositions.

So liberals and conservatives displayed motivated reasoning. And they both did it more if they were the sorts of people inclined to use high-quality cognition as reflected in a very prominent measure of reflective, open-minded reasoning.

That’s evidence, I think, that the brains of liberals and conservatives are alike in this respect.  And it’s all the more reason to doubt that correlations between ideology and reasoning-style measures can help us to figure out why or when deliberations over policy-relevant science are prone to political polarization or what we should do to try to minimize that sad spectacle. 



NAS "Science of Science Communication" colloquium presentations on-line

I see that the excellent presentations made at the NAS's Sackler "Science of Science Communication" colloquium in May are now on line.

Here's mine:


Some experimental data on CRT, ideology, and motivated reasoning (probing Mooney's Republican Brain)

This is about my zillionth post on the so-called “asymmetry thesis”—the idea that culturally or ideologically motivated reasoning is concentrated disproportionately at one end of the political spectrum, viz., the right.

But it is also my second post commenting specifically on Chris Mooney’s Republican Brain, which very elegantly and energetically defends the asymmetry thesis. As I said in the first, I disagree with CM’s thesis, but I really really like the book. Indeed, I like it precisely because the cogency, completeness, and intellectual openness of CM’s synthesis of the social science support for the asymmetry thesis helped me to crystallize the basis of my own dissatisfaction with that position and the evidence on which it rests.

I’m not trying to be cute here.

I believe in the Popperian idea that collective knowledge advances through the perpetual dialectic of conjecture and refutation. We learn things through the constant probing and prodding of empirically grounded claims that have themselves emerged from the same sort of challenging of earlier ones.

If this is how things work, then those who succeed in formulating a compelling claim in a manner that enables productive critical engagement create conditions conducive to learning for everyone. They enable those who disagree to more clearly explain why (or show why by collecting their own evidence). And in so doing, they assure those who agree with the claim that it will not evade the sort of persistent testing that is the only basis for their continuing assent to it.

A. Recapping my concern with the existing data

In the last post, I reduced my main reservations about the evidence for the asymmetry thesis to three:

First, I voiced uneasiness with the “quality of reasoning” measures that figure in many of the studies Republican Brain relies on to show conservatives are closed-minded or unreflective. Those that rely on dogmatic “personality” styles and on people’s own subjective characterization of their “open-mindedness” or amenability to reasoning are inferior, in my view, to objective, performance-based reasoning measures, particularly Numeracy and the Cognitive Reflection Test (CRT), which have recently been shown to be much better predictors of vulnerability to one or another form of cognitive bias. CRT is the measure that figures in Kahneman’s justly famous “fast/slow”-“System 1/2” dual-process theory.

Second, and even more fundamentally, I noted that there’s little evidence that any sort of quality-of-reasoning measure helps to identify vulnerability to motivated cognition—the tendency to unconsciously fit one’s assessment of evidence to some goal or interest extrinsic to forming an accurate belief. Indeed, I pointed out that there is evidence that the people highest in CRT and numeracy are more disposed to display ideologically motivated cognition. Mooney believes—and I agree—that ideologically motivated reasoning is at the root of disputes like climate change. But if the disposition to engage in higher-quality, reflective reasoning doesn’t immunize people from motivated reasoning, then one can’t infer anything about disputes like climate change from studies that correlate the disposition to engage in higher-quality, reflective reasoning with ideology.

Third, we should be relying instead on experiments that test for motivated reasoning directly. I suggested that many experiments that purport to find evidence of motivated reasoning aren’t well designed. They measure only whether people furnished with arguments change their minds; that’s consistent with unbiased as well as biased assessments of the evidence at hand. To be valid proof of motivated reasoning, studies must manipulate the ideological motivation subjects have for crediting one and the same piece of evidence. Studies that do this show that conservatives and liberals both opportunistically adjust their weighting of evidence conditional on its support for ideologically satisfying conclusions.

B. Some more data for consideration

Okay. Now I will present some evidence from a study that I designed with all three of these points—ones, again, that Mooney’s book convinced me are the nub of the matter—in mind. 

That study tests three hypotheses:

(1) that there isn’t a meaningful connection between ideology and the disposition to use higher level, systematic cognition (“System 2” reasoning, in Kahneman’s terms) or open-mindedness, as measured by CRT;

(2) that a properly designed study will show that liberals as well as conservatives are prone to motivated reasoning on one particular form of policy-relevant scientific evidence: studies purporting to find that quality-of-reasoning measures show those on one or the other side of the climate-change debate are “closed minded” and unreflective; and

 (3) that a disposition to engage in higher-level cognition (as measured by CRT) doesn’t counteract but in fact magnifies ideologically motivated cognition.

1. Relationship of CRT to ideology

This study involved a diverse national sample of U.S. adults (N = 1,750). I collected data on various demographic characteristics, including the subjects’ self-reported ideology and political-party allegiance. And I had the subjects complete the CRT.
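For readers who haven't encountered the CRT, its three word problems each pit an intuitive answer against one that requires a moment of reflection. The best-known item is the "bat and ball" problem from Frederick (2005), cited below; a quick arithmetic check shows why the intuitive answer fails:

```python
# The classic CRT "bat and ball" item (Frederick 2005):
# "A bat and a ball cost $1.10 in total. The bat costs $1.00 more
#  than the ball. How much does the ball cost?"
# Intuitive (System 1) answer: $0.10. Reflective (System 2) answer: $0.05.

ball = 0.05                  # the reflective answer
bat = ball + 1.00            # the bat costs $1.00 more than the ball
assert abs((bat + ball) - 1.10) < 1e-9   # total really is $1.10

intuitive_ball = 0.10
intuitive_bat = intuitive_ball + 1.00
# The intuitive answer fails the check: the total would be $1.20.
assert abs((intuitive_bat + intuitive_ball) - 1.10) > 0.05
print("reflective answer checks out; intuitive answer does not")
```

Getting the item right requires overriding the answer that springs to mind—which is exactly the System 1/System 2 distinction the test is meant to capture.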

I’ve actually done this before, finding only tiny and inconclusive correlations between ideology, culture, and party affiliation, on the one hand, and CRT, on the other.

The same was true this time. Consistent with the first hypothesis, there was no meaningful correlation between CRT and either liberal-conservative ideology (measured with a standard 5-point scale) or cultural individualism (measured with our CC worldview scales).

There were weak correlations between CRT and both cultural hierarchy and political party affiliation. But the direction of the effects was contrary to the Republican Brain hypothesis.

That is, both hierarchy (as measured with the CC scale) and being a Republican (as measured by a standard 7-point partisan-identification measure) predicted higher levels of reflectiveness and analytical thinking as measured by CRT.

But the effects, as I mentioned (and as in the past), were minuscule. I’ve set to the left the results of an ordered logistic regression that predicts how likely someone who identifies as a “Democrat” or a “Republican” (2 & 6 on the 7-point scale), respectively, is to answer 0, 1, 2, or all 3 CRT questions correctly (you can click here to see the regression outputs). For comparison, I’ve also included such models for being religious as opposed to nonreligious and being female as opposed to male, both of which (here & here, e.g.) are known to be associated with lower CRT scores and which have bigger effects than does party affiliation.

Hard to believe that the trivial difference between Republicans and Democrats on CRT could explain much of anything, much less the intense conflicts we see over policy-relevant science in our society.
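To see how predicted probabilities fall out of an ordered logistic model like the one just described, here is a minimal sketch. The cutpoints and the party coefficient below are invented for illustration—they are not the study's actual estimates—but they show how a tiny coefficient translates into a practically trivial difference between the two groups' predicted distributions over 0–3 correct answers:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities under an ordered (cumulative) logit model:
    P(Y <= k) = logistic(cut_k - x*beta); successive differences give P(Y = k)."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - xb) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]

# Hypothetical values, for illustration only (not the study's estimates):
cutpoints = [0.4, 1.3, 2.2]   # three cutpoints for the 0-3 CRT outcome
beta_repub = 0.10             # a deliberately tiny party-affiliation coefficient

dem = ordered_logit_probs(0.0, cutpoints)         # P(0), P(1), P(2), P(3) correct
rep = ordered_logit_probs(beta_repub, cutpoints)
# With a coefficient this small, the two distributions barely differ --
# which is the point: the party "effect" is trivial in practical terms.
print([round(p, 3) for p in dem])
print([round(p, 3) for p in rep])
```

Each probability vector sums to 1, and the gap between corresponding entries is only a couple of percentage points—the kind of "difference" that can't plausibly explain intense societal conflict.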

2. Ideologically motivated reasoning—relating to the asymmetry of ideologically motivated reasoning!

The study also had an experimental component.

The subjects were divided into three groups or experimental “conditions.”  In all of them, subjects indicated whether they agreed or disagreed--and how strongly (on a six-point scale)--with the statement:

I think the word-problem test I just took [i.e., the CRT test] supplies good evidence of how reflective and open-minded someone is.

But before they did, they received background information that varied between the experimental conditions.

In the “skeptics-biased” condition, subjects were advised:

Some psychologists believe the questions you have just answered measure how reflective and open-minded someone is.

In one recent study, a researcher found that people who accept evidence of climate change tend to get more answers correct than those who reject evidence of climate change. If the test is a valid way to measure open-mindedness, that finding would imply that those who believe climate change is happening are more open-minded than those who are skeptical that climate change is happening.

In contrast, in the “nonskeptics-biased” condition, subjects were advised:

Some psychologists believe the questions you have just answered measure how reflective and open-minded someone is.

In one recent study, a researcher found that people who reject evidence of climate change tend to get more answers correct than those who accept evidence of climate change. If the test is a valid way to measure open-mindedness, that finding would imply that those who are skeptical that climate change is happening are more open-minded than those who believe that climate change is happening.

Finally, in the “control” condition, subjects read simply that “[s]ome psychologists believe the questions you have just answered measure how reflective and open-minded someone is” before they indicated whether they themselves agreed that the test was a valid measure of such a disposition.

You can probably see where I’m going with this.

All the subjects are indicating whether they believe the CRT test is a valid measure of reflection and open-mindedness and all are being given the same evidence that it is—namely, that “[s]ome psychologists believe” that that’s what it does.

Two-thirds of them are also being told, of course, that people who take one position on climate change did better than the other. Why should that make any difference? That’s just a result (like the findings of correlations between ideology and quality-of-reasoning measures in the studies described in Republican Brain); it’s not evidence one way or the other on whether the test is valid.

However, this additional information does either threaten or affirm the identities of the subjects to the extent that they (like most people) have a stake in believing that people who share their values are smart, open-minded people who form the “right view” on important and contentious political issues. Identity-protection is an established basis for motivated cognition—indeed, the primary one, various studies have concluded, for disputes that seem to divide groups on political grounds.

We didn’t ask subjects whether they believed that climate change was real or a serious threat or anything.  But, again, we did measure their political ideologies and political party allegiances (their cultural worldviews, too, but I’m going to focus on political measures, since that’s what most of the researchers featured in Republican Brain focus on).

Accordingly, if people tend to agree that the CRT “supplies good evidence of how reflective and open-minded someone is” when the test is represented as showing that people who hold the position associated with their political identity are “open-minded” and “reflective” but disagree when the test is represented as showing that such people are “biased,” that would be strong evidence of motivated cognition. They would then be assigning weight to one and the same piece of evidence conditional on the perceived ideological congeniality of the conclusion that it supports.
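Analytically, the signature of that pattern is an interaction: agreement depends on the condition-by-identity combination, not on condition or identity alone. A toy difference-in-differences on hypothetical agreement rates (the numbers are invented purely to show the computation, not taken from the study) makes the logic concrete:

```python
# Hypothetical proportions agreeing "CRT is a valid measure," by
# identity and condition (invented numbers, for illustration only).
agree = {
    ("liberal",      "control"):         0.55,
    ("conservative", "control"):         0.55,
    ("liberal",      "skeptics-biased"): 0.75,  # result congenial to liberals
    ("conservative", "skeptics-biased"): 0.35,  # same result, uncongenial
}

def motivated_reasoning_effect(agree, cond):
    """Difference-in-differences vs. control: how much the ideological gap
    in agreement opens up when the reported result is identity-relevant."""
    gap_cond = agree[("liberal", cond)] - agree[("conservative", cond)]
    gap_ctrl = agree[("liberal", "control")] - agree[("conservative", "control")]
    return gap_cond - gap_ctrl

# No ideological gap in the control condition; a 40-point gap opens
# once the same evidence is framed as favoring one side.
print(round(motivated_reasoning_effect(agree, "skeptics-biased"), 2))  # 0.4
```

If subjects were weighing the evidence on its merits, the gap would be the same in every condition and the difference-in-differences would be zero; a nonzero value is the motivated-reasoning signal.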

To analyze the results, I used a regression model that allowed me to assess simultaneously the influence of ideology and political party affiliation, the experimental group the subjects were in, and the subjects’ own CRT scores.

These figures (which are derived from the regression output that you can also find here) illustrate the results. On the left, you see the likelihood that someone who is either a “liberal Democrat” or a “conservative Republican” and who is “low” in CRT (someone who got 0 answers correct—as was true for 60% of the sample; most people aren’t inclined to use System 2 reasoning, so that’s what you’d expect) would “agree” the CRT is a valid test of reflective and open-minded thinking in the three conditions.

Not surprisingly, there’s not any real disagreement in the control condition. But in the “skeptic biased” condition—in which subjects were told that those who don’t accept evidence of climate change tended to score low—low CRT liberal Democrats were much more likely to “agree” than were low CRT conservative Republicans. That’s a motivated reasoning effect.

Interestingly, there was no ideological division among low CRT subjects in the “nonskeptic biased” condition—the one in which subjects were told that those who “accept” evidence of climate change do worse.

But there was plenty of ideological disagreement in the “nonskeptic biased” condition among subjects who scored higher in CRT! There was only about a 25% likelihood that a liberal Democrat who was “high” in CRT (I simulated 1.6 answers correct—“87th percentile” or + 1 SD—for graphic expositional purposes) would agree that CRT was valid if told that the test predicted “closed mindedness” among those who “accept evidence” of climate change.  There was a bit higher than 50% chance, though, that a “high” CRT conservative Republican would.

The positions of subjects like these flipped around in the “skeptic biased” condition.  That’s motivated reasoning.

It’s also motivated reasoning that grows stronger as subjects become more disposed to use systematic or System 2 reasoning as measured by CRT.

That’s evidence consistent with hypotheses two and three.

The result is also consistent with the finding from the CCP Nature Climate Change study, which found that those who are high in science literacy and numeracy (a component of which is CRT) are the most culturally polarized on both climate change and nuclear power.  The basic idea behind the hypothesis is that in a “toxic science communication climate”—one in which positions on issues of fact become symbols of group identity—everyone has a psychic incentive to fit evidence to their group commitments. Those who are high in science literacy and technical reasoning ability are able to use those skills to get an even better fit. . . .

None of this, moreover, is consistent with the sort of evidence that drives the asymmetry thesis:

(1) There’s not a meaningful correlation here between partisan identity and one super solid measure of higher level cognitive reasoning.

(2) What’s more, higher-level reasoning doesn’t mitigate motivated reasoning. On the contrary, it aggravates it. So if motivated reasoning is the source of political conflict on policy-relevant science (a proposition that is assumed, basically, by proponents of the asymmetry thesis), then whatever correlation might exist between low-level cognitive reasoning capacity and conservativism can’t be the source of such conflict.

(3) In a valid experimental design, there’s motivated reasoning all around—not just on the part of Republicans.

But is the level of motivated reasoning in this experiment genuinely “symmetrical” with respect to Democrats and Republicans? Is the effect “uniform” across the ideological spectrum?

Frankly, I’m not sure that that question matters. There’s enough motivated reasoning across the ideological spectrum (and cultural spectra)—this study and others suggest—for everyone to be troubled and worried.

But the data do still have something to say about this issue. Indeed, they enable me to say something directly about it, because there’s enough data to employ the right sorts of statistical tests (ones that involve fitting “curvilinear” or polynomial models rather than linear ones to the data).

But I’ve said enough for now, don’t you think?

I’ll discuss that another time (soon, I promise).

Post 1 & Post 3 in this "series"



What do I think of Mooney's "Republican Brain"?

Everyone knows that science journalist Chris Mooney has written a book entitled The Republican Brain. In it, he synthesizes a wealth of social science studies in support of the conclusion that having a conservative political outlook is associated with lack of reflection and closed-mindedness.

I read it. And I liked it a lot.

Mooney possesses the signature craft skills of a first-rate science journalist, including the intelligence (and sheer determination) necessary to critically engage all manner of technical material, and the expositional skill required to simultaneously educate and entertain.

He’s also diligent and fair minded. 

And of course he’s spirited: he has a point of view plus a strong desire to persuade—features that for me make the experience of reading Mooney’s articles and books a lot of fun, whether I agree with his conclusions (as often I do) or not.

As it turns out, I don’t feel persuaded of the central thesis of The Republican Brain. That is, I’m not convinced that the mass of studies it draws on supports the inference that Republicans/conservatives reason in a manner that is different from, and less reflective than, the way Democrats/liberals do.

The problem, though, is with the studies, not Mooney’s synthesis.  Indeed, Mooney’s account of the studies enabled me to form a keener sense of exactly what I think the defects are in this body of work. That’s a testament to how good he is at what he does.

In this, the first of two (additional; this issue is impossible to get away from) posts, I’m going to discuss what I think the shortcomings in these studies are. In the next post, I’ll present some results from a new study of my own, the design of which was informed by this evaluation.

1. Validity of quality-of-reasoning measures

The studies Mooney assembles are not all of a piece, but the ones that play the largest role in the book and in the literature correlate ideology or party affiliation with one or another measure of cognitive processing and conclude that conservatism is associated with “lower” quality reasoning or closed-mindedness.

These measures, though, are of questionable validity. Many are based on self-reporting; "need for cognition," for example, literally just asks people whether the "notion of thinking abstractly is appealing to" them, etc. Others use various personality-style constructs, like the “authoritarian” personality, that researchers believe are associated with dogmatism. Evidence that these sorts of scales actually measure what they say is sparse.

Objective measures—ones that assess performance on specific cognitive tasks—are much better. The best of these, in my view, are the “cognitive reflection test” (CRT), which measures the disposition to check intuition with conscious, analytical reasoning, and “numeracy,” which measures quantitative reasoning capacity and includes CRT as a subcomponent.

These measures have been validated. That is, they have been shown to predict—very strongly—the disposition of people either to fall prey to or avoid one or another form of cognitive bias. 

As far as I know, CRT and numeracy don’t correlate in any clear way with ideology, cultural predispositions, or the like. Indeed, I myself have collected evidence showing they don’t (and have talked with other researchers who report the same).

2. Relationship between quality-of-reasoning measures and motivated cognition

Another problem: it’s not clear that the sorts of things that even a valid measure of reasoning quality gets at have any bearing on the phenomenon Mooney is trying to explain. 

That phenomenon, I take it, is the persistence of cultural or ideological conflict over risks and other facts that admit of scientific evidence. Even if those quality-of-reasoning measures that figure in the studies Mooney cites are in fact valid, I don’t think they furnish any strong basis for inferring anything about the source of controversy over policy-relevant science. 

Mooney believes, as do I, that such conflicts are likely the product of motivated reasoning—which refers to the tendency of people to fit their assessment of information (not just scientific evidence, but argument strength, source credibility, etc.) to some end or goal extrinsic to forming accurate beliefs. The end or goal in question here is promotion of one’s ideology or perhaps securing of one’s connection to others who share it.

There’s no convincing evidence I know of that the sorts of defects in cognition measured by quality of reasoning measures (of any sort) predict individuals’ vulnerability to motivated reasoning.

Indeed, there is strong evidence that motivated reasoning can infect or bias higher level processing—analytical or systematic, as it has been called traditionally; or “System 2” in Kahneman’s adaptation—as well as lower-level, heuristic or “System 1” reasoning.

We aren’t the only researchers who have demonstrated this, but we did in fact find evidence supporting this conclusion in our recent Nature Climate Change study. That study found that cultural polarization—the signature of motivated reasoning here—is actually greatest among persons who are highest in numeracy and scientific literacy. Such individuals, we concluded, are using their greater facility in reasoning to nail down even more tightly the connection between their beliefs and their cultural predispositions or identities.

So, even if it were the case that liberals or Democrats scored “higher” on quality of reasoning measures, there’s no evidence to think they would be immune from motivated reasoning. Indeed, they might just be even more disposed to use it and use it effectively (although I myself doubt that this is true; as I’ve explained previously, I think ideologically motivated reasoning is uniform across cultural and ideological types.)

3. Internal validity of motivated reasoning/biased assimilation experiments

The way to figure out whether motivated reasoning is correlated with ideology or culture is with experiments. There are some out there, and Mooney mentions a few.  But I don’t think those studies are appropriately designed to measure asymmetry of motivated reasoning; indeed I think many of them are just not well designed period.

A common design simply measures whether people with one or another ideology or perhaps existing commitment to a position change their minds when shown new evidence. If they don’t—and if in fact, the participants form different views on the persuasiveness of the evidence—this is counted as evidence of motivated reasoning.

Well, it really isn’t. People can form different views of evidence without engaging in motivated reasoning. Indeed, their different assessments of the evidence might explain why they are coming into the experiment in question with different beliefs.  The study results, in that case, would be showing only that people who’ve already considered evidence and reached a result don’t change their mind when you ask them to do it again. So what?

Sometimes studies designed in this way, however, do show that “one side” budges more in the face of evidence that contradicts their position (on nuclear power, say) than the other does on that issue or on some other (say, climate change).

Well, again, this is not evidence that the one that’s holding fast is engaged in motivated reasoning. Again, those on that side might have already considered the evidence in question and rejected it; they might be wrong to reject it, but because we don’t know why they rejected it earlier, their disposition to reach the same conclusion again does not show they are engaged in motivated reasoning, which consists in a disposition to attend to information in a selective and biased fashion oriented to supporting one’s ideology.

Indeed, the evidence that challenges the position of the side that isn’t budging in such an experiment might in fact be weaker than the evidence that is moving the other side to reconsider. The design doesn’t rule this out—so the only basis for inferring that motivated reasoning is at work is whatever assumptions one started with, which gain no additional support from the study results themselves.

There is, in my view, only one compelling way to test the hypothesis that motivated reasoning explains the evaluation of information. That’s to experimentally manipulate the ideological (or cultural) implications of the information or evidence that subjects are being exposed to. If they credit that evidence when doing so is culturally/ideologically congenial, and dismiss it when doing so is ideologically uncongenial, then you know that they are fitting their assessment of information (the likelihood ratio they assign to it, in Bayesian terms) to their cultural or ideological predispositions.
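The Bayesian framing in the paragraph above can be made concrete. An unbiased reasoner assigns the same likelihood ratio to a given piece of evidence no matter which conclusion it favors; a motivated reasoner's likelihood ratio shifts with the congeniality of the conclusion. A minimal sketch (the numeric values are illustrative only):

```python
def update(prior_odds, likelihood_ratio):
    """Bayesian updating on the odds scale: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

# Unbiased reasoner: one LR for the evidence, whatever it implies.
unbiased_lr = 2.0

# Motivated reasoner: the LR assigned to the *same* evidence depends on
# whether its conclusion is congenial (values invented for illustration).
def motivated_lr(congenial):
    return 2.0 if congenial else 0.8   # uncongenial evidence gets discounted

prior = 1.0  # even odds before seeing the evidence
print(update(prior, unbiased_lr))          # 2.0 -- regardless of congeniality
print(update(prior, motivated_lr(True)))   # 2.0 -- evidence credited
print(update(prior, motivated_lr(False)))  # 0.8 -- same evidence dismissed
```

The experimental manipulation described above is precisely what lets a researcher detect the second pattern: holding the evidence fixed while varying its ideological implications isolates any congeniality-dependence in the likelihood ratio subjects assign.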

CCP has done studies like that. In one, e.g., we showed that individuals who watched a video of protestors reported perceiving them to be engaged in intimidating behavior—blocking, obstructing, shouting in onlookers’ faces, etc.—when the subjects believed the protest involved a cause (either opposition to abortion rights or objection to the exclusion of gays and lesbians from the military) that was hostile to their own values. If the subjects were told the protestors’ cause was one that affirmed the subjects' own values, then they saw the protestors as engaged in peaceful, persuasive advocacy.

That’s motivated reasoning.  One and the same piece of evidence—videotaped behavior of political protests—was seen one way or another (assigned a likelihood ratio different from or equal to 1) depending on the cultural congeniality of seeing it that way.

In another study, we found that subjects engage in motivated reasoning when assessing the expertise of scientists on disputed risk issues. In that one, how likely subjects were to recognize a scientist as an “expert” on climate change, gun control, or nuclear power depended on the position that scientist was represented to be taking. We manipulated that—while holding the qualifications of the scientist, including his membership in the National Academy of Sciences, constant.

Motivated reasoning is unambiguously at work when one credits or discredits the same piece of evidence depending on whether it supports or contradicts a conclusion that one finds ideologically appealing. And again we saw that process of opportunistic, closed-minded assessment of evidence at work across cultural and ideological groups.

Actually, CM discusses this second study in his book. He notes that the effect size—the degree to which individuals selectively afforded or denied weight to the view of the featured scientist depending on the scientist’s position—was larger in individuals who subscribed to a hierarchical, individualistic worldview than in individuals who subscribed to an egalitarian, communitarian one. The former tend to be more conservative, the latter more liberal.

As elsewhere in the book, he was reporting with perfect accuracy here.

Nevertheless, I myself don’t view the study as supporting any particular inference that conservatives or Republicans are more prone to motivated reasoning. Both sides (as it were) displayed motivated reasoning—plenty of it. What’s more, the measures we used didn’t allow us to assess the significance of any difference in the degree of it that each side displayed. Finally, we’ve done other studies, including the one involving the videotape of the protestors, in which the effect sizes were clearly comparable in size.

But here’s the point: to be valid, a study that finds asymmetry in ideologically motivated reasoning must allow the researcher both to conclude that subjects are selectively crediting or discrediting evidence conditional on its congruence with their cultural values or ideology and that one side is doing so to a degree that is both statistically and practically more pronounced than the other.

Studies that don’t do that might do other things--like supply occasions for sneers and self-congratulatory pats on the back among those who treat cheering for "their" political ideology as akin to rooting for their favorite professional sports team (I know Mooney certainly doesn’t do that).

But they don’t tell us anything about the source of our democracy’s disagreements about various forms of policy-relevant science.

In the next post in this “series,” I’ll present some evidence that I think does help to sort out whether an ideologically uneven propensity to engage in ideologically motivated reasoning is the real culprit. 

Posts 2 & 3


Chen, Serena, Kimberly Duckworth, and Shelly Chaiken. Motivated Heuristic and Systematic Processing. Psychological Inquiry 10, no. 1 (1999): 44-49.

Frederick, Shane. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, no. 4 (2005): 25-42.

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Clim. Change advance online publication (2012).

Liberali, Jordana M., Valerie F. Reyna, Sarah Furlan, Lilian M. Stein, and Seth T. Pardo. "Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment." Journal of Behavioral Decision Making (2011): advance online publication.

Mooney, C. The Republican Brain: The Science of Why They Deny Science—and Reality. (John Wiley & Sons, Hoboken, NJ; 2012).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).

Weller, J.A., Dieckmann, N.F., Tusler, M., Mertz, C.K., Burns, W.J. & Peters, E. Development and Testing of an Abbreviated Numeracy Scale: A Rasch Analysis Approach. Journal of Behavioral Decision Making, advance online publication (2012).


Gun control, climate change & motivated cognition of "scientific consensus"

Sen. John McCain is getting blasted for comments he made on gun control yesterday.



Here's what he actually said:

I think we need to look at everything, if that even should be looked at, but to think that somehow gun control is — or increased gun control — is the answer, in my view, that would have to be proved.

And here is the conclusion from a 2005 National Academy of Sciences expert consensus report that examined the (voluminous) data on various forms of gun control:

In summary, the committee concludes that existing research studies and data include a wealth of descriptive information on homicide, suicide, and firearms, but, because of the limitations of existing data and methods, do not credibly demonstrate a causal relationship between the ownership of firearms and the causes or prevention of criminal violence or suicide.

Who is behaving more like a "global warming denier" here-- McCain or his critics? 

The reaction to McCain is impressionistic proof--akin to pointing to the U.S. summer heatwave as evidence of climate change--of the impact of politically motivated reasoning on assessments of expert scientific opinion relating to policy-consequential facts.

If you demand rigorous proof (you should), take a look at the CCP study on "cultural cognition of scientific consensus." We present experimental proof that individuals selectively credit scientists as "experts" on climate change, nuclear power, and gun control conditional on those scientists taking positions consistent with the one that predominates in individuals' cultural groups.

Actually, I wouldn't criticize people for this tendency; it's ubiquitous.

But I would criticize those who ridicule a public figure (or anyone else) who says let's take a "look at everything" but demands "proof" before making policy.


Does cultural cognition explain the conflict between the analytic and continental schools of philosophy?

Andrew Seer poses this interesting question:

I am new to this type of academic literature so please forgive me if you have stated something similar to my question in one of your papers. My question concerns the topic of philosophy and Science viewed through the lens of Cultural Cognition.

 In contemporary philosophy there are two camps that are rivals: Analytic philosophy in one corner and Continental Philosophy in the other. This wiki page does a good job explaining the differences between the two.

 So my question to you is this: could this bitter divide be due in part to some psychological element that could be explained by Cultural Cognition? For example, certain academics could have a worldview that is more in favor of Social Criticism and thus more Continental in thought (more likely to read Jacques Derrida or Slavoj Zizek for fun).

 Or let's take the other side of the coin: their mindset is more in line with the Analytic (more likely to read John Searle or Daniel Dennett for fun). Of course, this difference in mindset could be due to something that Cultural Cognition could predict or explain.

 I feel that if there is something to this, it could help academia open its eyes to possible biases that it could have. I know I have heard plenty of comments from people who study "Hard Sciences" on how the "Soft Sciences" are not real sciences. Or people who study "Soft Sciences" say that the "Hard Sciences" don't give a crap about the human condition. 

 Do you have any thoughts on this matter?

My response -- which I invite others to amend, extend, refine, repudiate, etc:

Short answer: No. Wait -- yes. Actually, no -- but the "no" part is less important than the "yes" part.

Longer answer:

A. I wouldn't be surprised if one could relate the appeal of analytic vs. continental philosophy to values of some kind in individuals who study philosophy. But there's no reason to expect that the nature of the predispositions and the instrument for measuring them would be at all like the ones that are featured in our theory, which was designed to explain a phenomenon that has nothing to do with that controversy. I bet Red Sox fans are more likely to perceive that Bucky Dent's 1978 home run was actually foul than are Yankees fans. But I doubt that one could show that the cultural cognition worldviews predict any such thing. Compare They Saw a Game with They Saw a Protest.

B.  In addition, the framework best suited for explaining/predicting the relative appeal of the two philosophies would likely involve cognitive mechanisms different from the ones that figure in studies of cultural cognition. In particular, the relationship between the values in question and the philosophical orientation might not involve motivated reasoning but rather some analytical (as it were) affinity between the corresponding sets of values and philosophical orientations. By analogy, "individualists" probably find the philosophy of Ayn Rand more persuasive than that of John Rawls; but that's likely b/c there is some overlap in the relevant normative judgments or empirical premises in the paired sets of values and philosophical positions.

C. Nonetheless, I wouldn't be surprised if one could show that commitments to one style or another of philosophy dispose individuals to biased processing of information relating to the value or correctness of that style; e.g., one might find that those who are drawn to analytic philosophy are more inclined to credit some proposition ("The moon is made of green cheese") if it is attributed, say, to Searle than Derrida. But that sort of finding would be more helpfully explained in terms of more general mechanisms of social psychology (ones relating, say, to "confirmation bias" or "in-group preference") than cultural cognition, which itself can be understood as a special case of those mechanisms, one distinguished by the contribution that the motivating dispositions it features make to the operation of those dynamics.

Consider, again, "They Saw a Game," which, like cultural cognition, involves "motivated cognition" founded in "in group" allegiances, but which involves commitment to groups distinct from the ones that figure in cultural cognition.

Better yet, consider work that shows that *scientists* are vulnerable to one or another sort of bias -- including confirmation bias -- based on predispositions. Not cultural cognition, although cultural cognition might involve some of the same mechanisms. E.g., Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993); or Wilson, T.D., DePaulo, B.M., Mook, D.G. & Klaaren, K.J. Scientists' Evaluations of Research. Psychol. Sci. 4, 322-325 (1993).

D. So if your goal is to test the hypothesis that debates in philosophy are being driven off course by cognitive biases motivated by precommitment to one or another style of philosophizing, the sorts of studies referred to in (C) -- along with the cultural cognition ones -- might supply nice templates or models of how to go about this. I suspect such a project would be very provocative and enlightening and would serve the end you mention of showing that the debate in philosophy has taken an unfortunate turn. I bet you could do the same w/ the debates on "what's a science" etc.  

The resulting work would be related to but wouldn't strictly speaking *involve* "cultural cognition" -- but that's okay. The goal is to learn things & not to score points for one's pet theory. That's your point -- no?


A complete and accurate account of how everything works

Okay, not really-- but in a sense better than that: a simple model that is closer to being true than the most likely alternative model a lot of people probably have in mind when they try to make sense of public risk perceptions.


Above is a diagram that I created in response to a friend's question about how cultural cognition relates to Kahneman's system 1/system 2 (or "fast"/"slow") dual-process reasoning framework.

Start at the bottom: exposure to information determines perception of risk.

Okay, but how is information taken in or assessed?

Well, move up to the top & you see Kahneman's 2 systems. No. 1 is largely unconscious, emotional. It's the source of myriad biases. No. 2 is conscious, reflective, algorithmic. It double-checks System 1's assessments and thus corrects its errors--assuming one has the cognitive capacity and time needed to bring it to bear. The arrows from these influences intersect the one from information to risk perception to signify that Systems 1 & 2 determine the impact that information has.

But there has to be something more going on. We know that some people react one way & some another to one and the same piece of evidence or information about climate change, guns, nuclear power, etc. And we know, too, that the reason they do isn't that some use "fast" system 1 and others "slow" system 2 to make sense of such information; people who are able and disposed to resort to conscious, analytical assessment of information are in fact even more polarized than those who reason mainly with their gut.

The necessary additional piece of the model is supplied by cultural worldviews, which you encounter if you now move down a level. The arrows originating in "cultural worldviews" & intersecting those that run from "system 1" and "system 2" to "risk information" indicate that worldviews interact with those modes of reasoning. Worldviews don't operate as a supplementary or alternative influence on risk perception but rather determine the valence of the influence of the various forms of cognition that system 1 and system 2 each comprises.

Whether that valence is positive or negative depends on the cultural meaning of the information.  

"Cultural meaning" is the narrative congeniality or uncongeniality of the information--its disappointment or gratification of the expectations & hopes that a person with a particular worldview has about the best way of life.

Kahneman had this in mind, essentially, when, in his Sackler Lecture, he assimilated cultural cognition into system 1. System 1 is driven by emotional association. The emotional associations are likely to be determined by moral evaluations of putative risk sources (nuclear power plants, say, or HPV vaccines). Because such evaluations vary across groups, members of those groups react differently to the information (some concluding "high risk," others "low"). Hence, Kahneman reasoned, cultural cognition is bound up with -- it interacts with, determines the valence of -- heuristic reasoning.

The study we published recently in Nature Climate Change, though, adds the arrow that starts in cultural worldview & intersects the path between system 2 & information. We found that individuals disposed to use system 2 are more polarized, because (we surmise; we are doing experiments to test this conjecture further) they opportunistically use their higher quality reasoning faculties (better math skills, superior comprehension of statistics & the like) to fit the evidence to the narrative that fits their cultural worldview.

By the way, I stuck an arrow with an uncertain origin to the left of "risk information" to indicate that information need not be viewed as exogenous -- or unrelated to the other elements of the model. There are lots of influences on information exposure, obviously, but cultural worldviews are an important one of them! People seek out and are otherwise more likely to be exposed to information that is congenial to their cultural outlooks; this reinforces the tendency toward cultural polarization on issues that become infused with antagonistic cultural meanings.

This representation of the mechanisms of risk perception not only helps to show how things work but also how they might be made to work better. Just saturating people with information won’t help to promote convergence on the best available information. Even if one crafts one’s message to anticipate the distinctive operation of Systems 1 & 2 on information processing, people with diverse cultural outlooks will still draw opposing inferences from that information (case in point: the competing inferences people with opposing cultural worldviews draw about climate change when they reflect on recent local weather ...).

Or at least they will if the information on some issue like climate change, the HPV vaccine, gun possession or the like continues to convey antagonistic cultural meanings to such individuals. To promote open-minded engagement and preempt cultural polarization, risk communication not only has to be fitted to popular information-processing styles but also framed in a manner that conveys congenial cultural meanings to all its recipients.

How does one accomplish that? That is the point of the "2 channel strategy" of science communication that we conceptualize and test in Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment, Cultural Cognition Working Paper No. 92.



Why do contested cultural meanings go extinct?

In response to a couple days ago's post on motivated perception of hot/cold weather, Random Assignment/David Nussbaum asked a question interesting enough--and worthy of a better response than my long & drawn-out answer--that I decided to turn the exchange into a separate post in the hope that it might provoke others to weigh in.

DN's question:

I'm curious, have you ever analyzed what happens in cases where beliefs do (eventually) yield to evidence? What does that process look like in the real world? I know you can get people to be more open using self-affirmation, but I'm thinking more about changes that happen "in the wild". So when allowing women to vote didn't destroy the entire moral fabric of society (leaving the opportunity to do so open to gay marriage), how did people's views change? Did they come to accept that they were wrong? Or did the people who believed it would just get replaced by new people who didn't believe it after they died? For a topic like climate change that's probably too slow a process.

My response:

Dave--that's an interesting question b/c of the "in the wild" part. 

As I see it, what we are talking about is how people who disagree about some risk or other policy-consequential fact converge following a period of culturally motivated dissensus. We reject the explanation "b/c they finally all see the evidence & agree" on the ground that it doesn't fit the premise: that in this condition people will assign weight to evidence only when it is congenial to their cultural predispositions. Accordingly, in cases in which people converge after being "shown evidence," the explanation, to be interesting, has to identify how & why the cultural meaning of the issue changed, relieving the pressure on both sides to engage in biased assimilation of the evidence.

You note that in laboratory settings, "self-affirmation" can "buffer" the identity threatening implications of a proposition that is hostile to a message recipient's cultural identity and thereby neutralize the influence of motivated reasoning (leading to open-mindedness). See Sherman, D.K. & Cohen, G.L. in Advances in Experimental Social Psychology, Vol. 38 183-242 (Academic Press, 2006).

But you ask about real world examples.

My favorite is smoking. People love to say, "See: the impact of the Surgeon General's Report of 1964 shows that people eventually can be persuaded by evidence." In fact, the peak for cigarette smoking in the US occurred circa 1979. It declined after public health advocates initiated a vicious and viciously successful social-meaning campaign that obliterated all the various positive cultural meanings associated with smoking (or most of them) and stigmatized cigarette use as "stupid," "weak," "inconsiderate," "repulsive," etc. At that point, people not only accepted the evidence in the SG's 1964 Report but started to accept all sorts of overblown claims about 2nd-hand smoke etc. Yup -- it was all about "eventually accepting evidence"; nothing to do with social meanings there... (not). (I discuss the issue, and relevant sources including the 2000 Surgeon General's Report on smoking & social norms, in an essay entitled The Cognitively Illiberal State.)

But that's not really responsive to your query, or at least isn't as I'm going to understand it. That was "in the wild" but reflects a deliberate and calculated effort (although not a very precise one; the public health people have a heavily stocked social-meaning regulation arsenal, but every weapon in it is nuclear...) to obliterate a contested meaning. What about social meanings dying out by "natural causes"-- that is, through unguided historical and social influences? That certainly has to happen, and it would be really cool & instructive to have examples.

Nuclear power is close, I think. In any case, the issue isn't nearly so radioactive (so to speak) for the left as it was in the 1970s & early 1980s. Egalitarian communitarians (of the sort who agitated Douglas & Wildavsky into emitting Risk & Culture) were so successful at stigmatizing nuclear that it basically was taken off the table & disappeared from cultural consciousness; guess its toxic meaning had a half-life of 30 yrs or so. But I overstate. The issue of nuclear waste does still generate cultural division, just not as much as it used to, or maybe just not as much as, say, climate change or guns. Likely it could be reactivated-- who knows.

But in any event, it would be nice to have an account of culturally contested risks or like factual issues that really did die out & become extinct all on their own.

You mention the dispute over consequences of women's suffrage ... Guess you've never read this? Lott, J.R., Jr. & Kenny, L.W. Did Women's Suffrage Change the Size and Scope of Government? Journal of Political Economy 107, 1163-1198 (1999).



Feeling hot? Repeat after me: the death penalty deters murder...

Great study by Hank Jenkins-Smith & collaborators showing that (a) perceptions of recent local weather predict belief in climate change but that (b) cultural worldviews more powerfully predict individuals' perceptions of recent local weather than does the actual recent weather in their communities.

The basic lesson of cultural cognition is that one can't quiet public controversy over risk with "more evidence": people won't recognize the validity or probative weight of evidence that is contrary to their cultural predispositions.

Why should things be any different when the "evidence" involves "recent weather"? 

What will those who are pointing to the current (North American) heat wave say if it's cooler next summer (it almost certainly will be; regression to the mean), or the next time we get a frigid winter? Probably that it's a mistake for individuals to think that they are in a position to figure out if climate change is happening by looking at their own thermometers (it is).

There's really only one way to fix the climate change debate: fix the science communication climate so that people with opposing values are no longer motivated to fit the evidence to their cultural predispositions. 


Goebbert, K., Jenkins-Smith, H.C., Klockow, K., Nowlin, M.C. & Silva, C.L. Weather, Climate and Worldviews: The Sources and Consequences of Public Perceptions of Changes in Local Weather Patterns. Weather, Climate, and Society (2012).


Is teen pregnancy a greater societal risk than climate change?! Cross-cultural cultural cognition part 2

This is the second in a series of posts on cross-cultural cultural cognition (C4).

C4 involves the application of cultural cognition to non-US samples. In the first post, I addressed certain conceptual and theoretical issues relating to C4. Now I’ll present some actual data.

I had thought I’d do both the UK and Australia in one post, but now it seems to me more realistic to break them up. So let’s make this at least a three-part series—with the UK and Australia data presented in sequence.

Maybe we’ll even make it four, since there’s also been some Canadian research. I didn’t participate in it to any significant extent, but it is really cool & of course pertinent to the topic.

Part 2. UK

As I explained last time, C4 hypothesizes that the motivating dispositions associated with Mary Douglas’s group-grid framework—“hierarchy-egalitarianism”(HE) and “individualism-communitarianism” (IC)—generalize across societies but expects the latent-variable indicators of those dispositions to be society specific.  C4 also anticipates that the mapping of risk perceptions on to the group-grid dispositions will vary across societies.

Accordingly, for both the UK and Australia, I’ll start with a summary of the data on the indicators and then turn to risk perception findings.

A. Indicators

In cultural cognition research, HE and IC are conceptualized as latent variables, which are measured by scales constructed by aggregating responses to attitudinal items, which are thus conceptualized as the observable latent-variable indicators.

Our goal in this work—which I conducted with Hank Jenkins-Smith, Tor Tarantola, & Carol Silva in the spring & summer of 2011—was to adapt to the UK the six-item “short form” versions of the HE and IC scales that we’ve used in studies of US samples. Successful “adaptation” means the construction of reliable scales that we have reason to believe measure the same dispositions in the UK subjects as they do in the US ones.

Reliability refers to those properties of the scale that furnish reason to believe that the items that it comprises are actually measuring some common, latent disposition. A common test of reliability is “Cronbach’s α,” which is based on inter-item correlation. A score of 0.70 or above (the top score is 1.0) is generally considered adequate.
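For concreteness, here is a minimal sketch in Python of how Cronbach's α is computed from a respondents-by-items response matrix (the simulated 6-item scale is hypothetical, just to illustrate the arithmetic):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# simulated 6-item scale driven by a single latent disposition
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 1))
responses = latent + rng.normal(size=(1000, 6))  # each item = disposition + noise
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # comfortably above the 0.70 convention
```

With six items whose pairwise correlations are around 0.5, α lands in the mid-0.80s, well past the adequacy threshold described above.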

Factor analysis is another test. There are various forms of factor analysis, but the basic idea is to determine whether the covariance patterns in the response data are consistent with the existence of the hypothesized latent variables. Because the twelve worldview items are hypothesized to be measures of two discrete latent dispositions, we expect variance in responses to be accounted for by two orthogonal "factors," onto which the HE and IC sets of items appropriately "load" (correlate, essentially; factor "loadings" are typically regression coefficients).
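As an illustration of the logic (not our actual extraction procedure), here is a simulated two-factor check in Python: twelve items driven by two independent latent dispositions, with loadings computed from the inter-item correlation matrix. The item structure and noise levels are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
he = rng.normal(size=(n, 1))  # latent hierarchy-egalitarianism disposition
ic = rng.normal(size=(n, 1))  # latent individualism-communitarianism disposition
# six indicator items per disposition: item = disposition + item-specific noise
items = np.hstack([he + rng.normal(scale=1.0, size=(n, 6)),
                   ic + rng.normal(scale=1.5, size=(n, 6))])

# extract the two largest factors from the inter-item correlation matrix
R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])  # 12 x 2 loading matrix

for i, row in enumerate(loadings):
    print(f"{'HE' if i < 6 else 'IC'} item {i % 6 + 1}: {row[0]: .2f} {row[1]: .2f}")
```

Each item loads strongly (in absolute value) on the factor corresponding to its own disposition and near zero on the other, which is the pattern the reliability argument above requires.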

Following an initial pretesting phase in which Tor did most of the heavy lifting (using his own best judgment to start, then soliciting responses from other researchers, and from pretest subjects—a form of “cognitive testing”), we felt confident enough in our UK versions of HE and IC to conduct a large general population survey. The sample consisted of 3000 individuals—1500 from England and 1500 from the US. The subjects were recruited by YouGov/Polimetrix, a leading public opinion survey firm, which administered the appropriate version (UK or US) of the survey to the subjects via the internet.

The results of these tests for both the US and the UK samples are reflected in this figure:


It shows, in effect, that for both samples the items "loaded" in patterns that suggested the expected relationship between the HE and IC sets and two latent dispositions. The Cronbach's α's for each set were also greater than 0.70 for both samples. These results furnish solid ground for concluding that the UK scales, like the US ones, are reliably measuring discrete dispositional tendencies, which manifest themselves in opposing patterns of survey-item responses. (Actually, the UK versions of the scales behave a bit better here than the US versions, which are displaying a bit more attraction to each other than they usually do!)

As I said, we also want to be confident that the dispositional tendencies being measured in the UK subjects by the UK versions of HE and IC are the same as the dispositional tendencies being measured in the US subjects by the corresponding US scales. This is the cross-cultural analog to scale validity, which refers to the correspondence between what a reliable scale is actually measuring and the phenomenon it is supposed to be measuring.

A common strategy for cross-culturally validating scales is to compare the factor or component structures across samples.  By design, each HE and IC item in the US set is matched with a corresponding HE and IC item in the UK set. The coefficient of congruence measures the similarity of the loadings of the various items on the extracted factor or component scores; a high coefficient signifies that the “factor structure” is sample “invariant”—i.e., that the relationship between the respective sets of items and the latent variable they are deemed to be measuring does not vary across the samples. The likelihood that they would just happen to exhibit this sort of structural similarity if the corresponding sets of items were not measuring the same latent variable is considered remote.

There is conventionally deemed to be sufficient ground for treating scales as measuring the same dispositions across distinct national samples when the coefficient of congruence is greater than 0.90.  The coefficients of congruence for the US and UK versions of HE and IC were 0.99 and 0.94, respectively.
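The statistic itself is simple: Tucker's coefficient of congruence is the normalized dot product of the two loading vectors. A sketch in Python, with made-up loadings for six matched items (the numbers are purely illustrative, not our actual results):

```python
import numpy as np

def congruence(x, y):
    """Tucker's coefficient of congruence between two factor-loading vectors."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# hypothetical loadings for six matched HE items in two national samples
us = np.array([0.72, 0.65, 0.70, 0.58, 0.63, 0.68])
uk = np.array([0.69, 0.61, 0.74, 0.55, 0.60, 0.71])
phi = congruence(us, uk)
print(round(phi, 2))  # well above the 0.90 convention
```

Identical loading patterns give φ = 1; the 0.90 convention mentioned above is a threshold on this statistic.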


B. Comparative culture-risk mappings 

Now the really fun stuff. What can we learn—if anything!—from comparing risk perceptions in the US & UK samples?

In the study, we solicited responses to 24 putative risk sources using the “industrial strength risk measure.” In this figure, I’ve plotted out the mean IM ratings for each sample separately: 

The respective samples’ rankings are not wildly out of synch, but there are definitely some interesting differences. People in the UK, e.g., are much more concerned about guns than are people in the US. People in the UK also appear more uptight about marijuana (surprising to me, but what do I know?) and more alarmed about immigration (huh! but I actually had an inkling of that). They’re less concerned about “tea party” sorts of risks (let’s call them)—ones associated with excessive regulation and government spending—but not by that much.

Similarities are interesting, too. Both countries are terrified of illegal drug trafficking—lame!

Both freaked out about terrorism. Of course.

Neither is very worked up about global warming. Second-hand cigarette smoke is apparently much more of a concern. In the US, climate change is viewed as posing a lesser danger to society than teen pregnancy! 

And look at childhood vaccinations: That concerns the members of both national samples the least—by far. One has to wonder whether the “vaccine hesitancy” scare is a bit trumped up….

But much much more interesting is this:

This figure shows how much cultural variance there is in each society, and how it differs across the two.

The graphs are beautifully noisy! That’s the first thing worth noting: it shows that looking at sample-wide means for risks (individual ones of which are arrayed in the same order as in the last figure—in ascending order of overall concern in the US) grossly understates how much systematic division there is within each society!

Climate change generates lots of division in both. Moreover, the character of the division is similar: hierarchical individualists and egalitarian communitarians are the most divided; hierarchical communitarians and egalitarian individualists fall in between, divided too, but less so.

Once one adds culture to the picture, moreover, it becomes clear how misleading it can be to talk about "societal" perceptions of risk on things like climate change and teen-pregnancy--the "societal means" for which conceal widely divergent assessments across cultural groups.

Immigration risks are also divisive in both societies, and terrorism too. The cultural cleavages look comparable.

But look at gun risks: lots of cultural division in the US but virtually none in the UK. See what we were saying, Mary Douglas?

There’s also more cultural division here than there on "deviancy risks"—US egalitarian individualists pooh-pooh the dangers of marijuana smoking and teenage pregnancy, as hierarchical communitarians quake.

And look again at childhood vaccines: no meaningful cultural division at all in either society. The “vaccine hesitators” might have a shared cultural view of some sort, but it’s much more specialized and boutiquey than any of the ones that figure in the risk conflicts of real consequence in these societies.

Also not a tremendous amount of variation on risks of illegal street drugs. That’s something to worry about, in my view….

There’s more, including the geoengineering experiment results, which I’ve featured in other posts and which are set out more completely in CCP Working Paper No. 92. Suffice it to say that we got results that were very comparable for both samples, as one might expect given the parallel cultural divisions in the two societies.

Last point: There’s plenty of cultural variance in the UK sample, but definitely less than there is in the US. What to make of that?

One possibility: the UK is just less culturally divided than the US. Maybe.

But another possibility is that our scales just aren’t as good at measuring cultural worldviews in the UK and thus aren’t able to discern it with the same precision there as here. 

I actually think that’s more likely—or at least a bigger part of the explanation for the differing levels of cultural conflict. After all, our measures were designed—painstakingly; it took quite a while to get scales that worked, and then to figure out how to condense them from 30 items to 12—for the US general public. I think we did a decent enough job for now in getting them to work in the UK (it wasn’t as hard as I expected!), but it would be shocking if we had managed to achieve the same level of measurement fidelity.

But in any case, there’s definitely more work to be done to figure out what’s going on. 


Part 1.

Part 3.


Caprara, G.V., Barbaranelli, C., Bermúdez, J., Maslach, C. & Ruch, W. Multivariate Methods for the Comparison of Factor Structures in Cross-Cultural Research. J. Cross-Cultural Psychol. 31, 437-464 (2000).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

ten Berge, J.M.F. Some Relationships Between Descriptive Comparisons of Components from Different Studies. Multivariate Behavioral Research 21, 29-40 (1986).

Tran, T.V. Developing Cross-Cultural Measurement (Oxford University Press, Oxford; New York, 2009).



What generalizes & what doesn't? Cross-cultural cultural cognition part 1

Since I’m getting ready to return from a trip to Europe, I thought it would be a good time to mention the work that CCP has been doing to investigate “cross-cultural cultural cognition.”

In our research, we use two scales—“Hierarchy-egalitarianism” (HE) and “Individualism-communitarianism” (IC)—to measure the “worldviews” featured in Douglas & Wildavsky’s cultural theory of risk (CTR). HE and IC (in the form of factor scores extracted from a collection of attitudinal items) are used as predictors to test various hypotheses about how group predispositions influence perceptions of risk and related facts.

 “Cross-cultural cultural cognition,” as I’m using this term, involves applying the same methods to non-U.S. samples. In this first of two posts, I’ll describe some of the key theoretical/conceptual issues involved in cross-cultural cultural cognition. In the second, I’ll show some results for studies involving test subjects in the UK and Australia.

Part 1: What generalizes and what doesn’t

The point of “cross-cultural” study of cultural cognition, of course, is to identify the extent to which the dynamics we observe in our studies generalize across societies.  But to avoid confusion, it’s necessary to frame the “generalizability” question in reasonably fine-grained terms.  The approach we are using to engage in cross-cultural study of risk perceptions addresses generalizability separately with respect to three elements of the cultural cognition framework: (1) motivating dispositions, (2) disposition indicators, and (3) culture-risk mappings.

A. Motivating dispositions

“Motivating dispositions” refer to the group affinities that orient individuals’ perceptions of risk. In the cultural cognition framework, these dispositions are the CTR worldviews that we measure with the HE and IC scales. The dispositions are described as “motivating” because they are what orient the various modes of cognition that unconsciously link cultural worldviews to perceptions of risk and related beliefs.

Cross-cultural cultural cognition—at least as I’m using the concept here—posits that the dispositions featured in CTR do generalize across societies. In other words, we should expect the worldviews of every society’s members to vary systematically along cross-cutting HE and IC dimensions that everywhere reflect the same orientations toward social institutions.

This is a strong claim.  HE and IC are simultaneously distinctive and spare. One could easily imagine that in a particular society, individuals’ preferences and expectations wouldn’t meaningfully vary along one or the other of these two dimensions; that is, one might think that particular societies would be relatively homogenous with respect to either HE or IC. In addition, one might imagine that the members of at least some societies might vary along worldview dimensions that can’t be reduced to either of these two.

But rather than get worked into a state of philosophical agitation about whether HE and IC generalize, I would treat the claim that they do as a hypothesis, and cross-cultural cultural cognition as an empirical test of it. If attempts to construct universal HE and IC measures go nowhere, then the claim that these dispositions generalize will be of philosophical interest only. If, in contrast, a project of this sort does contribute materially to explanation, prediction, and prescription across diverse societies, then no philosophical objection to universal motivating dispositions will be sufficient to refute it.

Nevertheless, my motivation for hypothesizing the universality of the HE and IC dispositions is not really that I think that claim is true. The value of the hypothesis is its contribution to systematizing empirical research. In many societies, tests of the hypothesis will likely prove successful and thus generate instructive models of risk variance. In others, it will probably fail, while still yielding insight into what is likely to work better and why.

B. Disposition Indicators

In our research, we use a latent variable modeling strategy to measure the motivating dispositions associated with Douglas’s group-grid framework. A latent variable is one that doesn’t admit of direct observation or measurement; it is measured indirectly by aggregating measures of indicators—observable variables that correlate with the latent variable.  

That’s exactly what the items that make up our HE and IC scales are—reliable and valid latent-variable indicators. Responses to them covary in patterns that are consistent with their being measures of two unobserved attitudinal orientations, which themselves cohere with other things (from other attitudes to demographic characteristics to preferences and behaviors of one sort or another) that one would expect people who hold the worldviews formed by the intersection of HE and IC to display.
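To make the measurement logic concrete, here is a minimal sketch of the aggregation strategy described above: summing or averaging responses to indicator items and checking their internal consistency with Cronbach’s alpha. The data are simulated and the item structure is invented for illustration; this is not the CCP’s actual scale-construction code.

```python
# Hypothetical sketch: forming a latent-disposition scale from Likert-type
# indicator items and checking internal consistency (Cronbach's alpha).
# All data here are simulated; item counts and names are illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-indicators matrix of numeric responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of indicator items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulate 200 respondents whose 6 item responses share one latent factor.
latent = rng.normal(size=(200, 1))
responses = latent + 0.5 * rng.normal(size=(200, 6))  # noisy indicators

scale_scores = responses.mean(axis=1)  # aggregated "worldview" score per person
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # a high alpha suggests the items tap one disposition
```

The design point is the one made in the text: the scale score is only as good as the covariance structure of its indicators. If the same items were administered in a society where they failed to tap a shared orientation, the inter-item correlations—and hence alpha—would collapse, and the aggregate score would be neither reliable nor valid.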

Should we expect the indicators of the HE and IC dispositions to generalize across societies? I certainly wouldn’t.

Our scales work for members of the U.S. population because they capture reasonably well certain words that contemporary Americans use to express their commitments. But that’s just a matter of historical happenstance. Those same statements (e.g., “[i]t seems like the criminals and welfare cheats get all the breaks, while the average citizen picks up the tab”) might not even make sense to, much less divide people with opposing cultural outlooks in, Sweden or Brazil. If so, scales formed by aggregation of responses to those items would be neither reliable nor valid.

That wouldn’t necessarily mean, though, that there aren’t hierarchical individualists, hierarchical communitarians, egalitarian individualists, and egalitarian communitarians in those countries. It would mean only that if there are, measuring their dispositions would require alternative indicators—such as attitudinal items the wordings of which capture how Swedes or Brazilians with those outlooks express their commitments.

I’ll say more about that—and in particular about how one can determine whether society-specific indicators are measuring the same dispositions across societies—in the next post. But for now, it is enough to say that it’s just a mistake to think the cross-cultural study of cultural cognition demands not only that the motivating dispositions associated with Douglas’s group-grid scheme be universal but also that the indicators used to measure them be uniform across societies.

C.  Cultural mappings of risk perception

In my view, there’s no reason to expect the mappings of risk perceptions onto worldviews to generalize across societies either.  Like the items used to form the HE and IC scales, what risks mean in relation to group-grid worldviews will likely be a matter of contingent historical circumstances and thus vary across place and over time.

Take gun risks, for example. The “gun debate” in American society is one over competing risk claims: the assertion that widespread gun possession increases the incidence of gun accidents and crime, on the one hand; versus the argument that gun control undermines the ability of law-abiding citizens to protect themselves from violent predation, on the other. Relying on CTR, Donald “Shotgun” Braman and I have conjectured that egalitarian communitarians would be motivated to worry more about the risks associated with too few restrictions on guns, and hierarchical individualists about the risks associated with too many, and our studies support that hypothesis.

Some commentators, including Mary Douglas, have expressed puzzlement over this finding. They asserted that hierarchists should support restriction of private gun possession in line with their general commitment to social regimentation and control of individuals.

This expectation, we replied, overlooks the distinctive history of guns in the U.S.: their association with Southern honor norms; their use in settlement of the western frontier; their role in enabling resistance to Reconstruction in the 19th Century and to civil rights in the 20th. Against this background, aversion to guns conveys a recognizable egalitarian style, and enthusiasm for them (particularly among white males) a recognizable hierarchical one. But those meanings are specific to the U.S.—and thus suggest nothing about how gun risk perceptions will map onto group-grid in some other society having an entirely different historical experience with guns.

Again, it is a mistake to think that CTR, to be meaningfully cross-cultural, demands that who fears what and why generalize across societies. It requires only that the diverse risk perceptions that people form—across societies, or within a particular society over time—all be meaningfully connected to the motivating dispositions featured by group-grid.

Or at least that seems to me like the most plausible and profitable conjecture to pursue by empirical testing.

Indeed, the prospect of identifying cross-cultural divergences in how risks map onto the HE and IC worldviews is what excites me most about extending our methods to non-U.S. samples.

Within any society, the fraction of risk issues that provoke cultural conflict relative to the ones that could but don’t is always small. The primary mission of the science of science communication, in my view, is to understand the forces that divert this small set of issues from the pathways of collective-knowledge transmission that usually guide diverse citizens to the best available understanding of how the world operates.

Ideal for acquiring such knowledge would be a rich cross-cultural data set that links uniform risk-perception predictors—the cultural disposition scales derived from society-specific indicators—to distinctive patterns of variance across societies. With such data, researchers could formulate and test hypotheses about what happened in one society but not in another to cause the same putative risk to become a source of cultural contestation.

On the basis of what such study revealed, we’d then be in a position to systematize our knowledge of how to design procedures that hold the precipitants of such conflict in check or counteract them when preemptive interventions have failed.

Part 2.

Part 3.


Coming soon ... cross-cultural cultural cognition

Am traveling in Europe & so not getting as much opportunity to post. But have a couple of posts planned on "cross-cultural cultural cognition," which I should manage to get up soon.

So stay tuned.

Meanwhile check out this great run in Bergen, Norway.


Lecture today at TU Delft

Will present some results of "cross-cultural cultural cognition" studies. Indeed, I'll post on that presently.


A not so "tasty" helping of pollution for the science communication environment -- at the local grocery store

Compliments of a colleague, who snapped this photo in a New Haven food market.

Keith Kloor has been writing perceptively on the anti-GMO campaign recently (here & here, e.g.), as has David Tribe amidst his regular enlightening posts on all matters GMO & GMO-related.



The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)  

This is post no. 2 on the question “Is cultural cognition a bias,” to which the answer is, “nope—it’s not even a heuristic; it’s an integral component of human rationality.”

Cultural cognition refers to the tendency of people to conform their perceptions of risk and other policy-consequential facts to those that predominate in groups central to their identities. It’s a dynamic that generates intense conflict on issues like climate change, the HPV vaccine, and gun control.

Those conflicts, I agree, aren’t good for our collective well-being. I believe it’s possible and desirable to design science communication strategies that help to counteract the contribution that cultural cognition makes to such disputes.

I’m sure I have, for expositional convenience, characterized cultural cognition as a “bias” in that context. But the truth is more complicated, and it’s important to see that—important, for one thing, because a view that treats cultural cognition as simply a bias is unlikely to appreciate what sorts of communication strategies are likely to offset the conditions that pit cultural cognition against enlightened self-government.

In part 1, I bashed the notion—captured in the Royal Society motto nullius in verba, “take no one’s word for it”—that scientific knowledge is inimical to, or even possible without, assent to authoritative certification of what’s known.

No one is in a position to corroborate through meaningful personal engagement with evidence more than a tiny fraction of the propositions about how the world works that are collectively known to be true. Or even a tiny fraction of the elements of collective knowledge that are absolutely essential for one to accept, whether one is a scientist trying to add increments to the repository of scientific insight, or an ordinary person just trying to live.

What’s distinctive of scientific knowledge is not that it dispenses with the need to “take it on the word of” those who know what they are talking about, but that it identifies as worthy of such deference only those who are relating knowledge acquired by the empirical methods distinctive of science.

But for collective knowledge (scientific and otherwise) to advance under these circumstances, it is necessary that people—of all varieties—be capable of reliably identifying who really does know what he or she is talking about.

People—of all varieties—are remarkably good at doing that. Put 100 people in a room and tell them to solve, say, a calculus problem, and likely one will genuinely be able to solve it and four will mistakenly believe they can. Let the people out 15 minutes later, however, and it’s pretty likely that all 100 will know the answer. Not because the one who knew will have taught the other 99 how to do calculus. But because that’s the amount of time it will take the other 99 to figure out that she (and none of the other four) was the one who actually knew what she was talking about.

But obviously, this ability to recognize who knows what they are talking about is imperfect. Like any other faculty, too, it will work better or worse depending on whether it is being exercised in conditions that are congenial or uncongenial to its reliable functioning.

One condition that affects the quality of this ability is cultural affinity. People are likely to be better at “reading” people—at figuring out who really knows what about what—when they are interacting with others with whom they share values and related social understandings. They are, sadly, more likely to experience conflict with those whose values and understandings differ from theirs, a condition that will interfere with transmission of knowledge.

As I pointed out in the last post, cultural affinity was part of what enabled the 17th and early 18th Century intellectuals who founded the Royal Society to overturn the authority of the prevailing, nonempirical ways of knowing and to establish in their stead science’s way. Their shared values and understandings underwrote both their willingness to repose their trust in one another and (for the most part!) not to abuse that trust. They were thus able to pool, and thus efficiently build on and extend, the knowledge they derived through their common use of scientific modes of inquiry.

I don’t by any means think that people can’t learn from people who aren’t like them. Indeed, I’m convinced they can learn much more when they are able to reproduce within diverse groups the understandings and conventions that they routinely use inside more homogenous ones to discern who knows what about what. But evidence suggests that the processes useful to accomplish this widening of the bonds of authoritative certification of truth are time consuming and effortful; people sensibly take the time and make the effort in various settings (in innovative workplaces, e.g., and in professions, which use training to endow their otherwise diverse members with shared habits of mind). But we should anticipate that the default source of "who knows what about what" will for most people most of the time be communities whose members share their basic outlooks.

The dynamics of cultural cognition are most convincingly explained, I believe, as specific manifestations of the general contribution that cultural affinity makes to the reliable, everyday exercise of the ability of individuals to discern what is collectively known. The scales we use to measure cultural worldviews likely overlap with a large range of more particular, local ties that systematically connect individuals to others with whom they are most comfortable and most adept at exercising their “who knows what they are talking about” capacities.

Normally, too, the preference of people to use this capacity within particular cultural affinity groups works just fine.

People in liberal democratic societies are culturally diverse; and so people of different values will understandably tend to acquire access to collective knowledge within a large number of discrete networks or systems of certification. But for the most part, those discrete cultural certification systems can be expected to converge on the best available information known to science. This has to be so; for no cultural group that consistently misled its members on information of such vital importance to their well-being could be expected to last very long!

The work we have done to show how cultural cognition can polarize people on risks and other policy-relevant facts involves pathological cases. Disputes over matters like climate change, nuclear power, the HPV vaccine, and the like are pathological both in the sense of being bad for people—they make it less likely that popularly accountable institutions will adopt policies informed by the best available information—and in the sense of being rare: the number of issues that admit of scientific investigation yet generate persistent divisions across the diverse networks of cultural certification of truth is tiny in relation to the number that reflect the convergence of those same networks.

An important aim of the science of science communication is to understand this pathology. CCP studies suggest that such disputes arise in cases in which facts that admit of scientific investigation become entangled in antagonistic cultural meanings—a condition that creates pressure (incentives, really) for people to selectively seek out and credit information conditional on its supporting rather than undermining the position that predominates in their own group.

It is possible, I believe, to use scientific methods to identify when such entanglements are likely to occur, to structure procedures for averting such conditions, and to formulate strategies for treating the pathology of culturally antagonistic meanings when preventive medicine fails. Integrating such knowledge with the practice of science and science-informed policymaking, in my opinion, is vital to the well-being of liberal democratic societies.

But for the reasons that I’ve tried to suggest in the last two posts, this understanding of what the science of science communication can and should be used to do does not reflect the premise that cultural cognition is a bias. The discernment of “who knows what about what” that it enables is essential to the ability of our species to generate scientific knowledge and for individuals to participate in what is known to science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to a more effortful and more intricate mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

We need to use science to protect the science communication environment from toxins that disable us from using faculties integral to our rationality. An essential step in the advance of this science is to overcome simplistic pictures of what our rationality consists in. 

Part 1