
Tuesday
Dec 27, 2016

Pathogen disgust & GM-food and vaccine-risk perceptions ... a fragment

From something I am working on ... stay tuned:

3.1. Preliminary findings

a. PDS and political outlooks. Commentators often report that disgust sensitivities, including the type measured by the “pathogen disgust scale” (PDS), are correlated with left-right political orientations (Terrizzi et al., 2013; but see Tybur et al. 2010). In this large, nationally diverse sample, however, the relationship between PDS scores and political conservatism was trivially small (r = 0.09, p < 0.01).

 

b. Vaccine and GM risk perceptions and political outlooks. In the popular media, both vaccine and GM-food risk perceptions are frequently depicted as associated with “liberal” outlooks (e.g., Shermer 2013). Empirical data do not support this view (e.g., Kahan 2015; Kahan 2016). In this study, too, there was no meaningful correlation (r = 0.00, p = 0.96) between GM-food risk perceptions and political outlooks. For vaccines, there were small-to-moderate correlations, but in the direction contrary to the popular-commentary position: right-leaning scores on the political-outlook measure predicted both more concern over vaccine risks (ISRPM: r = 0.09, p < 0.01) and less support for mandatory vaccination (r = -0.24, p < 0.01).
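For concreteness, here is a minimal sketch of how correlational checks like the ones reported above could be run; the file and column names are hypothetical stand-ins, not the study's actual variables.

```python
# Minimal sketch of the correlational checks reported above. File and column
# names are hypothetical stand-ins, not the study's actual variables.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey.csv")  # hypothetical respondent-level data

pairs = [
    ("pds", "conservrepub"),           # pathogen disgust scale vs. left-right outlook
    ("gm_risk", "conservrepub"),       # GM-food risk perception vs. outlook
    ("vaccine_risk", "conservrepub"),  # vaccine risk perception (ISRPM) vs. outlook
    ("mandatory_vax", "conservrepub"), # support for mandatory vaccination vs. outlook
]

for x, y in pairs:
    valid = df[[x, y]].dropna()
    r, p = pearsonr(valid[x], valid[y])
    print(f"{x} ~ {y}: r = {r:.2f}, p = {p:.3f}")
```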

Monday
Dec 26, 2016

Meta probabilistic thinking quiz...

About what percentage of the US population will get the correct answer (i.e., all 3 correct)? (a) 0-10%; (b) 11%-25%; (c) 26%-50%; (d) 51%-75%; or (e) 76%-100%?

Will post answer later today

 

Saturday
Dec 24, 2016

Weekend reading list

Hey--this is just like the strategy followed by commenters on this blog!

 

Canadians know a thing or two about cultural conflict, so this is probably worth taking a close look at.

 

Are individualist societies doomed? Find out.

Thursday
Dec 22, 2016

Another paper crosses line from "in press" to "in print": Actively Open-minded Thinking & climate change polarization

Journals are cranking up publication speed to meet holiday demand.  And this one is *free* too.



Wednesday
Dec 21, 2016

Zika & culturally antagonistic memes -- now in print! Great stocking stuffer

Hurry up--get yours before it sells out! (Best thing: it's free!)



Tuesday
Dec 20, 2016

Sufficient evidence of disgust-sensitivity & GM-food & vaccine risk perceptions ... a fragment

From a conference paper due imminently ... more to come anon

2. Study

2.1. Inference strategy

This paper rests on a simple theoretical premise: that rejection of a “null hypothesis” with respect to the correlation between pathogen disgust sensitivity, on the one hand, and GM-food and vaccine risk perceptions, on the other, is not sufficient to support the conclusion that disgust sensitivity meaningfully explains these risk perceptions. Like all valid latent-variable instruments, any scale used to measure pathogen disgust sensitivity will be imperfect. Such a scale will be highly correlated with, and thus reliably measure, a particular form of disgust sensitivity. But it can still be expected to correlate weakly or even modestly with additional negative affective dispositions. As a result, there can be modest yet practically meaningless correlations between the pathogen disgust sensitivity scale and all manner of risk perceptions that excite negative affective reactions unrelated to disgust.

A comparative analysis is thus appropriate. If disgust genuinely explains perceived risks of vaccines and GM foods, the relationship between a valid measure of pathogen disgust (PD) and those putative risk sources should be comparable to the relatively large ones between PD and attitudes one has good reason to believe are grounded in disgust. By the same token, if the correlations between the PD measure and GM-food and vaccine risk perceptions, respectively, are comparable in magnitude to the ones between the PD measure and putative risk sources that do not plausibly excite disgust, then there will be less reason to conclude that pathogen disgust sensitivity plays an important role in explaining differences in the perceived risk of GM foods and vaccines.

This was the inference strategy that informed the design of this study....
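As a rough illustration of that comparative strategy (not the study's actual analysis code), one might line up the PDS correlations for the target risk perceptions against those for "positive control" attitudes plausibly grounded in disgust and "negative control" risks that plausibly aren't; all variable names below are hypothetical.

```python
# Illustrative sketch of the comparative inference strategy (hypothetical
# file and variable names; not the study's actual analysis).
import pandas as pd

df = pd.read_csv("study.csv")  # hypothetical respondent-level data

targets = ["gm_food_risk", "vaccine_risk"]
positive_controls = ["public_smoking_risk", "illegal_drug_risk"]    # plausibly disgust-grounded
negative_controls = ["elevator_shaft_risk", "airline_travel_risk"]  # not plausibly disgust-grounded

corrs = df[targets + positive_controls + negative_controls].corrwith(df["pds"])
print(corrs.abs().sort_values(ascending=False).round(2))

# If the target correlations cluster with the negative controls rather than
# the positive controls, the disgust explanation gains little support.
```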

Monday
Dec 19, 2016

Sad news . . . 

CCP founding member Ann Richards (TC) passed away last night. I estimate she was happy every day of her life (of 15 yrs) except for the last 3. I predict that I will, after a day or 2, be happier every day for the rest of mine as a result of having had the benefit of her companionship...


Sunday
Dec 18, 2016

Weekend update: *This* is what scientific *dis*sensus looks like...

The two scientists depicted in this photograph are researchers in the culturally divisive "cats or birds?" field, & they are performing a so-called adversarial collaboration.

 

Friday
Dec 16, 2016

Weird ... does high disgust sensitivity mitigate political polarization??...

I did a couple of posts a while back (here & here) on disgust and GM-food- and vaccine-risk perceptions.

The upshot was that, contrary to the argument advanced by some scholars and some popular commentators, neither of these risk perceptions appeared to be distinctively related to disgust sensitivities. These perceptions, and some related policy preferences, were not any more meaningfully correlated with disgust sensitivity than were myriad other risk perceptions and policy preferences that aren’t plausibly viewed as disgust-related (e.g., falling down elevator shafts, flying on commercial airliners, raising income taxes on the wealthy, enacting campaign finance laws, etc.).

But here’s another thing: the disgust sensitivity measure we used--the so-called “pathogen disgust” scale (PDS), which is supposed to measure a disposition to be disgusted by, and hence afraid of, sources of bodily invasion--has some truly weird interactions with political outlooks.

Take a look for yourself: 

Basically, increasing disgust sensitivity makes the group that is otherwise inclined to perceive low risk, or to express low support for risk-abating policies, experience an inversion of that sensibility. As a result, on issues marked by substantial political polarization, positions converge among the citizens highest in disgust sensitivity.
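A rough way to probe that pattern (a sketch with hypothetical variable names and z-scored measures assumed, not the original analysis) is to fit an interaction model and compare the predicted left-right gap at low vs. high disgust sensitivity:

```python
# Sketch of an interaction model for the convergence pattern described above.
# Variable names are hypothetical; measures are assumed to be z-scored.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical data

model = smf.ols("climate_risk ~ conservrepub * pds", data=df).fit()

grid = pd.DataFrame({
    "conservrepub": [-1, 1, -1, 1],  # left vs. right, in SD units
    "pds":          [-1, -1, 1, 1],  # low vs. high disgust sensitivity
})
grid["predicted_risk"] = model.predict(grid)
print(grid)
# Convergence shows up as a smaller left-right gap in predicted risk
# at pds = +1 than at pds = -1.
```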

Why would that be? 

What’s especially weird is that PDS is supposed to predict political conservatism (it didn’t in our survey; the relationship between disgust and conservative outlooks was trivial in magnitude: r = 0.09); yet here we have high-disgust conservatives clearly behaving more like liberals on climate change, and high-disgust liberals behaving more like conservatives.

Maybe I just don’t feel very imaginative today, but I am not able to come up with a story that fits the data.

Instead I’m experiencing a bit of uncertainty about whether I should really be trusting the “pathogen disgust” scale. It seems, basically, to be eliciting a kind of generic survey agreement bias; its influence is detectable only in that portion of the population whose members aren’t already inclined to agree with the survey item and who thus can move without the constraint of a ceiling effect in the outcome measure. . . .

But what do others think?

 

Thursday
Dec 15, 2016

New NAS report on #scicomm

Here's something to read & discuss ...

Monday
Dec 12, 2016

Gore's sequel -- good idea or bad?  

I'll leave it to the 14 billion regular readers of this blog: you tell me--useful, helpful, etc., or not.

Saturday
Dec 10, 2016

Weekend update: birth announcement--twins (sort of) on politically motivated reasoning

The Emerging Trends review commentary on "politically motivated reasoning" is now officially published.

As you can see, the working paper turned out to be Siamese twins, who were severed at the spleen & published as a "two-part" set:

 

  • Kahan, D. M. (2016). The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. Emerging Trends in the Social and Behavioral Sciences. John Wiley & Sons.
  • Kahan, D. M. (2016). The Politically Motivated Reasoning Paradigm, Part 2: Unanswered Questions. Emerging Trends in the Social and Behavioral Sciences. John Wiley & Sons.


Thursday
Dec 8, 2016

Making science documentaries that matter in a culturally divided society (lecture summary plus slides)

Here is the gist of my presentation at the World Congress of Science and Factual Producers in Stockholm on 12/7 (slides).

1.  I can make movies, too! Plus “identity protective cognition.” I know most of you are expert filmmakers. Well, it turns out I made a movie once myself. 

It was “produced” for use in the study featured in “They Saw a Protest.”  The production values, I’m sure, seem quite low. There are two reasons for that. One is that the production values are low. The other is that swinging my recording device around erratically helped to generate a montage of scenes that, with suitable editing, could be made to plausibly appear to be scenes from either an anti-abortion protest outside an abortion clinic or an anti-“Don’t ask, don’t tell” one held outside a college recruitment center.

Subjects, instructed to assume the role of juror, were assigned either to the “abortion clinic” condition or the “recruitment center” condition.

As you can see, subjects’ perceptions of the coercive nature vel non of the protestors, and the corresponding justification (or lack thereof) on the part of the police for dispersing the demonstrators, varied depending on the condition to which the subjects were assigned and on their cultural values: subjects of opposing values disagreed with one another on key facts when they were assigned to the same condition; at the same time, subjects who shared cultural values disagreed with one another when assigned to different conditions.

The resulting pattern of perceptions reflects identity-protective cognition. That is, subjects of particular values gravitated toward assessments of what they saw that conformed to the position most congruent with their groups’ stance on the cause of the protestors.

2. Identity-protective reasoning on climate change, etc. The gist of my talk is that many public controversies over risk fit this same pattern. That is, when appraising societal risks, individuals of opposing cultural outlooks can be expected to form perceptions of fact that reflect and reinforce their cultural allegiances.

As an example, consider the results of “Cultural Cognition of Scientific Consensus.” That study found that “hierarchical individualists” and “egalitarian communitarians” were both inclined to selectively recognize or dismiss the expertise of the featured scientists in patterns that corresponded to whether the attributed position of the putative expert--on climate change, nuclear waste disposal, or concealed handguns--was consistent with or at odds with the prevailing position in the subjects’ cultural groups.

This is identity-protective cognition, too. Like the subjects in “They Saw a Protest,” the subjects in “Cultural Cognition of Scientific Consensus” selectively affirmed or disputed the expertise of the featured scientists depending on whether the scientist’s position cohered with the one prevailing in the subjects’ cultural group.

3. System 2 motivated reasoning. The “identity-protective cognition” thesis’s primary competitor is the “bounded rationality” thesis. The latter holds that disagreement among members of the public is attributable to people’s overreliance on “System 1” heuristic reasoning. This position predicts that as individuals become more proficient in the deliberate, conscious, analytic form of reasoning associated with “System 2,” they ought to converge on the best available evidence on any given societal risk.

In fact, though, as individuals’ scores on any manner of critical-reasoning measure increase, so too does the intensity with which they affirm their group’s view and denigrate the other side’s.

This result is more consistent with the “identity protective cognition” thesis, which holds that individuals can be expected to devote all their cognitive resources to forming and persisting in the position that predominates in their group as a way of protecting their status within the group.

The problem of non-convergence is a consequence not of too little rationality but of too much. Forced to choose between a truth-convergent and an identity-protective form of reasoning, actors whose personal beliefs have zero impact on their (or anyone else’s) exposure to the putative risk at issue predictably gravitate toward forming beliefs that secure for themselves the benefits of holding group-convergent ones.

But if individually rational, this form of information processing remains collectively irrational. It means that members of a diverse democratic society are less likely to converge on the best available evidence that is essential to the well-being of all. Nevertheless, the collective good associated with truth-convergent reasoning doesn’t change the psychic incentive of any individual to continue to engage information in a manner that is group-convergent instead.

This is the tragedy of the science communication commons.

4. Lab remedies.  These dynamics impose severe constraints on the use of science documentaries to inform people on controversial issues. Can anything be done to steer members of diverse groups away from this form of information processing?  Here are a couple of possibilities.

a. Two-channel communication. One is the “two channel” science communication model. This model posits that individuals assess information along two channels--one dedicated to the content of the information and the other to its identity-expressive quality. The two must be in sync; if they interfere with each other--if individuals perceive that the information on the “meaning” channel signifies that assent to the “content” of the information risks driving a wedge between them and others who share their cultural outlooks--then they will fail to assimilate the information transmitted on the content channel, no matter how clearly it is conveyed.

The nature of the dynamics involved is illustrated by the CCP study on geoengineering and cultural polarization. Whereas the “anti-pollution” message conveyed a negative or hostile meaning (“game over”; “we told you so”) to individuals predisposed to climate skepticism, the “geoengineering research” message conveyed an identity-affirming one (“yes we can”; “more of the same”). Consistent with these opposing meanings, subjects in the “anti-pollution” condition displayed attitude polarization relative to the control group, while those in the “geoengineering” condition displayed diminished polarization.

b. Science curiosity. Individuals who are “science curious” process information differently from their less curious cultural peers. They will choose, for example, to read news stories that report exciting or novel scientific findings even when doing so means exposure to information that is hostile to their cultural identity. This plausibly explains why science curiosity, of all the dispositions associated with science comprehension, does not aggravate but rather appears to mitigate cultural polarization.

A useful communication plan, then, might focus on maximizing the congeniality of information to science-curious individuals in the expectation that they, when they interact with their cultural groups, will convey--by word and deed--that they have confidence in climate science, a message likely to carry more weight than “messages” delivered by put-up “messengers” with whom group members lack a cultural affinity.

5. What to do? You tell me! But these are very tentative and maddeningly general pieces of advice. What would a program that employs them look like?

I honestly don’t know! I know nothing in particular about making science films. What I do know is something about the general dynamics of science communication; for those insights to be translated into real-world practice would require the “situation sense” of individuals who are intimately involved in communication within particular real-world settings.

My panel mate Sonya Pemberton is in that position. I’ll let her speak to how she is using the “two-channel model” and the phenomenon of “science curiosity” to advance her science communication objectives.

Once she has, moreover, I will happily join efforts with her or anyone else pursuing these sorts of reflective, well-considered judgments, and do what I am best equipped to do: furnish tailored empirical information fitted to enabling that professional to make the best decisions she can.

 

Monday
Dec 5, 2016

Off to Stockholm to discuss the science of science filmmaking (& of course, "post truth")

Am off for a week to Stockholm to give a couple of talks & participate in panel discussions. The audience for the first is attendees of the World Congress of Science and Factual Producers. Here's the synopsis of what I'll be saying:

Want to make a difference? Then, don’t “message” the public; satisfy its curiosity

 Can science filmmakers promote public acceptance of the best evidence relating to the reality of human-caused climate change and other disputed science issues? Maybe, but not in the manner that one might think.  In particular, it is a mistake to believe that the simple presentation of factually accurate information, even in a dramatically compelling form, will change people’s minds. Research on cultural cognition shows that most individuals can be expected to selectively credit and discredit such information in patterns that reflect and reinforce the factual positions that predominate within their cultural groups. Indeed, this form of bias, experimental data show, grows in intensity as individuals become more adept at making sense of scientific information. Nevertheless, a segment of the general population appears to be relatively immune to these dynamics. These individuals are ones who possess the highest levels of science curiosity, a general disposition to seek out and consume scientific information for personal pleasure. Science-curious individuals are the core audience for excellent science films.  Although relatively small in number, these individuals occupy a potentially critical niche in the ecology of political opinion formation, since they are situated to credibly vouch for the validity of the best evidence within their cultural communities.  The strategic upshot is that science filmmakers ought to concentrate not on “messaging” the general public but rather on simply making excellent films that satisfy their core audience's distinctive appetite to know what is known.  The new science of science communication, moreover, can help filmmakers unlock the knowledge-promoting energy of science curious citizens by furnishing filmmakers with tools they can use to make their films as appealing to as culturally diverse an audience of viewers as possible.

Somehow this got revised in the program into a statement suggesting that I hold the position that science filmmakers are "all wrong" & that I'm going to show them how to do it by presenting research "demolishing" what they believe.... I'd never say that, and that's not the philosophy of the CCP Science of Science Filmmaking Initiative. So I'll deal with a bit of "post-truth" fact correction at the outset of my talk, I suppose. But it will be a lot of fun, I'm sure.

Then there's a second talk for SVT, the Swedish public television producer, on misinformation. The 14 billion readers of this blog know how I feel about that.

I'll try to remember to send postcards!

Saturday
Dec 3, 2016

Weekend update: Q & A at Nature

The Trump victory and the OED's addition of "post-truth" to its latest edition have resulted in an "all talking heads on deck" alert.

Read the interview.

Friday
Dec 2, 2016

Is cultural cognition an instance of "bounded rationality"? A ten-yr debate

This is basically what I remember saying at a workshop at William & Mary co-sponsored by the Law School & the Political Science Dep't a couple of weeks ago. Slides here.

1. An old but continuing debate.  The paper you read for this workshop—Motivated Numeracy and Enlightened Self Government, Behavioural Policy (in press)—originates in a debate that started 10 yrs ago.

A group of us (me, Paul Slovic, Donald Braman, and John Gastil) had written a critique of Cass Sunstein’s then-latest book Laws of Fear.  In that book, Sunstein had attributed all manner of public conflict over risk to the public’s overreliance on “System 1” heuristic reasoning. The remedy, in Sunstein’s view, was to shift as much risk-regulatory power as possible to politically insulated expert agencies, whose members could be expected to use conscious, effortful “System 2” information processing.

Our response—Fear of Democracy: A Cultural Evaluation of Sunstein on Risk, Harvard L. Rev., 119: 1071-1109—criticized Sunstein for ignoring cultural cognition, which of course attributes a large class of such conflicts to the impact that cultural allegiances play in shaping diverse individuals’ risk perceptions.

The costs of ignoring cultural cognition, we argued, were two-fold. 

Descriptively, without some mechanism that accounts for individual differences in information processing, Sunstein could not explain why so many risk controversies (from climate change to gun control to nuclear power to the HPV vaccine) involve conflicts not between the public and experts but between different segments of the public.

Prescriptively, ignoring cultural cognition undermined Sunstein’s central recommendation to hand over all risk-regulatory decisionmaking to independent expert risk regulators. That recommendation presupposed that all disagreements between the public and experts originated in the public’s bounded rationality--a defect that, it was reasonable to assume, could not be remedied by any feasible intervention and that generated factual errors unentitled to normative respect in lawmaking.

Cultural cognition, we argued, showed that public risk perceptions on many issues were rooted in diverse citizens’ values.  It wasn’t obvious that expert decisionmaking was “better” than public decisionmaking on risks originating in publicly contested worldviews. Nor was it obvious that conflicts originating in conflicting worldviews could not be resolved by democratic decisionmaking procedures aimed at helping culturally diverse citizens to arrive at shared perceptions of the best available evidence on the dangers that society faces.

In his (very gracious, very intelligent) reply, Cass asserted that cultural cognition could simply be assimilated to his account of the reasoning deficits that distort public decisionmaking: “I argue,” he wrote, “that insofar as it produces factual judgments, ‘cultural cognition’ is largely a result of bounded rationality, not an alternative to it.” “[W]hile it is undemocratic for officials to neglect people’s values, it is hardly undemocratic for them to ignore people’s errors of fact” (Sunstein 2006).

This position—that cultural cognition and affiliated forms of motivated reasoning are rooted in “bounded rationality"—is now the orthodox view in decision science (e.g., Lodge & Taber 2013). 

But we weren’t sure it was right.  As plausible as the claim seemed to be, it hadn’t been empirically tested.  So we set out to determine, empirically, whether the forms of information processing that are characteristic of cultural cognition really are properly attributed to overreliance on heuristic reasoning.

2. A ten-year research program. The answer we arrived at over the course of a decade of research was that cultural cognition is not appropriately attributed to overreliance on the form of heuristic information processing associated with “System 1” reasoning. On the contrary, the individuals in whom cultural cognition exerts the strongest effects are those most disposed to use conscious, effortful, “System 2” reasoning.

This conclusion was supported by two testing strategies.

The first was the use of observational or survey methods. In these studies we simply correlated various measures of System 1/System 2 reasoning dispositions with public perceptions of risk and related facts. 

If public conflict over risk is a consequence of “bounded rationality,” then one should expect that the individuals who evince the strongest disposition to use System 2 reasoning will form risk perceptions more consistent with expert ones than will individuals who evince the strongest disposition to use System 1 forms of information processing.

In addition, one would expect polarization over contested risks to abate as individuals’ System 2 reasoning proficiency increases: those individuals can be expected to “go with the evidence” and refrain from “going with their gut,” which is filled with heuristic-reasoning crap like “what do other people like me think?”
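One simple way to operationalize that test (a sketch with hypothetical file and column names, not CCP's actual code) is to compare the left-right gap in risk perception across terciles of a System 2 proficiency measure such as the Cognitive Reflection Test:

```python
# Sketch: does the left-right gap in climate risk perception shrink (bounded
# rationality) or grow (cultural cognition) as System 2 proficiency rises?
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey.csv")

df["crt_tercile"] = pd.qcut(df["crt"], 3, labels=["low", "medium", "high"])
df["right_leaning"] = df["conservrepub"] > df["conservrepub"].median()

means = (df.groupby(["crt_tercile", "right_leaning"], observed=True)["climate_risk"]
           .mean()
           .unstack("right_leaning"))
means["left_right_gap"] = means[False] - means[True]
print(means.round(2))
# A gap that widens from "low" to "high" is the polarization pattern reported
# in Kahan, Peters et al. (2012) and the other studies cited below.
```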

But in fact, those predictions are not borne out by the evidence.

In multiple studies, we found that the individuals who scored highest on one or another measure of the disposition to use conscious, effortful “System 2” information processing were in fact the most polarized on contentious risk issues, including the reality of climate change, the hazards of fracking, the danger of allowing citizens to carry concealed handguns etc. (Kahan, Peters et al. 2012; Kahan 2015; Kahan & Corbin 2016).

This recurring finding is inconsistent with the “bounded rationality” conception; it fits better with the “cultural cognition thesis,” which posits that individuals can be expected to form identity-protective beliefs and to use all of the cognitive resources at their disposal to do so.

But to nail this inference down, we also conducted a series of experiments, the second type of testing strategy by which we probed Sunstein’s and others’ “bounded rationality” conception of cultural cognition and cognate forms of motivated reasoning.

These experiments consistently showed that individuals highest in the critical reasoning dispositions associated with System 2 information processing were using their cognitive proficiencies to ferret out evidence consistent with their cultural or ideological predispositions and to rationalize the peremptory dismissal of evidence inconsistent with the same (e.g., Kahan 2013).

Motivated Numeracy and Enlightened Self-government (Kahan, Peters et al. in press) reports the results of one of those studies.

3. So what’s the upshot? The original debate--over whether cultural cognition is a consequence of overreliance on System 1 heuristic processing--has been resolved, in my opinion. Insofar as the individuals who demonstrate the greatest disposition to use System 2 reasoning are also the ones who most strongly evince cultural cognition, we can be confident that it is not a “cognitive bias.”

But is it a socially desirable form of information processing on socially contested risks?

That’s a different question, one my own answer to which has been very much reshaped by the course of the “Ten Year Debate.”

It is in fact perfectly rational at the individual level to engage information about societal risks in an identity-protective rather than a truth-convergent manner. What an individual personally believes about climate change, e.g., won’t affect the risk she or anyone she cares about faces; whether as consumer, voter, or public discussant, her personal behavior will be too inconsequential to matter.

But given what positions on climate change and other societal risk issues have come to signify about who she is and whose side she is on in a perpetual struggle for status among competing cultural groups, a person who forms a position out of line with her cultural peers risks estrangement from the people on whom she depends for emotional and material support.

One doesn’t have to be a science whiz to get this.  But if one is endowed with the capacity to make sense of evidence in the manner that is associated with System 2 information processing, it is predictable that she will use those cognitive resources to achieve the everyday personal advantages associated with the congruence between her beliefs and those of her cultural peers.

Of course, if everyone does this all at once, we are indeed screwed.  In that situation, diverse citizens and their democratically accountable representatives won’t converge, or converge nearly as quickly as they should, on the best evidence on the risks they genuinely face. 

But sadly, this fact won’t change the psychic incentives that individuals have to use the forms of reasoning that most reliably connect their beliefs to the positions that signify membership in, and loyalty to, the identity-defining groups to which they belong.

This is the tragedy of the science communications commons.

We should do something to dispel this condition.  But what?

That’s a hard question. But it’s one for which an answer won't be forthcoming if we rely on accounts of public risk perception that attempt to assimilate cultural cognition into the “public uses System 1, experts use System 2” framework.

I suspect Cass Sunstein by this point would largely agree with everything I’m saying. 

Or at least I hope he does, for the project to overcome “the tragedy of the science communications commons” is one that demands the fierce attention of the very best scholars of public risk perception and science communication.

References

Kahan, D.M. & Corbin, J.C. A Note on the Perverse Effects of Actively Open-minded Thinking on Climate Change Polarization. Research & Politics, 10.1177/2053168016676705 (2016).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Behavioural Policy (in press).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Clim. Change 2, 732-735 (2012).

Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Evaluation of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006).

Lodge, M. & Taber, C.S. The Rationalizing Voter (Cambridge University Press, Cambridge; New York, 2013).

Sunstein, C.R. Laws of Fear: Beyond the Precautionary Principle (Cambridge University Press, Cambridge, UK; New York, 2005).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).

Wednesday
Nov 30, 2016

"They saw an election"-- my 2 cents on election result

Monday
Oct 24, 2016

Law & Cognition 2016, Session 7.5: probing the SSK data

No sooner had I finished saying “one has to take a nice statistical bite of the results and see how much variance one can digest!” than I was served a heaping portion of data from David Schkade, coauthor of Schkade, Sunstein & Kahneman Deliberating about dollars: The severity shift, Columbia Law Rev. 100, 1139-1175 (2000), the excellent paper featured in the last Law & Cognition course post.

That study presented a fascinating glimpse of how deliberation affects mock-juror decisionmaking in punitive damage cases. SSK discovered two key dynamics of interest: first, a form of group polarization with respect to judgments of culpability, whereby cases viewed as low in egregiousness by the median panel member prior to deliberation gravitated toward even lower collective assessments, and cases viewed as high by the median panel member gravitated toward even higher ones; and second, a punitive-award severity shift, whereby all cases, regardless of egregiousness, tended toward awards that exceeded the amount favored by the median panel member prior to deliberation.

The weight of SSK’s highly negative normative appraisal of jury awards, however, was concentrated on the high variability of the punitive damage judgments, which displayed considerably less coherence at the individual and panel levels than did the culpability assessments.  SSK reacted with alarm over how the unpredictability of punitive awards arising from the deliberative dynamics they charted would affect rational planning by lawyers and litigants.

My point in the last post was that the genuinely odd deliberation dynamics did not necessarily mean there were no resources for identifying systematic influences that could reduce the unpredictability of the resulting punitive awards. In a simulation that generated results like SSK’s, I was still able to construct a statistical model that explained some 40% of the variance in punitive-damage awards based on jurors’ culpability or “punishment level” assessments, which SSK measured on a 0-8 Likert scale.

It was in response to my report of the results of this simulation that Schkade sent me the data.

SSK's actual results turned out to be even more amenable to systematic explanation than my simulated ones. The highly skewed punitive awards formed a nicely behaved normal distribution when log transformed.

A model that regressed the transformed awards against SSK’s 400-odd punishment-level verdicts explained some 67% of the variance in the punitive awards. That’s an amount of variance explained comparable to what observational studies report when real-world punitive damages are regressed on compensatory damage judgments (Eisenberg, T., Goerdt, J., Ostrom, B., Rottman, D. & Wells, M.T., The Predictability of Punitive Damages, Journal of Legal Studies 26, 623-661 (1997)).
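In outline (with hypothetical file and column names, not the actual SSK dataset or analysis code), the reanalysis looks something like this:

```python
# Sketch of the log-transform regression described above (hypothetical file
# and column names; not the actual SSK data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panels = pd.read_csv("ssk_panels.csv")  # one row per mock-jury panel (hypothetical)

# Keep positive awards; the log transform tames the heavy right skew.
awards = panels[panels["punitive_award"] > 0].copy()
awards["log_award"] = np.log(awards["punitive_award"])

fit = smf.ols("log_award ~ punishment_verdict", data=awards).fit()
print(f"R-squared on the log-dollar scale: {fit.rsquared:.2f}")  # post reports ~0.67
```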

Schkade made this important observation when I shared these analyses with him:

You’re right that the awards do seem more predictable with a log transformation, from a statistical point of view. However, the regression homoscedasticity assumption is imposed on log $. The problem is that in reality where $ are actually paid you have to unlog the data and then the error variance increases in proportion to the estimate. Worse still, this error is asymmetric and skewed toward higher awards.

So e.g. if the predicted punishment verdict is >=4 you must tell your client that the range of awards they face is exp(10) ~ $22,000 to exp(20) ~ $500,000,000.  This range is so vast that it is pretty much useless for creating an expected value for planning purposes.  In other words, $ payments are made in the R^2 == .10 world.  Of course if you factor in estimation error in assessing the punishment verdict itself, this range is even wider, and the effective R^2 even lower.

I think this is a valid point, to be sure.

But I still think it understates how much more informative a statistically sophisticated, experienced lawyer could be about a client’s prospects if that lawyer used the information that the SSK data contain on the relationship between the 0-8 punishment-level verdicts and the punitive-damage judgments.

Ignoring that information, SSK describe a colloquy between a “statistically sophisticated and greatly experienced lawyer” and a client seeking advice on its liability exposure. Aware of the simple distribution of punitive awards in the SSK experiment, the lawyer advises the client that the “median” award in cases likely to return a punitive verdict is “$2 million” but that “there is a 10% chance that the actual verdict will be over $15.48 million, and a 10% chance that it will be less than $0.30 million” (SSK p. 1158).

But if that same “statistically sophisticated and experienced” lawyer could estimate that the client’s case was one likely to garner the average punishment-level verdict of “4,” she could narrow the expected punitive-award range a lot more than that. In such a situation, the median award would be $1 million, and the 10th and 90th percentile boundaries $250,000 and only $5,000,000, respectively.

To be sure that’s still a lot of variability, but it’s a lot less—an order of magnitude less—than what one would project without making use of the data’s information about the relationship between the punishment-level verdicts and the punitive damage awards.
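Here is a sketch of that comparison (again with hypothetical file and column names): the unconditional spread of awards vs. the spread among cases at a given punishment-level verdict.

```python
# Sketch comparing unconditional vs. conditional award spreads
# (hypothetical file and column names).
import numpy as np
import pandas as pd

panels = pd.read_csv("ssk_panels.csv")
awards = panels[panels["punitive_award"] > 0]

def spread(x):
    lo, med, hi = np.percentile(x, [10, 50, 90])
    return f"10th ${lo:,.0f} / median ${med:,.0f} / 90th ${hi:,.0f}"

print("All punitive verdicts:  ", spread(awards["punitive_award"]))
print("Punishment verdict == 4:", spread(
    awards.loc[awards["punishment_verdict"] == 4, "punitive_award"]))

# Per the figures above, conditioning on an average punishment verdict narrows
# the 10th-90th band from roughly $0.30M-$15.48M to roughly $0.25M-$5M.
```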

Is it still too much?  Maybe; that’s a complicated normative judgment.

But having been generously served my curiosity-sating helping of data, I can attest that there is indeed a lot of digestible variance in the SSK results after all, the weird dynamics of their juror subjects notwithstanding.

It should also be abundantly clear that the size of Schkade’s motivation to enable others to learn something about how the world works is as big as any award made by SSK’s 400 mock jury panels.  I am grateful for his virtual “guest appearance” in this on-line course!

Monday
Oct 17, 2016

Women for & against Trump: who sees what & why . . . .

If you focus on what the highest-profile media commentators are saying about the revelations concerning Trump’s treatment of women, you will get a nice lesson in the cultural & psychological illiteracy of the mass media’s understanding of US politics.

The story being pushed--or really just assumed--by the “move as an unthinking pack” media is that Trump’s sexually assaultive behavior and worldview must be alienating women en masse, making the imminent collapse of his campaign inevitable.

But as the most recent polls show, the race is about as close as it ever was in the popular vote. Looking at sentiment among expected voters, the Wash. Post/ABC poll, produced by the top-notch Langer & Assoc., has Clinton ahead only 47-43.

But even more significant is what the Langer poll shows about female voters. “Clinton leads by 8 points among women,” the poll finds,

while she and Trump run evenly among men -- an unexpected change from late September, when Clinton led by 19 points among women, Trump by 19 among men. This reflects greater support for Trump among white women who lack a college degree, partly countered by gains for Clinton among white men.

According to the survey,

Among likely voters, just 43 percent of non-college white women see Trump’s treatment of women as a legitimate issue, essentially the same as it is among non-college white men, 45 percent. By contrast, about two-thirds of college-educated whites, men and women alike, say the issue is a legitimate one.

Similarly, 56 percent of non-college-educated white women agree with Trump that his videotaped comments represent typical locker-room banter. So do 50 percent of non-college white men. Among college-educated whites, that falls to barely more than a third.

Got it?  Women aren’t reacting in a uniformly negative manner but in a polarized one to the latest Trump controversy.  So are men.

This doesn’t fit the conventional narrative, which simplistically attributes a monolithic attitude on gender equality issues to women.

But it does fit a more nuanced view that sees sex equality as involving an important cultural dimension that interacts with gender.

In her book The Politics of Motherhood, sociologist Kristin Luker points out that the abortion debate features a conflict between two larger visions about gender and social status.

On the one side is a traditional, hierarchical view that sees women’s status as tied up with their mastery of domestic roles like wife and mother.

On the other is a more modern, egalitarian one that sees mastery of professional roles as status-conferring for men and women alike.

Luker argues that abortion rights polarize these groups because that issue is suffused with social meanings that make it a test of the state’s endorsement of these competing visions and what they entail about the forms of behavior that entitle women to esteem and respect in contemporary society. 

The same sorts of associations, moreover, inscribe the battle lines in debates over the definition of “rape” in campus sex codes and of “sexual harassment” in workplace ones (if you think these issues aren’t matters of intense disagreement in today’s America, you live in a socio-ideological cocoon).

Moreover, while these debates pit men and women who hold one set of cultural outlooks against men and women who hold another, the individuals who are in fact the most intensely divided, Luker points out, are the women on the respective sides, because they are the ones with the most at stake in how the resolution of these issues links status and gender.

This view is borne out by polls that consistently show women to be the most divided on abortion rights.

They are borne out too by studies that show that women with opposing cultural worldviews are the most divided on date rape.

They are the most divided, moreover, not just on what the law should be but on what they see, the study of cultural cognition shows, in a typical date rape case in which factual matters like the woman’s consent and the man’s understanding of the same are in issue.

Perceiving that women who behave as independent professionals or as independent sexual agents are lying when they assert that they have been sexually harassed or assaulted affirms the identities of those whose status is most threatened by the norms that license such independence and impel respect for those who exercise it.

It’s not surprising—it’s inevitable—that when Trump is attacked for his attacks on women, women of a particular cultural identity will be among those  who most aggressively “reject the controversy over his sexual behavior as a legitimate issue” and “rally” to his side.

So if you want to learn something about cultural norms in America, stay tuned. Not to the simplistic narrative that dominates our homogenous, homogenized media, but to the complex, divided reactions of real people, men and women, who are fundamentally divided in their perceptions of who deserves esteem for what and hence in their perceptions of who did what to whom.

Saturday
Oct 15, 2016

Weekend Up(back)date: 3 theories of risk, 2 conceptions of emotion

From Kahan, D.M. Emotion in Risk Regulation: Competing Theories, in Emotions and Risky Technologies. (ed. S. Roeser) 159-175 (Springer Netherlands, 2010).

2. Three Theories of Risk Perception, Two Conceptions of Emotion

The profound impact of emotion on risk perception cannot be seriously disputed. Distinct emotional states--from fear to dread to anger to disgust (Slovic, 2000)--and distinct emotional phenomena--from affective orientations to symbolic associations and imagery (Peters & Slovic, 2007)--have been found to explain perceptions of the dangerousness of all manner of activities and things--from pesticides (Alhakami & Slovic, 1994) to mobile phones (Siegrist, Earle, Gutscher, & Keller, 2005), from red meat consumption (Berndsen & van der Pligt, 2005) to cigarette smoking (Slovic, et al., 2005).

More amenable to dispute, however, is exactly why emotions exert this influence.  Obviously, emotions work in conjunction with more discrete mechanisms of cognition in some fashion.  But which ones and how?  To sharpen the assessment of the evidence that bears on these questions, I will now sketch out three alternative models of risk perception--the rational weigher, the irrational weigher, and the cultural evaluator theories--and their respective accounts of what (if anything) emotions contribute to the cognition of risk.

2.1. The Rational Weigher Theory: Emotion as Byproduct

Based on the premises of neoclassical economics, the rational weigher theory asserts that individuals, over time and in aggregate, process information about risky undertakings in a way that maximizes their expected utility. The decisions whether to accept hazardous occupations in exchange for higher wages (Viscusi, 1983), to engage in unhealthy forms of recreation in exchange for hedonic pleasure (Philipson & Posner, 1993), or to accept intrusive regulation to mitigate threats to national security (Posner, 2006) or the environment (Posner, 2004)--all turn on a utilitarian balancing of costs and benefits.

On this theory, emotions don’t make any contribution to the cognition of risk.  They enter into the process, if they do at all, only as reactive byproducts of individuals’ processing of information:  if a risk appears high relative to benefits, individuals will likely experience a negative emotion--perhaps fear, dread, or anger--whereas if the risk appears low they will likely experience a positive one--such as hope or relief (Loewenstein, et al., 2001). This relationship is depicted in Figure 2.1.

2.2. The Irrational Weigher Theory: Emotion as Bias

The irrational weigher theory asserts that individuals lack the capacity to process information in a way that maximizes their expected utility. Because of constraints on information, time, and computational power, ordinary individuals must resort to heuristic substitutes for considered analysis; those heuristics, moreover, invariably cause individuals’ evaluations of risks to err in substantial and recurring ways (Jolls, Sunstein, & Thaler, 1998). Much of contemporary social psychology and behavioral economics has been dedicated to cataloging the myriad distortions--from “availability cascades” (Kuran & Sunstein, 1998) to “probability neglect” (Sunstein, 2002) to “overconfidence” bias (Fischhoff, Slovic, & Lichtenstein, 1977) to “status quo bias” (Kahneman, 1991)--that systematically skew risk perceptions, particularly those of the lay public.

For the irrational weigher theory, the contribution that emotion makes to risk perception is, in the first instance, a heuristic one. Individuals rely on their visceral, affective reactions to compensate for the limits on their ability to engage in more considered assessments (Loewenstein, et al., 2001). More specifically, irrational weigher theorists have identified emotion or affect as a central component of “System 1 reasoning,” which is “fast, automatic, effortless, associative, and often emotionally charged,” as opposed to “System 2 reasoning,” which is “slower, serial, effortful, and deliberately controlled” (Kahneman, 2003, p. 1451) and typically involves “execution of learned rules” (Frederick, 2005, p. 26). System 1 is clearly adaptive in the main--heuristic reasoning furnishes guidance when lack of time, information, and cognitive ability makes more systematic forms of reasoning infeasible--but it remains obviously “error prone” in comparison to the “more deliberative [and] calculative” System 2 (Sunstein, 2005, p. 68).

Indeed, according to the irrational weigher theory, emotion-pervaded forms of heuristic reasoning can readily transmute into bias.  The point isn’t merely that emotion-pervaded reasoning is less accurate than cooler, calculative reasoning; rather it’s that habitual submission to its emotional logic ultimately displaces reflective thinking, inducing “behavioral responses that depart from what individuals view as the best course of action”--or at least would view as best if their judgment were not impaired (Loewenstein, et al., 2001).  Proponents of this view have thus linked emotion to nearly all the cognitive biases shown to distort risk perceptions (Fischhoff, et al., 1977; Sunstein, 2005). The relationship between emotion, rational calculation of expected utility, and risk perception that results is depicted in Figure 2.2.

2.3. The Cultural Evaluator Theory: Emotion as Expressive Perception

Finally there’s the cultural evaluator theory of risk perception.  This model rests on a view of rational agency that sees individuals as concerned not merely with maximizing their welfare in some narrow consequentialist sense but also with adopting stances toward states of affairs that appropriately express the values that define their identities (Anderson, 1993).  Often when an individual is assessing what position to take on a putatively dangerous activity, she is, on this account, not weighing (rationally or irrationally) her expected utility but rather evaluating the social meaning of that activity (Lessig, 1995).  Against the background of cultural norms (particularly contested ones), would the law’s designation of that activity as inimical to society’s well-being affirm her values or denigrate them (Kahan, et al., 2006)?

Like the irrational weigher theory, the cultural evaluator theory treats emotions as entering into the cognition of risk.  But it offers a very different account of how--one firmly aligned with the position that sees emotions as constituents of reason.

Martha Nussbaum describes emotions as “judgments of value” (Nussbaum, 2001). They orient a person who values some good, endowing her with the attitude that appropriately expresses her regard for that good in the face of a contingency that either threatens or advances it.  On this account, for example, grief is the uniquely appropriate and accurate judgment for someone who values another who has died; fear is the appropriate and accurate judgment for someone who values her or another’s well-being in the face of an impending threat to it; anger is the appropriate and accurate judgment for someone who values her own honor in response to an action that conveys insufficient respect.  People who fail to experience these emotions under such circumstances--or who experience these or other emotions in circumstances that do not warrant them--lack a capacity of discernment essential to their flourishing as agents capable of holding values and pursuing them.

Rooted heavily in Aristotelian philosophy, Nussbaum’s account is, as she herself points out, amply grounded in modern empirical work in psychology and neuroscience.  Antonio Damasio’s influential “somatic marker” account, for example, identifies emotions with a particular area in the brain (Damasio, 1994).  Persons who have suffered damage to that part of the brain display impaired capacity to recognize or imagine conditions that might affect goods they care about, and thus lack motivation to respond accordingly.  They are perceived by others and often by themselves as mentally disabled in a distinctive way, as suffering from a profound kind of moral and social obtuseness that makes them incapable of engaging the world in a way that matches their own ends.  If being rational consists, at least in part, of “see[ing] which values [we] hold” and knowing how to “deploy these values in [our] judgments,” then “those who are unaware of their emotions or of their emotional lacks” will necessarily be deficient in a capacity essential to being “a rational person” (Stocker & Hegeman, 1996, p. 105).

The cultural evaluator theory views emotions as enabling individuals to perceive what stance toward risks coheres with their values.  Cultural norms obviously play a role in shaping the emotional reactions people form toward activities such as nuclear power, handgun possession, homosexuality, and the like (Elster, 1999). When people draw on their emotions to judge the risk that such an activity poses, they form an expressively rational attitude about what it would mean for their cultural worldviews for society to credit the claim that that activity is dangerous and worthy of regulation, as depicted in Figure 2.3.  Persons who subscribe to an egalitarian ethic, for example, have been shown to be particularly sensitive to environmental and technological risks, the recognition of which coheres with condemnation of commercial activities that generate distinctions in wealth and status.  Persons who hold individualist values, in contrast, tend to dismiss concerns about global warming, nuclear waste disposal, food additives, and the like--an attitude that expresses their commitment to the autonomy of markets and other private orderings (Douglas, 1966).  Individualistic persons worry instead about the risk that gun control--a policy that denigrates individualist values--will render law-abiding citizens defenseless (Kahan, Braman, Gastil, Slovic, & Mertz, 2007).  Persons who subscribe to hierarchical values worry about the dangers of drug distribution, homosexuality, and other forms of behavior that defy traditional norms (Wildavsky & Dake, 1990).

This account of emotion doesn’t see its function as a heuristic one.  That is, emotions don’t just enable a person to latch onto a position in the absence of time to acquire and reflect on information.  Rather, as a distinctive faculty of cognition, emotions perform a unique role in enabling her to identify the stance that is expressively rational for someone with her commitments.  Without the contribution that emotion makes to her powers of expressive perception, she would be lacking this vital incident of rational agency, no matter how much information, no matter how much time, and no matter how much computational acumen she possessed.