Saturday, Aug 29, 2015

Weekend update: What is this "science of science communication" thing?

Get your copy before newsstands sell out!

 

Thursday, Aug 27, 2015

Are people more conservative when “primed for reflection” or when “primed for intuition”? Apparently both . . . . (or CRT & identity-protective reasoning Part 2^8)

1.  The obvious reason people disagree with me is that they just can’t think clearly! Right? Right?? Well, I don’t think so, but I could be wrong
 

As the 14 billion readers of this blog know, I’m interested in the relationship between cognition and political outlooks. Is there a connection between critical reasoning dispositions and left-right ideology? Does higher cognitive proficiency of one sort or another counteract the tendency of people to construe empirical data in a politically biased way?

The answer to both these questions, the data I’ve collected persuade me, is no.

But as I explained just the other day, if one gets how empirical proof works, then one understands that any conclusion one comes to is always provisional. What one “believes” about some matter that admits of empirical inquiry is just the position one judges to be most supported by the best available evidence now at hand.

2.  New evidence that liberals are in fact “more reflective” than conservatives?


So I was excited to see the paper “Reflective liberals and intuitive conservatives: A look at the Cognitive Reflection Test and ideology,” Judgment and Decision Making, July 2015, pp. 314–331, by Deppe, Gonzalez, Neiman, Jacobs, Pahlke, Smith & Hibbing.

Deppe et al. report the results from a number of studies on critical reasoning and political ideology.  The one that got my attention reported “moderately sized negative correlations between CRT scores and conservative issue preferences” in a “nationally representative” sample (pp. 316, 320).

As explained 9,233 times on this blog, the CRT is the standard assessment instrument used to measure the disposition of individuals to engage in effortful, conscious “System 2” information processing as opposed to the intuitive, heuristic “System 1” sort associated with myriad cognitive biases (Frederick 2005).
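For anyone who hasn't seen it, the signature CRT item is the “bat and ball” problem. Here's a minimal sketch of the arithmetic behind it (in R, just for fun):

```r
# The canonical CRT item: a bat and a ball cost $1.10 together, and the bat
# costs $1.00 more than the ball. The intuitive (System 1) answer is 10 cents;
# working through the constraints gives the reflective (System 2) answer.
ball <- 0.05                 # ball + (ball + 1.00) = 1.10  =>  ball = 0.05
bat  <- ball + 1.00
c(ball = ball, bat = bat, total = bat + ball)   # total comes out to $1.10
```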

It was really really important, Deppe et al. recognized, to use a stratified general population sample recruited by valid means to test the relationship between political outlooks and CRT. 

Various other studies, they noted, had relied on samples that don’t support valid inferences about the relationship between cognitive style and political outlooks. These included M Turk workers, whose scores on the CRT are unrealistically high (likely b/c they’ve been repeatedly exposed to it); who underrepresent conservatives, and thus necessarily include atypical ones; and who often turn out to be non-Americans disguising their identities (Chandler, Mueller, & Paolacci 2014; Krupnikov & Levine 2014; Shapiro, Chandler, & Mueller 2013).

Other scholars, Deppe et al. noted, have constructed samples from “visitors to a web site” on cognition and moral values who were expressly solicited to participate in studies in exchange for finding out about the relationship between the two in themselves. As a reflective colleague pointed out, this not-particularly-reflective sampling method is akin to polling ESPN.com visitors to try to figure out what the frequency of “liking football” is among different groups in the general population.

The one study Deppe et al. could find that used a valid general population sample to examine the correlation between CRT scores and right-left political outlooks was one I had done (Kahan 2013).  And mine, they noted, had found no meaningful correlation.

Deppe et al. attributed the likely difference in our results to the way in which they & I measured political orientations.  I used a composite measure that combined responses to standard, multi-point conservative-liberal ideology and party self-identification measures.  But  “self-reported ideology,” they observed, “is well-known to be a highly imperfect indicator of individual issue preferences.”

[Image: Nixon reacts w/ shock to Deppe et al. study finding that conservatives are unreflective]

So instead they measured such preferences directly, soliciting their subjects’ responses to a variety of specific policies, including gay marriage, torture of terrorist suspects, government health insurance, and government price controls (an oldie but goodie; “liberal” Richard Nixon was the last US President to resort to this policy).

On the basis of these responses they formed separate “Economic,” “Moral,” and  “Punishment” “conservative policy-preference” scales.  The latter two, but not the former, had a negative correlation with CRT, as did a respectably reliable scale (α =0.69) that aggregated all of these positions.
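For the curious, here's a minimal sketch in R of how that sort of scale gets put together and checked, using the psych package; the data frame and item names are hypothetical, not the authors' actual ones:

```r
# Minimal sketch (hypothetical data frame "dat" and item names): aggregate the
# issue items into a single "conservative policy preferences" scale and check
# its internal-consistency reliability.
library(psych)

items <- dat[, c("gay_marriage", "torture", "govt_insurance", "price_controls")]

alpha(items)$total$raw_alpha      # Cronbach's alpha for the aggregated scale

# Scale score: mean of the standardized items, higher = more conservative
dat$conserv_overall <- rowMeans(scale(items), na.rm = TRUE)
```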

Having collected data from a Knowledge Networks sample “to determine if the findings” they obtained with M Turk workers “held up in a more representative sample” (p. 319), they heralded this result as  “offer[ing] clear and consistent support to the idea that liberals are more likely to be reflective compared to conservatives.”

That’s pretty interesting! 

So I decided I should for sure take the study into account in my own perpetual weighing of the evidence on how critical reasoning relates to political outlooks and comparable indicators of cultural identity.

I downloaded their data from the JDM website with the intention of looking it over and then seeing if I could replicate their findings with nationally representative datasets of my own that had liberal and conservative policy positions and CRT scores.

Well, I was in fact able to replicate the results in the Deppe et al. data. 

However, what I ended up replicating were results materially different from what Deppe et al. had  actually reported. . . .

3.  Unreported data from a failed “priming” experiment: System 2 reasoners get more conservative when primed to be “reflective” and when primed to be “intuitive”!


Deppe et al. had collected their CRT and political-position data as part of a “priming” experiment.  The idea was to see if subjects’ political outlooks became more or less conservative when induced or “primed” to rely either on “reflection,” of the sort associated with System 2 reasoning, or on “intuition,” of the sort associated with System 1.

[Image: Full results from TESS/Knowledge Networks sample (study 2). Click to inspect--very strange indeed!]

They thus assigned 2/3 of their subjects randomly to distinct “reflection” and “intuition” conditions. Both were given word-unscrambling puzzles that involved dropping one of five words and using the other four to form a sentence.  The sentences that a person could construct in the “reflection” condition emphasized use of reflective reasoning (e.g., “analyze the numbers carefully”; “I think all day”), while those in the “intuition” condition emphasized the use of “intuitive” reasoning (e.g., “Go with your gut”; “she used her instinct”).

The remaining 1/3 of the sample got a “neutral prime”: a puzzle that consisted of dropping and unscrambling words to form statements having nothing to do with either reflection or intuition (e.g., “the sky is blue”; “he rode the train”).

Deppe et al.’s hypothesis was that “subjects receiving an intuitive prime w[ould] report more conservative attitudes” and those “receiving a reflective prime . . . more liberal attitudes,” relative to those receiving a “neutral prime.”

Well, the experiment didn’t exactly come out as planned.  Statistical analyses, they reported  (p. 320),

show[ed] no differences in the number of correct CRT answers provided by the subjects between any group, indicating that the priming protocol manipulation . . . failed to induce any higher or lower amounts of reflection. With no differences in thinking style, again unsurprisingly, there were no statistically significant differences between the groups on self-reported ideology  or issue attitudes.

But I discovered that the results were actually way more interesting than that!

There may have been “no differences” in the CRT scores and “conservative issue preferences” of subjects assigned to different conditions, but it’s not true there were no differences in the relationship between these two variables across the conditions: in both the “reflection” and “intuition” conditions, subjects scoring higher on the CRT adopted “significantly” more conservative policy stances than their counterparts in the “neutral priming” condition!

Wow!  That’s really weird!

If one took the experimental effect seriously, one would have to conclude that priming individuals for “reflection” makes those who are the most capable and motivated to use System 2 reasoning (the conscious, effortful, analytic type) become more conservative--and that priming these same persons for “intuition” makes them more conservative too!
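Here's a minimal sketch, in R, of the kind of analysis that turns this up, using hypothetical variable names (crt, conserv_overall, condition) to stand in for the ones in the posted data:

```r
# Minimal sketch (hypothetical variable names): the CRT-conservatism
# relationship within each priming condition, and pooled across all three.
by(dat, dat$condition, function(d) cor.test(d$crt, d$conserv_overall))

cor.test(dat$crt, dat$conserv_overall)     # pooled across all three conditions

# Does the CRT "slope" differ across experimental conditions?
summary(lm(conserv_overall ~ crt * factor(condition), data = dat))
```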

4.  True result in Deppe et al.: “more representative sample” fails to “replicate” negative correlation between conservative policy positions and CRT!


Deppe et al. don’t report this result.  Likely they concluded, quite reasonably, that this whacky, atheoretical outcome was just noise, and that the only thing that mattered was that the priming experiment just didn’t work (same for the ones they attempted on M Turk workers, and same for a whole bunch of “replications” of classic studies in this genre).

But here’s the rub.

The “moderately sized negative correlation[] between CRT scores and conservative issue preferences overall” that Deppe et al. report finding in their "nationally representative" sample (p. 319) was based only on subjects in the “neutral prime” condition.

As I just explained, relative to the “neutral priming” condition, there was a positive relationship "between CRT scores and conservative issue preferences overall" in both the “reflection” and “intuition priming” conditions.

If Deppe et al. had included the subjects from the latter two conditions in their analysis of the results of study 2, they wouldn’t have detected any meaningful correlation –positive or negative—“between CRT scores and conservative issue preferences overall” in their critical “more representative sample.”

It doesn’t take a ton of reflection to see why, under these circumstances, it is simply wrong to characterize the results in study 2 as furnishing “correlational evidence to support the hypothesis that higher CRT scores are associated with being liberal.”

For purposes of assessing how CRT and conservatism relate to one another, being assigned to the "neutral priming" condition was no more or less a "treatment" than being assigned to the “intuition" and "reflection" conditions.  The subjects in the "neutral prime" condition did a word puzzle—just as the subjects in the other treatments did.  Insofar as the experimental assignment didn't affect CRT scores and "conservative overall" scores, then either no one was treated for practical purposes or everyone was but in the same way: by being assigned to do a word puzzle that had no effect on ideology or CRT scores.

[Image: That's more like it, says Tricky Dick!]

As I pointed out, Deppe et al. understandably chose not to report that their “priming” experiment had shown that individuals high in System 2 reasoning capacity become more conservative both when “primed” for “reflection” and when “primed” for “intuition.”  The more sensible interpretation of their weird data was that the priming manipulation had no meaningful effect on either conservatism or CRT scores.

But if one takes that very reasonable view, then it is unreasonable to treat the CRT-conservatism relationship in the “neutral priming” condition as if it alone were the “untreated” or “true” one.

If the effects of experimental assignments are viewed simply as noise—as I agree they should be!—then the correct way to assess the relationship between CRT & conservatism in study 2 is to consider the responses of subjects from all three conditions.

An alternative that would be weird but at least fully transparent would be to say that “in 2 out of 3 ‘subsamples,’ ” the “more representative sample” failed to “replicate” the negative conservative-CRT correlation observed in their M Turk samples.

But the one thing that surely isn’t justifiable is to divide the sample into 3 & then report the data from the one subsample that happens to support the authors' hypothesis -- that conservatism & CRT are negatively correlated -- while simply ignoring the contrary results in the other two.

I’m 100% sure this wasn’t Deppe et al.’s intent, but by only partially reporting the data from their "nationally representative sample" Deppe et al. have unquestionably created a misimpression.  There's just no chance any reader would ever have guessed that the data looked like this given their description of the results—and no way a reader apprised of the real results would ever agree that their "more representative sample" had "replicated" their M Turk sample finding of a “negative correlation[] between CRT scores and conservative issue preferences overall” (p. 320).

5. Replicating Deppe et al.

As I said, I was intrigued by Deppe et al.’s claim that they had found a negative correlation between conservative policy positions and CRT scores and wanted to see if I could replicate their finding in my own data set.

It turns out their study didn’t find the negative correlation they reported, though, when one includes responses of the 2/3 of the subjects unjustifiably omitted from their analysis of the relationship between CRT scores and conservative policy positions.

Well, I didn’t find any such correlation either when I performed a comparable data analysis on a large (N = 1600) nationally representative CCP (YouGov) study sample from 2012—one in which subjects hadn’t been assigned to do any sort of word-unscrambling puzzle before taking the CRT.

In my sample, subjects responded to this “issues positions” battery:

The responses formed two distinct factors, one suggesting a disposition to support or oppose legalization of prostitution and legalization of marijuana, and the other a disposition to support or oppose liberal policy positions on the remaining issues except for resumption of the draft, which loaded on neither factor.

Reversing the signs of the factor scores, I suppose one could characterize these as “social” and “economic_plus” conservatism, respectively.

Both had very very small but “significant” correlations with CRT. 

[Image: Bivariate correlations between CRT and "conservative overall" and subdomains in nationally representative CCP/YouGov sample. Z_conservrepub is a composite scale comprising liberal-conservative ideology and partisan self-id (α = 0.82).]

But the signs were in opposing directions: Economic_plus, r = 0.06, p < 0.05; Social, r = -0.14, p < 0.01.

Not surprisingly, then, these two canceled each other out (r = -0.01, p = 0.80) when one examined “conservative policy positions overall”—i.e., all the policy positions aggregated into a single scale (α = 0.80).
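For anyone who wants to try something similar, here's a minimal R sketch of that kind of analysis (two factors extracted from an issue battery, factor scores signed so that higher means more conservative, and bivariate correlations with CRT), with hypothetical item names standing in for the actual battery:

```r
# Minimal sketch (hypothetical data frame "dat" and item names): extract two
# factors from the issue battery, then correlate the sign-reversed factor
# scores with CRT.
library(psych)
library(GPArotation)   # needed for the oblimin rotation

issues <- dat[, c("marijuana", "prostitution", "gay_marriage",
                  "taxes", "health_care", "gun_control")]

fit <- fa(issues, nfactors = 2, rotate = "oblimin", scores = "regression")
print(fit$loadings, cutoff = 0.3)          # which items load on which factor

dat$social_conserv   <- -fit$scores[, 1]   # reverse signs: higher = more conservative
dat$econplus_conserv <- -fit$scores[, 2]

cor.test(dat$crt, dat$social_conserv)
cor.test(dat$crt, dat$econplus_conserv)
```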

That is exactly what I found, too, when I included the 2/3 of the subjects that Deppe et al. excluded from their report of the correlation between CRT and conservative policy positions in Study 2.  That is, if one takes their conservative subdomain scales as Deppe et al. formed them, there is a small negative correlation between CRT and “Punishment” conservatism (r = -0.13, p < 0.01) but a small positive one (r = 0.17, p < 0.01) between CRT and “Economic” conservatism.

There is another, even smaller negative correlation between CRT and the “Moral” conservative policy position scale (r = -0.08, p = 0.08).

[Image: Bivariate correlations in Deppe et al. TESS/Knowledge Networks sample overall]

Overall, these tiny correlations all wash out (“conservative issue preferences overall”: r = -0.01, p = 0.76).

That—and not any deficiency in conventional left-right ideology measures (ones routinely used by the “neo-authoritarian personality” scholars (Jost et al. 2003) whose work Deppe et al. cite their own study as supporting)—also explains why there is zero correlation between CRT and liberal-conservative ideology and partisan self-identification.

In any event, when one  simply looks at all the data in a fair-minded way, one is left with nothing—and hence nothing that supplies anyone with any reason to revise his or her views on the relationship between political outlooks and critical reasoning capacities.

6. Yucky NHT--again

One last point, again on the vices of “null hypothesis testing.”

Because they were so focused on their priming experiment non-result, I’m sure it just didn’t occur to Deppe et al. that it made no sense for them to exclude 2/3 of their sample when computing the relationship between conservatism and CRT scores in Study 2.

But here’s something I think they really should have thought a bit more about. . . . Even if the results in their study were exactly as they reported, the correlations were so trivially small that they could not, in my view, reasonably support a conclusion so strong (not to mention so clearly demeaning for 50% of the U.S. population!) as

We find a consistent pattern showing that those more likely to engage in reflection are more likely to have liberal political attitudes while those less likely to do so are more likely to have conservative attitudes....

...The results of the studies reported above offer clear and consistent support to the idea that liberals are more likely to be reflective compared to conservatives....

I’ll say more about that “tomorrow,” when I return to a theme briefly touched on a couple days ago: the common NHT fallacy that statistical “significance” conveys information about the weight of the evidence in relation to a study hypothesis.

Refs

Chandler, J., Mueller, P. & Paolacci, G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods 46, 112-130 (2014).

Deppe, K.D., Gonzalez, F.J., Neiman, J.L., Jacobs, C., Pahlke, J., Smith, K.B. & Hibbing, J.R. Reflective liberals and intuitive conservatives: A look at the Cognitive Reflection Test and ideology. Judgment and Decision Making 10, 314-331 (2015).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Jost, J.T., Glaser, J., Kruglanski, A.W. & Sulloway, F.J. Political Conservatism as Motivated Social Cognition. Psych. Bull. 129, 339-375 (2003).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).


Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014). 

Shapiro, D.N., Chandler, J. & Mueller, P.A. Using Mechanical Turk to Study Clinical Populations. Clinical Psychological Science 1, 213-220 (2013).

 

"I told you -- the ball cost 5 cents!"

 

Saturday, Aug 22, 2015

Weekend update: Who really did write the CRT-evolution paper then?

So Will Gervais has a very artful response to my post on his evolution-CRT paper.

The gist of it is that I mischaracterized his views -- that I was addressing some other "Will Gervais," who subscribes to positions wholly unrelated to his.

For sure people should read (a) his paper, (b) my post, & (c) his blog, so they can form their own view.

But I have to say that I find Will's eagerness to distance himself from the position I attributed to him perplexing.

Gervais (I think it was him!) wrote in Cognition:

Many supernatural beliefs come easily to people, perhaps because they are supported by a variety of core intuitive processes. As with creationism, reliably developing intuitions support the mental representation of supernatural agents, such as God. However, dual process approaches to cognition suggest that at times people are able to analytically inhibit or override their intuitions.

[P]eople who are more willing or able to engage analytic thinking might be more likely to endorse evolution than people who tend to trust their intuitions. If true, then measures of analytic thinking should predict greater endorsement of evolution. In the present paper, two large studies tested this core hypothesis.

He concludes that his data support this conjecture:

Two studies revealed that—consistent with dual process approaches to cognition in general, and supernatural cognition in particular—an analytic cognitive style predicts increased endorsement of evolution. Reliably developing intuitions may give creationist views an early cognitive advantage. This early advantage also is likely bolstered by early enculturation advantages for creationist, rather than evolutionary, concepts in many cultural contexts. However, individuals who are better able to analytically control their thoughts are more likely to eventually endorse evolution’s role in the diversity of life and the origin of our species.

Re-analyzing his data, and primarily just showing what the actual raw data look like, I argued that the results of his study didn't support his hypothesis.  That they didn't come anywhere close to supporting it.  

The impact of the disposition to rely on "analytic" as opposed to "intuitive" thinking (measured by the CRT) was "statistically significant" but practically irrelevant. Even the most "analytic" thinkers in Gervais's sample did not endorse a conception of evolution free of divine agency--i.e., did not accept science's own conception of evolution as reflected in the modern synthesis.

The "Will Gervais" who wrote the very interesting Cognition paper states "analytic thinking consistently predicts endorsement of evolution."

But it doesn't. The (very modest incremental) effect of CRT on increased endorsement of evolution was confined to relatively non-religious subjects. Among relatively religious individuals, those who displayed the highest degree of cognitive reflection weren't any more likely to endorse science's account of the natural history of human beings than ones who scored the lowest.  

That's not what we'd expect to see if in fact disbelief in evolution reflected a deficit in the capacity and motivation to engage in System 2 reasoning.

This result is consistent, however, with an alternative hypothesis.  At least modestly supported by existing research, this rival position denies that cognitive reflection is something antagonistic to formation of and persistence in culturally identity-defining beliefs that are opposed to scientific evidence.

On the contrary, according to this theory, individuals will use all of the cognitive resources at their disposal to form and persist in beliefs that express their cultural identities on facts that come to symbolize their group allegiances. We should thus expect those most proficient in conscious, effortful, "System 2" analytic reasoning to be even more divided on issues like climate change & evolution than those inclined to rely on "intuitive" System 1 reasoning.

Gervais's data lends more support to that hypothesis than to what he describes as his own "core hypothesis": that "measures of analytic thinking should predict greater endorsement of evolution." 

I'm pretty sure that's all I said in my post, so I'm confused about why Gervais thinks I was mischaracterizing him (maybe he was blogging about another "Dan Kahan"?!).

Gervais complains that the media mischaracterized his study, too. So I took a look at the very impressive volume of press coverage the Cognition study generated.

For sure the media can get things horribly wrong, particularly when a researcher is reporting on how cognitive biases can influence perceptions of disputed issues in science.

But here, I think the media got it right.  Or at least they accurately reported the finding that the "Will Gervais" who authored the article in Cognition unambiguously purported to make: "individuals who are more prone and/or able to engage in analytic thinking to override their intuitions were more likely to endorse evolution."

So I'm really curious now to know who that "Will Gervais" is.  I'd also like to know what the Will Gervais who responded to me in his blog post thinks about that other Will Gervais' Cognition study; I gather he (the blog-post author Gervais) is largely in agreement with me that the Cognition study drew conclusions not supported by the data that Gervais (not sure at this point which one) uploaded to the Cognition site.

Finally and most important of all, I'd really really like to know what the Gervais who wrote the Cognition article has to say in response to the substance of the points I made.

The questions the study addressed are really interesting & important. They are also hard; he might point out that there's something I missed--or some additional insight to be gained from the data on the relative strengths of his hypothesis and mine--in which case, I'd like to know that!

I hope that Will Gervais joins the discussion, too.

(Note: I'm closing off comments here; readers should post their responses in the comment thread for my original post-- a more sensible place, I think, for discussion. By all means respond if you have thoughts!)

Friday, Aug 21, 2015

So we know we can't defeat entropy; but what about overplotting????

I had some correspondence off-line with loyal listener @Steve (aka @sjgenco) about the classic "what does a valid measure of climate-change risk-perceptions look like graph?"  Inspired by loyal listener @FrankL (now that they've finally discovered "missing Malaysia Airlines Flight MH370"--or at least a piece of it--maybe someone will find @FrankL, or at least a piece of him, too), the WDVMCCRLLG graphic has of course achieved iconic status and is pretty much ubiquitous in popular culture.

But it is pretty darn old. Isn't it time for something new? Can't we do better?

Yes, its comforting familiarity and its association with memorable moments, both personal and world-historical, will likely motivate loud howls of protest, at least initially.

But everything, no matter how wonderful, admits of incremental improvement as human knowledge continues to expand as a result of science and improved sports drink formulas.

In response to @Steve's inquiry, I revealed the secret formula for generating the graphic. When Steve said he wasn't enamored of "jitters" as a way to handle overplotting & preferred "bubbles" scaled to reflect observation densities, I directed @Steve to a CCP dataset he could use (one posted with "codebook" the last time the CCP blog was the site for a furious display of graphic genius on the part of @thompn4) to perfect his own improvements.

Here's what he wrote back: 

Hi Dan,
I've been playing around with jitters in R. I like your Gervais jitters. Keeping the clouds more separate helps. That's harder to do when your x-var is continuous, like your libcon variable in your "challenge" dataset.
Your dataset was like catnip so I've squandered a couple of days trying to brush up on my R to see if I could implement my bubble plot idea with your data. For what it's worth, I seem to have succeeded so I thought I'd forward my results. (I use RStudio, btw, I highly recommend it.)
First, I was able to replicate your colored jitter charts in R (seems to require less code than in stata). Here's gwrisk by libcon (making the points 50% transparent also helps highlight the clustering imho):
When I figured out how to put bubbles representing the frequency of responses around each datapoint on the same plot, it looked like this:
It does show the densities nicely, I think. For comparison, here's the bubble plot for scicomp by gwrisk:
You can really see that scicomp clusters in the middle vs. libcon, and how those densities are going to generate a flat regression.
You can also combine the two plots, which is kind of interesting:
Note how the jittering on libcon stretches out the values along the x-axis. There actually aren't any "real" values above 2 or below -2.
I've attached a PPT with all my results, a commented R script for running the plots, and the Rdata image I created for inputting the data.
It was a good excuse for digging into R again. 
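For anyone who wants to tinker along, here's a minimal ggplot2 sketch of the basic bubble-plot idea @Steve describes; it assumes the libcon and gwrisk variables from the posted CCP dataset and isn't meant to reproduce his plots exactly:

```r
# Minimal sketch: size each point by the number of observations at that
# (libcon, gwrisk) combination instead of jittering them apart.
library(ggplot2)

ggplot(dat, aes(x = libcon, y = gwrisk)) +
  geom_count(alpha = 0.5) +                  # bubble area ~ observation count
  scale_size_area(max_size = 10) +
  geom_smooth(method = "lm", se = TRUE) +    # overlay the fitted trend line
  labs(x = "Political outlook (liberal-conservative)",
       y = "Climate change risk perception")
```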

So what do people think? Time to retire WDVMCCRLLG? Time to adopt one of @Steve's alternatives as the new symbol of the Un-United States of Risk Perception?

Voice your opinion--as with everything else relating to this blog, matters will be decided by a democratic vote of the site's 14 billion regular readers--and by all means try your own hand at devising a graphic that conveys the information in WDVMCCRLLG in an even more compelling, cool way!

And if you want, you can go back to  @thompn4's project to create the perfect 3D graphic presentation that incorporates in addition the impact of science comprehension in magnifying polarization over climate change risk.

I'd offer one of our standard CCP prizes, but obviously the fame of being the originator of the successor of WDVMCCRLLG is incentive enough!

[Image: Manny models WDVMCCRLLG high fashion]

[Image: WDVMCCRLLG as backdrop for dramatic & inspired (but ultimately failed) gesture to heal the nation's wounds]

Thursday, Aug 13, 2015

Cognitive reflection and "belief in" evolution: critically engaging the evidence

1.   Two hypotheses on "disbelief in evolution"

Why do 45% or so of Americans consistently say they don’t “believe” humans evolved from an earlier species?

How come only about one-third of them say they accept a conception of evolution—science’s conception—that features mechanisms of natural selection, random mutation, and genetic variance (the modern synthesis) as opposed to an alternative religious one that asserts a “supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today”?

These questions get asked constantly. Makes sense: they’re complicated,  and also extremely consequential for the status of science in a liberal democratic society.

One popular answer attributes “disbelief in” evolution to a deficit in critical reasoning that interferes with people’s ability to recognize or accept scientific evidence.  I’ve referred to this in other contexts as the “public irrationality thesis” (PIT) (Kahan in press).

Actually, I think PIT, while a plausible enough conjecture, is itself contrary to the weight of the scientific evidence on who believes what and why about human evolution.

It’s well established that there is no meaningful correlation between what a person says he or she “believes” about evolution and having the rudimentary understanding of natural selection, random mutation, and genetic variance necessary to pass a high school biology exam (Bishop & Anderson 1990; Shtulman 2006).

[Image: Click on it! Item response profiles rock!]

There is a correlation between “belief” in evolution and possession of the kinds of substantive knowledge and reasoning skills essential to science comprehension generally.

But what the correlation is depends on religiosity: a relatively nonreligious person is more likely to say he or she “believes in” evolution, but a relatively religious person less likely to do so, as their science comprehension capacity goes up (Kahan 2015).

That’s what “belief in” evolution of the sort measured in a survey item signifies: who one is, not what one knows. 

Americans don’t disagree about evolution because they have different understandings of or commitments to science.  They disagree because they subscribe to competing cultural worldviews that invest positions on evolution with identity-expressive significance. 

As with the climate change debate, the contours and depth of the divide on evolution are a testament not to defects in human rationality but to the adroit use of it by individuals to conform their “beliefs” to the ones that signal their allegiance to groups engaged in a (demeaning, illiberal, and unnecessary) form of cultural status competition.

Call this the “expressive rationality thesis” (ERT). It's what I believe—on the basis of my understanding of the best currently available evidence (Kahan 2015).

2. New evidence for PIT?

But if one gets how science works, then one knows that all one’s positions—all of one’s “beliefs”—about empirical issues are provisional.  If I encounter evidence contrary to the view I just stated, I’ll revise my beliefs on that accordingly (I’ve done it before; it doesn’t hurt!).

So I happily sat down last weekend to read Gervais, W., “Override the controversy: Analytic thinking predicts endorsement of evolution,” Cognition 142, 312-321 (2015).

Gervais is a super smart psychologist at the University of Kentucky. He's done a number of interesting and important studies that I think are really cool, including one that shows that people engage in biased information processing to gratify their animus against atheists (Gervais, Shariff & Norenzayan 2011), and another that reports a negative association between critical reasoning and religiosity (Gervais & Norenzayan 2012).

In this latest study, Gervais correlated the scores of two samples of Univ. of Kentucky undergrads on the Cognitive Reflection Test (CRT) and their beliefs on evolution.

As discussed in 327 previous posts, the CRT is regarded as the premier measure of the capacity and disposition to use conscious, effortful, “System 2” information processing as opposed to unconscious, heuristic “System 1” processing, the sort that tends to be at the root of various cognitive miscues, from confirmation bias to the gambler's fallacy, from base rate neglect to covariance non-detection (Frederick 2005).

Gervais hypothesized that disbelief in evolution is associated with overreliance on “intuitive” or heuristic “System 1” forms of information processing as opposed to conscious or “analytic” “System 2” forms.

 “[M]any scientific concepts are difficult for people to grasp intuitively while supernatural concepts may come more easily,” he explains.

From a young age, children view things in the world as existing for a reason; they view objects as serving functions. This promiscuous teleology persists into adulthood, even among those with advanced scientific training. Further, functionally specialized features of animals (such as a zebra’s stripes or a kangaroo’s tail) are viewed as inherently characteristics of an animal’s ‘‘kind,’’ perhaps implying a deeper and more temporally stable essence of the animal. If objects in the world, including living things, are intuitively imbued with function and purpose, it seems a small step to viewing them as intentionally designed by some external agent. . ..

Given that children and adults alike share the intuition that objects in the world, including living things, serve functions and exist for purposes, they may infer intentional agency behind intuited purpose.

Finding a positive correlation between CRT and belief in evolution, he treats the results of his study as supporting the hypothesis that “analytic thinking consistently predicts endorsement of evolution.”

Because the influence of CRT persists after the inclusion of religiosity covariates, Gervais concludes that the “cultural” influence of religiosity, while not irrelevant, is “less robust” an explanation for “disbelief in” evolution than overreliance on heuristic reasoning.

In sum, Gervais is offering up what he regards as strong evidence for PIT.

3. Weighing Gervais’s evidence

So what do I think now?

I think Gervais's data are really cool and add to the stock of evidence that it makes sense to assess in connection with competing conjectures on the source of variance in belief in evolution.

But in fact, I don’t think the study results furnish any support for PIT! On the contrary, on close examination I think they more strongly support the alternative expressive rationality thesis (ERT).

a. Just look at the data. To begin, the correlation that Gervais reports between CRT scores and disbelief in evolution  actually belies his conclusion.

Sure, the correlation is “statistically significant.” But that just tells us we wouldn’t expect to find an effect as big as or bigger than that if the true correlation were zero.  The question we are interested in is whether the effect is as big as PIT implies it should be.

The answer is no way!

People familiar with logistic regression would probably have an inkling of this when Gervais reports that the “odds ratio” coefficient for CRT is a mere 1.3. An odds ratio of “1” means that there is no effect—and 1.3 isn’t much different from 1.
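To put that in concrete terms, here's a minimal back-of-the-envelope sketch. It treats the 1.3 as a per-point effect on the 0-3 CRT scale and assumes, purely for illustration, a 20% baseline rate of endorsing evolution at CRT = 0 (that baseline is my stipulation, not a number from the paper):

```r
# Minimal sketch: what an odds ratio of 1.3 per CRT point amounts to in
# probability terms, assuming (for illustration only) a 20% baseline rate of
# endorsing evolution at CRT = 0.
or <- 1.3
p0 <- 0.20
odds0 <- p0 / (1 - p0)

p1 <- (odds0 * or)   / (1 + odds0 * or)     # predicted probability at CRT = 1
p3 <- (odds0 * or^3) / (1 + odds0 * or^3)   # predicted probability at CRT = 3

round(c(crt0 = p0, crt1 = p1, crt3 = p3), 2)   # roughly 0.20, 0.25, 0.35
```

Even across the full range of the scale, in other words, the predicted shift is modest.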

But researchers shouldn’t presuppose readers have “inklings,” much less leave them with nothing more to go on.  They should graphically display the data in a way that makes their practical effect amenable to reasoned assessment by any reflective person.

The simplest way to do that is to look at the raw data here.

Admirably, Gervais posted his data to his website.  Here’s a scatterplot that helps convey what the “OR = 1.3” finding means as a practical matter:

These scatter plots relate CRT to endorsement of the modern synthesis position as opposed to either “young earth creationism” or a “divine agency” conception of evolution in which a “supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today."

I think that’s the right comparison if we are trying to assess Gervais's conjecture that overreliance on System 1 reasoning accounts for the stubbornness of “the intuition that objects in the world, including living things, serve functions and exist for purposes" reflecting "intentional agency." But the picture is pretty much the same when we look at how CRT relates to endorsement of the proposition that “God created human beings pretty much in their present form at one time within the last 10,000 years or so."

Sure, there’s a modest uptick in belief in evolution as CRT increases.

But even those extremely reflective "3's"--a decided majority of whom attribute the natural history of human beings to divine agency-- don't exactly look like a sample of Richard Dawkinses to me!

Gervais states that these “results suggest that it does not take a great deal of analytic thinking to overcome creationist intuitions.”

But in fact they show that, at least for the overwhelming majority of University of Kentucky undergrads, it would take an amount that far exceeds the maximum value on the CRT scale!

This just isn't the picture one would expect to see if resistance to science's account of evolution was a consequence of overreliance on heuristic or System 1 reasoning.

b. Test the alternative hypothesis!  Even more important, the data do look like what you’d expect if the expressive rationality thesis (ERT) explained “belief”/“disbelief” in evolution. 

ERT posits that individuals will use their reason to fit their beliefs to the ones that predominate in their cultural group (Kahan 2013).  As explained, existing evidence is consistent with that: it shows that individuals who have a cultural style that features modest religiosity become more likely, but those with one that features strong religiosity less likely, to profess belief in evolution.

The way to test for such an effect is not to put religion into a multivariate model as a “control” as Gervais did,  but to examine whether there is an interaction between religiosity and CRT such that the effect of the latter depends on the level of the former.

Here’s what that interaction looks like in a regression model of "belief in evolution" for a general population sample, in which religiosity is measured with a composite scale reflecting self-reported church attendance, frequency of prayer, and importance of religion in one’s life (α = 0.80):

If we look, we can find the same interaction in Gervais’s data. 

This figure graphically displays output of a regression model that uses Study 1’s 7-point “belief in God” scale.

 

The modest impact of CRT in the sample as a whole is driven entirely by its effect on relatively less religious subjects.
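Here's a minimal sketch of that kind of interaction model in R, with hypothetical variable names (a 0/1 belief-in-evolution item, the CRT score, and a standardized religiosity scale) standing in for the actual ones:

```r
# Minimal sketch (hypothetical variable names): let the effect of CRT on belief
# in evolution depend on religiosity, rather than merely "controlling" for it.
m <- glm(believe_evolution ~ crt * religiosity, data = dat, family = binomial)
summary(m)    # the crt:religiosity term is the interaction of interest

# Predicted probabilities across the CRT range at low vs. high religiosity
newd <- expand.grid(crt = 0:3, religiosity = c(-1, 1))   # +/- 1 SD
newd$p <- predict(m, newdata = newd, type = "response")
newd
```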

yummy! raw data for regression model above!

Study 2 has a “belief in God” measure, too, scaled 1-100.  One-hundred point measures are a very bad idea; they aren’t going to measure variance any better than a 10-point (or probably even 7-point) one, but are going to have tons of noise in them.

The study also had a 7-point church attendance measure, so I combined these two into a scale.
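Here's roughly what that looks like, again with hypothetical variable names:

```r
# Minimal sketch (hypothetical variable names): standardize the 100-point
# "belief in God" item and the 7-point church-attendance item, then average
# them into a single religiosity scale.
rel <- scale(dat[, c("belief_god_100", "church_attend")])
dat$religiosity <- rowMeans(rel, na.rm = TRUE)

cor(dat$belief_god_100, dat$church_attend, use = "pairwise")   # do the two hang together?
```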

Here’s what the raw data look like when we examine how CRT relates to acceptance of the modern synthesis position on evolution in Study 2:

Once more, it's plain to see that CRT isn't having any effect on subjects above average in religiosity.  The interaction is there in the regression model, too, but because of the wobbly religiosity measure and smaller sample the model is underpowered (b = -.36, p = 0.07, for "theistic evolution" vs. "creationism"; b = -.39, p = 0.19, for "naturalistic" vs. "creationism"). (Actually, if one just uses the 100-point "belief in God" measure, there it is, "statistically significant"--for those who view p < 0.05 as having talismanic significance.)

Contrary to what Gervais concludes from his analyses, then, the evidence doesn’t in fact show a “consistent pattern whereby individuals who are more prone and/or able to engage in analytic thinking” use that capacity to “override” the intuition that “objects in the world, including living things, serve functions and exist for purposes” reflecting “intentional agency” in their creation.

We see that “pattern” consistently only in non-religious individuals.

That’s what ERT predicts: as individuals become more cognitively proficient, they become even more successful at forming and persisting in beliefs that express their identity.

I think Gervais missed this because he didn’t structure his analyses to assess the relative support of his data for the most important rival hypothesis to his own.

In fairness, Gervais does advert to some analyses in his footnotes that might have led him to believe he could rule out this view. E.g., he didn’t find an interaction between the predictors, he reports, when he regressed belief in evolution on CRT and a “religious upbringing” variable in study 1.  But that's hardly surprising: that variable was dichotomous and answered affirmatively by 75% of the subjects; it doesn’t have enough variance, and hence enough statistical power, to detect a meaningful interaction.

In study 2, Gervais administered a nonstandard collection of variables he calls “CREDS,” or “credibility enhancing displays.”  Unfortunately, the item wording wasn't specified in the paper, but Gervais describes them as measuring variance in “believing” in and “acting” on “supernatural beliefs.”

Gervais reports that the CREDS had only a modest correlation with disbelief in evolution, and also didn’t interact with religiosity when included as predictors of CRT.  I really don’t know what to say about that, except that the discrepancy in the performance of the CRED items, on the one hand, and the Belief in God and church attendance ones, on the other, makes me skeptical about what the former is measuring.

I think Gervais should have displayed a bit more skepticism too before he concluded that his data supported PIT.

4.  Limits of Yucky NHT

One last point, this one on methods.

The problem I have with Gervais’s paper is that it relies on an analytical strategy that doesn't test the weight of the evidence in his data in relation to hypotheses of consequence.  He tells us that he has found a “significant” correlation—but doesn’t show us that the effect observed supports the inference that his hypothesis depends on or rules out a contrary inference supportive of an alternative hypothesis.

These problems are intrinsic to so-called “null hypothesis testing.” Because the “null” is not usually a plausible hypothesis, and because “rejecting" it is often perfectly consistent with multiple competing hypotheses that are plausible, a testing strategy that aims only to “reject the null” will rarely give us any reason to revise our prior assessments of how the world works.

Good studies pit opposing hypotheses against each other in designs where the result, whatever it is, is highly likely to give us more reason than we had before for crediting one over the other.

Gervais is a very good psychologist, whose previous studies definitely reflect this strategy. This one, in my view, wasn’t as well designed—or at least as well analyzed—as his previous ones.

Or maybe I'm missing something, and he or someone else will helpfully tell me what that is!

But no matter what, given the balance of the evidence, I remain as convinced that Gervais is a superb scholar as I am that PIT doesn't explain conflicts over evolution, climate change, and other culturally contested science issues in the U.S.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Gervais, W.M. Override the controversy: Analytic thinking predicts endorsement of evolution. Cognition 142, 312-321 (2015), doi:10.1016/j.cognition.2015.05.011.

Gervais, W.M. & Norenzayan, A. Analytic Thinking Promotes Religious Disbelief. Science 336, 493-496 (2012), doi:10.1126/science.1215647.

Gervais, W.M., Shariff, A.F. & Norenzayan, A. Do you believe in atheists? Distrust is central to anti-atheist prejudice. Journal of Personality and Social Psychology 101, 1189 (2011).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. What is the science of science communication? J. Sci. Comm. (in press).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

 

Tuesday, Aug 11, 2015

The science of science documentary filmmaking: the missing audience hypothesis

More on this, soon . . .

The scholarly and practical motivation behind the proposed research is to reconcile two facts about science documentary programming in American society. The first is that such programming has outstanding content. Programs like NOVA, Nature, and Frontline, among others, enable curious non-experts to participate in the thrill of discoveries attained through the most advanced forms of scientific inquiry. Second, the audience for these programs is modest and demographically distinctive. These viewers, television industry analyses consistently find, tend to be older, more affluent, and more educated than the general television audience. They are known to be less religious, and they are more likely to identify themselves as politically liberal.

Why is enjoyment of such excellent programming confined so disproportionately to this particular audience? The most straightforward explanation is that these are the only members of the public who are situated to comprehend and enjoy science documentary programming. They are the natural audience for programs like NOVA, whereas non-viewers simply are not interested in the content of science documentaries.

The professionals who produce such programs find this “natural audience” hypothesis unconvincing, and so do we. One reason to doubt the “natural audience” hypothesis is that it’s plainly not the case that appreciation of science is confined to individuals who fit the distinctive profile of typical PBS documentary viewers. Measures of attitudes such as interest in science and trust of scientists are not strongly associated with demographic variables (Gauchat, 2011) and in fact are highly positive across the entire population (National Science Board, 2014, ch. 7).

Another reason to question the “natural audience” explanation is the popularity of what might be called “reality TV” science programming. Mythbusters is a weekly show broadcast by the Discovery Channel that features the use of innovative, jury-rigged experiments to test popular lore (“would a penny dropped from the top of the Empire State building really penetrate the skull of a person on the sidewalk?”). Consistently one of the top-rated primetime cable television programs among men 25-54 years of age (Good, 2010), the show is broadly representative of a niche collection of successful shows that feature real-life characters interacting in dramatic ways with technology or nature.

It would be impossible to explain the appeal of these programs if those who watch them did not find science and environmental TV shows entertaining. The protagonists of Mythbusters are not scientists, but they are using the mode of discovering truth—controlled experimentation—that is the signature of scientific inquiry. The show would not be such a tremendous success unless there was a broad popular audience that is exhilarated to observe such methods being used to satisfy curiosity about how the world works.

National Geographic Channel (co-owned by Fox Cable Networks) also serves an audience markedly different from PBS’s. Nat Geo’s series Wild Justice—a popular program that for four seasons chronicled the activities of California Game Wardens patrolling the wilds of the Sierra Nevada Mountains—testifies to its viewers’ fascination with nature and to their identification with the characters’ mission of protecting wildlife.

The reality-based science/nature genre is distinct from science documentary programming, which focuses on conveying the work of, and the insights generated by, professional scientists. But when combined with evidence of the breadth of curiosity about science across diverse segments of the population, including those from which these shows draw their principal viewers, the popularity of Mythbusters and like programs suggests an alternative explanation for the more limited appeal of science documentaries. We will call it the “excluded audience” hypothesis.

At least as striking as the difference in content between the reality-based shows, on the one hand, and science documentary programs, on the other, is the feel of them. Contrasting elements of the two—including the personality of the characters they feature, the dramatic quality of the situations they depict, and the narrative modes of presentation that they use—seem to fit the distinctive cultural styles of their audiences.

“The only difference between science and screwing around,” Mythbusters host Adam Savage once explained, “is when you write it down” (OneDublin.org, 2012). This statement might well perplex one class of documentary viewers, who would cringe at the suggestion that, say, work being done to investigate conjectures on quantum gravity at the Hadron Collider is even remotely akin to “screwing around.”

But Savage’s statement no doubt made perfect sense—even thrilled—the person to whom it was made: a sixth grade girl, whose adulatory letter asked Savage and his co-host, “what did you want to be when you grow up, and what inspired you to be scientists?” When that girl grows up, she might well be a scientist. Even if she decides to do something else, there is every likelihood that she’ll have retained the disposition to experience wonder and awe (as Savage plainly has) at how science enlarges our knowledge.

But what is most likely of all is that she will still be the kind of person who was engaged by Mythbusters. Science documentaries that don’t resonate with that person’s outlooks will thus be highly unlikely to engage her.

The “excluded audience” hypothesis holds that the failure to find an idiom that can speak to the diversity of cultural styles that characterize citizens of a pluralistic society creates a barrier between science documentaries and a class of viewers, ones whose curiosity to participate in knowing what is known to science these programs could fully satisfy. The barrier takes the form of cues that viewers unconsciously use to determine if a program is “right” for someone with their distinctive experiences, values, and social ties (Kahan, Jenkins-Smith, Tarantola, Silva & Braman 2015).

If anything approaching a “law” has been established at this point by the nascent science of science communication, it is that hostile or antagonistic cultural meanings stifle cognitive engagement (Kahan, 2010; Nisbet, 2010). A better understanding of how science documentary programming can avoid conveying such meanings would allow them to make their shows more cognitively engaging to a larger segment of the population. The now missing audience would then be enabled to experience the thrill and wonder that such programs consistently allow their current audience to enjoy.

Refs

Gauchat, G. (2011). The cultural authority of science: Public trust and acceptance of organized science. Public Understanding of Science, 20(6), 751-770. doi: 10.1177/0963662510365246.

Kahan, D. (2010). Fixing the Communications Failure. Nature, 463, 296-297.

Kahan, D. M., Jenkins-Smith, H., Tarantola, T., Silva, C., & Braman, D. (2015). Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science, 658, 192-222.

National Science Board. 2014. Science and Engineering Indicators 2014. Arlington, VA: National Science Foundation.

Nisbet, M. C. (2010). Framing Science: A New Paradigm in Public Engagement. In L. Kahlor & P. Stout (Eds.), Communicating Science: New Agendas in Communication (pp. 40-67). New York: Routledge.

OneDublin.org (2012). MythBusters Adam Savage and Kari Byron on the Art of Science and Experimentation, http://onedublin.org/2012/03/06/mythbusters-adam-savage-and-kari-byron-on-the-art-of-science-and-experimentation/.

Friday, Aug 7, 2015

Perplexed--once more--by "emotions in criminal law," Part 3: Motivated reasoning & the evaluative conception

Okay, here's part 3 of the n-part series on my continuing perplexity over  the criminal law's understanding of emotions.

I started off with the fundamental question: what's really going on?

This is what one asks when one has to swim through the current of dissonant idioms on emotions that flow through judicial opinions: of “highly respected” men of “good moral character,” possessing “high conceptions of the sanctity of the home and the virtue of women,” in whom the “shock” of spousal infidelity would thus naturally trigger “temporary insanity” and a resulting “loss of control” over their “mental processes”; versus the “rounders and libertines,” whose own lack of virtue would surely inure them to the same “mind-unbalancing” effect of discovering immorality on the part of others.

It's also what one asks when one encounters the sort of selectivity courts display toward impassioned offenders: excusing the "true man" who resorts to lethal violence to protect the "sacredness of his person" rather than beat a cowardly retreat when "wrongfully assailed" in a place he has "every right to be" -- b/c after all, who thinks "rationally in the presence of an uplifted knife"?—while condemning the chronically battered woman who shoots her sleeping husband, because she was motivated not by the "primal impulse" of "self-preservation" but only by her perception that the alternative was a "life of the worst kind of torture and . . . degradation . . . ."

Wha?...

In the last part I offered an explanation, one advanced in a 1996 article I wrote w/ Martha Nussbaum, that I called the "two conceptions thesis" or TCT.

TCT identifies two positions on what emotions are and why they matter: the "mechanistic conception," which treats emotions as unreasoning forces or impulses that acquit an actor of moral responsibility in whole or in part because of their destructive effect on volition; and the "evaluative" conception, which sees actors' emotions as moral evaluations that can in turn be evaluated in light of social norms that define who is entitled to what.

From voluntary manslaughter to duress, from self-defense to insanity--doctrines of criminal law all appear on casual inspection to reflect the "mechanistic conception."

But on reflection their legal elements create space for and thus demand the exercise of moral judgment, which decisionmakers inevitably exercise in the manner the evaluative conception envisions—by measuring the quality of the impassioned actor’s character, as revealed by his or her anger or fear or disgust.

That’s the account that in the 2011 essay “Two conceptions of two conceptions of emotion” I declared I no longer found satisfactory.  The source of my doubts about it was the work I had done in the intervening time, mainly in collaboration with others, on cultural cognition, which to me suggested an alternative and likely more compelling answer to the “what is going on” question: not conscious moral evaluation of the evaluations embodied in impassioned actors’ emotional motivations but rather the unconscious subversion of a genuine commitment to the normative theory (however cogent) that informs the rival mechanistic conception of emotion.

Below I reproduce from the 2011 essay the explanation for this shift in my understanding.

“Tomorrow” I’ll tell you why I now no longer have confidence in that view either. 

Because that’s what this whole series is about: repeatedly changing one’s mind. I don't think there's anything wrong with that; on the contrary, I think something is wrong when this doesn't happen to someone who is doing what one is supposed to do as an empiricist: using valid methods of observation, measurement and inference to incrementally enlarge the stock of evidence available to adjudicate between competing plausible explanations for a matter of genuine complexity....

4.

So what’s wrong with TCT? Despite its considerable explanatory power, TCT still leaves one obvious mystery unresolved: why is the mechanistic conception so conspicuous in the law? If it is merely a veneer, why are the decisionmakers covering things up? Why don’t they just say, in unmistakably clear terms, that they are evaluating the moral evaluations that offenders’ (and sometimes victims') emotions embody?

My answer is that they aren’t covering up anything. I see this response as not so much an alternative to TCT, however, as an alternative to the version I have just described. I will call this alternative the cognitive conception of TCT (or C-TCT) and distinguish it from the standard one, which I will call the moral evaluation conception (ME-TCT).

To sharpen the relevant distinctions, consider three models of the role of emotions in criminal law (Figure 1). The first contemplates that decisionmakers’ perceptions of the impact of offenders’ emotions should (and do, when decisionmakers aren’t being dishonest) determine outcomes wholly independent of any moral evaluations of the quality of those emotions. This is the naïve mechanistic view that TCT seeks to discredit and that it aggressively critiques when articulated by conservative opponents of reforming traditional doctrines. In its place, the second model asserts that outcomes in fact flow from decisionmakers’ evaluations of the moral quality of emotions, independently of their perceptions of the impact of emotions on offenders’ volition; this is what I’m calling ME-TCT. C-TCT, in contrast, accepts that decisionmakers are honestly (at least in most cases) reaching outcomes based on their view of the volitional impact of emotions. However, in assessing the intensity of emotions, they are unconsciously conforming what they see—actually, their perception of something that they can’t literally see—to outcomes that reflect culturally congenial social meanings.

One reason that I find C-TCT more compelling than ME-TCT is that I can’t bring myself to take seriously any understanding of TCT that implies decisionmakers are being systematically disingenuous when they appeal to the mechanistic conception of emotion to explain their legal determinations. The idea that they might be secretly invoking it en masse in order to conceal their commitments to politically contestable evaluative norms is preposterous; there’s no way the ever-expanding number of insiders could maintain—or even be expected uniformly to want to maintain—such a conspiracy! The idea that they are being openly disingenuous—that they are winking and grinning as they turn loose the cuckold, the homophobe, or the battered woman—also doesn’t ring true. People just aren’t that cynical; on the contrary, anyone who has taught substantive criminal law to thoughtful people will see that they are as intensely earnest as they are divided about the mental lives of cuckolds, battered women, beleaguered subway car commuters, and all the others, a point that Mark Kelman has brilliantly explored.

Even more important, though, I find myself compelled to accept C-TCT by what I’ve learned about the phenomenon of motivated reasoning during the years since I co-authored Two Conceptions of Emotion in Criminal Law. Motivated reasoning refers to a complex of unconscious cognitive processes that converge to promote formation of factual beliefs that suit some end or need extrinsic to the actual truth of those beliefs. One such end is the stake individuals have in protecting their association with and status within groups united by their commitment to shared understandings of the best life and the ideal society.

In the course of an ongoing research project that I have had the good fortune to be a part of, my collaborators and I have studied how this dynamic shapes perceptions of risk. People unconsciously search out and selectively credit information that supports beliefs that predominate in their cultural affinity groups; they turn to those who share their values, and whom they therefore trust, to certify what sorts of empirical claims they should believe; they even construe their first-hand experiences, including what they see and hear, to fit expectations that cohere with their defining group commitments. As a result, even when they agree on ends—safe streets, a clean environment, a prosperous economy—they end up culturally divided on the means of securing them.

Our research group has recently begun to use these methods to explain disagreement about legally consequential facts. We’ve found, for example, that people of diverse cultural outlooks form systematically different impressions when they view videotape evidence bearing on the degree of risk associated with a high-speed police car chase or on the intent of political demonstrators to intimidate passersby.

Much like the work I did earlier on emotions in criminal law, moreover, this work is part of a multi-faceted and dynamic scholarly conversation. Our work on cultural cognition and law builds on that of social psychologists such as Mark Alicke. More recently, too, other scholars, including Janice Nadler, and John Darley and Avani Sood have completed important studies supporting the likely impact of motivated reasoning on perceptions of legally consequential facts.

C-TCT flows naturally out of this work. The most plausible reason that the mechanistic conception is so conspicuous in the criminal law, on this view, is that ordinary people, including the ones who become judges, juries, and legislators, believe it. They believe (not without reason, including personal experience!) that volition-constraining affect is a signature element of emotion; they also accept that the intensity of such affective responses should have moral consequence akin to what doctrines informed by the mechanistic view seem to say they should. But in assessing one or another form of evidence that bears on offenders’ emotions, culturally diverse individuals unconsciously gravitate toward perceptions that connect them to and otherwise are congenial to persons who share their defining commitments.

There are two studies, in particular, that are supportive of this conclusion. One is a study that Donald Braman and I did, in which we found that mock jurors of opposing cultural outlooks formed opposing pro-defendant or pro-prosecution fact perceptions in a self-defense case involving a battered woman who killed her sleeping husband—and then flipped positions in one involving a beleaguered subway commuter who killed an African-American panhandler. Another study, by Nadler, found that extrinsic facts bearing on the moral quality of parties’ characters influenced mock jurors’ perceptions of various facts, including intent and causation.

I certainly would not say that the verdict is in on the relative strength of C-TCT and ME-TCT. But I’m convinced the case can and should be decided by empirical proof, and that the weight of the evidence to date supports C-TCT.

Thursday
Aug062015

Fall seminar: Law & Cognition

Will be offering this course in law school & psychology dept this fall:

Law & Cognition. The goal of this seminar will be to deepen participants' understanding of how legal decisionmakers--particularly judges and juries--think. We will compile an in-depth catalog of empirically grounded frameworks, including ones founded in behavioral economics, social psychology, and political science; relate these to historical and contemporary jurisprudential perspectives, such as "formalism," "legal realism," and the "legal process school"; and develop critical understandings of the logic and presuppositions of pertinent forms of proof--controlled experiments, observational studies, and neuroscience imaging, among others. Students will write short response papers on weekly readings.

I've taught the course before, but for sure I'll be updating the previous reading list, particularly in connection with the study of judicial decisionmaking, where there are now valid experimental alternatives to the observational studies of "judicial behavior" featured in political science.

The course is really pretty cool because it is equally valuable, in my view, for those who want to learn the "laws of cognition" (or at least the best current understandings of the mechanisms of them) & those who want to learn how cognitive dynamics shape the law. 

I advanced a theme similar to this to explain why law furnishes such a useful laboratory for studying cognitive science in Laws of Cognition and Cognition of Law, 135 Cognition 56 (2015),  which is a passable preview for this course.  

For sure we'll get to do fun things w/ little diagrams that relate various decisionmaking dynamics--from confirmation bias to motivated cognition, from the "story telling model" to "coherence based reasoning"-- to a straightforward Bayesian model of information processing!
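For concreteness, here is one minimal way to write down the "straightforward Bayesian model" I'm referring to--nothing more than the standard odds form of Bayes's theorem, with H the hypothesis and E the new evidence (how each of the dynamics above maps onto its terms is exactly what the diagrams are for):

```latex
% Posterior odds = prior odds x likelihood ratio
\[
\underbrace{\frac{\Pr(H \mid E)}{\Pr(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(H)}{\Pr(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{\Pr(E \mid H)}{\Pr(E \mid \lnot H)}}_{\text{likelihood ratio}}
\]
```

Each decisionmaking dynamic on the list can then be discussed in terms of which element of this identity it bends out of shape.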

I'm hoping, too, that this course can have a "virtual space," on-line counterpart.  That worked super well for last spring's Science of Science Communication seminar.

If anyone is eager to help facilitate the on-line counterpart, I'm happy to accommodate. Just send me an email!

I'll post various materials as they become available. But for now here is some more "course information":

General Information & Course Outline

A.  Nature of the Seminar

The focus of this seminar will be a set of interrelated frameworks for studying how legal decisionmakers think. These frameworks use concepts and methods from a variety of disciplines, including social psychology, behavioral economics, and political science. What unites—but also divides—them is their ambition to generate empirically grounded accounts of the various cognitive elements of legal decisionmaking: from values and motivations to perceptions and reasoning processes.

For our purposes, “legal decisionmakers” will mean mainly judges and jurors. Our aim will be to assess the contribution that the various frameworks make to explaining, predicting, and identifying means for improving the judgments of these actors. Because we will be interested in how the cognitive tendencies of these two groups of decisionmakers diverge, moreover, we will also afford some consideration to the professional(ized) habits of mind of lawyers more generally.

There are a number of things that we will not be examining in great detail. We will not be trying to identify how the study of cognition can be used to enhance the regulatory efficacy of the law, for example. Nor will we be examining the contribution that the study of cognition might make to improving the law’s use of forensic science. We will, of course, form some insights on these matters, for it is impossible to evaluate the cognitive functioning of legal decisionmakers without reference to its impact on the effectiveness of law and the accuracy of adjudication. But the limited duration of the seminar will prevent us from systematically assessing the relevance of the frameworks to these objectives—in large part because doing so adequately would require consideration of so many phenomena in addition to how legal decisionmakers think.

The seminar will also have a secondary objective: to form a working familiarity with the empirical methods featured in the study of cognition. We will not be designing studies or performing statistical analyses.  But we will be devoting time and attention to acquiring the conceptual knowledge necessary to make independent critical appraisals of the empirical work we will be examining. 

Monday
Aug032015

Vaccine hesitancy, acupuncture mania, and the methodological challenge of making sense of "boutique risk-benefit perceptions" (BRBPs)

A thoughtful correspondent drew my attention to evidence of the persistence of enthusiasm for acupuncture despite evidence that it doesn’t have any actual benefit.

He was struck by the contrast with the mirror image resistance to evidence that the benefits of childhood vaccines far outweigh their risks.

What sorts of cultural outlook might there be, he wondered, that predisposes some people to believe that sticking needles into their bodies promotes health and others that doing so will compromise it?! 

Maybe it’s a continuum with vaccine-hesitant people at one end and acupuncture devotees on the other?

Tongue-in-cheek on his part, but there’s an important point here about the role of fine-grained local influences on risk perception.  

My response:

Uh, no. The study finds that the group of exemption-seekers with those characteristics is *atypical* of people seeking exemptions generally. I am willing to bet that belief in the benefits of acupuncture will defy explanation by the sort of correlational, risk-predisposition profiling methods of which cultural cognition is an example.

Indeed, your comment actually highlights a research blind spot in the project to identify risk-perception propensities and to anticipate them through effective science communication.

The counterproductive media din to the contrary notwithstanding, vaccine hesitancy defies explanation by the sorts of cultural & like profiles that are so helpful in charting conflict over various other risks.

Ditto with GM food risks.

Same w/, oh, concern about pasteurized milk (and belief in the benefits of raw milk); fear of cell phone radiation; anxiety about drones; fluoridation of water; etc.

There's some small segment of the US general population that believes in the effectiveness of acupuncture and its advantages over conventional medical treatments, which presumably those same people view as nonbeneficial or overly risky.  I bet their views are unshared by the vast majority of people who share their cultural commitments generally.

Let's call these outlier views "boutique risk-benefit perceptions" -- BRBPs.

But let’s agree with "fearless Dave" Ropeik’s consistent point that it is not satisfying to shrug off BRBPs as disconnected from any social context, as lacking any genuine social meaning, or as simply random patterns of risk perception, unamenable to systematic explanation ...

I think the problem in accounting for BRBPs has two related causes:

First, the sorts of characteristics that matter in BRBPs might be ones that are featured in schemes like cultural cognition, but they always depend in addition on some local variable, one that makes those characteristics matter only in particular places, & indeed could make different sets of characteristics have different valences across space.

Second, the large-sample correlational studies that are used to examine such relationships in standard risk-profiling studies are unsuited for identifying the relevant indicators of BRBPs because the local variable will resist being operationalized in such a study, and when it's omitted the remaining cultural characteristics will always lack any systematic relationship to the risk perception in question.
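To make that second point concrete, here is a minimal simulation sketch--all of the numbers and variable names ("worldview," "local") are invented for illustration, not drawn from any actual dataset--of how a risk perception driven by a worldview-by-local-condition interaction can show essentially no worldview correlation in a general-population sample:

```python
# Illustrative only: a made-up risk perception that depends on cultural worldview
# *only where* an unmeasured local condition is present.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

worldview = rng.normal(size=n)      # a cultural-outlook score
local = rng.random(n) < 0.02        # a rare local condition (2% of the sample), unmeasured in practice

# Risk perception = a worldview effect switched on by the local condition, plus noise.
risk = 2.0 * worldview * local + rng.normal(size=n)

print("corr(worldview, risk), whole sample:      %+.2f" % np.corrcoef(worldview, risk)[0, 1])
print("corr(worldview, risk), within the locale: %+.2f" % np.corrcoef(worldview[local], risk[local])[0, 1])
```

The whole-sample correlation hovers near zero even though the relationship is strong where the local condition holds--which is the signature of a BRBP that a standard profiling study will read as "no cultural explanation."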

For an example of a closely related research problem where this dynamic is present and researchers just don't seem to get its significance, consider studies that purport to corroborate the trope that "rich, white, liberal, suburbanite parents" are anti-vax militants.

The most recent highly publicized study (or most recent highly publicized one I noticed) that purported to support this conclusion used a form of analysis that identifies "clusters" of school districts in which parents requested personal-belief exemptions in Calif.  

The clusters, as hypothesized, were in particular highly affluent, white, suburban school districts in Marin county (bay area) and in certain demographically comparable suburban school districts in the vicinity of LA.

Taking the cue from the authors' own characterization of their results, the media widely reported the study as confirming that “[t]he parents most likely to opt out of vaccines” are “typically white and well-to-do" etc.

One doctor, who has no training in or familiarity with the empirical study of risk perception and science communication, & who apparently has no familiarity with the empirical methods used in this particular study either, excitedly proclaimed that "[w]hile the study looked only at California,  ... similar patterns of demographics on parents would show up in other states as well."

Well, if so, then the conclusion will be that personal-exemption rates are not correlated with being "affluent, white, and suburban."  

In a state-wide regression analysis, this same study showed that suburban schools (which are affluent and mainly white in California) had substantially lower personal-exemption rates.

There's no contradiction or even paradox here.

"Cluster" analysis is a statistical technique designed, in effect, to find outliers: concentrated patterns of results that defy the sort of distribution one would expect in a statistical model in which one variable or set of variables is treated as the "cause" of another generally.  

If one can find such a cluster (i.e., one that can't be explained by a simple linear model that includes appropriate predictors), and can confidently rule out its appearance by chance, then necessarily one can infer that there is some other unobserved influence at work that is causing this unexpected concentration of whatever one is observing.

Generally speaking, cluster analysis isn't designed to identify causes of diseases or other like conditions. It is a form of analysis that tells you that there's some anomaly in need of explanation, almost certainly by other forms of empirical methods.

Strangely, the authors of the study apparently didn't get this.

They noted, with evident surprise, that "[S]uburban location had a negative relationship with PBEs [personal belief exemptions], opposite of what was anticipated given the maps of cluster assignments" -- & trotted out a series of post hoc explanations for this supposed anomaly.

But there was no anomaly to explain.  

If there are genuinely high-personal-exemption-rate clusters in certain white, affluent, suburban schools, that implies that there isn't an association between those characteristics and high personal-exemption rates generally--indeed, that there is more likely a negative association between them (if the association weren't negative outside the clusters, the high concentration in the clusters would be more likely to generate a positive linear correlation overall, albeit a weak one).
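Here is a toy numerical sketch of how the two findings fit together--every figure is invented, and the district counts and exemption rates are purely hypothetical:

```python
# Toy illustration only: made-up exemption rates for hypothetical school districts.
import numpy as np

rng = np.random.default_rng(1)

suburban = rng.normal(loc=1.5, scale=0.8, size=500).clip(min=0)  # % personal-belief exemptions
other    = rng.normal(loc=3.0, scale=0.8, size=500).clip(min=0)

# A small, conspicuous cluster of suburban districts with very high exemption rates.
suburban[:15] = rng.normal(loc=12.0, scale=2.0, size=15)

print("suburban districts: mean %.1f%%, max %.1f%%" % (suburban.mean(), suburban.max()))
print("other districts:    mean %.1f%%, max %.1f%%" % (other.mean(), other.max()))
```

The cluster is real and dramatic, yet suburban districts still average lower than the rest--exactly the pattern a state-wide regression with a "suburban" indicator would report as a negative relationship.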

Thus, the researchers, if it made sense for them to resort to spatial cluster analysis in the first place, should have anticipated the finding that "affluent, white, and suburban" school districts don’t have high personal-exemption rates generally.

Instead of announcing that their results had corroborated a common but incorrect stereotype, they should have recognized and advised readers that their study shows that in fact the influence that accounts for higher personal exemption rates in these schools is not that they are “affluent, white, and suburban” -- and is necessarily still unaccounted for!

They should also have called attention to the surplus of personal-exemption requests in school districts that are non-suburban -- in fact, among students in charter schools, whose attendees are more likely to be poor and minority.

I don't know why there would be higher exemption rates in students attending those schools. I seriously doubt that parents of these children are teeming with anti-vax sentiment. More likely, there’s a hole in the universal-vaccination net that should be identified and repaired here.

But the point is, researchers (at least those looking for the truth and not for the attention they can get for confirming a congenial misconception) aren't going to find out what influences, cultural or otherwise, explain vaccine hesitancy or ambivalence using general-population correlational studies.  The influences are too local, too fine grained, to be picked up by such means.  

Indeed, the "cluster" analysis methodology used in this and other studies is proof that something else-- something still not observed  -- is causing such behavior in these areas.  

It's something that necessarily evades the sorts of profiles one can identify using the sorts of attitudes and characteristics one can measure with a general-population survey.  

That's exactly what sets BRBPs apart from other types of risk perceptions.

BRBPs fall into a blind spot in the study of risk perception and science of science communication.  

We need valid empirical methods to remedy that. 

Thursday
Jul302015

*Now* what do alternative sanctions mean? And how'd I miss the memo?

There were pretty much three things that I found very mysterious about the disconnect between empirical evidence and public policy when I started as an academic in the late 1950s or whenever it was, and the main one was the excessive reliance on imprisonment in the U.S.

I've reproduced the first few paragraphs of what was one of my first published articles (Kahan 1996) (the other was on how the latest developments in cold fusion were likely to radically alter constitutional interpretation; could still happen!). But basically the idea was that the argument for so-called "alternative sanctions" was a loser b/c it ignored the phenomenon of social meaning.

The case for reducing or eliminating imprisonment for a host of non-violent offenders, ones who didn't need to be incapacitated for public safety, was largely focused on costs and benefits: Tossing people in jail is expensive for society, not to mention degrading and debilitating for offenders, and doesn't deter those forms of criminality any more effectively ("empirical evidence demonstrated") than fines and community service.

The reason this argument, which had proponents across the ideological spectrum, persistently failed to gain traction, I maintained, was that it disregarded the societal expectation that punishment convey an official attitude of disapprobation, and indeed visit, symbolically, on offenders a kind of lowering in status commensurate with the severity of their own disregard for the value of the goods their actions had transgressed.  Decades' worth of experience, I concluded, showed things wouldn't get better until the stock of alternatives was enriched with punishments that not only regulated behavior more efficiently than imprisonment but expressed condemnation as effectively.  I proposed shaming punishments as a candidate.

Well, something seems to have changed. Very dramatically so. 

It's not just that there is "bipartisan support" for reducing incarceration -- at various times there had been that, too, in the past.

But the actual carrying through on these policies seems now to be largely a matter of indifference to the public.  The mood hasn't so much changed as just evaporated. 

Who cares? (Hey, did you hear about that lion in Milwaukee?!)

And what's more, I have no idea how this transformation took place. 

I don't think the explanation is that those making the argument for "alternative sanctions" just stuck to it, refining and improving and amplifying their arguments until finally everyone "got it."

I think the arguments that are being credited now were just as available 10, 20, or 30 years ago (the process that led to the dominance of incarceration as a mode of punishment started in the 1970s and really got locked in by the mid-80s).

What changed was the unacceptable meaning of the alternatives.

Or even more accurately, I think, what changed was that the demand for what imprisonment conveys -- the distinctive gesture of condemnation associated with liberty deprivation -- just sort of withered and was forgotten about.... Take away that motivation to resist it, and the case that has always been so compelling actually starts to compel.

But like I said, I have no idea why this happened, and barely any idea when the significance of the meaning of imprisonment changed.

I just averted my eyes, or widened my perspective to try to make sense of other examples of public policy disputes where the question of what laws do seemed subordinate, not just morally but cognitively, to what laws say, particularly about the social status of competing groups—and “poof,” the “alternative sanctions” debate was gone. . . .

Unless of course, it isn’t!

BTW, the second place where this same dynamic loomed large and fascinated me when I started “working” as an academic was the debate over capital punishment.  The primacy of “symbolic” motivations (morally, cognitively) to instrumental, deterrence considerations was widely understood to explain the persistence of capital punishment in the U.S. (Kahan 1999; Ellsworth & Gross 1994; Ellsworth & Ross 1983; Stolz 1983; Tyler & Weber 1982).

It was assumed, too, that the intensity and durability of those expressive sensibilities meant the death penalty, like the overreliance on imprisonment in the U.S., was not going to go away.

Well, guess what? That’s changed too—and again for reasons that I don’t feel confident I can identify. I do feel confident that the “obvious” reasons—cost, conviction of the innocent, etc.—are not the reasons; those arguments were always available and likely even more compelling at an earlier time! The strength of the arguments didn’t change; the strength of the motivation to resist did—because, as with imprisonment, the demand for the meanings that capital punishment expresses abated.

Likely these developments are related. Capital punishment and “get tough on crime” were big issues—really, really big!—in every presidential election between 1968 and 1988.  And then the whole thing just went away. . . .

Huh.

The last issue of the three that had this quality when I started: gun control.  Good to see that some things never change.

But even better that many things do--in ways that furnish assurance that there will never be any shortage of mysteries to investigate.

What Do Alternative Sanctions Mean?

Dan M. Kahan

 

Imprisonment is the punishment of choice in American jurisdictions. In everyday life, the modes of human suffering are numerous and diverse: when we lose our property, we experience need; when we are denounced by those whose opinions we respect, we feel shame; when our bodies are tormented, we suffer physical pain. But for those who commit serious criminal offenses, the law strongly prefers one form of suffering--the deprivation of liberty--to the near exclusion of all others. Some alternatives to imprisonment, such as corporal punishment, are barely conceivable. Others, including fines and community service, do exist but are used sparingly and with great reluctance.

The singularity of American criminal punishments has been widely lamented. Imprisonment is harsh and degrading for offenders and extraordinarily expensive for society. Nor is there any evidence that imprisonment is more effective than its rivals in deterring various crimes. For these reasons, theorists of widely divergent orientations--from economics-minded conservatives to reform-minded civil libertarians--are united in their support for alternative sanctions.

The problem is that there is no political constituency for such reform. If anything, the public's commitment to imprisonment has intensified in step with the theorists' disaffection with it. In the last decade, prison sentences have been both dramatically lengthened for many offenses and extended to others that have traditionally been punished only with fines and probation.

What accounts for the resistance to alternative sanctions? The conventional answer is a failure of democratic politics. Members of the public are ignorant of the availability and feasibility of alternative sanctions; as a result, they are easy prey for self-interested politicians, who exploit their fear of crime by advocating more severe prison sentences. The only possible solution, on this analysis, is a relentless effort to educate the public on the virtues of the prison's rivals.

I want to advance a different explanation. The political unacceptability of alternative sanctions, I will argue, reflects their inadequacy along the expressive dimension of punishment. The public rejects the alternatives not because they perceive that these punishments won't work or aren't severe enough, but because they fail to express condemnation as dramatically and unequivocally as imprisonment.

This claim challenges the central theoretical premise of the case for alternative sanctions: that all forms of punishment are interchangeable along the dimension of severity or "bite." The purpose of imprisonment, on this account, is to make offenders suffer. The threat of such discomfort is intended to deter criminality, and the imposition of it to afford a criminal his just deserts. But liberty deprivation, the critics point out, is not the only way to make criminals uncomfortable. On this account, it should be possible to translate any particular term of imprisonment into an alternative sanction that imposes an equal amount of suffering. The alternatives, moreover, should be preferred whenever they can feasibly be imposed and whenever they cost less than the equivalent term of imprisonment.

This account is defective because it ignores what different forms of affliction mean. Punishment is not just a way to make offenders suffer; it is a special social convention that signifies moral condemnation. Not all modes of imposing suffering express condemnation or express it in the same way. The message of condemnation is very clear when society deprives an offender of his liberty. But when it merely fines him for the same act, the message is likely to be different: you may do what you have done, but you must pay for the privilege. Because community service penalties involve activities that conventionally entitle people to respect and admiration, they also fail to express condemnation in an unambiguous way. This mismatch between the suffering that a sanction imposes and the meaning that it has for society is what makes alternative sanctions politically unacceptable.

The importance of the expressive dimension of punishment should be evident. It reveals, for one thing, that punishment reformers face certain objective constraints. The social norms that determine what different forms of suffering mean cannot be simply dismissed as the product of ignorance or bias; rather, they reflect deeply rooted public understandings that mere exhortation is unlikely to change. But there are also more hopeful implications. If we can understand the expressive dimension of punishment, we should be able to perceive not only what kinds of punishment reforms won't work but also which ones will. Careful attention to social norms might allow us to translate alternative sanctions into a punitive vocabulary that makes them a meaningful substitute for imprisonment.

 Refs

Ellsworth, P.C. & Ross, L. Public Opinion and Capital Punishment: A Close Examination of the Views of Abolitionists and Retentionists. Crime & Delinquency 29, 116-169 (1983).

Ellsworth, P.C. & Gross, S.R. Hardening of the Attitudes: Americans’ Views on the Death Penalty. J. Soc. Issues 50, 19 (1994).

Kahan, D.M. The Secret Ambition of Deterrence. Harv. L. Rev. 113, 413 (1999).


Stolz, B.A. Congress and Capital Punishment: An Exercise in Symbolic Politics. L. & Pol. Q. 5, 157-180 (1983).

Tyler, T.R. & Weber, R. Support for the Death Penalty: Instrumental Response to Crime, or Symbolic Attitude. L. & Soc. Rev. 17, 21-45 (1982).

 

Tuesday
Jul282015

Cognitive dualism as an adaptive resource in a polluted science communication environment ... a fragment

from something I'm working on. . . .

I. Overview: the “entanglement” problem

By no means the only threat to the science communication environment, the “entanglement problem” nonetheless comprises a recurring and especially damaging one. It occurs when positions on issues that admit of scientific investigation become suffused with antagonistic cultural meanings, transforming them into badges of membership in and loyalty to competing groups. At that point, to protect the standing of their groups and their status within them, individuals can be expected to conform their assessment of all manner of information to the position that predominates among those who share their defining commitments.

It’s almost certainly a mistake to attribute this form of identity-protective cognition (Kahan 2010) to the constraints on rationality responsible for “base rate neglect,” “the availability effect,” “confirmation bias” and like reasoning errors (Kahneman, Slovic & Tversky 1982). For one thing, unlike those biases, identity-protective cognition does not originate in overreliance on heuristic (“System 1”) information processing. On the contrary, the forms of conscious, effortful (“System 2”) information processing most essential to recognizing and giving proper effect to scientific evidence—including cognitive reflection, numeracy, and science comprehension—amplify the tendency of individuals to form and persist in identity-protective beliefs (Kahan 2013b; Kahan, Peters et al. 2013; Kahan, Peters et al. 2012). . . .

This problem—the entanglement problem—is not a consequence of stupid people but of a polluted science communication environment ("stupid!") (Kahan 2012). The antagonistic cultural meanings that transform positions on scientific issues into badges of cultural identity are a toxin that disables the normally reliable reasoning faculties that people use to align themselves with what’s known by science.

Protecting the science communication environment from this sort of contamination is a central mission of the science of science communication (Kahan in press). . . .

II.  Entanglement and science communication environment protection

. . . . Once some scientific issue has become entangled in antagonistic cultural meanings, the process of detoxification is likely to be a slow one. In the interval it takes to quiet the dynamics that excite culturally polarizing forms of identity-protective cognition, society will stand in need of techniques for counteracting the debilitating impact of such a condition on its citizens’ capacity to reason (Hall Jamieson & Hardy 2014). . . .

B.  Cognitive dualism

Observed in both religious students of science and in religious science-trained professionals, cognitive dualism involves the capacity of individuals to maintain apparently contradictory beliefs about some fact—such as the natural history of human beings—that admits of scientific investigation.

Cognitive dualism challenges the premise, however, that such beliefs are genuinely contradictory. According to this position, a “belief” cannot, as a psychological matter, be defined solely by the proposition it embodies.

As mental objects, “beliefs” exist only within clusters or ensembles of mental states (including emotions, desires, and moral evaluations) distinctly suited for the performance of some action (Peirce 1877; Braithwaite 1932, 1946; Hetherington 2011). A highly religious doctor, for example, might explain that whether he “believes” in evolution depends on where he is: at “work,” where he uses knowledge of human evolution in his practice as an oncologist or as a medical researcher; or at “home,” where belief that humans were divinely created guides his behavior as a member of a particular religious community (Everhart & Hameed 2013). Because those opposing stances on the natural history of human beings exist only within the mental routines that enable him to do those activities, and because those activities do not contradict one another, the idea that the doctor harbors self-contradictory "beliefs" imposes a psychologically false criterion of identity on the constituents of his mind.

A similar account exists for religious science students who “don’t believe” in evolution. Research shows that it is possible to teach the modern synthesis to students who say they “don’t believe” in evolution just as readily as students who say they “do believe” in it. Afterwards, however, the former still profess not to “believe in” or accept evolution (Lawson & Worsnop 1992), a result that typically is understood by researchers to signify a limitation in the success of instruction for “nonbelieving” students.

Cognitive dualism, however, suggests that it is a mistake to infer that there is in fact any meaningful difference in the impact of the instruction on “believing” and “nonbelieving” students. If, as cognitive dualism supposes, beliefs as mental objects are “dispositions to action,” the science class has in fact generated the same belief in both: the sort that is linked to demonstrating the sort of knowledge of the modern synthesis certified by a high school biology exam (DiSessa 1982).

Such instruction has also left unaffected, in both, a completely distinct state of “belief” that exists for purposes of being a particular sort of person. The “disbelief in” evolution that the religious student has retained obviously performs that function. But so did the “belief in” evolution the nonreligious student held before he learned the modern synthesis. “Believing in” evolution at that point enabled him to inhabit a particular cultural style notwithstanding that he almost certainly subscribed to the naive Lamarckian view of how it works that the vast majority of people—believers and nonbelievers—entertain (Bishop & Anderson 1990; Shtulman 2006). What is more, he will almost certainly retain that identity-enabling “belief in” evolution even if (as is again highly likely) he thereafter completely forgets the rudiments of the modern synthesis. Should the religious student, in contrast, grow up, say, to be a doctor, she is likely to remember what she learned about the modern synthesis and to use it when doing anything that requires that knowledge—even as she continues to “disbelieve in” evolution in her life as a person who finds meaning in holding a particular faith (Everhart & Hameed 2013; cf. Hermann 2012).

The course, in sum, imparted in both the “believer” and “nonbeliever” the sort of knowledge supportive of doing the things that one can do effectively only by accepting science’s understanding of the natural history of human beings (take exams, carry out responsibilities as a science-trained professional).  But it left unaffected -- in both -- a state of “belief” that enables something completely orthogonal to what science actually knows: being a person who finds meaning in the world through the exercise of free reason in collaboration with others exercising the same.

Cognitive dualism supplies an adaptive resource in a polluted science communication environment.  Where a person experiences opposing states of belief as distinct, embedded in discrete and fully compatible clusters of action-enabling intentional states, she is freed from having to choose between being who she is and knowing what’s known by science. Understanding how to accommodate cognitive dualism, and to repel conditions that in fact can be shown to subvert it (Hameed 2015), is thus a form of scientific understanding integral to promoting the effective transmission of scientific knowledge—in classrooms, in businesses, in public meeting halls, and anywhere else—during the periods in which one or another scientific proposition has become enmeshed in antagonistic cultural meanings.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

DiSessa, A.A. Unlearning Aristotelian Physics: A Study of Knowledge-Based Learning. Cognitive Science 6, 37-75 (1982).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hall Jamieson, K. & Hardy, B.W. Leveraging scientific credibility about Arctic sea ice trends in a polarized political environment. Proceedings of the National Academy of Sciences 111, 13598-13605 (2014).

Hameed, S. Making sense of Islamic creationism in Europe. Public Understanding of Science 24, 388-399 (2015).

Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without Believing in evolution. Evo Edu Outreach 5, 619-628 (2012).

Hetherington, S.C. How to Know: A Practicalist Conception of Knowledge (J. Wiley, Chichester, West Sussex, U.K.; Malden, MA, 2011).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm, (in press).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahneman, D., Slovic, P. & Tversky, A. Judgment under uncertainty : heuristics and biases (Cambridge University Press, Cambridge ; New York, 1982).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly (1877), reprinted in Philosophical Writings of Peirce.

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

Saturday
Jul252015

Weekend update: going to SENCER summer camp to learn about the "self-measurement paradox," the "science communication problem," & the "disentanglement project"

I'll be participating next week in the annual SENCER Summer Institute.

The 14 billion regular readers of this blog already know this, but for the rest of you, SENCER is an organization dedicated to obliterating the “self-measurement paradox” -- the truly weird and ultimately intolerable failure of professions that traffic in scientific knowledge to use science's signature methods to assess and refine their own craft norms.

Most of the organization's members are educators who teach math & science.

But SENCER definitely recognizes the link between the self-measurement paradox and the broader science communication problem in the Liberal Republic of Science.  That problem is a consequence of the self-measurement paradox on a grand scale--our systematic failure to use evidence-based methods of science communication to assure that the vast scientific knowledge at our society's disposal is conveyed under conditions that enable free, reasoning citizens to reliably recognize it and give it the effect it is due when they govern themselves.

(Just to be clear: What effect it is due depends on citizens' values. Anyone who insists the best available scientific evidence uniquely determines policies either is very ill-informed or engaged in deliberative bad faith. Values, of course, naturally vary in a free society, creating the project of deliberative accommodation that is democracy's answer to the puzzle of how to reconcile individual autonomy with law.)

So ... in the session I'll be helping to lead, we'll be focusing on what I regard as the precise point of intersection between the self-measurement paradox and the science communication problem: the disentanglement project.  

In the science classroom, the "disentanglement project" refers to the development (by scientific means, of course) of strategies for unconfounding the question "what does science know" from the question "who are you & whose side are you on" in the study of scientific topics that have become enmeshed in antagonistic cultural meanings.

Critical in itself, learning how to disentangle knowledge and identity in education can, however, be expected to generate benefits that are even more far-reaching.  Disentangling knowledge from identity is in fact central to solving the broader science communication problem. Thus, studies aimed at implementing the disentanglement principle in science classrooms supply researchers with classrooms for acquiring the knowledge necessary for them to discern how to implement the disentanglement principle in institutions of self-government, too. That is the primary objective of the "new political science" essential to perfecting the Liberal Republic of Science as a political regime (Kahan in press). . . .

Boy, I can't wait for my SENCER summer camp session! Not to mention all the between-session volleyball games and evening marshmallow roasts!

My session description:

The science communication disentanglement project: What is to be done -- and how to do it with reliable and valid empirical methods

The topics of climate change and human evolution both feature the science communication entanglement problem. This problem occurs when a fact or set of facts that admit of scientific investigation become enmeshed in antagonistic cultural meanings that transform positions on those facts into badges of membership in opposing cultural groups. This condition is actually rare, but where it occurs the consequences can be spectacularly damaging to propagation of both the collective knowledge and the norms of constructive deliberation essential to enlightened self-government. The session will feature existing research on how to disentangle knowledge from antagonistic meanings both in and outside the classroom. The primary goal, however, will be to draw on the informed judgment of the participants to form conjectures on how, using the tools of empirical inquiry, educators and other science communicators can enlarge public understanding of how to protect free and reasoning citizens from being put in the position of having to choose between knowing what's known by science and being who they are.

Refs

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm  (in press).





Friday
Jul242015

On "best practices," "how to" manuals, and *genuinely* evidence-based science communication

From correspondence with a reflective person on whether there is utility in compiling “guide books” of “best practices” for climate-science and like-situated communicators . . . .

I think our descriptions of what we each have in mind are likely farther apart than what each of us actually has in mind.  My fault, I'm sure, b/c I haven't articulated clearly what it is that I think is "good" & what "not good" in the sorts of manuals that synthesizers of social science research compile and distribute.

I think the best thing would be for me to try to show you examples of each.

This is very very very good:

The concept of "best practices as best guesses" that is featured in the intro & at various points throughout is very helpful. It reminds users that advice is a provisional assessment of the best current evidence -- and indeed, can't even be meaningfully understood by a potential user who doesn't have a meaningful comprehension of what observations & inferences therefrom inform the "guess."

Also, as developed, the "best practices as best guesses" concept makes readers conscious that a recommendation is necessarily a hypothesis, to be applied in a manner that enables empirical assessment both in the course of implementation & at the conclusion of the intervention.  They are not mechanical, do-this directives.  The essays are written, too, in a manner that reflects an interpretive synthesis of bodies of literature, including the issues on which there are disagreements or competing understandings.  

This is bad-- very very very very bad.

It is a compilation of general banalities.  No one can get any genuine guidance from information presented in this goldilocks form: e.g., "don't use numbers, engage emotions to get attention ... but be careful not to rely too much on emotions b/c that will numb people..."

If they think they are getting that, they are just projecting their own preconceptions onto the cartoons -- literally -- that the manual comprises.  


The manual  ignores complexity and issues of external validity that reflective real-world communicators should be conscious of.  

Worst of all, there is zero engagement with what it means to have an evidence-based orientation and mode of operation.  As a result, this facile type of work reinforces rather than revises & reforms the understandings of real-world communicators who mistakenly expect lab researchers to hand them a set of "how to" directives, as opposed to a set of tools for testing their own best judgments about how to proceed.

I know you have concerns about whether I have unrealistic expectations about the motivation and ability of individuals associated with climate-science communication groups to make effective use of materials of the sort I think are "good."  Maybe you won't have that reaction after you look at the FDA manual.  

But if you do, then I'd say that part of the practice that has to change here involves evaluation of which sorts of groups ought to be funded by NGOs eager to promote better public engagement with climate science.  Those NGOs should adopt standards for awards that will reliably weed out of the pool of support recipients the ones that by disposition & mindset can't conduct themselves in a genuinely evidence-based way & replace them with ones who can and will structure themselves in a manner that enables them to do so.  

There's too much at stake here to rely on people who just won't use the available financial resources in a manner that one could reasonably expect to generate success in the world.

In particular, such resources shouldn't go to any group that thinks the success of a “science communication strategy” should be measured by how much it boosts contributions to the group’s own fund raising efforts.  It doesn’t surprise me to know that this happens but it does shock me to constantly observe members of these groups talking so unself-consciously about it, in a manner that betrays that perpetuation of their own existence is a measure of success in their minds independently of whether they are achieving the results that they presumably exist to bring about.


Thursday
Jul232015

Perplexed--once more--by "emotions in criminal law," Part 2: The "evaluative conception"

This is the second in an n-part series describing my evolving view of the significance of emotions in substantive criminal.  

Actually shifting view would be a better way to put it.  I took a position at one point that I later concluded missed if not the point then a very important point, one that had caused me to lose confidence in the original position.  

Now I find myself thinking that the successor position is also likely inadequate. Maybe the earlier position was right after all. Or perhaps some sort of dialectical synthesis will reveal itself to me if I think more about how the pieces of evidence before me actually fit together.

I'm really not sure!

Should I be worried that I don't know whether either of the announced positions I took before is right, and thus what I actually believe anymore?

The point of this series of posts, in addition to inviting reflection & comment on an interesting part of the law, is to explore "changing one's mind." 

One of my principal research interests is the ubiquity of defensive resistance to evidence that challenges people's perceptions of risk and like facts on culturally contested issues--climate change, gun control, etc.

But more intriguing to me at this particular moment is that it seems just as unusual for scholars studying this very phenomenon--or pretty much any other intriguing aspect of human behavior or cognition--to change their minds about what explains it.

Why would this be so?  By hypothesis, those scholars are using empirical methods to make sense of complex phenomena, the workings of which don't admit of direct observation and that must therefore be investigated indirectly, on the basis of the observations of other things we'd expect to see or not depending on the truth of different plausible theories of how those unobserved phenomena work.

Given the very nature of this activity, one might expect shifts in position to be commonplace. If the phenomena in question are complex and not open to direct observation; if multiple plausible theories compete to account for them; and if the evidence for deciding between those theories consists of observations that necessarily do nothing more than alter incrementally the balance of then-existing considerations in favor of one position or another, then why wouldn't individual researchers' positions display the character of successive estimates of a random variable subject to imperfect measurement?

Meaningful shifts might be expected to abate over time, as sound studies--valid measurements of the quantity of interest--start to converge on some value, estimates of which are less and less affected by the marginal impact of additional studies.  But where something is complex, and measuring instruments imperfect, that sort of stability will often take quite a while to emerge.  Moreover, it is during the interval it takes for such a state to form that we should expect to see the greatest volume of active, intense research--and thus the most occasion for those carrying out such investigations to shift positions as they update their views based on new evidence.
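Here's a minimal sketch of that intuition--purely illustrative, with an invented "true value" and invented study results--showing how a running estimate of a noisy quantity swings around early on and then stabilizes, so that each additional study moves it less and less:

```python
# Illustrative only: successive estimates of an unobserved quantity stabilize
# as imperfect measurements (studies) accumulate.
import numpy as np

rng = np.random.default_rng(7)
true_value = 0.30                                          # the unobserved quantity of interest
studies = true_value + rng.normal(scale=0.25, size=50)     # 50 imperfect measurements

running_estimate = np.cumsum(studies) / np.arange(1, 51)   # best estimate after each new study
for n in (1, 2, 5, 10, 25, 50):
    print(f"after {n:2d} studies, best estimate = {running_estimate[n-1]:+.2f}")
```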

Scholarly inquiry as a whole takes this form.  We view such shifts in prevailing understanding as signs of "progress," a byproduct of the enlargement of knowledge associated with the use of science's signature method of inquiry. (I really do mean to be talking only about "normal science," or as I prefer "progressive research programs," the operation of which is predominantly made up of successive incremental advances driven by investigation of competing solutions to unresolved questions or unexplained anomalies; so-called "paradigm shifts" are another matter altogether.)

So why shouldn't we observe this same thing in the career of individual researchers' own understandings of the complex phenomena they are studying? If scholars' own research programs are progressing, and their knowledge of the phenomena they are studying enlarging as a result, then shouldn't their own work be expected to furnish them periodically not just with reason for refinement and fine tuning of their previous understandings but with cause for announcing that they've discovered some decisive objection to an inference they drew earlier?

In Part 1, I reproduced an excerpt from Two Conceptions of Two Conceptions of Emotion in Criminal Law: An Essay Inspired by Bill Stuntz, in The Political Heart of Criminal Procedure 163 (David Skeel, Michael Klarman & Carol Steiker eds., 2011).  In that excerpt, I sketched out the hard question that the treatment of emotion in criminal law puts: namely, "what is really going on" when courts selectively treat impassioned behavior as grounds for mitigating or aggravating the law's appraisal of the moral quality of an offender's, or victim's, conduct?

Here's another snippet from that same essay, one in which I trace out an answer developed in Two Conceptions of Emotion in Criminal Law, 96 Colum. L. Rev. 269 (1996), an article I coauthored with Martha Nussbaum.  It's a position that, for reasons I'll discuss in "tomorrow's" post, I decided, at the time this article was written, no longer seemed right to me. The "day after tomorrow" I'll explain why I now don't think the reason I rejected that earlier paper seems right either.

But I'll tell you now how I feel about this: kind of excited, actually.

* * *

I will call [one account of the discordant themes that pervade the criminal law’s discussion of emotions] the two conceptions thesis, or “TCT.” This label derives from [Kahan & Nussbaum (1996)]. But the basic position—this particular solution to the puzzle of emotions in criminal law—was in line with ones that other scholars, including Sam Pillsbury and Victoria Nourse, were developing at roughly the same time, and that many others, including Cynthia Lee and Carol Steiker, have since refined and extended.

TCT posits that substantive criminal law features two competing views of what emotions are and why they matter. The first is the mechanistic conception, which sees emotions as thoughtless surges of affect or “impulses.” Emotions excuse or mitigate, on this account, because—and to the extent that—they deprive an individual of the power to control his or her actions.

The second account is the evaluative conception of emotion. This view treats emotions and related sensibilities as a species of moral judgment that expresses an actor’s evaluation of contingencies that threaten or promote ends the actor cares about. As such, emotions, on this view, can be evaluated, not just as strong or weak, but as good or bad, right or wrong, reasonable or unreasonable, depending on whether the values they express are ones we think appropriate or not for someone in the actor’s situation.

Each conception of emotion has an impressive pedigree in philosophy and psychology, and both are on display in the Oklahoma Court of Criminal Appeals decisions I started with. The mechanistic conception figures in those portions of the opinions emphasizing the “intensity of mental shock,” and resulting “loss of control,” “unseating of reason,” and “unbalancing of mind,” that attend the discovery of adultery; the evaluative in those that distinguish the “man of good character” and “refined sensibilities,” whose aggrievement warrants our solicitude, from the “rounder and libertine,” whose resentment of a man whose disregard for “the sanctity of the home” and “the virtue of women” he himself shares does not.

On their surface, the doctrines of criminal law are pervaded by mechanistic idioms and metaphors. But at their core, TCT asserts, they are evaluative. All of the doctrines contain one or another normative element that invites (or at least enables) decisionmakers to confine their liability-discharging or punishment-mitigating consequences to offenders whose emotional evaluations the decisionmakers morally approve of. If they find that element to be satisfied, they needn’t find that the offender’s passion embodied any particular quantum of volition-destroying force; if they find that quality to be absent, they needn’t afford the slightest dispensation no matter how overwhelming or irresistible the offender’s (or victim’s, in the case of “intervening causation”) passion was.

The anger of the man who kills his wife or her paramour, for example, is right for someone in his situation, because adultery is “the gravest possible offence which a wife can commit against her husband” and “the highest invasion of [a man's] property” by another man. Having “no such right to control the woman as a husband has to control his wife,” in contrast, the resentment of the man who kills the lover of his mistress reveals a morally incorrect overvaluation of his own prerogatives. Only the “heat of passion” of the former, then, will be deemed to have been “adequately provoked” for purposes of the involuntary manslaughter doctrine.

The woman whose fear leads her to aid the armed robber to protect her child appropriately loves her children more than she loves strangers, whereas the one who acquiesces in the abuse of her own child to avoid harm to herself excessively prefers her own well-being to her children’s. The threat to the former, then, but not to the latter is sufficient to “overcome the will of a person of reasonable firmness”—not because their wills were any more or less compromised but because reasonable women appropriately value their children’s well-being over anyone else’s, including their own.

What’s “true” about the man who stands his ground and kills is his character: like a “true beam,” it is straight, not warped. Because he appropriately values his “rights,” “liberty,” and “sacredness of . . . person” more than the life of a “wrongful” aggressor who tries to drive him from a public place where he has a “right to be,” he “reasonably” perceives flight to be as destructive of his “self-preservation” as death. The true woman, quite evidently, does not make the mistake of putting her right to stay put ahead of the life of her abusive husband, even if the alternative is to remain in “a life of the worst kind of torture and . . . degradation.”

The law refuses to accept any expert definition of “mental disease” for purposes of insanity. “[F]or all his insight into the dynamics of behavior, [the medical expert] has not solved the riddle of blame. The question remains an ethical one, the answer to which lies beyond scientific truth.” However implausible, then, it might be to think the explosive shock of infidelity invariably reverberates with greater intensity in the mind of a “man of refined sensibilities, having high conceptions of the sanctity of the home and the virtue of women,” than in that of a “moral degenerate, in the habit of consorting with prostitutes and dissolute women,” it is perfectly compatible with the law to characterize the former alone as sick.

The TCT solution to the puzzle of emotions in criminal law has three principal strengths. The first is its explanatory power. The evaluations that decisionmakers make of the values expressed in impassioned offenders’ emotions are informed by social norms. It is thus no surprise to see decisionmakers who are using the evaluative conception of emotion selectively exonerating (in whole or in part) offenders whose emotional valuations conform to prevailing expectations of what goods and states of affairs individuals occupying particular social roles are expected to value.

These norms, of course, are not fixed. They shift over time, and at any given moment might well be in a state of flux and contestation. . . . TCT thus explains . . . why the law’s appraisal of impassioned offenders shifts over time and why at any given moment it can be the focus of intense political conflict.***

A second, related strength of TCT is its critical power. . . . TCT proponents have often successfully exposed the conservative bias of [commentators], who piously denounce as “political” any shift or proposed reform in the law’s treatment of impassioned offenders while displaying a comically blind eye to the necessarily political content of the evaluations that inform traditional doctrines and their applications. . . .

The third and final attraction of TCT is its prescriptive power. Critical commentary begs the question: what should the law be? Accounts that treat the mechanistic veneer of the doctrine seriously don’t help; at best they produce muddle, and at worst they make us unwitting apologists for the norms that just below the surface inform traditional doctrine and doctrinal applications. If the core of the law is evaluative, then those who want to make the law as good as it can be should be self-consciously evaluative, TCT proponents (myself included!) argued. We should face up to the necessity and appropriateness of making the law a reflection of the best moral and political understanding we can fashion of the values that good people ought to have.

 

Tuesday
Jul212015

Perplexed--once more--by "emotions in criminal law": Part 1

So to try to terminate my obsession with the " 'hot hand' fallacy" fallacy, I have resorted to intellectual methadone, finding a new puzzle that I can substitute to quench my cravings but that I'm sure I'll be able to drop once those subside....

Actually, it is the issue that was in the background of yesterday's post on "changing my mind." I offered up the topic of "emotions in criminal law"--the question of how the law conceives of their nature and their normative significance--as a matter on which I had acknowledged, in a published paper (Kahan 2011), that the position I had taken in an article written yrs earlier (Kahan & Nussbaum 1996) had come to seem wrong to me based on things I had learned in the interim.

But in the course of reminding myself what position I had adopted in the later paper, it occurred to me that there were certain things about it that now seemed hard to reconcile with what I'd learned in the 4 yrs since I wrote that paper....

So I'm going to try to work out what my new for-now position should be based on the current state of how I understand various not directly observable things in the world to work.

In the course of doing that, moreover, I want to advance a claim about being in exactly this situation -- of finding that what one offered as a well-considered account of some phenomenon has to be qualified or simply replaced with a different position based on new things one has learned.

The claim is that this should be a normal, even commonplace thing.  Or at least it should be if one, first, chooses to devote one's attention to matters of genuine complexity, phenomena the workings of which are not demonstrable on the basis of direct inspection but rather only indirectly inferable on the basis of evidence, i.e., additional phenomena that can be observed and that one has reason to believe are caused by those nonobservable complex matters; and, second, recognizes that anything pertinent one discovers under these conditions necessarily doesn't settle the issue but rather supplies one only with more or less reason to credit one plausible account, rather than another, of what's really going on.

For in that situation, whatever one's current best understanding is will be in the nature of an estimate of a very fine quantity, and one's work in the nature of progressively more precise measurements, which can be expected to jump from one side of some critical value to the other and back again as one's knowledge continues to expand . . . .

This is actually how things look, more or less, within a "progressive research program" that engages the collaborative attention of a group of researchers joined in scholarly conversation.

So shouldn't it be the way the work of any particular researcher working within such a program looks, too, if he or she is genuinely trying to figure out the truth about some complex thing, the operations of which cannot be directly seen but rather only indirectly inferred on the basis of disciplined observation & measurement?....

Well, anyway, this post is the first of what I anticipate will be between 3 and 600 on the evolution of my understanding on "emotions in criminal law," which has been marked by a series of shifting positions animated by a constant state of perplexity.

In this first part, I reproduce an excerpt from Two Conceptions of Two Conceptions of Emotion, the essay I mentioned in yesterday's post, which is designed to conjure apprehension of the unobservable phenomenon whose apprehension is the goal of the inquiry.

* * *

2.

To introduce (or re-introduce) the puzzle I am concerned with, I will start with a pair of old decisions, both by the Oklahoma Court of Criminal Appeals. The issue in each was the same: whether the trial court erred by foreclosing the effective presentation of an insanity defense by a man charged with murder for killing his wife’s paramour.

In the first case, the court reversed the defendant’s conviction.[1] “Two doctors,” the court noted, “testified that the defendant . . . temporarily lost control of his mental processes” as a result of the “provocation” of his wife’s seduction.[2] “[W]e can perceive,” the court continued, that

a man of good moral character such as that possessed by the defendant, highly respected in his community, having regard for his duties as a husband and the virtue of women, upon learning of the immorality of his wife, might be shocked, or such knowledge might prey upon his mind and cause temporary insanity. In fact it would appear that such would be the most likely consequence of obtaining such information.[3]

In the second case, however, the court affirmed the conviction.[4] In that case, the court noted, “the state, over the objections of the defendant,” introduced evidence of “specific conduct tending to show . . . the defendant [to be] . . . a rounder and a libertine”:[5]

Facts were shown indicating that defendant's ideals of the sanctity of the home and the virtue of women were not so exalted, and that therefore the shock to his mind and finer sensibilities could not be so very great--at least not so great as to unbalance his mind. . . .

We think, in reason, that the shock would not be so great as it would to a man of refined sensibilities, having high conceptions of the sanctity of the home and the virtue of women.[6]

Thus, any trial rulings that prevented him from presenting a temporary insanity defense, the court held, were at most harmless error.

What’s really going on here? That is the question that any thoughtful reader who sets these two opinions out next to each other will feel compelled to ask. The court’s conclusion is straightforward: discovery of a wife’s infidelity is likely to deprive a sexually faithful man of his ability to comprehend or control his actions; such a discovery is not likely to have that effect, however, on an unfaithful man. But what’s not so straightforward is how to integrate the mélange of psychological and moral concepts that inform the court’s reasoning—“intensity of mental shock,” “unbalan[cing of] mind,” “loss of control,” on the one hand; “good moral character,” “regard for . . . the virtue of women,” “rounder and libertine,” on the other—into a coherent whole. How exactly does the court conceive of the nature of the emotional state of the “mentally insane” offender? What is it, precisely, about that condition that entitles someone to a defense?

These questions try to make sense of the decisions in philosophical or jurisprudential terms; but we might also feel impelled to ask “what is going on here” from a psychological or even political point of view. Do the judges really believe their own explanation of the distinction between the two cases? Or are they deliberately concealing part of what they think from view? If concealing, are they trying to fool us, or are they just being coy? Do we imagine them straight-faced and earnest, or winking and slyly grinning, as they pronounce their judgments?

What’s likely to strike thoughtful readers as puzzling about these two decisions, it turns out, is the puzzle of emotions in criminal law. The discordant pictures that the decisions paint—of “highly respected” men of “good moral character” who are “shocked” to the point of mindless “loss of control,” on the one hand; of “rounders and libertines,” whose own lack of virtue insulates them from “mind-unbalancing” assaults on their reason, on the other—pervades basic doctrines and their application.

“Detached reflection cannot be demanded in the presence of an uplifted knife,” we are told.[7] Hence we cannot blame the “true man” who refuses to flee “an assailant, who by violence or surprise maliciously seeks to” drive him from a public place “where [he] has the right to be.”[8] But the woman who “ ‘believed herself . . . doomed . . .  to a life of the worst kind of torture and . . . degradation” cannot on that basis be excused for killing her abusive husband in his sleep: because she had the option of leaving their home and striking out on her own, her will was not overcome by the “primal impulse” of “self-preservation.”[9]

A man who “discovered his wife in flagrante delicto with a man who was a total stranger to him, and at a time when [he] was trying to save his marriage and was deeply concerned about both his wife and his young child,” will necessarily experience the form of “ungovernable passion” that mitigates first-degree murder to manslaughter.[10] The same volitional impairment cannot be imputed to the man who kills the lover of his mistress, however, for he “has no such right to control the woman as a husband has to control his wife.”[11]

The deep “shame” of being subjected to rape is one of the “physical and mental injuries, the natural and probable result of which would render the [an unmarried woman] mentally irresponsible,” making her subsequent commission of suicide an act attributable to her rapist, who could therefore be convicted of murder.[12] But a man could not be deemed to have “caused” the death of his (8-months pregnant) wife—“a high tempered woman” who was “hard to get along with” and who on previous “occasions ran off and left her husband” alone with the couple’s infant—because her decision to expose herself to the nighttime cold of winter in fleeing their farmhouse was her own choice following a fight.[13]

Again and again, we are confronted with a kaleidoscope of dissonant reports of virtuous offenders too mentally enfeebled to obey the law and impassioned ones too vicious not to be deemed to have “voluntarily” chosen to transgress. So what is really going on?

 


[1] Hamilton v. State, 244 P.2d 328 (Okla. Crim. App. 1952).

[2] Id. at 335.

[3] Id.

[4] Coffeen v. State, 210 P. 288 (Okla. Crim. App. 1922).

[5] Id. at 290.

[6] Id. at 290-91

[7] Brown v. United States, 256 U.S. 335, 343 (1921) (Holmes, J.).

[8] State v. Bartlett, 71 S.W. 148, 152 (Mo. 1902).

[9] State v. Norman, 378 S.E.2d 8, 11, 12-13 (N.C. 1989).

[10] State v. Thornton, 730 S.W.2d 309, 312, 315 (Tenn. 1987).

[11] Rex v. Greening, 3 KB. 846, 849 (1913).

[12] Stephenson v. State, 179 N.E. 633, 635, 649 (Ind. 1932).

[13] Hendrickson v. Commonwealth, 3 S.W. 166, 167 (Ky. Ct. App. 1887).

 

Monday
Jul202015

Changing my mind on "emotions in criminal law"

I sometimes get asked--sometimes in a challenging way--whether I've ever "changed my mind" or "admitted I was wrong" about something.  Hell yeah! Here's an example-- Kahan, D. M. (2011), Two Conceptions of Two Conceptions of Emotion in Criminal Law: An Essay Inspired by Bill Stuntz, in D. Skeel, M. Klarman & C. Steiker (Eds.), The Political Heart of Criminal Procedure (pp. 163-176). Cambridge University Press (working paper version here), where I shift my views on a number of key points from an earlier paper, Kahan, D. M., & Nussbaum, M. C. (1996). Two Conceptions of Emotion in Criminal Law. Colum. L. Rev., 96, 269. There's more where this came from, too!

Indeed, I was looking at this particular paper the other day (after I offered it as an example to someone challenging me to show that I've ever acknowledged I was "wrong") & wondering if maybe it's wrong in light of Kahan, D. M., Hoffman, D. A., Evans, D., Devins, N., Lucci, E. A., & Cheng, K. (in press), 'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment. U. Pa. L. Rev., 164.  There's at least a tension to be explained.... Maybe the first paper was right...

Do I like saying I've changed my mind? Sure, if the reason is that I actually managed to figure out something that I didn't know before. If one never had occasion to announce that one had changed his or her mind for that reason, it would mean either (a) one was studying unchallenging, non-complex things (boring); or (b) one wasn't actually advancing in understanding in the course of study & reflection.

Do I worry that, as a result of saying "I think I wasn't right on X," people might not "believe me" when I say I think I know something in the future? No. First of all, they ought to be thinking critically about anything I say. Second, they ought to trust me more when they know that if I conclude I was wrong or have to qualify my previous view in some important way, I'll make an effort to tell them! Those who prefer to put their trust in scholars who wouldn't change their minds when they should, or wouldn't tell them when they did, are ones whose confidence I take no particular pride in earning.


Two Conceptions of Two Conceptions of Emotion in Criminal Law: An Essay Inspired by Bill Stuntz

Dan M. Kahan 

This essay examines alternative explanatory theories of the treatment of emotion in criminal law. In fact, it re-examines a previous exposition on this same topic. In Two Conceptions of Emotion in Criminal Law (Kahan & Nussbaum 1996), I argued that the law, despite a surface profession of fidelity to a mechanistic conception of emotion, in fact reflects an evaluative one: rather than thoughtless surges of affect that impair an actor’s volition, emotions, on this account, embody a moral evaluation on the part of the actor that is in turn subject to moral evaluation by legal decisionmakers as “right” or “wrong,” “virtuous” or “vicious,” and not merely as “strong” or “weak” in relation to the actor’s volition. I now qualify this claim—and indeed reject certain parts of it.  I do so on the basis of an alternative conception of the evaluative conception of emotion: whereas the position in Kahan & Nussbaum (1996) treats the evaluative conception as implementing a conscious moral appraisal on the part of decisionmakers, the alternative sees it, at least sometimes, as a product of decisionmakers’ unconscious vulnerability to appraisals they themselves would view as subversive of the law’s moral principles, which might well invest volitional impairment with normative significance. I examine the empirical evidence, amassed by various researchers including (without giving this point much thought) me, for this third view, which I label the “cognitive conception” as opposed to the earlier (Kahan & Nussbaum 1996) “moral conception” of the “evaluative” view of emotions in criminal law.

 

Sunday
Jul192015

Weekend update: Still fooled by non-randomness? Some gadgets to help you *see* the " 'hot hand' fallacy" fallacy

Well, I'm still obsessed with the " 'hot hand fallacy' fallacy." Are you?

As discussed previously, the classic "'hot hand' fallacy"  studies purported to show that people are deluded when they perceive that basketball players and other athletes enjoy temporary "hot streaks" during which they display an above-average level of proficiency.

The premise of the studies was that ordinary people are prone to detect patterns and thus to construe chance sequences of events (e.g., a consecutive string of successful dice rolls in craps) as evidence of some non-random process (e.g., a "hot streak," in which a craps player can be expected to defy the odds for a specified period of time).

For sure, people are disposed to see signal in noise.

But the question is whether that cognitive bias truly accounts for the perception that athletes are on a "hot streak."

The answer, according to an amazing paper by Joshua Miller & Adam Sanjurjo, is no

Or in any case, they show that the purported proof of the "hot hand fallacy" itself reflects an alluring but false intuition about the conditional independence of binary random events.

The "test" the "hot hand fallacy" researchers applied to determine whether a string of successes indicate a genuine "hot hand"--as opposed to the illusion associated with our over-active pattern-detection imaginations--was to examine how likely basketball players were to hit shots after some specified string of "hits" than they were to hit shots after an equivalent string of misses.  

If the success rates for shots following strings of "hits" were not "significantly" different from the success rates for shots following strings of "failures," then one could infer that the probability of hitting a shot after either a string of hits or misses was not significantly different from the probability of hitting a shot regardless of the outcome of previous shots. Strings of successful shots being no longer than what we should expect by chance in a random binary process, the "hot hand" could be dismissed as a product of our vulnerability to see patterns where they ain't, the researchers famously concluded.

Wrong!

This analytic strategy itself reflects a cognitive bias-- an understanding about the relationship of independent events that is intuitively appealing but in fact incorrect.

Basically, the mistake -- which for sure should now be called the " 'hot hand fallacy' fallacy" -- is to treat the conditional probability of success following a string of successes in a past sequence of outcomes as if it were the same as the conditional probability of success following a string of successes in a future or ongoing sequence. In the latter situation, the occurrence of independent events generated by a random process is (by definition) unconstrained by the past.  But in the former situation -- where one is examining a past sequence of such events --  that's not so.  

In the completed past sequence, there is a fixed number of each outcome.  If we are talking about successful shots by a basketball player, then in a season's worth of shots, he or she will have made a specifiable number of "hits" and "misses."

The cool Miller-Sanjurjo machine! It can be yours, because you -- unlike some *other* people (or robots or aliens or badgers with operational internet connections) who shall remain nameless -- never miss an episode of this blog! Just click!

Accordingly, if we examine the sequence of shots after the fact, the probability that the next shot in the sequence will be a "hit" will be lower immediately following a specified number of "hits," for the simple reason that the proportion of "hits" in the remainder of the sequence will necessarily be lower than it was before the previous successful shot or shots.

By the same token, if we observe a string of "misses," the proportion of "misses" in the remainder will be lower than it had been before the first shot in the string.  As a result, following a string of "misses," we can deduce that the probability has now gone up that the next shot in the sequence will turn out to have been a "hit."

Thus, it is wrong to expect that, on average, when we examine a past sequence of random binary outcomes, P(success|specified string of successes) will be equal to P(success|specified string of failures).  Instead, in that situation, we should expect P(success|specified string of successes) to be less than P(success|specified string of failures).

That means the original finding of the "hot hand fallacy" researchers that P(success|specified string of successes) = P(success|specified string of failures) in their samples of basketball player performances wasn't evidence that the "hot hand" perception is an illusion.  If P(success|specified string of successes) = P(success|specified string of failures) within an adequate sample of sequences, then we are observing a higher success rate following a string of successes than we would expect to see by chance.

In other words, the data reported by the original "hot hand fallacy" studies supported the inference that there was a hot-hand effect after all!

So goes M&S's extremely compelling proof, which I discussed in a previous post.  The M&S paper was featured on Andrew Gelman's Statistical Modeling, Causal Inference blog, where the comment thread quickly frayed and broke, resulting in a state of total mayhem and bedlam!

How did the "hot hand fallacy" researchers make this error? Why did it go undetected for 30 yrs, during which the studies they did have been celebrated as classics in the study of "bounded rationality"? Why do so many smart people find it so hard now to accept that those studies themselves rest on a mistaken understanding of the logical properties of random processes?

The answer I'd give  for all of these questions is the priority of affective perception to logical inference.

Basically, we see valid inferences before we apprehend, through ratiocination, the logical cogency of the inference.

What makes people who are good at drawing valid inferences good at that is that they more quickly and reliably perceive or feel the right answer -- or feel the wrongness of a seemingly correct but wrong one -- than those less adept at such inferences.

This is an implication of a conception of dual process reasoning that, in contrast to the dominant "System 1/System 2" one, sees unconscious reasoning and conscious effortful reasoning as integrated and reciprocal rather than discrete and hierarchical.

The "discrete & hierarchical" position imagines that people immediately form a a heuristic response ("System 1") and then, if they are good reasoners, use conscious, effortful processing ("System 2")  to "check" and if necessary revise that judgment.

The "integrated and reciprocal" position, in contrast, says that good reasoners experience are more likely to experience an unconscious feeling of the incorrectness of a wrong answer, and the need for effortful processing to determine the right answer, than are people who are poor reasoners. 

The reason the former are more likely to feel that right answers are right and wrong answers wrong is that they have through the use of their proficiency in conscious, effortful information processing trained their intuitions to alert them to the features of a problem that require the deployment of conscious, effortful processing.

Now what makes the fallacy inherent in the " 'hot hand fallacy' fallacy" so hard to detect, I surmise, is that those who've acquired reliable feelings about the wrongness of treating independent random events as dependent (the most conspicuous instance of this is the "gambler's fallacy") will in fact have trained their intuitions to recognize as right the corrective method of analyzing such events as genuinely independent.

If the "hot hand" perception is an illusion, then it definitely stems from mistaking an independent random process for one that is generating systematically interdependent results.

So fix it -- by applying a test that treats those same events as independent!

That's the intuition that the "hot hand fallacy" researchers had, and that 1000's & 1000's of other smart people have shared in celebrating their studies for three decades -- but it's wrong wrong wrong wrong wrong!!!!!

But because it feels right right right right right to those who've trained their intuitions to avoid heuristic biases involving the treatment of independent events as interdependent, it is super hard for them to accept that the method reflected in the "hot hand fallacy" studies is indeed incorrect.

So how does one fix that problem?

Well, no amount of logical argument will work!  One must simply see that the right result is right first; only then will one be open to working out the logic that supports what one is seeing.

And at that point, one has initiated the process that will eventually (probably not in too long a time!) recalibrate one's reciprocal and integrated dual-process reasoning apparatus so as to purge it of the heuristic bias that concealed the " 'hot hand fallacy' fallacy" from view for so long!

BTW, this is an account that draws on the brilliant exposition of "integrated and reciprocal" dual process reasoning offered by Howard Margolis.

For Margolis, reason giving is not what it appears: a recitation of the logical operations that make an inference valid. 

Rather it is a process of engaging another reasoner's affective perception, so that he or she sees why a result is correct, at which point the "reason why" can be conjured through conscious processing.  (The "Legal Realist" scholar Karl Llewellyn gave the same account of legal arguments, btw.)

To me, the way in which the " 'hot hand fallacy' fallacy" fits Margolis's account -- and also Ellen Peters's account of the sorts of heuristic biases that only those high in Numeracy are likely to be vulnerable to -- is what makes the M&S paper so darn compelling!

But now...

If you, like me and 10^6s of others, are still having trouble believing that the analytic strategy of the original "hot hand" studies was wrong, here are some gadgets that I hope will enable you, if you play with them, to see that M&S are in fact right.  Because once you see that, you'll have vanquished the intuition that bars the path to your conscious, logical apprehension of why they are right.  At which point, the rewiring of your brain to assimilate M&S's insight, and avoid the "'hot hand fallacy' fallacy" can begin!

Indeed, in my last post, I offered an argument that was in the nature of helping you to imagine or see why the " 'hot hand fallacy' fallacy" is wrong. 

But here--available exclusively to the 14 billion regular subscribers to this blog (don't share it w/ nonsubscribers; make them bear the cost of not being as smart as you are about how to use your spare time!)-- are a couple of cool gadgets that can help you see the point if you haven't already.

Gadget 1 is the "Miller-Sanjurjo Machine" (MSM). MSM is an excel sheet that random generates a sequence of 100 coin tosses.  It also keeps track of how each successive toss changes the probability that the next toss in the sequence will be a "heads."  By examining how that probability goes up & down in relation to strings of "heads" and "tails," one can see why it is wrong to simply expect P(H|any specified string of Hs) - P(T|any specified string of Ts) to be zero.

MSM also keeps track of how many times "heads" occurs after three previous "heads" and how many times "heads" occurs after three previous "tails."  If you keep doing tosses, you'll see that most of the time P(H|HHH)-P(H|TTT) < 0.
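
If you'd rather roll your own than download the spreadsheet, here's a minimal Python sketch of my own devising (a stand-in for MSM, not the spreadsheet itself) that generates one 100-toss sequence and tabulates what happens on the toss immediately following three straight "heads" and on the toss immediately following three straight "tails":

import random

def msm_run(n=100, streak=3, seed=None):
    # One sequence of n fair-coin tosses, recorded as 'H' or 'T'.
    rng = random.Random(seed)
    tosses = [rng.choice('HT') for _ in range(n)]

    # Tally the toss that immediately follows a run of `streak` heads,
    # and the toss that immediately follows a run of `streak` tails.
    heads_after_h = total_after_h = 0
    heads_after_t = total_after_t = 0
    for i in range(streak, n):
        prior = tosses[i - streak:i]
        if prior == ['H'] * streak:
            total_after_h += 1
            heads_after_h += tosses[i] == 'H'
        elif prior == ['T'] * streak:
            total_after_t += 1
            heads_after_t += tosses[i] == 'H'

    p_h = heads_after_h / total_after_h if total_after_h else None
    p_t = heads_after_t / total_after_t if total_after_t else None
    return p_h, p_t

p_after_hhh, p_after_ttt = msm_run(seed=1)
print("P(H|HHH) in this sequence:", p_after_hhh)
print("P(H|TTT) in this sequence:", p_after_ttt)

Any single run is noisy, of course -- which is what the next gadget is for.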

Or you'll likely think you see that. 

Because you have appropriately trained yourself to feel something isn't quite right about that way of proceeding, you'll very sensibly wonder if what you are seeing is real or just a reflection of the tendency of you as a human (assuming you are; apologies to our robot, animal, and space alien readers) to see pattern signals in noise.

Hence, Gadget 2: the "Miller-Sanjurjo Turing Machine" (MSTM)! 

OMG!!! A Miller-Sanjurjo Turing Machine! No matter how many times you run it, you'll swear it's another human being who behaves just the way you do!!

MSTM is not really a "Turing machine" (& I'm conflating "Turing machine" with "Turing test")-- but who cares?  It's a cool name for what is actually just a simple statistical simulation that does 1,000 times what its baby sister MSM does only once -- that is, flip 100 coins and tabulate P(H|HHH) & P(H|TTT).

MSTM then reports the average difference between the two.  That way you can see that it is in fact true that P(H|HHH) - P(H|TTT) should, on average, be expected to be < 0.

Indeed, you can see exactly how much less than 0 we should expect P(H|HHH) - P(H|TTT) to be: about 8%. That amount is the bias that was built into the original "hot hand" studies against finding a "hot hand."

(Actually, as M&S explain, the size of the bias could be more or less than that depending on the length of the sequences of shots one includes in the sample and the number of previous "hits" one treats as the threshold for a potential "hot streak".)

MSTM is written to operate in Stata.  But if you don't have Stata, you can look at the code (opening the file as a .txt document) & likely get how it works & come up with an equivalent program to run on some other application.
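
For those without Stata, here's a rough Python equivalent of my own (a sketch of the same idea, not a translation of the Stata code): it repeats the 100-toss experiment 1,000 times, computes P(H|HHH) and P(H|TTT) within each sequence, and averages the difference across the runs in which both are defined.

import random

def freq_h_after_streak(tosses, symbol, k=3):
    # Relative frequency of 'H' on tosses immediately preceded by k
    # consecutive tosses equal to `symbol`; None if there are no such tosses.
    hits = total = 0
    for i in range(k, len(tosses)):
        if tosses[i - k:i] == [symbol] * k:
            total += 1
            hits += tosses[i] == 'H'
    return hits / total if total else None

def mstm(runs=1000, n=100, k=3, seed=1):
    rng = random.Random(seed)
    diffs = []
    for _ in range(runs):
        tosses = [rng.choice('HT') for _ in range(n)]
        p_h = freq_h_after_streak(tosses, 'H', k)
        p_t = freq_h_after_streak(tosses, 'T', k)
        if p_h is not None and p_t is not None:
            diffs.append(p_h - p_t)
    return sum(diffs) / len(diffs)

print("average P(H|HHH) - P(H|TTT) across runs:", round(mstm(), 3))

The average difference should come out in the neighborhood of the roughly 8-percentage-point (negative) bias described above; and, as the parenthetical above notes, it will move around if you vary the sequence length or the streak threshold.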

Have fun seeing, ratiocinating, and rewiring [all in that order!] your affective perception of valid inferences! 

Friday
Jul172015

Two threats to the public-health good of childhood vaccines ... a fragment

From something in the pipeline:

The tremendous benefit that our society enjoys by virtue of universal childhood immunizations is being put in jeopardy by two threats.  The first is the deliberate miscommunication of scientific evidence on vaccine safety. The second is our society’s persistent neglect of the best available scientific evidence on risk communication.  Indeed, these two threats are linked: the void created by the absence of scientifically informed, professional risk communication is predictably being filled by uninformed, ad hoc, unprofessional alternatives, which nourish the state of confusion that miscommunicators aim to sow.  The value of the scientific knowledge embodied in childhood vaccinations demands a commensurate investment in effectively using science to protect the science communication environment in which ordinary members of the public come to know what is known by science. Every constituent of the public health establishment—from government agencies to research universities, from professional associations to philanthropic organizations—must contribute its share to this vital public good.

Friday
Jul102015

Holy smokes! The "'hot-hand fallacy' fallacy"!

It's super-duper easy to demonstrate that individuals of low to moderate Numeracy--an information-processing disposition that consists in the capacity & motivation to engage in quantitative reasoning--are prone to all manner of biases--like "denominator neglect," "confirmation bias," "covariance [non]detection," the "conjunction fallacy," etc.

It's harder, but not impossible, to show that individuals high in Numeracy are more prone to biased reasoning under particular conditions.

In one such study, Ellen Peters and her colleagues did an experiment in which subjects evaluated the attractiveness of proposed wagers.

For one group of subjects, the proposed wager involved outcomes of a positive sum & nothing, with respective probabilities adding to 1.  

For another group, the proposed wager had a slightly lower positive expected value, and the proposed outcomes were a positive sum & a negative sum (again with respective probabilities adding to 1).
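
To make the structure of the comparison concrete, here's a toy expected-value calculation in Python with made-up stakes (illustrative numbers only; not necessarily the amounts Peters et al. actually used):

def expected_value(outcomes):
    # outcomes: list of (payoff, probability) pairs whose probabilities sum to 1
    return sum(payoff * p for payoff, p in outcomes)

# Hypothetical stakes chosen to mimic the two wagers described above.
wager_1 = [(9.00, 7/36), (0.00, 29/36)]    # win a positive sum, or win nothing
wager_2 = [(9.00, 7/36), (-0.05, 29/36)]   # same win, but a small loss otherwise

print("EV of wager 1:", round(expected_value(wager_1), 3))   # 1.75
print("EV of wager 2:", round(expected_value(wager_2), 3))   # about 1.71 -- slightly lower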

Because the second wager had a lower expected value, and added "loss aversion" to boot, one might have expected subjects to view the first as more attractive.

But in fact subjects low in Numeracy ranked the two comparable in attractiveness.  Maybe they couldn't do the math to figure out the EVs. 

But the real surprise was that among subjects high in Numeracy, the second wager-- the one that coupled a potential gain and a potential loss-- was rated as being substantially more attractive than the first -- the one that coupled a potential gain with a potential outcome of zero and a higher EV.

Go figure!

This result, which is hard to make sense of if we assume that people generally prefer to maximize their wealth, fit Peters et al.'s hypothesis that the cognitive proficiency associated with high Numeracy guides decisionmaking through its influence in calibrating affective perceptions.  

Because those high in Numeracy literally feel the significance of quantitative information, the need to do the computations necessary to evaluate the second wager, Peters et al. surmised, would generate a more intense experience of positive affect for them than would the process of evaluating the first wager, the positive expected value of which can be seen without doing any math at all.  Lacking the same sort of emotional connection to quantitative information, the subjects low in Numeracy wouldn't perceive much difference between the two wagers.

Veeeeery interesting.   

But can we find real-world examples of biases in quantitative information-processing distinctive to individuals high in Numeracy?  Being able to is important not only to show that the Peters et al. result has "practical" significance but also to show that it is valid.  Their account of what they expected to and did find hangs together, but as always there are alternative explanations for their results.  We'd have more reason to credit the explanation they gave-- that high Numeracy can actually cause individuals to make mistakes in quantitative reasoning that low Numeracy ones wouldn't--if we could see the same thing happening in the real world.

That way of thinking is an instance of the principle of convergent validity: because we can never be "certain" that the inference we are drawing from an empirical finding isn't an artifact of some peculiarity of the study design, the corroboration of that finding by an empirical study using different methods -- ones not subject to whatever potential defect diminished our confidence in the first -- will supply us with more reason to treat the first finding as valid.

Indeed, the confidence enhancement will be reciprocal: because there will always be some alternative explanation for the findings associated with the second method, too, the concordance of the results reached via those means with the results generated by whatever method informed the first study gives us more reason to credit the inference we are drawing from the second.

Okay, so  now we have some realllllllly cool "real world" evidence of the distinctive vulnerability of high Numeracy types to a certain form of quantitative-reasoning bias.

It comes in a paper, the existence of which I was alerted to in the blog of stats legend (& former Freud expert) Andrew Gelman, that examines the probability that we'll observe the immediate recurrence of an outcome if we examine some sequence of binary outcomes generated by a process in which the outcomes are independent of one another-- e.g., of getting "heads" again after getting "heads" rather than "tails" on the previous flip of a fair coin.

We all know that if the events are independent, then obviously the probability of the previous event recurring is exactly the same as the probability that it would occur in the first place.

So if someone flipped a coin 100 times, & we then examined her meticulously recorded results, we'd discover the probability that she got "heads" after any particular flip of "heads" was 0.50, the same as it would be had she gotten "tails" in the previous flip.

Indeed, only real dummies don't get this!  The idea that the probability of independent events is influenced by the occurrence of past events is one of the mistakes that those low to moderate Numeracy dolts make!  

They (i.e., most people) think that if a string of "heads" comes up in a "fair" coin toss (we shouldn't care if the coin is fair; but that's another stats-legend/former-Freud-expert Andrew Gelman blog post), then the probability we'll observe "heads" on the next toss goes down, and the probability that we'll observe "tails" goes up. Not!

Only a true moron, then, would think that if we looked at a past series of coin flips, the probability of a "heads" after a "heads" would be lower than the probability of a "heads" after a "tail"! Ha ha ha ha ha! I want to play that dope in poker! Ha ha ha!

Um ... not so fast, say Miller & Sanjurjo in their working paper, "Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers."

The "assumption that in a sequence of coin tosses, the relative frequency of heads on those flips that immediately follow a streak of heads is expected to be equal to the relative frequency of heads on those flips that immediately follow a streak of tails" is "seemingly correct, but mistaken" (p. 19).

Yeah, right.

"We prove," M&S announce (p. 22),

that in a finite sequence generated by repeated trials of a Bernoulli random variable the expected conditional relative frequency of successes, on those realizations that immediately follow a streak of successes, is strictly less than the fixed probability of success.

What? (I'm asking myself this at the same time you are asking me.) "That can't possibly be the case"!

You'll feel like someone is scratching his fingers on a chalkboard as you do it, but read the first 6 pages of their paper (two or three times if you can't believe what you conclude the first time) & you'll be convinced this is true.
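
If you'd like to see the claim in miniature before slogging through the proof, here's a quick brute-force check in Python (my own sketch, not anything from the paper): it enumerates all 16 equally likely four-flip sequences, computes the relative frequency of heads on flips that immediately follow a heads in each, and averages over the sequences in which that frequency is defined.

from itertools import product

def freq_heads_after_heads(seq):
    # Relative frequency of 'H' on flips immediately preceded by an 'H';
    # None if no flip in the sequence is preceded by an 'H'.
    followers = [seq[i] for i in range(1, len(seq)) if seq[i - 1] == 'H']
    return followers.count('H') / len(followers) if followers else None

freqs = [freq_heads_after_heads(seq) for seq in product('HT', repeat=4)]
defined = [f for f in freqs if f is not None]
print("sequences with a defined frequency:", len(defined))             # 14 of 16
print("average frequency of H after H:", sum(defined) / len(defined))  # about 0.405, not 0.5

Every flip is a fair, independent toss, and yet the average conditional relative frequency comes out around 0.40 rather than 0.50 -- which is just what the quoted proposition says.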

Can I explain this really counterintuitive (for high Numeracy people, at least) result in conceptual terms? Not sure but I'll try!

If we flip a coin a "bunch" of times, we'll get roughly 0.50 "heads" & 0.50 "tails" (it will land on its edge 10^-6 of the time). But if we go back & count the "heads" that came up only after a flip of "heads," we'll come up w/ less than 0.5 x 1 "bunch."

If we look at any sequence in the "bunch," there will be some runs of "heads" in there.  Consider "THHTHTTTHTHHHTHT."  In this sequence of 16, there were (conveniently!) 8 "heads" & 8 "tails."  But only 3 of the 8 (conveniently!) occurred after a previous flip of "heads"; 5 of the 8 occurred after a flip of "tails."

In this sample, then, the probability of getting "heads" again after getting "heads" on the previous flip was not 0.5. It was 3/8, or .375, or ... about 0.4!
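
You can check that arithmetic with a couple of lines of Python (a throwaway sketch, obviously):

seq = "THHTHTTTHTHHHTHT"   # the 16-flip sequence above

after_heads = [seq[i] for i in range(1, len(seq)) if seq[i - 1] == 'H']
after_tails = [seq[i] for i in range(1, len(seq)) if seq[i - 1] == 'T']

print("flips following a heads:", len(after_heads),
      "| heads among them:", after_heads.count('H'))    # 8 flips, 3 heads -> 3/8
print("flips following a tails:", len(after_tails),
      "| heads among them:", after_tails.count('H'))    # 7 flips, 5 heads -> 5/7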

You might wonder (because for sure you are searching for the flaw in the reasoning) whether this result was just a consequence of the sequence I happened to "report" for my (N = 16) "experiment."

You'd not be wrong to respond that way!

But if you think hard enough & start to play around with the general point -- that we are looking at the history of a past sequence of coin tosses -- you'll see (eventually!) that, on average, the frequency of "heads" among flips that occur after a previous "heads" (not to mention after "several" heads in a row!) is lower than the overall frequency of "heads" in the sequence.

That indeed it has to be. 

What will you be seeing/feeling when you "get" this? Perhaps this: 

  1. Imagine I perform 100 coin tosses and observe 50 "heads" and 50 "tails." (No problem so far, right?)
  2. If I now observe the recorded sequence and begin to count backwards from 50 every time I see a "heads," I'll always know how many "heads" remain in the sequence.  (Still okay?  Good.)
  3. Necessarily, the number goes down by 1 every time I see a "heads" in the sequence. 
  4. And necessarily the number does not go down -- it stays the same -- every time I see a "tails" in the sequence.
  5. From this we can deduce that the probability that the next flip in the sequence will be a "heads" is always lower if the previous flip was a "heads" than if it was a "tails."
  6. Oh, btw, steps 2-5 still apply if you happened to get 51 "heads," or 48 or 55 or whatever, in your 100 tosses. Think about it!

At this point you are saying, um, "now I'm not sure anymore"; go through that again.  Okay...

But here is the really cool & important thing: M&S show that the methodology used in literature examining the so-called "hot hand fallacy" doesn't reflect this logic.

Those studies have been understood to "debunk" the common perception that basketball players go through "hot streaks" during which it makes sense for others to expect them to achieve a level of shooting success that exceeds their usual or average level of success.

The researchers who purported to "debunk" the perception of "hot hands" report that if one examines game data, the probability of players making a shot after making a specified number of shots in a row is roughly their average level of success. Just as one would expect if shots are independent events-- so there's no "hot hand" in reality--only in our fallible, error-prone minds!

But this method of analyzing the data, M&S demonstrate, is wrong. 

It overlooks that, "by conditioning on a streak of hits within a sequence of finite length, one creates a selection bias towards observing shots that are misses" (p. 19).

Yeah, that's what I was trying to say!

So if the data show, as the "hot hand fallacy" researchers found, that the probability a player would make his or her next shot after making a specified number in a row was the same as the probability that he or she would make a shot overall, their data, contrary to their conclusion, support the inference that players do indeed enjoy "hot streaks" longer than one would expect to observe by chance in a genuinely random process (& necessarily, somewhere along the line, "cold streaks" longer than one would expect by chance too).

I'm sold!

But for me, the amazing thing is not the cool math but the demonstration, w/ real world evidence, of high Numeracy people being distinctively prone to a bias in quantitative reasoning.

The evidence consists in the mistake made by the authors of the original "hot hand" studies and repeated by 100s or even 1000s (tens of thousands?) of decision science researchers who have long celebrated these classic studies and held them forward as a paradigmatic example of the fallibility of human perception.

As M&S point out, this was a mistake that we would expect only a high Numeracy person to make. A low Numeracy person is more prone to believe that independent events are not independent; that's what the "gambler's fallacy" is about. 

Someone who gets why the gambler's fallacy is a fallacy will feel that the way in which "hot hand fallacy" researchers analyzed their data was obviously correct: because events that are independent occur with the same probability irrespective of past outcomes, it seems to make perfect sense to test the "hot hand" claim by examining whether players' shooting proficiency immediately after making a shot differs significantly from their proficiency immediately after missing.

But in fact, that's not the right test!  Seriously, it's not!  But it really really really seems like it is to people whose feelings of correctness have been shaped in accord with the basic logic of probability theory--i.e., to high Numeracy people!  (I myself still can't really accept this even though I accept it!)

That's what Peters says happens when people become more Numerate: they develop affective perceptions attuned to sound inferences from quantitative information.  Those affective perceptions help to alert high Numeracy people to the traps that low Numeracy ones are distinctively vulnerable to.

But they can create their own traps -- they come with their own affective "Sirens," luring the highly Numerate to certain nearly irresistible but wrong inferences....

Holy smokes!

M&S don't make a lot of this particular implication of their paper. That's okay-- they like probability theory, I like cognition!

But they definitely aren't oblivious to it. 

On the contrary, they actually propose-- in a casual way in a footnote (p. 2, n.2)-- a really cool experiment that could be used to test the hypothesis that the "'hot hand fallacy' fallacy" is one that highly Numerate individuals are more vulnerable to than less Numerate ones:

Similarly, it is easy to construct betting games that act as money pumps while defying intuition. For example, we can offer the following lottery at a $5 ticket price: a fair coin will be flipped 4 times. If the relative frequency of heads on flips that immediately follow a heads is greater than 0.5 then the ticket pays $10; if the relative frequency is less than 0.5 then the ticket pays $0; if the relative frequency is exactly equal to 0.5, or if no flip is immediately preceded by a heads, then a new sequence of 4 flips is generated. While, intuitively, it seems like the expected payout of this ticket is $0, it is actually $-0.71 (see Table 1). Curiously, this betting game may be more attractive to someone who believes in the independence of coin flips, rather than someone who holds the Gambler’s fallacy.

If someone did that study & got the result-- high Numeracy taking the bet more often than low--we'd have "convergent validation" of the inference I am drawing from M&S's paper, which I now am treating (for evidentiary purposes) as part of a case study in how those who know a lot can make distinctive -- spectacular, colossal even! -- errors.

But my whole point is that M&S's paper, by flushing this real-world mistake out of hiding, convergently validates the experimental work of Peters et al.

But for sure, more experiments should be done! Because empirical proof never "proves" anything; it only gives us more reason than we otherwise would have had for believing one thing rather than another to be true....

Two last points: 

1.  The gambler's fallacy is still a fallacy! Coin tosses are independent events; getting "heads" on one flip doesn't mean that one is "less likely" to get "heads" on the next.

The gambler's fallacy concerns the tendency of people mistakenly to treat independent events as non-independent when they make predictions about future events.

The " 'hot hand fallacy' fallacy" -- let's call it--involves expecting the probability that binary outcomes will immediately recur is the same as the probability that they will occur on average in the sample.  That's a logical error that reflects failing to detect a defect in the inference strategy reflected in the "hot-hand" studies.

Indeed, the same kind of defect in reasoning can explain why the gambler's fallacy is so prevalent -- or at least so M&S surmise.

In the world, when we see independent events occurring, we observe or collect data in relatively short bursts -- let's call them “attention span” units (M&S present some data on self-reports of the longest series of coin tosses observed: the mean was a mere 6; strange, because I would have guessed every person flipped a coin at least 1000 times in a row at some point during his or her childhood!). If, in effect, we "sample" all the sequences recorded during “attention span” units, we'll observe that the frequency with which an outcome recurred immediately after occurring was generally less than the frequency with which it occurred on average.

That's correct.
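
If you doubt it, here's one more little Python sketch of my own (an illustration of the idea, not anything M&S report): it generates a large number of short six-toss "attention span" bursts and, within each burst, records the relative frequency of heads on tosses that immediately follow a heads.

import random

def attention_span_sampling(bursts=100000, length=6, seed=1):
    rng = random.Random(seed)
    freqs = []
    for _ in range(bursts):
        tosses = [rng.choice('HT') for _ in range(length)]
        followers = [tosses[i] for i in range(1, length) if tosses[i - 1] == 'H']
        if followers:   # skip bursts in which no toss follows a heads
            freqs.append(followers.count('H') / len(followers))
    return sum(freqs) / len(freqs)

print("average within-burst frequency of H after H:",
      round(attention_span_sampling(), 3))   # noticeably below 0.5

So an observer whose experience is pooled from short bursts really will "see" heads recurring less than half the time immediately after a heads.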

But it's not correct to infer from such experience that, in any future sequence, the probability of that event recurring will be lower than the probability of it occurring in the first place.  That's the gambler's fallacy.

The "'hot hand fallacy' fallacy" invovles not noticing that correcting the logical error in the gambler's fallacy does not imply that if we examine a past sequence of coin tosses, we should expect to observe that "heads" came up just as often immedately after one or more "tails" than it did immediately after one or more "heads."

Ack! I find myself not believing this even though I know it's true!

2. Is "motivated numeracy" an instance of a bias that is more prevalent among high Numeracy persons?

That depends!

"Motivated Numeracy" is the label that my collaborators-- who include Ellen Peters -- & I give to the tendency of individuals who are high in Numeracy to display a higher level of motivated reasoning in analyzing quantitative information.  We present experimental evidence of this phenomenon in the form of a covariance-detection task in which high-Numeracy partisans were more likely to construe (fictional) gun control data in a manner consistent with their ideological predispositions than low-Numeracy partisans.

The reason was that the low-Numeracy partisan subjects couldn't reason well enough with quantitative information to recognize when the data were and weren't consistent with their ideological predispositions.  The high-Numeracy subjects could do that, and so never failed to credit predisposition-affirming evidence or to explain away predisposition-confounding evidence.

But whether that's a bias depends on what you think people are trying to do when they reason about societal risks.  If they are trying to get the "right answer," then yes, Motivated Numeracy is a bias.

But if they are trying to form identity-congruent beliefs for the sake of conveying their membership in and loyalty to important affinity groups, the answer is no; Motivated Numeracy is an example of how one can do an even better job of that form of rational information processing if one is high in Numeracy.

I think the latter interpretation is right ... I guess ... hmmmm.... "Now I'm not sure anymore..."

But I am sure that the "hot hand" study authors, and all those who have celebrated their studies, were really trying to get the right answer.

They didn't, because their high Numeracy tempted them to error.

p.s. I'll bet $10^3 against this, but if someone proves the paper wrong, the example of high Numeracy subjects being led to error by an argument only they could be seduced by still holds!

Tuesday
Jul072015

Three points about "believing in" evolution ... a travel report

the colored bars are 0.95 CIs!!

0. I was ambushed!

Emlen Metz and Michael Weisberg, my fellow panelists at the International Society for the History, Philosophy, and Social Studies of Biology, were lying in wait and bombarded me with a fusillade of counter-proofs and thoughtful alternative explanations!

For such treachery, they should, at a minimum, compensate me by sharing summaries of their own presentations with the 14 billion readers of this blog, so that subscribers can see for themselves the avalanche of critical reason that crashed down on me.  I am working to exact this settlement.

For my part, I made three points about “believing in” evolution:  one empirical, one political, and one philosophical. (Slides here.)

1. The empirical point was that what people "believe" about evolution doesn’t measure what they know about science but rather expresses who they are, culturally speaking. 

Not a new point for me; I relied primarily on data from the Measurement Problem study to illustrate it.

Whipping out my bewildering array of multi-colored item response profiles, I showed that the probability of correctly responding to the NSF Science Indicators Evolution item—“human beings evolved from an earlier species of animals—true or false?”—doesn’t vary in relation to people’s scores on the Ordinary Science Intelligence (OSI) assessment. Instead, the probability of responding correctly depends on the religiosity of the test taker.

Indeed, using factor analysis, one can see that the Evolution item doesn’t share the covariance structure of the items that indicate OSI but instead shares that of the items that indicate religiosity.
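For readers who want to see what that kind of check looks like, here is a hedged sketch using simulated data (the item names, loadings, and two-factor setup are all mine, purely for illustration; the actual survey items are categorical, so the real mechanics differ):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 5_000

# Two hypothetical latent traits: science comprehension and religiosity.
science = rng.normal(size=n)
religiosity = rng.normal(size=n)

# Simulated item responses: three knowledge items driven by science
# comprehension, two religiosity items, and an Evolution item driven
# by religiosity rather than by science comprehension.
items = np.column_stack([
    science + rng.normal(scale=0.7, size=n),       # knowledge item 1
    science + rng.normal(scale=0.7, size=n),       # knowledge item 2
    science + rng.normal(scale=0.7, size=n),       # knowledge item 3
    religiosity + rng.normal(scale=0.7, size=n),   # church attendance
    religiosity + rng.normal(scale=0.7, size=n),   # importance of religion
    -religiosity + rng.normal(scale=0.7, size=n),  # "humans evolved": true/false
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
labels = ["know1", "know2", "know3", "church", "relig_import", "evolution"]
for label, loadings in zip(labels, fa.components_.T):
    print(f"{label:>12}: {loadings.round(2)}")
# The Evolution item's loadings line up with the religiosity items',
# not with the knowledge items' -- the covariance pattern described above.
```

The logic, in other words, is simply to see which factor the Evolution item travels with.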

Finally, I showed how it’s possible to unconfound the Evolution item’s measurement of identity from its measurement of “science literacy” by introducing it with the phrase, “According to the theory of evolution . . . .”

At that point, religious test takers don’t have to give a response that misrepresents who they are in order to demonstrate that they know science’s understanding of the natural history of human beings.  As a result, the gap in responses to the item between religious and non-religious respondents with comparable OSI scores essentially disappears.
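A back-of-the-envelope way to express that result is as a difference-in-differences in correct-response rates; the rates below are invented placeholders, not the study's estimates.

```python
# Hypothetical correct-response rates by item wording and religiosity
# (invented placeholders, not the study's estimates).
rates = {
    "standard NSF wording":           {"religious": 0.45, "non_religious": 0.85},
    '"According to the theory ..."':  {"religious": 0.83, "non_religious": 0.86},
}

gaps = {}
for wording, r in rates.items():
    gaps[wording] = r["non_religious"] - r["religious"]
    print(f"{wording}: gap = {gaps[wording]:+.2f}")

# How much of the religious/non-religious gap the prefaced wording removes:
print(f"difference-in-differences = {list(gaps.values())[0] - list(gaps.values())[1]:.2f}")
```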

Unconfounding identity and knowledge, I noted, is essential not only to assessing understanding of evolutionary science but also to imparting it. The classic work of Lawson and Worsnop (1992; see also Lawson 1999), I told the audience, demonstrates that kids who say they “don’t believe in” evolution can learn the essential elements of the modern synthesis just as readily as kids who say they “do believe in” it (and who are otherwise not any more likely to be able to give a cogent account of natural selection, genetic variance, and random mutation).

But because what one says one “believes” about evolution  is in fact not an indicator of knowledge but an indicator of identity, teaching religiously inclined students how the theory of evolution actually works doesn’t make them any more likely to profess “acceptance” of it.

Indeed, Lawson stresses that the one way to assure that more religiously inclined students won’t learn the essential elements of evolutionary science is to make them perceive that the point of the instruction is to change their “beliefs”: when people are put in the position of having to choose between being who they are and knowing what’s known by science, they will predictably choose being who they are, and will devote all of their formidable reasoning proficiencies to that.

The solution to the measurement problem posed by people's "beliefs in" evolution, then, is the  science communication disentanglement principle: “Don’t  make reasoning, free people choose between knowing what’s known & being who they are.”

2.  The political point I made was the imperative to enforce the science communication disentanglement principle in every domain in which citizens acquire and make use of scientific information.

Liberal market democracies are the form of society distinctively suited both to the generation of scientific knowledge and to the protection of free and reasoning individuals' formation of their own understandings of the best way to live.

In my view, the citizens of such states have the individual right to enjoy both of these benefits without having to trade off one for the other.   To secure that right, liberal democratic societies must use the science of science communication to repel the dynamics that conspire to make what science knows a focal point for cultural status competition (Kahan in press).

Here  I focused on the public controversy over climate change.

Drawing on Measurement Problem and other CCP studies (Kahan, Peters, et al. 2012), I showed that what “belief in” human-caused climate change measures is, likewise, not what people know but who they are.

The typical opinion poll item on “belief in” climate change, this evidence suggests, is not a valid measure of knowledge either; it is instead another indicator of the sort of latent cultural identity measured variously by cultural cognition worldview items and conventional “right-left” political outlook ones.

People with those identities don’t converge but rather polarize as their OSI scores increase.

Using techniques akin to those used to unconfound identity and knowledge in the assessment of what people understand about evolution, one can fashion an assessment instrument—the “Ordinary Climate Science Intelligence” (OCSI) test—that disentangles identity from what people understand about the causes and consequences of climate change.

They don’t understand very much, it turns out, but they get the basic message that climate scientists are conveying: human activity is causing climate change and putting all of us at immense risk.

Nevertheless, those who score the highest on the OCSI are still the most politically polarized on whether they “believe in” human-caused climate change—because the question they are answering when they respond to a survey item on that is “who are you, whose side are you on?”
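One way to see both patterns at once in survey data of this kind (a sketch under assumed variable names; this is not the study's actual code) is to compare interaction terms: the identity × science-comprehension interaction should be sizable for the "belief" item and near zero for the knowledge items.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file and columns (hypothetical, for illustration):
#   conserv   -- right-left outlook, centered (higher = more conservative)
#   osi       -- Ordinary Science Intelligence score, centered
#   believe   -- 1 if respondent says human-caused climate change is happening
#   ocsi_item -- 1 if respondent answers a given OCSI knowledge item correctly
df = pd.read_csv("survey.csv")

belief_model = smf.logit("believe ~ conserv * osi", data=df).fit()
knowledge_model = smf.logit("ocsi_item ~ conserv * osi", data=df).fit()

# Polarization that grows with comprehension shows up as a sizable (here,
# negative) conserv:osi coefficient for the belief item; once identity is
# unconfounded, the knowledge item's interaction should be close to zero.
print(belief_model.params["conserv:osi"])
print(knowledge_model.params["conserv:osi"])
```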

To enable people to acquire and make use of the knowledge that climate scientists are generating, science communication researchers are going to have to do the same sort of hard & honest work that education researchers did to figure out how to disentangle knowledge of evolutionary science from identity.

But they're going to need to figure out how to do that not only in the classroom but also in the democratic political realm.  The science communication environment is now filled with toxic meanings that force people, in their capacity as democratic citizens, to choose between knowing what’s known about climate and being who they are.

Because individuals forced to make that choice will predictably--rationally-- use their reasoning proficiencies to express their identities, culturally diverse citizens will be unable to make collective decisions informed by what science knows about climate change until the disentanglement project is extended to our public discourse.

Indeed, conflict entrepreneurs (posing as each other's enemy as they symbiotically feed off one another's noxious efforts to stimulate a self-reinforcing atmosphere of contempt among rival groups) continue to pollute our science communication environment with antagonistic cultural meanings on evolution as well. 

Those who actually care about making it possible for diverse citizens to be able to know what’s known by science without having to pay the tax of acquiescing in others' denigration of their cultural identities are obliged to oppose these tapeworms of cognitive illiberalism no matter “whose side” they purport to be on in the dignity-annihilating, reason-enervating cultural status competition in which positions on climate change & evolution have been rendered into tribal totems.

3. The philosophical point was the significance of cognitive dualism.

Actually, cognitive dualism is not, as I see it, a philosophical concept or doctrine. 

It is a conjecture, to be investigated by empirical means, about what is “going on in the heads” of those who—like the Pakistani Dr and the Kentucky Farmer—both “believe” and “disbelieve” in facts like human evolution and human-caused climate change.

But what the tentative and still very formative nature of the conjecture shows us, in my view, is just how much the disentanglement project needs philosophers' help.

In the study of “beliefs” in evolution, cases like these are typically assumed to involve a profound cognitive misfire. 

The strategies skillful science teachers use to disentangle knowledge from identity in the classroom, far from being treated as a solution to a practical science communication dilemma, are understood to present us with another “problem”—that of the student who “understands” what he or she is taught but who will not “accept” it as true.

In my view, the work that reflects this stance is failing to engage meaningfully with the question of what it means to "believe in" evolution, climate change etc.

The work I have in mind simply assumes that “beliefs” are atomistic propositional stances identified by reference to the states of affairs (“natural history of humans,” “rising temperature of the globe”) that are their objects.

In this literature, there is no cognizance of an alternative view—one with a rich tradition in philosophy (Peirce 1877; Braithwaite 1932, 1946; Hetherington 2011)—of “beliefs” as dispositions to action.

Haven't figured out yet what to get Kentucky Farmer for X-mas? Here's a hint!

On this account, beliefs as mental objects always inhere in clusters of intentional states  (emotions, values, desires, and the like) that are distinctively suited for doing particular things.

The Pakistani Dr’s belief in evolution is integral to the mental routines that enable him to be (and take pride in being) a Dr; his disbelief in it is part of a discrete set of mental routines that he uses to be a member of a particular religious community (Everhart & Hameed 2013).  The Kentucky Farmer disbelieves in “human-caused climate change” in order to be a hierarchical individualist but believes in it—indeed, excitedly downloads onto his iPad custom-tailored predictions based on the same "major climate-change models ... under constant assault by doubters"—in order to be a successful farmer.
 

If as mental objects “beliefs” exist only as components of more elaborate ensembles of action-enabling mental states, then explanations of the self-contradiction or "self-deception" of the Pakistani Dr, the Kentucky Farmer--or of the creationist high school student who wants to be a veterinarian but "loves animals too much" to simply "forget" what she has learned about natural selection in her AP biology course--are imposing a psychologically false criterion of identity on the contents of their minds.

So long as there is no conflict in the things that these actors are enabled to do with the clusters of mental states in which their opposing stances toward evolution or toward climate change inhere, there is no "inconsistency" to explain.

There is also no “problem” to "solve" when actors who accept what science knows in order to do the things scientific knowledge is uniquely suited for decline to "accept" it when doing something on which science has nothing to say.

Unless the "problem" is really that what they are doing with nonacceptance is being the kind of person whose behavior or politics or understandings of the best way to live bother or offend us.  But if so, say that -- & don't confuse matters by suggesting that one's goals have anything to do with effectively communicating science.

Or at least that is what the upshot of cognitive dualism would be if in fact it is the right account of the Pakistani Dr, and the Kentucky Farmer, and the many many many other people in whose mental lives such "antinomies" coexist.

Of course,  it doesn’t bother me that cognitive dualism is not now the dominant explanation of “who believes what” about evolution or climate change and “why.”

But what does bother me is the innocence of those who are studying these phenomena of the very possibility that the account of "belief" of which cognitive dualism is a part might explain what they are investigating -- a state of inattention that assures they will fail to conduct valid empirical research and fail to reflect consciously on the moral significance of their prescriptions.

This is exactly the sort of misadventure that philosophers ought to protect empirical researchers from experiencing, I told the roomful of curious and reflective people who gave us the privilege of their attendance and shared their views on our research.

And for the first time in all my experiences introducing people to the Pakistani Dr and the Kentucky Farmer, no one seemed to disagree with me . . . .

References 

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hetherington, S.C. How to Know: A Practicalist Conception of Knowledge (J. Wiley, Chichester, West Sussex, U.K., 2011).


Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. What is the science of science communication? J. Sci. Comm. (in press).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Lawson, A.E. A scientific approach to teaching about evolution & special creation. The American Biology Teacher, 266-274 (1999).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly (1877); reprinted in Philosophical Writings of Peirce.