Monday
Aug 3, 2015

Vaccine hesitancy, acupuncture mania, and the methodological challenge of making sense of "boutique risk-benefit perceptions" (BRBPs)

A thoughtful correspondent drew my attention to evidence of the persistence of enthusiasm for acupuncture despite evidence that it doesn’t have any actual benefit.

He was struck by the contrast with the mirror image resistance to evidence that the benefits of childhood vaccines far outweigh their risks.

What sorts of cultural outlook might there be, he wondered, that predisposes some people to believe that sticking needles into their bodies promotes health and others that doing so will compromise it?! 

Maybe it’s a continuum with vaccine-hesitant people at one end and acupuncture devotees on the other?

Tongue-in-cheek on his part, but there’s an important point here about the role of fine-grained local influences on risk perception.  

[Image caption: Uh, no. The study finds those characteristics *don't* explain vaccine hesitancy...]

I am willing to bet that belief in the benefits of acupuncture will defy explanation by the use of the sort of correlational, risk-predisposition profiling methods of which cultural cognition is an example.

Indeed, your comment actually highlights a research blind spot in the project to identify risk-perception propensities and how to address them through effective science communication.

Vaccine denial defies explanation, e.g., by the sorts of cultural & like profiles that are so helpful in charting conflict over various other risks (despite the counterproductive media din to the contrary).

Same w/, oh, concern about pasteurized milk (and belief in the benefits of raw milk); fear of cell phone radiation; anxiety about drones; fluoridation of water, etc.

I bet belief in the effectiveness of acupuncture (and its advantages over conventional medical treatments, which presumably those who promote acupuncture think are nonbeneficial or overly risky) is like that too. 

Let's call these "boutique" risk-benefit perceptions -- BRBPs.

But let’s agree with "fearless Dave" Ropeik’s consistent point that it is not satisfying to shrug off BRBPs as disconnected from any social context, as lacking any genuine social meaning, or as simply random patterns of risk perception, unamenable to systematic explanation ...

I think the problem in accounting for BRBPs has two related causes:

First, the sorts of characteristics that matter in BRBPs might be ones featured in schemes like cultural cognition, but they always depend in addition on some local variable, one that makes those characteristics matter only in particular places, & indeed could make different sets of characteristics have different valences across space.

Second, the large-sample correlational studies that are used to examine such relationships in standard risk-profiling studies are unsuited for identifying the relevant indicators of BRBPs because the local variable will resist being operationalized in such a study, and when it's omitted the remaining cultural characteristics will always lack any systematic relationship to the risk perception in question.
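A minimal simulation sketch of that second point (Python; the localities, worldview scores, and effect sizes are all invented for illustration): when an unmeasured local variable flips the valence of a cultural characteristic from place to place, the pooled correlation a standard profiling study would estimate washes out to zero even though the characteristic matters intensely locality by locality.

```python
import numpy as np

rng = np.random.default_rng(0)
n_localities, n_per = 40, 250

# Hypothetical cultural-worldview score for each respondent
worldview = rng.normal(0, 1, size=(n_localities, n_per))

# Unmeasured local variable: the same worldview score has a
# different valence (-1, 0, or +1) in different localities
valence = rng.choice([-1.0, 0.0, 1.0], size=n_localities)

risk_perception = (valence[:, None] * worldview
                   + rng.normal(0, 1, size=(n_localities, n_per)))

# Pooled correlation, as in a large-sample risk-profiling study: ~0
pooled_r = np.corrcoef(worldview.ravel(), risk_perception.ravel())[0, 1]

# Within-locality correlations: strong, but pointing both ways
local_r = [np.corrcoef(worldview[i], risk_perception[i])[0, 1]
           for i in range(n_localities)]

print(f"pooled r:      {pooled_r:.3f}")
print(f"local r range: {min(local_r):.2f} to {max(local_r):.2f}")
```

Under these assumptions the pooled survey reports a near-zero correlation even though many localities exhibit substantial ones running in opposite directions -- the signature of a BRBP.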

For an example of a closely related research problem where this dynamic is present and researchers just don't seem to get its significance, consider studies that purport to corroborate the trope that "rich, white, liberal, suburbanite parents" are anti-vax militants.

The most recent highly publicized study (or most recent highly publicized one I noticed) purported to support this conclusion w/ a form of analysis that identifies "clusters" of school districts in which parents requested personal-belief exemptions in Calif.  

The clusters, as hypothesized, were in particular highly affluent, white, suburban school districts in Marin County (Bay Area) and in certain demographically comparable suburban school districts in the vicinity of LA.

Taking the cue from the authors' own characterization of their results, the media widely reported the study as confirming that “[t]he parents most likely to opt out of vaccines” are “typically white and well-to-do” etc.

One doctor, who has no training in or familiarity with the empirical study of risk perception and science communication, & who apparently has no familiarity with the empirical methods used in this particular study either, excitedly proclaimed that "[w]hile the study looked only at California, ... similar patterns of demographics on parents would show up in other states as well."

Well, if so, then the conclusion will be that personal-exemption rates are not correlated with being "affluent, white, and suburban."  

In a state-wide regression analysis, this same study showed that suburban schools (which are affluent and mainly white in California) had substantially lower personal-exemption rates.

There's no contradiction or even paradox here.

"Cluster" analysis is a statistical technique designed, in effect, to find outliers: concentrated patterns of results that defy the sort of distribution one would expect in a statistical model in which one variable or set of variables is treated as the "cause" of another generally.  

If one can find such a cluster (i.e., one that can't be explained by a simple linear model that includes appropriate predictors), and can confidently rule out its appearance by chance, then necessarily one can infer that there is some other unobserved influence at work that is causing this unexpected concentration of whatever one is observing.

Strangely, the authors of the study apparently didn't get this.

They noted, with evident surprise, that “[s]uburban location had a negative relationship with PBEs [personal belief exemptions], opposite of what was anticipated given the maps of cluster assignments” -- & trotted out a series of post hoc explanations for this supposed anomaly.

But there was no anomaly to explain.  

If there are genuinely high-personal-exemption-rate clusters in certain white, affluent, suburban schools, that implies that there isn't an association between those characteristics and high personal-exemption rates generally, and indeed, that there is more likely a negative association between them (if the association weren't negative outside the clusters, the high concentration in the clusters would be more likely to generate a positive linear correlation overall, albeit a weak one).

Thus, the researchers, if it made sense for them to resort to spatial cluster analysis in the first place, should have anticipated the finding that "affluent, white, and suburban" school districts don’t have high personal-exemption rates generally.
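To make the arithmetic behind that inference concrete, here is a toy simulation -- a minimal Python sketch in which every quantity (district counts, baseline rates, cluster sizes) is invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical school districts

suburban = rng.random(n) < 0.4  # indicator: affluent/white/suburban
# Baseline: suburban districts have *lower* exemption rates on average
rate = np.where(suburban,
                rng.normal(1.5, 0.5, n),   # suburban baseline (%)
                rng.normal(3.0, 0.5, n))   # non-suburban baseline (%)

# A handful of suburban "cluster" districts with very high rates
cluster = rng.choice(np.flatnonzero(suburban), size=15, replace=False)
rate[cluster] = rng.normal(12.0, 1.0, len(cluster))

# Statewide, suburban status still predicts *lower* exemption rates
print(f"mean rate, suburban:     {rate[suburban].mean():.2f}")
print(f"mean rate, non-suburban: {rate[~suburban].mean():.2f}")
print(f"corr(suburban, rate):    {np.corrcoef(suburban, rate)[0, 1]:.3f}")
```

The clusters are real and extreme, yet the pooled association still runs the other way -- which is why finding the clusters should have led the researchers to expect, not to be surprised by, the negative statewide regression coefficient.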

Instead of announcing that their results had corroborated a common but incorrect stereotype, they should have recognized and advised readers that their study shows that in fact the influence that accounts for higher personal exemption rates in these schools is not that they are “affluent, white, and suburban” -- and is necessarily still unaccounted for!

They should also have called attention to the surplus of personal-exemption requests in school districts that are non-suburban -- in fact, among students in charter schools, whose attendees tend to be poor and minority.

I don't know why there would be higher exemption rates in students attending those schools. I seriously doubt that parents of these children are teeming with anti-vax sentiment. More likely, there’s a hole in the universal-vaccination net that should be identified and repaired here.

But the point is, researchers (at least those looking for the truth and not for the attention they can get for confirming a congenial misconception) aren't going to find out what influences, cultural or otherwise, explain vaccine hesitancy or ambivalence using correlational methods. The influences are too local, too fine-grained, to be picked up by such means.

Indeed, the "cluster" analysis methodology used in this and other studies is proof that something else-- something still not observed -- is causing such behavior in these areas.  

It's something that necessarily evades the sorts of profiles one can identify using the sorts of attitudes and characteristics one can measure with a general-population survey.  

That's exactly what sets BRBPs apart from other types of risk perceptions.

BRBPs fall into a blind spot in the study of risk perception and science of science communication.  

We need valid empirical methods to remedy that. 

Thursday
Jul 30, 2015

*Now* what do alternative sanctions mean? And how'd I miss the memo?

There were pretty much three things that I found very mysterious about the disconnect between empirical evidence and public policy when I started as an academic in the late 1950s or whenever it was, and the main one was the excessive reliance on imprisonment in the U.S.

I've reproduced the first few paragraphs of what was one of my first published articles (Kahan 1996) (the other was on how the latest developments in cold fusion were likely to radically alter constitutional interpretation; could still happen!). But basically the idea was that the case for so-called "alternative sanctions" had failed because it ignored the phenomenon of social meaning.

The case for reducing or eliminating imprisonment for a host of non-violent offenders, ones who didn't need to be incapacitated for public safety, was largely focused on costs and benefits: Tossing people in jail is expensive for society, not to mention degrading and debilitating for offenders, and doesn't deter those forms of criminality any more effectively ("empirical evidence demonstrated") than fines and community service.

The reason this argument, which had proponents across the ideological spectrum, persistently failed to gain traction, I maintained, was that it disregarded the societal expectation that punishment convey an official attitude of disapprobation, and indeed visit, symbolically, on offenders a kind of lowering in status commensurate with the severity of their own disregard for the value of the goods their actions had transgressed.  Decades' worth of experience, I concluded, showed things wouldn't get better until the stock of alternatives was enriched with punishments that not only regulated behavior more efficiently than imprisonment but expressed condemnation as effectively.  I proposed shaming punishments as a candidate.

Well, something seems to have changed. Very dramatically so. 

It's not just that there is "bipartisan support" for reducing incarceration -- at various times there had been that, too, in the past.

But the actual carrying through on these policies seems now to be largely a matter of indifference to the public.  The mood hasn't so much changed as just evaporated. 

Who cares? (Hey, did you hear about that lion in Milwaukee?!)

And what's more, I have no idea how this transformation took place. 

I don't think the explanation is that those making the argument for "alternative sanctions" just stuck to it, refining and improving and amplifying their arguments until finally everyone "got it."

I think the arguments that are being credited now were just as available 10, 20, or 30 years ago (the process that led to the dominance of incarceration as a mode of punishment started in the 1970s and really got locked in by the mid-80s).

What changed was the unacceptable meaning of the alternatives.

Or even more accurately, I think, what changed was the intensity of the demand for what imprisonment conveys -- the distinctive gesture of condemnation associated with liberty deprivation -- a demand that just sort of withered and was forgotten about.... Take away that motivation to resist it, and the case that has always been so compelling actually starts to compel.

But like I said, I have no idea why this happened, and barely any idea when the significance of the meaning of imprisonment changed.

I just averted my eyes, or widened my perspective to try to make sense of other examples of public policy disputes where the use of empirical evidence is constrained by the meanings that policies express—where the question of what laws do seemed subordinate, not just morally but cognitively, to what laws say, particularly about the social status of competing groups—and “poof,” the “alternative sanctions” debate was gone. . . .

Unless of course, it isn’t!

BTW, the second place where this same dynamic loomed large and fascinated me when I started “working” as an academic was the debate over capital punishment. The primacy of “symbolic” motivations (morally, cognitively) over instrumental, deterrence considerations was widely understood to explain the persistence of capital punishment in the U.S. (Kahan 1999; Ellsworth & Gross 1994; Ellsworth & Ross 1983; Stolz 1983; Tyler & Weber 1982).

It was assumed, too, that the intensity and durability of those expressive sensibilities meant the death penalty, like the overreliance on imprisonment in the U.S., was not going to go away.

Well, guess what? That’s changed too—and again for reasons that I don’t feel confident I can identify. I do feel confident that the “obvious” reasons—cost, conviction of the innocent, etc.—are not the reasons; those arguments were always available and likely even more compelling at an earlier time! The strength of the arguments didn’t change; the strength of the motivation to resist did—because, as with imprisonment, the demand for the meanings that capital punishment expresses abated.

Likely these developments are related. Capital punishment and “get tough on crime” were big issues—really, really big!—in every presidential election between 1968 and 1988.  And then the whole thing just went away. . . .

Huh.

The last issue of the three that had this quality when I started: gun control.  Good to see that some things never change.

But even better that many things do--in ways that furnish assurance that there will never be any shortage of mysteries to investigate.

What Do Alternative Sanctions Mean?

Dan M. Kahan

 

Imprisonment is the punishment of choice in American jurisdictions. In everyday life, the modes of human suffering are numerous and diverse: when we lose our property, we experience need; when we are denounced by those whose opinions we respect, we feel shame; when our bodies are tormented, we suffer physical pain. But for those who commit serious criminal offenses, the law strongly prefers one form of suffering—the deprivation of liberty—to the near exclusion of all others. Some alternatives to imprisonment, such as corporal punishment, are barely conceivable. Others, including fines and community service, do exist but are used sparingly and with great reluctance.

The singularity of American criminal punishments has been widely lamented. Imprisonment is harsh and degrading for offenders and extraordinarily expensive for society. Nor is there any evidence that imprisonment is more effective than its rivals in deterring various crimes. For these reasons, theorists of widely divergent orientations—from economics-minded conservatives to reform-minded civil libertarians—are united in their support for alternative sanctions.

The problem is that there is no political constituency for such reform. If anything, the public's commitment to imprisonment has intensified in step with the theorists' disaffection with it. In the last decade, prison sentences have been both dramatically lengthened for many offenses and extended to others that have traditionally been punished only with fines and probation.

What accounts for the resistance to alternative sanctions? The conventional answer is a failure of democratic politics. Members of the public are ignorant of the availability and feasibility of alternative sanctions; as a result, they are easy prey for self-interested politicians, who exploit their fear of crime by advocating more severe prison sentences. The only possible solution, on this analysis, is a relentless effort to educate the public on the virtues of the prison's rivals.

I want to advance a different explanation. The political unacceptability of alternative sanctions, I will argue, reflects their inadequacy along the expressive dimension of punishment. The public rejects the alternatives not because it perceives that these punishments won't work or aren't severe enough, but because they fail to express condemnation as dramatically and unequivocally as imprisonment.

This claim challenges the central theoretical premise of the case for alternative sanctions: that all forms of punishment are interchangeable along the dimension of severity or "bite." The purpose of imprisonment, on this account, is to make offenders suffer. The threat of such discomfort is intended to deter criminality, and the imposition of it to afford a criminal his just deserts. But liberty deprivation, the critics point out, is not the only way to make criminals uncomfortable. On this account, it should be possible to translate any particular term of imprisonment into an alternative sanction that imposes an equal amount of suffering. The alternatives, moreover, should be preferred whenever they can feasibly be imposed and whenever they cost less than the equivalent term of imprisonment.

This account is defective because it ignores what different forms of affliction mean. Punishment is not just a way to make offenders suffer; it is a special social convention that signifies moral condemnation. Not all modes of imposing suffering express condemnation or express it in the same way. The message of condemnation is very clear when society deprives an offender of his liberty. But when it merely fines him for the same act, the message is likely to be different: you may do what you have done, but you must pay for the privilege. Because community service penalties involve activities that conventionally entitle people to respect and admiration, they also fail to express condemnation in an unambiguous way. This mismatch between the suffering that a sanction imposes and the meaning that it has for society is what makes alternative sanctions politically unacceptable.

The importance of the expressive dimension of punishment should be evident. It reveals, for one thing, that punishment reformers face certain objective constraints. The social norms that determine what different forms of suffering mean cannot be simply dismissed as the product of ignorance or bias; rather, they reflect deeply rooted public understandings that mere exhortation is unlikely to change. But there are also more hopeful implications. If we can understand the expressive dimension of punishment, we should be able to perceive not only what kinds of punishment reforms won't work but also which ones will. Careful attention to social norms might allow us to translate alternative sanctions into a punitive vocabulary that makes them a meaningful substitute for imprisonment.

 Refs

Ellsworth, P.C. & Ross, L. Public Opinion and Capital Punishment: A Close Examination of the Views of Abolitionists and Retentionists. Crime & Delinquency 29, 116-169 (1983).

Ellsworth, P.C. & Gross, S.R. Hardening of the Attitudes: Americans’ Views on the Death Penalty. J. Soc. Issues 50, 19 (1994).

Kahan, D.M. The Secret Ambition of Deterrence. Harv. L. Rev. 113, 413 (1999).


Stolz, B.A. Congress and Capital Punishment: An Exercise in Symbolic Politics. L. & Pol. Q. 5, 157-180 (1983).

Tyler, T.R. & Weber, R. Support for the Death Penalty: Instrumental Response to Crime, or Symbolic Attitude. L. & Soc. Rev. 17, 21-45 (1982).

 

Tuesday
Jul 28, 2015

Cognitive dualism as an adaptive resource in a polluted science communication environment ... a fragment

from something I'm working on. . . .

I. Overview: the “entanglement” problem

By no means the only threat to the science communication environment, the “entanglement problem” is nonetheless a recurring and especially damaging one. It occurs when positions on issues that admit of scientific investigation become suffused with antagonistic cultural meanings, transforming them into badges of membership in and loyalty to competing groups. At that point, to protect the standing of their groups and their status within them, individuals can be expected to conform their assessment of all manner of information to the position that predominates among those who share their defining commitments.

It’s almost certainly a mistake to attribute this form of identity-protective cognition (Kahan 2010) to the constraints on rationality responsible for “base rate neglect,” “the availability effect,” “confirmation bias” and like reasoning errors (Kahneman, Slovic & Tversky 1982). For one thing, unlike those biases, identity-protective cognition does not originate in overreliance on heuristic (“System 1”) information processing. On the contrary, the forms of conscious, effortful (“System 2”) information processing most essential to recognizing and giving proper effect to scientific evidence—including cognitive reflection, numeracy, and science comprehension—amplify the tendency of individuals to form and persist in identity-protective beliefs (Kahan 2013b; Kahan, Peters et al. 2013; Kahan, Peters et al. 2012). . . .

This problem—the entanglement problem—is not a consequence of stupid people but of a polluted science communication environment ("stupid!") (Kahan 2012). The antagonistic cultural meanings that transform positions on scientific issues into badges of cultural identity are a toxin that disables the normally reliable reasoning faculties that people use to align themselves with what’s known by science.

Protecting the science communication environment from this sort of contamination is a central mission of the science of science communication (Kahan in press). . . .

II.  Entanglement and science communication environment protection

. . . . Once some scientific issue has become entangled in antagonistic cultural meanings, the process of detoxification is likely to be a slow one. In the interval it takes to quiet the dynamics that excite culturally polarizing forms of identity-protective cognition, society will stand in need of techniques for counteracting the debilitating impact of such a condition on its citizens’ capacity to reason (Hall Jamieson & Hardy 2014). . . .

B.  Cognitive dualism

Observed in both religious students of science and in religious science-trained professionals, cognitive dualism involves the capacity of individuals to maintain apparently contradictory beliefs about some fact—such as the natural history of human beings—that admits of scientific investigation.

Cognitive dualism challenges the premise, however, that such beliefs are genuinely contradictory. According to this position, a “belief” cannot, as a psychological matter, be defined solely by the proposition it embodies.

As mental objects, “beliefs” exist only within clusters or ensembles of mental states (including emotions, desires, and moral evaluations) distinctly suited for the performance of some action (Peirce 1877; Braithwaite 1933, 1946; Hetherington 2011). A highly religious doctor, for example, might explain that whether he “believes” in evolution depends on where he is: at “work,” where he uses knowledge of human evolution in his practice as an oncologist or as a medical researcher; or at “home,” where belief that humans were divinely created guides his behavior as a member of a particular religious community (Everhart & Hameed 2013). Because those opposing stances on the natural history of human beings exist only within the mental routines that enable him to do those activities, and because those activities do not contradict one another, the idea that the doctor harbors self-contradictory "beliefs" imposes a psychologically false criterion of identity on the constituents of his mind.

A similar account exists for religious science students who “don’t believe” in evolution. Research shows that it is possible to teach the modern synthesis to students who say they “don’t believe” in evolution just as readily as students who say they “do believe” in it. Afterwards, however, the former still profess not to “believe in” or accept evolution (Lawson & Worsnop 1992), a result that typically is understood by researchers to signify a limitation in the success of instruction for “nonbelieving” students.

Cognitive dualism, however, suggests that it is a mistake to infer that there is in fact any meaningful difference in the impact of the instruction on “believing” and “nonbelieving” students. If, as cognitive dualism supposes, beliefs as mental objects are “dispositions to action,” the science class has in fact generated the same belief in both: the sort that is linked to demonstrating the sort of knowledge of the modern synthesis certified by a high school biology exam (DiSessa 1982).

Such instruction has also left completely unaffected in both a distinct state of “belief” that exists for purposes of being a particular sort of person. The “disbelief in” evolution that the religious student has retained obviously performs that function. But so did the “belief in” evolution the nonreligious student held before he learned the modern synthesis. “Believing in” evolution at that point enabled him to inhabit a particular cultural style notwithstanding that he almost certainly subscribed to the naive Lamarckian view of how it works that the vast majority of people—believers and nonbelievers—entertain (Bishop & Anderson 1990; Shtulman 2006). What is more, he will almost certainly retain that identity-enabling “belief in” evolution even if (as is again highly likely) he thereafter completely forgets the rudiments of the modern synthesis. Should the religious student, in contrast, grow up, say, to be a doctor, she is likely to remember what she learned about the modern synthesis and to use it when doing anything that requires that knowledge—even as she continues to “disbelieve in” evolution in her life as a person who finds meaning in holding a particular faith (Everhart & Hameed 2013; cf. Hermann 2012).

The course, in sum, imparted in both the “believer” and “nonbeliever” the sort of knowledge supportive of doing the things that one can do effectively only by accepting science’s understanding of the natural history of human beings (take exams, carry out responsibilities as a science-trained professional). But it left unaffected -- in both -- a state of “belief” that enables something completely orthogonal to what science actually knows: being a person who finds meaning in the world through the exercise of free reason in collaboration with others exercising the same.

Cognitive dualism supplies an adaptive resource in a polluted science communication environment. Where a person experiences opposing states of belief as distinct, each embedded in a discrete and fully compatible cluster of action-enabling intentional states, she is freed from having to choose between being who she is and knowing what’s known by science. Understanding how to accommodate cognitive dualism, and to repel conditions that in fact can be shown to subvert it (Hameed 2015), is thus a form of scientific understanding integral to promoting the effective transmission of scientific knowledge—in classrooms, in businesses, in public meeting halls, and anywhere else—during the periods in which one or another scientific proposition has become enmeshed in antagonistic cultural meanings.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

DiSessa, A.A. Unlearning Aristotelian Physics: A Study of Knowledge-Based Learning. Cognitive Science 6, 37-75 (1982).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hall Jamieson, K. & Hardy, B.W. Leveraging scientific credibility about Arctic sea ice trends in a polarized political environment. Proceedings of the National Academy of Sciences 111, 13598-13605 (2014).

Hameed, S. Making sense of Islamic creationism in Europe. Public Understanding of Science 24, 388-399 (2015).

Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without Believing in evolution. Evo Edu Outreach 5, 619-628 (2012).

Hetherington, S.C. How to know : a practicalist conception of knowledge (J. Wiley, Chichester, West Sussex, U.K. ; Malden, MA, 2011).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm, (in press).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahneman, D., Slovic, P. & Tversky, A. Judgment under uncertainty : heuristics and biases (Cambridge University Press, Cambridge ; New York, 1982).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly (1877), reprinted in Philosophical Writings of Peirce.

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

Saturday
Jul 25, 2015

Weekend update: going to SENCER summer camp to learn about the "self-measurement paradox," the "science communication problem," & the "disentanglement project"

I'll be participating next week in the annual SENCER Summer Institute.

The 14 billion regular readers of this blog already know this, but for the rest of you, SENCER is an organization dedicated to obliterating the “self-measurement paradox” -- the truly weird and ultimately intolerable failure of professions that traffic in scientific knowledge to use science's signature methods to assess and refine their own craft norms.

Most of the organization's members are educators who teach math & science.

But SENCER definitely recognizes the link between the self-measurement paradox and the broader science communication problem in the Liberal Republic of Science. That problem is a consequence of the self-measurement paradox on a grand scale--our systematic failure to use evidence-based methods of science communication to assure that the vast scientific knowledge at our society's disposal is conveyed under conditions that enable free, reasoning citizens to reliably recognize it and give it the effect it is due when they govern themselves.

(Just to be clear: What effect it is due depends on citizens' values. Anyone who insists the best available scientific evidence uniquely determines policies either is very ill-informed or engaged in deliberative bad faith. Values, of course, naturally vary in a free society, creating the project of deliberative accommodation that is democracy's answer to the puzzle of how to reconcile individual autonomy with law.)

So ... in the session I'll be helping to lead, we'll be focusing on what I regard as the precise point of intersection between the self-measurement paradox and the science communication problem: the disentanglement project.  

In the science classroom, the "disentanglement project" refers to the development (by scientific means, of course) of strategies for unconfounding the question "what does science know" from the question "who are you & whose side are you on" in the study of scientific topics that have become enmeshed in antagonistic cultural meanings.

Critical in itself, learning how to disentangle knowledge and identity in education can, however, be expected to generate benefits that are even more far-reaching. Disentangling knowledge from identity is in fact central to solving the broader science communication problem. Thus, studies aimed at implementing the disentanglement principle in science classrooms supply researchers with laboratories for acquiring the knowledge necessary for them to discern how to implement the disentanglement principle in institutions of self-government, too. That is the primary objective of the "new political science" essential to perfecting the Liberal Republic of Science as a political regime (Kahan in press). . . .

Boy, I can't wait for my SENCER summer camp session! Not to mention all the between-session volleyball games and evening marshmallow roasts!

My session description:

The science communication disentanglement project: What is to be done -- and how to do it with reliable and valid empirical methods

The topics of climate change and human evolution both feature the science communication entanglement problem. This problem occurs when a fact or set of facts that admit of scientific investigation become enmeshed in antagonistic cultural meanings that transform positions on those facts into badges of membership in opposing cultural groups. This condition is actually rare, but where it occurs the consequences can be spectacularly damaging to the propagation of both the collective knowledge and the norms of constructive deliberation essential to enlightened self-government. The session will feature existing research on how to disentangle knowledge from antagonistic meanings both in and outside the classroom. The primary goal, however, will be to draw on the informed judgment of the participants to form conjectures on how, using the tools of empirical inquiry, educators and other science communicators can enlarge public understanding of how to protect free and reasoning citizens from being put in the position of having to choose between knowing what's known by science and being who they are.

Refs

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm  (in press).





Friday
Jul 24, 2015

On "best practices," "how to" manuals, and *genuinely* evidence-based science communication

From correspondence with a reflective person on whether there is utility in compiling “guide books” of “best practices” for climate-science and like-situated communicators . . . .

I think our descriptions of what we each have in mind are likely farther apart than what each of us actually has in mind.  My fault, I'm sure, b/c I haven't articulated clearly what it is that I think is "good" & what "not good" in the sorts of manuals that synthesizers of social science research compile and distribute.

I think the best thing would be for me to try to show you examples of each.

This is very very very good:

The concept of "best practices as best guesses" that is featured in the intro & at various points throughout is very helpful. It reminds users that advice is a provisional assessment of the best current evidence -- and indeed, can't even be meaningfully understood by a potential user who doesn't have a meaningful comprehension of what observations & inferences therefrom inform the "guess."

Also, as developed, the "best practices as best guesses" concept makes readers conscious that a recommendation is necessarily a hypothesis, to be applied in a manner that enables empirical assessment both in the course of implementation & at the conclusion of the intervention.  They are not mechanical, do-this directives.  The essays are written, too, in a manner that reflects an interpretive synthesis of bodies of literature, including the issues on which there are disagreements or competing understandings.  

This is bad-- very very very very bad.

It is a compilation of general banalities. No one can get any genuine guidance from information presented in this goldilocks form: e.g., "don't use numbers, engage emotions to get attention ... but be careful not to rely too much on emotions b/c that will numb people..."

If they think they are getting that, they are just projecting their own preconceptions onto the cartoons -- literally -- that the manual comprises.  


The manual ignores complexity and issues of external validity that reflective real-world communicators should be conscious of.

Worst of all, there is zero engagement with what it means to have an evidence-based orientation and mode of operation.  As a result, this facile type of work reinforces rather than revises & reforms the understandings of real-world communicators who mistakenly expect lab researchers to hand them a set of "how to" directives, as opposed to a set of tools for testing their own best judgments about how to proceed.

I know you have concerns about whether I have unrealistic expectations about the motivation and ability of individuals associated with climate-science communication groups to make effective use of materials of the sort I think are "good."  Maybe you won't have that reaction after you look at the FDA manual.  

But if you do, then I'd say that part of the practice that has to change here involves evaluation of which sorts of groups ought to be funded by NGOs eager to promote better public engagement with climate science. Those NGOs should adopt standards for awards that will reliably weed out of the pool of support recipients the ones that by disposition & mindset can't conduct themselves in a genuinely evidence-based way & replace them with ones who can and will structure themselves in a manner that enables them to do so.

There's too much at stake here to rely on people who just won't use the available financial resources in a manner that one could reasonably expect to generate success in the world.

In particular, such resources shouldn't go to any group that thinks the success of a “science communication strategy” should be measured by how much it boosts contributions to the group’s own fund-raising efforts. It doesn’t surprise me to know that this happens, but it does shock me to constantly observe members of these groups talking so unself-consciously about it, in a manner that betrays that the perpetuation of their own existence is a measure of success in their minds, independently of whether they are achieving the results that they presumably exist to bring about.


Thursday
Jul 23, 2015

Perplexed--once more--by "emotions in criminal law," Part 2: The "evaluative conception"

This is the second in an n-part series describing my evolving view of the significance of emotions in substantive criminal law.

Actually, "shifting view" would be a better way to put it. I took a position at one point that I later concluded missed, if not the point, then a very important point -- one that had caused me to lose confidence in the original position.

Now I find myself thinking that the successor position is also likely inadequate. Maybe the earlier position was right after all. Or perhaps some sort of dialectical synthesis will reveal itself to me if I think more about how the pieces of evidence before me actually fit together.

I'm really not sure!

Should I be worried that I don't know whether either of the announced positions I took before is right, and thus what I actually believe anymore?

The point of this series of posts, in addition to inviting reflection & comment on an interesting part of the law, is to explore "changing one's mind." 

One of my principal research interests is the ubiquity of defensive resistance to evidence that challenges people's perceptions of risk and like facts on culturally contested issues--climate change, gun control, etc.

But more intriguing to me at this particular moment is that it seems just as unusual for scholars studying this very phenomenon--or pretty much any other intriguing aspect of human behavior or cognition--to change their minds about what explains it.

Why would this be so? By hypothesis, those scholars are using empirical methods to make sense of complex phenomena, the workings of which don't admit of direct observation and that must therefore be investigated indirectly, on the basis of observations of other things we'd expect to see or not depending on the truth of different plausible theories of how those unobserved phenomena work.

Given the very nature of this activity, one might expect shifts in position to be commonplace. If the phenomena in question are complex and not open to direct observation; if multiple plausible theories compete to account for them; and if the evidence for deciding between those theories consists of observations that necessarily do nothing more than alter incrementally the balance of then-existing considerations in favor of one position or another, then why wouldn't individual researchers' positions display the character of successive estimates of a random variable subject to imperfect measurement?

Meaningful shifts might be expected to abate over time, as sound studies--valid measurements of the quantity of interest--start to converge on some value, estimates of which are less and less affected by the marginal impact of additional studies. But where something is complex, and measuring instruments imperfect, that sort of stability will often take quite a while to emerge. Moreover, it is during the interval it takes for such a state to form that we should expect to see the greatest volume of active, intense research--and thus the most occasion for those carrying out such investigations to shift positions as they update their views based on new evidence.
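A toy rendering of that picture (a minimal Python sketch; the "studies" here are simulated noisy measurements, not real data): the running estimate of a quantity that sits near some critical value flips back and forth across it early on, and the flips grow rarer as evidence accumulates.

```python
import numpy as np

rng = np.random.default_rng(2)

true_value = 0.1   # quantity of interest, near a critical value of 0
n_studies = 200
studies = true_value + rng.normal(0, 1, n_studies)  # noisy measurements

# Best current estimate after each study: the cumulative mean of all
# evidence gathered so far
running_estimate = np.cumsum(studies) / np.arange(1, n_studies + 1)

# Count "changes of position": the estimate crossing the critical value
crossings = np.sum(np.sign(running_estimate[1:])
                   != np.sign(running_estimate[:-1]))

print(f"position changes over {n_studies} studies: {crossings}")
print(f"final estimate: {running_estimate[-1]:.3f} (true: {true_value})")
```

Most of the sign changes happen early, while the estimate is still imprecise -- which is the sense in which a researcher's shifting positions are exactly what sound measurement of a complex quantity should look like.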

Scholarly inquiry as a whole takes this form. We view such shifts in prevailing understanding as signs of "progress," a byproduct of the enlargement of knowledge associated with the use of science's signature method of inquiry. (I really do mean to be talking only about "normal science," or as I prefer "progressive research programs," the operation of which is predominantly made up of successive incremental advances driven by investigation of competing solutions to unresolved questions or unexplained anomalies; so-called "paradigm shifts" are another matter altogether.)

So why shouldn't we observe this same thing in the career of individual researchers' own understandings of the complex phenomena they are studying? If scholars' own research programs are progressing, and their knowledge of the phenomena they are studying enlarging as a result, then shouldn't their own work be expected to furnish them periodically with reason not just for refinement and fine-tuning of their previous understandings but with cause for announcing that they've discovered some decisive objection to an inference they drew earlier?

In Part 1, I reproduced an excerpt from Two Conceptions of Two Conceptions of Emotion in Criminal Law: An Essay Inspired by Bill Stuntz, in The Political Heart of Criminal Procedure 163 (David Skeel, Michael Klarman & Carol Steiker eds., 2011). In that excerpt, I sketched out the hard question that the treatment of emotion in criminal law puts: namely, "what is really going on" when courts selectively treat impassioned behavior as grounds for mitigating or aggravating the law's appraisal of the moral quality of an offender's, or victim's, conduct?

Here's another snippet from that same essay, one in which I trace out an answer developed in Two Conceptions of Emotion in Criminal Law, 96 Colum. L. Rev. 269 (1996), an article I coauthored with Martha Nussbaum. It's a position that, for reasons I'll discuss in "tomorrow's" post, had come to seem wrong to me by the time the later essay was written. The "day after tomorrow" I'll explain why I now don't think the reason I rejected that earlier position seems right either.

But I'll tell you now how I feel about this: kind of excited, actually.

* * *

I will call [one account of the discordant themes that pervade the criminal law’s discussion of emotions] the two conceptions thesis, or “TCT.” This label derives from [Kahan & Nussbaum (1996)]. But the basic position—this particular solution to the puzzle of emotions in criminal law—was in line with ones that other scholars, including Sam Pillsbury and Victoria Nourse, were developing at roughly the same time, and that many others, including Cynthia Lee and Carol Steiker, have since refined and extended.

TCT posits that substantive criminal law features two competing views of what emotions are and why they matter. The first is the mechanistic conception, which sees emotions as thoughtless surges of affect or “impulses.” Emotions excuse or mitigate, on this account, because—and to the extent that—they deprive an individual of the power to control his or her actions.

The second account is the evaluative conception of emotion. This view treats emotions and related sensibilities as a species of moral judgment that express an actor’s evaluation of contingencies that threaten or promote ends the actor cares about. As such, emotions, on this view, can be evaluated, not just as strong or weak, but as good or bad, right or wrong, reasonable or unreasonable, depending on whether the values they express are ones we think appropriate or not for someone in the actor’s situation.

Each conception of emotion has an impressive pedigree in philosophy and psychology, and both are on display in the Oklahoma Court of Criminal Appeals decisions I started with. The mechanistic conception figures in those portions of the opinions emphasizing the “intensity of mental shock,” and resulting “loss of control,” “unseating of reason,” and “unbalancing of mind” that attend the discovery of adultery; the evaluative in those that distinguish the “man of good character” and “refined sensibilities,” whose aggrievement warrants our solicitude, from the “rounder and libertine,” whose resentment of a man whose disregard for “the sanctity of the home” and “the virtue of women” he himself shares does not.

On their surface, the doctrines of criminal law are pervaded by mechanistic idioms and metaphors. But at their core, TCT asserts, they are evaluative. All of the doctrines contain one or another normative element that invites (or at least enables) decisionmakers to confine their liability-discharging or punishment-mitigating consequences to offenders whose emotional evaluations decisionmakers morally approve of. If they find that element to be satisfied, they needn’t find that the offenders’ passion embodied any particular quantum of volition-destroying force; if they find that quality to be absent, they needn’t afford the slightest dispensation no matter how overwhelming or irresistible the offender’s (or victim’s, in the case of “intervening causation”) passion was.

The anger of the man who kills his wife or her paramour, for example, is right for someone in his situation, because adultery is “the gravest possible offence which a wife can commit against her husband” and “the highest invasion of [a man's] property” by another man. Having “no such right to control the woman as a husband has to control his wife,” in contrast, the resentment of the man who kills the lover of his mistress reveals a morally incorrect overvaluation of his own prerogatives. Only the “heat of passion” of the former, then, will be deemed to have been “adequately provoked” for purposes of the voluntary manslaughter doctrine.

The woman who aids the armed robber to protect her child appropriately loves her children more than she loves strangers, whereas one who acquiesces in the abuse of her own child to avoid harm to herself excessively prefers her own well-being to her children’s. The threat to the former, then, but not to the latter, is sufficient to “overcome the will of a person of reasonable firmness”—not because their wills were any more or less compromised but because reasonable women appropriately value their children’s well-being over that of anyone else’s, including their own.

What’s “true” about the man who stands his ground and kills is his character: like a “true beam,” it is straight, not warped. Because he appropriately values his “rights,” “liberty,” and “sacredness of . . . person” more than the life of a “wrongful” aggressor who tries to drive him from a public place where he has a “right to be,” he “reasonably” perceives flight to be as destructive of his “self-preservation” as death. The true woman, quite evidently, does not make the mistake of valuing her right to stay put above the life of her abusive husband, even if the alternative is to remain in “a life of the worst kind of torture and . . . degradation.”

The law refuses to accept any expert definition of “mental disease” for purposes of insanity. “[F]or all his insight into the dynamics of behavior, [the medical expert] has not solved the riddle of blame. The question remains an ethical one, the answer to which lies beyond scientific truth.” However implausible, then, it might be to think the explosive shock of infidelity invariably reverberates with greater intensity in the mind of a “man of refined sensibilities, having high conceptions of the sanctity of the home and the virtue of women,” than in that of a “moral degenerate, in the habit of consorting with prostitutes and dissolute women,” it is perfectly compatible with the law to characterize the former alone as sick.

The TCT solution to the puzzle of emotions in criminal law has three principal strengths. The first is its explanatory power. The evaluations that decisionmakers make of the values expressed in impassioned offenders’ emotions are informed by social norms. It is thus no surprise to see decisionmakers who are using the evaluative conception of emotion selectively exonerating (in whole or in part) offenders whose emotional valuations conform to prevailing expectations of what goods and states of affairs individuals occupying particular social roles are expected to value.

These norms, of course, are not fixed. They shift over time, and at any given moment might well be in a state of flux and contestation. . . . TCT thus explains . . . why the law’s appraisal of impassioned offenders shifts over time and why at any given moment it can be the focus of intense political conflict. . . .

A second, related strength of TCT is its critical power. . . . TCT proponents have often successfully exposed the conservative bias of [commentators], who piously denounce as “political” any shift or proposed reform in the law’s treatment of impassioned offenders while displaying a comically blind eye to the necessarily political content of the evaluations that inform traditional doctrines and their applications. . . .

The third and final attraction of TCT is its prescriptive power. Critical commentary begs the question: what should the law be? Accounts that treat the mechanistic veneer of the doctrine seriously don’t help; at best they produce muddle, and at worst they make us unwitting apologists for the norms that just below the surface inform traditional doctrine and doctrinal applications. If the core of the law is evaluative, then those who want to make the law as good as it can be should be self-consciously evaluative, TCT proponents (myself included!) argued. We should face up to the necessity and appropriateness of making the law a reflection of the best moral and political understanding we can fashion of the values that good people ought to have.

 

Tuesday
Jul 21, 2015

Perplexed--once more--by "emotions in criminal law": Part 1

So to try to terminate my obsession with the " 'hot hand' fallacy" fallacy, I have resorted to intellectual methadone, finding a new puzzle that I can substitute to quench my cravings but that I'm sure I'll be able to drop once those subside....

Actually, it is the issue that was in the background of yesterday's post on "changing my mind." I offered up the topic of "emotions in criminal law"--the question of how the law conceives of their nature and their normative significance--as a matter on which I had acknowledged, in a published paper (Kahan 2011), that the position I had taken in an article written yrs earlier (Kahan & Nussbaum 1996) had come to seem wrong to me based on things I had learned in the interim.

But in the course of reminding myself what position I had adopted in the later paper, it occurred to me that there were certain things about it that now seemed hard to reconcile with what I'd learned in the 4 yrs since I wrote that paper....

So I'm going to try to work out what my new for-now position should be based on the current state of how I understand various not directly observable things in the world to work.

In the course of doing that, moreover, I want to advance a claim about being in exactly this situation -- of finding that what one offered as a well-considered account of some phenomenon has to be qualified or simply replaced with a different position based on new things one has learned.

The claim is that this should be a normal, even commonplace thing. Or at least it should be if one, first, chooses to devote one's attention to matters of genuine complexity, phenomena the workings of which are not demonstrable on the basis of direct inspection but rather only indirectly inferable on the basis of evidence, i.e., additional phenomena that can be observed and that one has reason to believe are caused by those nonobservable complex matters; and, second, recognizes that anything pertinent one discovers under these conditions necessarily doesn't settle the issue but rather supplies one only with more or less reason to credit one plausible account, rather than another, about what's really going on.

For in that situation, whatever one's current best understanding is will be in the nature of an estimate of a very fine quantity, and one's work in the nature of progressively more precise measurements, which can be expected to jump from one side of some critical value to the other and back again as one's knowledge continues to expand . . . .

This is actually how things look, more or less, within a "progressive research program" that engages the collaborative attention of a group of researchers engaged in scholarly conversation.

So shouldn't it be the way the work of any particular researcher working within such a program looks, too, if he or she is genuinely trying to figure out the truth about some complex thing, the operations of which cannot be directly seen but rather only indirectly inferred on the basis of disciplined observation & measurement?....

Well, anyway, this post is the first of what I anticipate will be between 3 and 600 on the evolution of my understanding on "emotions in criminal law," which has been marked by a series of shifting positions animated by a constant state of perplexity.

In this first part, I reproduce an excerpt from Two Conceptions of Two Conceptions of Emotion, the essay I mentioned in yesterday's post, which is designed to conjure apprehension of the unobservable phenomenon, apprehension of which is the goal of the inquiry.

* * *

2.

To introduce (or re-introduce) the puzzle I am concerned with, I will start with a pair of old decisions, both by the Oklahoma Court of Criminal Appeals. The issue in each was the same: whether the trial court erred by foreclosing the effective presentation of an insanity defense by a man charged with murder for killing his wife’s paramour.

In the first case, the court reversed the defendant’s conviction.[1] “Two doctors,” the court noted, “testified that the defendant . . . temporarily lost control of his mental processes” as a result of the “provocation” of his wife’s seduction.[2] “[W]e can perceive,” the court continued, that

a man of good moral character such as that possessed by the defendant, highly respected in his community, having regard for his duties as a husband and the virtue of women, upon learning of the immorality of his wife, might be shocked, or such knowledge might prey upon his mind and cause temporary insanity. In fact it would appear that such would be the most likely consequence of obtaining such information.[3]

In the second case, however, the court affirmed the conviction.[4] In that case, the court noted, “the state, over the objections of the defendant,” introduced evidence of “specific conduct tending to show . . . the defendant [to be] . . . a rounder and a libertine”:[5]

Facts were shown indicating that defendant's ideals of the sanctity of the home and the virtue of women were not so exalted, and that therefore the shock to his mind and finer sensibilities could not be so very great--at least not so great as to unbalance his mind. . . .

We think, in reason, that the shock would not be so great as it would to a man of refined sensibilities, having high conceptions of the sanctity of the home and the virtue of women.[6]

Thus, any trial rulings that prevented him from presenting a temporary insanity defense, the court held, were at most harmless error.

What’s really going on here? That is the question that any thoughtful reader who sets these two opinions out next to each other will feel compelled to ask. The court’s conclusion is straightforward: discovery of a wife’s infidelity is likely to deprive a sexually faithful man of his ability to comprehend or control his actions; such a discovery is not likely to have that effect, however, on an unfaithful man. But what’s not so straightforward is how to integrate the mélange of psychological and moral concepts that inform the court’s reasoning—“intensity of mental shock,” “unbalan[cing of] mind,” “loss of control,” on the one hand; “good moral character,” “regard for . . . the virtue of women,” “rounder and libertine,” on the other—into a coherent whole. How exactly does the court conceive of the nature of the emotional state of the “mentally insane” offender? What is it, precisely, about that condition that entitles someone to a defense?

These questions try to make sense of the decisions in philosophical or jurisprudential terms; but we might also feel impelled to ask “what is going on here” from a psychological or even political point of view. Do the judges really believe their own explanation of the distinction between the two cases? Or are they deliberately concealing part of what they think from view? If concealing, are they trying to fool us, or are they just being coy? Do we imagine them straight-faced and earnest, or winking and slyly grinning, as they pronounce their judgments?

What’s likely to strike thoughtful readers as puzzling about these two decisions, it turns out, is the puzzle of emotions in criminal law. The discordant pictures that the decisions paint—of “highly respected” men of “good moral character” who are “shocked” to the point of mindless “loss of control,” on the one hand; of “rounders and libertines,” whose own lack of virtue insulates them from “mind-unbalancing” assaults on their reason, on the other—pervade basic doctrines and their application.

“Detached reflection cannot be demanded in the presence of an uplifted knife,” we are told.[7] Hence we cannot blame the “true man” who refuses to flee “an assailant, who by violence or surprise maliciously seeks to” drive him from a public place “where [he] has the right to be.”[8] But the woman who “believed herself . . . doomed . . . to a life of the worst kind of torture and . . . degradation” cannot on that basis be excused for killing her abusive husband in his sleep: because she had the option of leaving their home and striking out on her own, her will was not overcome by the “primal impulse” of “self-preservation.”[9]

A man who “discovered his wife in flagrante delicto with a man who was a total stranger to him, and at a time when [he] was trying to save his marriage and was deeply concerned about both his wife and his young child,” will necessarily experience the form of “ungovernable passion” that mitigates first-degree murder to manslaughter.[10] The same volitional impairment cannot be imputed to the man who kills the lover of his mistress, however, for he “has no such right to control the woman as a husband has to control his wife.”[11]

The deep “shame” of being subjected to rape is one of the “physical and mental injuries, the natural and probable result of which would render [an unmarried woman] mentally irresponsible,” making her subsequent commission of suicide an act attributable to her rapist, who could therefore be convicted of murder.[12] But a man could not be deemed to have “caused” the death of his (8-months pregnant) wife—“a high tempered woman” who was “hard to get along with” and who on previous “occasions ran off and left her husband” alone with the couple’s infant—because her decision to expose herself to the nighttime cold of winter in fleeing their farmhouse was her own choice following a fight.[13]

Again and again, we are confronted with a kaleidoscope of dissonant reports of virtuous offenders too mentally enfeebled to obey the law and impassioned ones too vicious not to be deemed to have “voluntarily” chosen to transgress. So what is really going on?

 


[1] Hamilton v. State, 244 P.2d 328 (Okla. Crim. App. 1952).

[2] Id. at 335.

[3] Id.

[4] Coffeen v. State, 210 P. 288 (Okla. Crim. App. 1922).

[5] Id. at 290.

[6] Id. at 290-91.

[7] Brown v. United States, 256 U.S. 335, 343 (1921) (Holmes, J.).

[8] State v. Bartlett, 71 S.W. 148, 152 (Mo. 1902).

[9] State v. Norman, 378 S.E.2d 8, 11, 12-13 (N.C. 1989).

[10] State v. Thornton, 730 S.W.2d 309, 312, 315 (Tenn. 1987).

[11] Rex v. Greening, 3 KB. 846, 849 (1913).

[12] Stephenson v. State, 179 N.E. 633, 635, 649 (Ind. 1932).

[13] Hendrickson v. Commonwealth, 3 S.W. 166, 167 (Ky. Ct. App. 1887).

 

Monday
Jul202015

Changing my mind on "emotions in criminal law"

I sometimes get asked--sometimes in a challenging way--whether I've ever "changed my mind" or "admitted I was wrong" about something.  Hell yeah! Here's an example: Kahan, D. M. (2011), Two Conceptions of Two Conceptions of Emotion in Criminal Law: An Essay Inspired by Bill Stuntz, in M. Klarman, D. Skeel & C. Steiker (Eds.), The Political Heart of Criminal Procedure (pp. 163-176). Cambridge University Press (working paper version here), where I shift my views on a number of key points from an earlier paper, Kahan, D. M., & Nussbaum, M. C. (1996). Two Conceptions of Emotion in Criminal Law. Colum. L. Rev., 96, 269. There's more where this came from, too!

Indeed, I was looking at this particular paper the other day (after I offered it as an example to someone challenging me to show that I've ever acknowledged I was "wrong") & wondering if maybe it's wrong in light of Kahan, D. M., Hoffman, D. A., Evans, D., Devins, N., Lucci, E. A., & Cheng, K. (in press), 'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment. U. Pa. L. Rev., 164.  There's at least a tension to be explained.... Maybe the first paper was right...

Do I like saying I've changed my mind? Sure, if the reason is that I actually managed to figure out something that I didn't know before. If one never had occasion to announce that one had changed his or her mind for that reason, it would mean either (a) one was studying unchallenging, non-complex things (boring); or (b) one wasn't actually advancing in understanding in the course of study & reflection.

Do I worry that, as a result of saying "I think I wasn't right on X," people might not "believe me" when I say I think I know something in the future? No. First of all, they ought to be thinking critically about anything I say. Second, they ought to trust me more when they know that if I conclude I was wrong or have to qualify my previous view in some important way, I'll make an effort to tell them! Those who prefer to put their trust in scholars who wouldn't change their minds when they should, or wouldn't tell them when they did, are ones whose confidence I take no particular pride in earning.


Two Conceptions of Two Conceptions of Emotion in Criminal Law: An Essay Inspired by Bill Stuntz

Dan M. Kahan 

This essay examines alternative explanatory theories of the treatment of emotion in criminal law. In fact, it re-examines a previous exposition on this same topic. In Two Conceptions of Emotion in Criminal Law (Kahan & Nussbaum 1996), I argued that the law, despite a surface profession of fidelity to a mechanistic conception of emotion, in fact reflects an evaluative one: rather than thoughtless surges of affect that impair an actor’s volition, emotions, on this account, embody a moral evaluation of the actor that is in turn subject to moral evaluation by legal decisionmakers as “right” or “wrong,” “virtuous” or “vicious,” and not merely as “strong” or “weak” in relation to the actor’s volition. I now qualify this claim—and indeed reject certain parts of it.  I do so on the basis of an alternative conception of the evaluative conception of emotion: whereas the position in Kahan & Nussbaum (1996) treats the evaluative conception as implementing a conscious moral appraisal on the part of decisionmakers, the alternative sees it, at least sometimes, as a product of decisionmakers’ unconscious vulnerability to appraisals they themselves would view as subversive of the law’s moral principles, which might well invest volitional impairment with normative significance. I examine the empirical evidence, amassed by various researchers including (without giving this point much thought) me, for this third view, which I label the “cognitive conception” as opposed to the earlier (Kahan & Nussbaum 1996) “moral conception” of the “evaluative” view of emotions in criminal law.

 

Sunday
Jul192015

Weekend update: Still fooled by non-randomness? Some gadgets to help you *see* the " 'hot hand' fallacy" fallacy

Well, I'm still obsessed with the " 'hot hand fallacy' fallacy." Are you?

As discussed previously, the classic "'hot hand' fallacy"  studies purported to show that people are deluded when they perceive that basketball players and other athletes enjoy temporary "hot streaks" during which they display an above-average level of proficiency.

The premise of the studies was that ordinary people are prone to detect patterns and thus to  confuse chance sequences of events (e.g., a consecutive string of successful dice rolls in craps) as evidence of some non-random process (e.g., a "hot streak," in which a craps player can be expected to defy the odds for a specified period of time).

For sure, people are disposed to see signal in noise.

But the question is whether that cognitive bias truly accounts for the perception that athletes are on a "hot streak."

The answer, according to an amazing paper by Joshua Miller & Adam Sanjurjo, is no.

Or in any case, they show that the purported proof of the "hot hand fallacy" itself reflects an alluring but false intuition about the conditional independence of binary random events.

The "test" the "hot hand fallacy" researchers applied to determine whether a string of successes indicate a genuine "hot hand"--as opposed to the illusion associated with our over-active pattern-detection imaginations--was to examine how likely basketball players were to hit shots after some specified string of "hits" than they were to hit shots after an equivalent string of misses.  

If the success rates for shots following strings of "hits" were not "significantly" different from the success rates for shots following strings of "misses," then one could infer that the probability of hitting a shot after either a string of hits or misses was not significantly different from the probability of hitting a shot regardless of the outcome of previous shots. Strings of successful shots being no longer than what we should expect by chance in a random binary process, the "hot hand" could be dismissed as a product of our vulnerability to see patterns where they ain't, the researchers famously concluded.

Wrong!

This analytic strategy itself reflects a cognitive bias-- an understanding about the relationship of independent events that is intuitively appealing but in fact incorrect.

Basically, the mistake -- which for sure should now be called the " 'hot hand fallacy' fallacy" -- is to treat the conditional probability of success following a string of successes in a past sequence of outcomes as if it were the same as the conditional probability of success following a string of successes in a future or ongoing sequence. In the latter situation, the occurrence of independent events generated by a random process is (by definition) unconstrained by the past.  But in the former situation -- where one is examining a past sequence of such events --  that's not so.  

In the completed past sequence, there is a fixed number of each outcome.  If we are talking about successful shots by a basketball player, then in a season's worth of shots, he or she will have made a specifiable number of "hits" and "misses."

The cool Miller-Sanjurjo machine! It can be yours, because you-- unlike some *other* people (or robots or aliens or badgers with operational internet connections) who shall remain nameless -- never miss an episode of this blog! Just click!

Accordingly, if we examine the sequence of shots after the fact, the probability that the next shot in the sequence will be a "hit" will be lower immediately following a specified number of "hits," for the simple reason that the proportion of "hits" in the remainder of the sequence will necessarily be lower than it was before the previous successful shot or shots.

By the same token, if we observe a string of "misses," the proportion of "misses" in the remainder will be lower than it had been before the first shot in the string.  As a result, following a string of "misses," we can deduce that the probability has now gone up that the next shot in the sequence will turn out to have been a "hit."

Thus, it is wrong to expect that, on average, when we examine a past sequence of random binary outcomes, P(success|specified string of successes) will be equal to P(success|specified string of failures).  Instead, in that situation, we should expect P(success|specified string of successes) to be less than P(success|specified string of failures).
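
A quick way to convince yourself: enumerate the simplest possible case by brute force. Here's a minimal sketch in Python (mine, not M&S's) that lists every possible sequence of three fair-coin flips and computes, for each sequence in which the quantity is defined, the proportion of flips immediately following a "heads" that are themselves "heads." The average across sequences is 5/12, not 1/2.

```python
from itertools import product
from statistics import mean

# All 2^3 equally likely sequences of three fair-coin flips.
props = []
for seq in product("HT", repeat=3):
    # Flips immediately preceded by an H.
    follows_h = [seq[i] for i in range(1, 3) if seq[i - 1] == "H"]
    if follows_h:  # undefined for TTT and TTH, which are dropped
        props.append(follows_h.count("H") / len(follows_h))

print(mean(props))  # 0.4166... = 5/12, i.e., less than 0.5
```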

That means the original finding of the "hot hand fallacy" researchers that P(success|specified string of successes) = P(success|specified string of failures) in their samples of basketball player performances wasn't evidence that the "hot hand" perception is an illusion.  If P(success|specified string of successes) = P(success|specified string of failures) within an adequate sample of sequences, then we are observing a higher success rate following a string of successes than we would expect to see by chance.

In other words, the data reported by the original "hot hand fallacy" studies supported the inference that there was a hot-hand effect after all!

So goes M&S's extremely compelling proof, which I discussed in a previous blog post.  The M&S paper was featured on Andrew Gelman's Statistical Modeling, Causal Inference blog, where the comment thread quickly frayed and broke, resulting in a state of total mayhem and bedlam!

How did the "hot hand fallacy" researchers make this error? Why did it go undetected for 30 yrs, during which the studies they did have been celebrated as classics in the study of "bounded rationality"? Why do so many smart people find it so hard now to accept that those studies themselves rest on a mistaken understanding of the logical properties of random processes?

The answer I'd give for all of these questions is the priority of affective perception to logical inference.

Basically, we see valid inferences before we apprehend, through ratiocination, the logical cogency of the inference.

What makes people who are good at drawing valid inferences good at that is that they more quickly and reliably perceive or feel the right answer -- or feel the wrongness of a seemingly correct but wrong one -- than those less adept at such inferences.

This is an implication of a conception of dual process reasoning that, in contrast to the dominant "System 1/System 2" one, sees unconscious reasoning and conscious effortful reasoning as integrated and reciprocal rather than discrete and hierarchical.

The "discrete & hierarchical" position imagines that people immediately form a a heuristic response ("System 1") and then, if they are good reasoners, use conscious, effortful processing ("System 2")  to "check" and if necessary revise that judgment.

The "integrated and reciprocal" position, in contrast, says that good reasoners experience are more likely to experience an unconscious feeling of the incorrectness of a wrong answer, and the need for effortful processing to determine the right answer, than are people who are poor reasoners. 

The reason the former are more likely to feel that right answers are right and wrong answers wrong is that they have, through the use of their proficiency in conscious, effortful information processing, trained their intuitions to alert them to the features of a problem that require the deployment of conscious, effortful processing.

Now what makes the fallacy inherent in the " 'hot hand fallacy' fallacy" so hard to detect, I surmise, is that those who've acquired reliable feelings about the wrongness of treating independent random events as dependent (the most conspicuous instance of this is the "gambler's fallacy") will in fact have trained their intuitions to recognize as right the corrective method of analyzing such events as genuinely independent.

If the "hot hand" perception is an illusion, then it definitely stems from mistaking an independent random process for one that is generating systematically interdependent results.

So fix it -- by applying a test that treats those same events as independent!

That's the intuition that the "hot hand fallacy" researchers had, and that 1000's & 1000's of other smart people have shared in celebrating their studies for three decades -- but it's wrong wrong wrong wrong wrong!!!!!

But because it feels right right right right right to those who've trained their intuitions to avoid heuristic biases involving the treatment of independent events as interdependent, it is super hard for them to accept that the method reflected in the "hot hand fallacy" studies is indeed incorrect.

So how does one fix that problem?

Well, no amount of logical argument will work!  One must simply see that the right result is right first; only then will one be open to working out the logic that supports what one is seeing.

And at that point, one has initiated the process that will eventually (probably not in too long a time!) recalibrate one's reciprocal and integrated dual-process reasoning apparatus so as to purge it of the heuristic bias that concealed the " 'hot hand fallacy' fallacy" from view for so long!

BTW, this is an account that draws on the brilliant exposition of the "integrated and reciprocal" form of dual process reasoning offered by Howard Margolis.

For Margolis, reason giving is not what it appears: a recitation of the logical operations that make an inference valid. 

Rather it is a process of engaging another reasoner's affective perception, so that he or she sees why a result is correct, at which point the "reason why" can be conjured through conscious processing.  (The "Legal Realist" scholar Karl Llewellyn gave the same account of legal arguments, btw.)

To me, the way in which the " 'hot hand fallacy' fallacy" fits Margolis's account -- and also Ellen Peters's account of the sorts of heuristic biases that only those high in Numeracy are likely to be vulnerable to -- is what makes the M&S paper so darn compelling!

But now...

If you, like me and 10^6s of others, are still having trouble believing that the analytic strategy of the original "hot hand" studies was wrong, here are some gadgets that I hope will enable you, if you play with them, to see that M&S are in fact right.  Because once you see that, you'll have vanquished the intuition that bars the path to your conscious, logical apprehension of why they are right.  At which point, the rewiring of your brain to assimilate M&S's insight, and to avoid the "'hot hand fallacy' fallacy," can begin!

Indeed, in my last post, I offered an argument that was in the nature of helping you to imagine or see why the " 'hot hand fallacy' fallacy" is wrong. 

But here--available exclusively to the 14 billion regular subscribers to this blog (don't share it w/ nonsubscribers; make them bear the cost of not being as smart as you are about how to use your spare time!)-- are a couple of cool gadgets that can help you see the point if you haven't already.

Gadget 1 is the "Miller-Sanjurjo Machine" (MSM). MSM is an Excel sheet that randomly generates a sequence of 100 coin tosses.  It also keeps track of how each successive toss changes the probability that the next toss in the sequence will be a "heads."  By examining how that probability goes up & down in relation to strings of "heads" and "tails," one can see why it is wrong simply to expect P(H|any specified string of Hs) - P(H|any specified string of Ts) to be zero.

MSM also keeps track of how many times "heads" occurs after three previous "heads" and how many times "heads" occurs after three previous "tails."  If you keep doing tosses, you'll see that most of the time P(H|HHH)-P(H|TTT) < 0.

Or you'll likely think you see that. 

Because you have appropriately trained yourself to feel something isn't quite right about that way of proceeding, you'll very sensibly wonder if what you are seeing is real or just a reflection of the tendency of you as a human (assuming you are; apologies to our robot, animal, and space alien readers) to see pattern signals in noise.

Hence, Gadget 2: the "Miller-Sanjurjo Turing Machine" (MSTM)! 

OMG!!! A Miller-Sanjurjo Turing Machine! No matter how many times you run it, you'll swear it's another human being who behaves just the way you do!!

MSTM is not really a "Turing machine" (& I'm conflating "Turing machine" with "Turing test")-- but who cares?  It's a cool name for what is actually just a simple statistical simulation that does 1,000 times what its baby sister MSM does only once -- that is, flip 100 coins and tabulate P(H|HHH) & P(H|TTT).

MSTM then reports the average difference between the two.  That way you can see that it's in fact true that P(H|HHH) - P(H|TTT) should be expected to be < 0.

Indeed, you can see exactly how much less than 0 we should expect P(H|HHH) - P(H|TTT) to be: about 8 percentage points. That amount is the bias that was built into the original "hot hand" studies against finding a "hot hand."

(Actually, as M&S explain, the size of the bias could be more or less than that depending on the length of the sequences of shots one includes in the sample and the number of previous "hits" one treats as the threshold for a potential "hot streak".)

MSTM is written to operate in Stata.  But if you don't have Stata, you can look at the code (opening the file as a .txt document) & likely get how it works & come up with an equivalent program to run on some other application.
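
If Excel and Stata are both unavailable, here's a minimal Python sketch of the same logic (my own port of the gadget as described above, not the actual MSM/MSTM code): simulate 1,000 sequences of 100 fair-coin flips, compute P(H|HHH) and P(H|TTT) within each sequence, and average the difference across the sequences in which both quantities are defined. The average should land in the neighborhood of the ~8-point bias described above.

```python
import random
from statistics import mean

def cond_prop(seq, pattern):
    """Proportion of flips immediately following `pattern` that are heads (1)."""
    k = len(pattern)
    follows = [seq[i] for i in range(k, len(seq)) if seq[i - k:i] == pattern]
    return sum(follows) / len(follows) if follows else None

diffs = []
for _ in range(1000):                                  # 1,000 simulated sequences
    seq = [random.randint(0, 1) for _ in range(100)]   # 100 fair-coin flips
    p_hhh = cond_prop(seq, [1, 1, 1])                  # P(H | three prior heads)
    p_ttt = cond_prop(seq, [0, 0, 0])                  # P(H | three prior tails)
    if p_hhh is not None and p_ttt is not None:
        diffs.append(p_hhh - p_ttt)

print(mean(diffs))  # reliably negative -- roughly -0.08 for these parameters
```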

Have fun seeing, ratiocinating, and rewiring [all in that order!] your affective perception of valid inferences! 

Friday
Jul172015

Two threats to the public-health good of childhood vaccines ... a fragment

From something in the pipeline:

The tremendous benefit that our society enjoys by virtue of universal childhood immunizations is being put in jeopardy by two threats.  The first is the deliberate miscommunication of scientific evidence on vaccine safety. The second is our society’s persistent neglect of the best available scientific evidence on risk communication.  Indeed, these two threats are linked: the void created by the absence of scientifically informed, professional risk communication is predictably being filled by uninformed, ad hoc, unprofessional alternatives, which nourish the state of confusion that miscommunicators aim to sow.  The value of the scientific knowledge embodied in childhood vaccinations demands a commensurate investment in effectively using science to protect the science communication environment in which ordinary members of the public come to know what is known by science. Every constituent of the public health establishment—from government agencies to research universities, from professional associations to philanthropic organizations—must contribute its share to this vital public good.

Friday
Jul102015

Holy smokes! The "'hot-hand fallacy' fallacy"!

It's super-duper easy to demonstrate that individuals of low to moderate Numeracy -- an information-processing disposition that consists in the capacity & motivation to engage in quantitative reasoning -- are prone to all manner of biases -- like "denominator neglect," "confirmation bias," "covariance [non]detection," the "conjunction fallacy," etc.

It's harder, but not impossible, to show that individuals high in Numeracy are more prone to biased reasoning under particular conditions.

In one such study, Ellen Peters and her colleagues did an experiment in which subjects evaluated the attractiveness of proposed wagers.

For one group of subjects, the proposed wager involved outcomes of a positive sum & nothing, with respective probabilities adding to 1.  

For another group, the proposed wager had a slightly lower positive expected value, and the proposed outcomes were a positive sum & a negative sum (again with respective probabilities adding to 1).

Because the second wager had a lower expected value, and added "loss aversion" to boot, one might have expected subjects to view the first as more attractive.

But in fact subjects low in Numeracy ranked the two comparable in attractiveness.  Maybe they couldn't do the math to figure out the EVs. 

But the real surprise was that among subjects high in Numeracy, the second wager -- the one that coupled a potential gain and a potential loss -- was rated as being substantially more attractive than the first -- the one that coupled a potential gain with a potential outcome of zero and had the higher EV.

Go figure!

This result, which is hard to make sense of if we assume that people generally prefer to maximize their wealth, fit Peters et al.'s hypothesis that the cognitive proficiency associated with high Numeracy guides decisionmaking through its influence in calibrating affective perceptions.  

Because those high in Numeracy literally feel the significance of quantitative information, Peters et al. surmised, the process of doing the computations necessary to evaluate the second wager would generate a more intense experience of positive affect for them than would the process of evaluating the first wager, the positive expected value of which can be seen without doing any math at all.  Lacking the same sort of emotional connection to quantitative information, the subjects low in Numeracy wouldn't perceive much difference between the two wagers.
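
To see how small the gap between the two wagers can be, here's the arithmetic with illustrative numbers (figures assumed for purposes of the example; the precise stimuli in Peters et al.'s study may differ): win $9 with probability 7/36, and otherwise either get nothing or lose 5¢.

```python
# Illustrative wagers of the kind described above (figures assumed,
# not taken from Peters et al.'s actual stimuli).
p_win = 7 / 36

ev_gain_or_nothing = p_win * 9.00                          # gain or nothing
ev_gain_or_small_loss = p_win * 9.00 - (1 - p_win) * 0.05  # gain or small loss

print(round(ev_gain_or_nothing, 2))     # 1.75
print(round(ev_gain_or_small_loss, 2))  # 1.71 -- lower EV *plus* a possible loss
```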

Veeeeery interesting.   

But can we find real-world examples of biases in quantitative information-processing distinctive to individuals high in Numeracy?  Being able to is important not only to show that the Peters et al. result has "practical" significance but also to show that it is valid.  Their account of what they expected to and did find hangs together, but as always there are alternative explanations for their results.  We'd have more reason to credit the explanation they gave -- that high Numeracy can actually cause individuals to make mistakes in quantitative reasoning that low Numeracy ones wouldn't -- if we could see the same dynamic at work in the real world.

That way of thinking is an instance of the principle of convergent validity: because we can never be "certain" that the inference we are drawing from an empirical finding isn't an artifact of some peculiarity of the study design, the corroboration of that finding by an empirical study using different methods -- ones not subject to whatever potential defect diminished our confidence in the first -- will supply us with more reason to treat the first finding as valid.

Indeed, the confidence enhancement will be reciprocal: because there will always be some alternative explanation for the findings associated with the second method, too, the concordance of the results reached via those means with the results generated by whatever method informed the first study gives us more reason to credit the inference we are drawing from the second.

Okay, so  now we have some realllllllly cool "real world" evidence of the distinctive vulnerability of high Numeracy types to a certain form of quantitative-reasoning bias.

It comes in a paper, the existence of which I was alerted to by the blog of stats legend (& former Freud expert) Andrew Gelman, that examines the probability that we'll observe the immediate recurrence of an outcome if we examine some sequence of binary outcomes generated by a process in which the outcomes are independent of one another -- e.g., of getting "heads" again after getting "heads" rather than "tails" on the previous flip of a fair coin.

We all know that if the events are independent, then obviously the probability of the previous event recurring is exactly the same as the probability that it would occur in the first place.

So if someone flipped a coin 100 times, & we then examined her meticulously recorded results, we'd discover the probability that she got "heads" after any particular flip of "heads" was 0.50, the same as it would be had she gotten "tails" in the previous flip.

Indeed, only real dummies don't get this!  The idea that the probability of independent events is influenced by the occurrence of past events is one of the mistakes that those low to moderate Numeracy dolts make!  

They (i.e., most people) think that if a string of "heads" comes up in a "fair" coin toss (we shouldn't care if the coin is fair; but that's another stats legend/former Freud expert Andrew Gelman blog post), then the probability we'll observe "heads" on the next toss goes down, and the probability that we'll observe "tails" goes up. Not!

Only a true moron, then, would think that if we looked at a past series of coin flips, the probability of a "heads" after a "heads" would be lower than the probability of a "heads" after a "tail"! Ha ha ha ha ha! I want to play that dope in poker! Ha ha ha!

Um ... not so fast, say Miller & Sanjurjo in their working paper, "Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers."

The "assumption that in a sequence of coin tosses, the relative frequency of heads on those flips that immediately follow a streak of heads is expected to be equal to the relative frequency of heads on those flips that immediately follow a streak of tails" is "seemingly correct, but mistaken" (p. 19).

Yeah, right.

"We prove," M&S announce (p. 22),

that in a finite sequence generated by repeated trials of a Bernoulli random variable the expected conditional relative frequency of successes, on those realizations that immediately follow a streak of successes, is strictly less than the fixed probability of success.

What? (I'm asking myself this at the same time you are asking me.) "That can't possibly be the case!"

You'll feel like someone is scratching his fingers on a chalkboard as you do it, but read the first 6 pages of their paper (two or three times if you can't believe what you conclude the first time) & you'll be convinced this is true.

Can I explain this really counterintuitive (for high Numeracy people, at least) result in conceptual terms? Not sure but I'll try!

If we flip a coin a "bunch" of times, we'll get roughly 0.50 "heads" & 0.50 "tails" (it will land on its edge 10^-6 of the time). But if we go back & count the "heads" that came up only after a flip of "heads," we'll come up w/ less than 0.5 x 1 "bunch."

If we look at any sequence in the "bunch," there will be some runs of "heads" in there.  Consider THHTHTTTHTHHHTHT.  In this sequence of 16, there were (conveniently!) 8 "heads" & 8 "tails."  But only 3 of the 8 occurred after a previous flip of "heads"; 5 of the 8 occurred after a flip of "tails."

In this sample, then, the probability of getting "heads" again after getting "heads" on the previous flip was not 0.5. It was 3/8 or .375 or ... about 0.4!

You might wonder (because for sure you are searching for the flaw in the reasoning) whether this result was just a consequence of the sequence I happened to "report" for my (N = 16) "experiment."

You'd not be wrong to respond that way!

But if you think hard enough & start to play around with the general point -- that we are looking at a past sequence of coin tosses -- you'll see (eventually!) that the probability of "heads" occurring after a previous "heads" (not to mention after "several" heads in a row!) is always lower, within the sample, than the overall probability that any particular flip in that sequence was "heads."

That indeed it has to be. 
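
If "playing around" by hand gets tedious, you can make a computer do all of it. The sketch below (mine, not anything from the M&S paper) brute-forces every one of the 2^16 possible sequences of 16 coin flips -- the same length as the example above -- and averages the proportion of "heads" that immediately follow a "heads" across all sequences in which that proportion is defined. The average comes out below 0.5, and it does so for every sequence length of three or more.

```python
from itertools import product
from statistics import mean

def avg_heads_after_heads(n):
    """Average, across all 2^n equally likely sequences of n flips, of the
    proportion of flips immediately following an H that are themselves H
    (sequences with no flip following an H are excluded)."""
    props = []
    for seq in product("HT", repeat=n):
        follows_h = [seq[i] for i in range(1, n) if seq[i - 1] == "H"]
        if follows_h:
            props.append(follows_h.count("H") / len(follows_h))
    return mean(props)

print(avg_heads_after_heads(16))  # below 0.5 -- and it stays below for any n >= 3
```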

What will you be seeing/feeling when you "get" this? Perhaps this: 

  1. Imagine I perform 100 coin tosses and observe 50 "heads" and 50 "tails." (No problem so far, right?)
  2. If I now observe the recorded sequence and begin to count backwards from 50 every time I see a "heads," I'll always know how many "heads" remain in the sequence.  (Still okay?  Good.)
  3. Necessarily, the number goes down by 1 every time I see a "heads" in the sequence. 
  4. And necessarily the number does not go down -- it stays the same -- every time I see a "tails" in the sequence.
  5. From this we can deduce that the probability that the next flip in the sequence will be a "heads" is always lower if the previous flip was a "heads" than if it was a "tails."
  6. Oh, btw, steps 2-5 still apply if you happened to get 51 "heads," or 48 or 55 or whatever, in your 100 tosses. Think about it!

At this point you are saying, um, "now I'm not sure anymore"; go through that again.  Okay...

But here is the really cool & important thing: M&S show that the methodology used in literature examining the so-called "hot hand fallacy" doesn't reflect this logic.

Those studies have been understood to "debunk" the common perception that basketball players go through "hot streaks" during which it makes sense for others to expect them to achieve a level of shooting success that exceeds their usual or average level of success.

The researchers who purported to "debunk" the perception of "hot hands" report that if one examines game data, the probability of players making a shot after making a specified number of shots in a row is roughly their average level of success. Just as one would expect if shots are independent events-- so there's no "hot hand" in reality--only in our fallible, error-prone minds!

But this method of analyzing the data, M&S demonstrate, is wrong. 

It overlooks that, "by conditioning on a streak of hits within a sequence of finite length, one creates a selection bias towards observing shots that are misses" (p. 19).

Yeah, that's what I was trying to say!

So if the data show, as the "hot hand fallacy" researchers found, that the probability a player would make his or her next shot after making a specified number in a row was the same as the probability that he or she would make a shot overall, their data, contrary to their conclusion, support the inference that players do indeed enjoy "hot streaks" longer than one would expect to observe by chance in a genuinely random process (& necessarily, somewhere along the line, "cold streaks" longer than one would expect by chance too).

I'm sold!

But for me, the amazing thing is not the cool math but the demonstration, w/ real world evidence, of high Numeracy people being distinctively prone to a bias in quantitative reasoning.

The evidence consists in the mistake made by the authors of the original "hot hand" studies and repeated by 100s or even 1000s (tens of thousands?) of decision science researchers who have long celebrated these classic studies and held them forward as a paradigmatic example of the fallibility of human perception.

As M&S point out, this was a mistake that we would expect only a high Numeracy person to make. A low Numeracy person is more prone to believe that independent events are not independent; that's what the "gambler's fallacy" is about. 

Someone who gets why the gambler's fallacy is a fallacy will feel that the way in which "hot hand fallacy" researchers analyzed their data was obviously correct: because events that are independent occur with the same probability irrespective of past outcomes, it seems to make perfect sense to test the "hot hand" claim by examining whether players' shooting proficiency immediately after making a shot differs significantly from their proficiency immediately after missing.

But in fact, that's not the right test!  Seriously, it's not!  But it really really really seems like it is to people whose feelings of correctness have been shaped in accord with the basic logic of probability theory--i.e., to high Numeracy people!  (I myself still can't really accept this even though I accept it!)

That's what Peters says happens when people become more Numerate: they develop affective perceptions attuned to sound inferences from quantitative information.  Those affective perceptions help to alert high Numeracy people to the traps that low Numeracy ones are distinctively vulnerable to.

But they can create their own traps -- they come with their own affective "Sirens," luring the highly Numerate to certain near-irresistible but wrong inferences....

Holy smokes!

M&S don't make a lot of this particular implication of their paper. That's okay-- they like probability theory, I like cognition!

But they definitely aren't oblivious to it. 

On the contrary, they actually propose -- in a casual way in a footnote (p. 2, n.2) -- a really cool experiment that could be used to test the hypothesis that the "'hot hand fallacy' fallacy" is one that high-Numeracy individuals are more vulnerable to than low-Numeracy ones:

Similarly, it is easy to construct betting games that act as money pumps while defying intuition. For example, we can offer the following lottery at a $5 ticket price: a fair coin will be flipped 4 times. If the relative frequency of heads on flips that immediately follow a heads is greater than 0.5 then the ticket pays $10; if the relative frequency is less than 0.5 then the ticket pays $0; if the relative frequency is exactly equal to 0.5, or if no flip is immediately preceded by a heads, then a new sequence of 4 flips is generated. While, intuitively, it seems like the expected payout of this ticket is $0, it is actually $-0.71 (see Table 1). Curiously, this betting game may be more attractive to someone who believes in the independence of coin flips, rather than someone who holds the Gambler’s fallacy.

If someone did that study & got the result-- high Numeracy taking the bet more often than low--we'd have "convergent validation" of the inference I am drawing from M&S's paper, which I now am treating (for evidentiary purposes) as part of a case study in how those who know a lot can make distinctive -- spectacular, colossal even! -- errors.

But my whole point is that M&S's paper, by flushing this real-world mistake out of hiding, convergently validates the experimental work of Peters et al.

But for sure, more experiments should be done! Because empirical proof never "proves" anything; it only gives us more reason than we otherwise would have had for believing one thing rather than another to be true....

Two last points: 

1.  The gambler's fallacy is still a fallacy! Coin tosses are independent events; getting "heads" on one flip doesn't mean that one is "less likely" to get "heads" on the next.

The gambler's fallacy concerns the tendency of people mistakenly to treat independent events as non-independent when they make predictions about future events.

The " 'hot hand fallacy' fallacy" -- let's call it--involves expecting the probability that binary outcomes will immediately recur is the same as the probability that they will occur on average in the sample.  That's a logical error that reflects failing to detect a defect in the inference strategy reflected in the "hot-hand" studies.

Indeed, the same kind of defect in reasoning can explain why the gambler's fallacy is so prevalent -- or at least M&S surmise.

In the world, when we see independent events occurring, we observe or collect data in relatively short bursts -- let's call them “attention span” units (M&S present some data on self-reports of the longest series of coin tosses observed: the mean was a mere 6; strange, because I would have guessed every person flipped a coin at least 1000 times in a row at some point during his or her childhood!). If, in effect, we "sample" all the sequences recorded during “attention span” units, we'll observe that the relative frequency with which an outcome recurred immediately after occurring was generally lower than the probability of its occurring on average.

That's correct.

But it's not correct to infer from such experience that, in any future sequence, the probability of that event recurring will be lower than the probability of it occurring in the first place.  That's the gambler's fallacy.

The "'hot hand fallacy' fallacy" invovles not noticing that correcting the logical error in the gambler's fallacy does not imply that if we examine a past sequence of coin tosses, we should expect to observe that "heads" came up just as often immedately after one or more "tails" than it did immediately after one or more "heads."

Ack! I find myself not believing this even though I know it's true!

2. Is "motivated numeracy" an instance of a bias that is more prevalent among high Numeracy persons?

That depends!

"Motivated Numeracy" is the label that my collaborators-- who include Ellen Peters -- & I give to the tendency of individuals who are high in Numeracy to display a higher level of motivated reasoning in analyzing quantitative information.  We present experimental evidence of this phenomenon in the form of a covariance-detection task in which high-Numeracy partisans were more likely to construe (fictional) gun control data in a manner consistent with their ideological predispositions than low-Numeracy partisans.

The reason was that the low-Numeracy subjects couldn't reason well enough with quantitative information to recognize when the data were and weren't consistent with their ideological predispositions.  The high-Numeracy subjects could do that, and so never failed to credit predisposition-affirming evidence or to explain away predisposition-confounding evidence.
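
For concreteness, the covariance-detection logic looks like this: subjects see a 2x2 table of outcomes and have to compare ratios, rather than raw cell counts, to say what the data show. A minimal sketch, with cell counts assumed purely for illustration (not the actual figures from the study's stimuli):

```python
# A 2x2 covariance-detection problem (cell counts assumed for illustration).
# Rows: cities that banned carrying concealed handguns vs. those that didn't;
# columns: crime decreased vs. crime increased.
ban_decrease, ban_increase = 223, 75
no_ban_decrease, no_ban_increase = 107, 21

# The correct strategy is to compare proportions, not raw counts.
p_ban = ban_decrease / (ban_decrease + ban_increase)
p_no_ban = no_ban_decrease / (no_ban_decrease + no_ban_increase)

print(round(p_ban, 2), round(p_no_ban, 2))  # 0.75 vs. 0.84
# The biggest raw number sits in the "ban & crime decreased" cell, but the
# no-ban cities actually did better -- a result detectable only by doing
# the ratio comparison.
```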

But whether that's a bias depends on what you think people are trying to do when they reason about societal risks.  If they are trying to get the "right answer," then yes, Motivated Numeracy is a bias.

But if they are trying to form identity-congruent beliefs for the sake of conveying their membership in and loyalty to important affinity groups, the answer is no; Motivated Numeracy is an example of how one can do an even better job of that form of rational information processing if one is high in Numeracy.

I think the latter interpretation is right ... I guess ... hmmmm.... "Now I'm not sure anymore..."

But I am sure that the "hot hand" study authors, and all those who have celebrated their studies, were really trying to get the right answer.

They didn't, because their high Numeracy tempted them to error.

p.s. I'll bet $10^3 against this, but if someone proves the paper wrong, the example of high Numeracy subjects being led to error by an argument only they could be seduced by still holds!

Tuesday
Jul072015

Three points about "believing in" evolution ... a travel report

the colored bars are 0.95 CIs!!

0. I was ambushed!

Emlen Metz and Michael Weisberg, my fellow panelists at the International Society for the History, Philosophy and Social Studies of Biology, were lying in wait and bombarded me with a fusillade of counter-proofs and thoughtful alternative explanations!

For such treachery, they should, at a minimum, compensate me by sharing summaries of their own presentations with the 14 billion readers of this blog, so that subscribers can see for themselves the avalanche of critical reason that crashed down on me.  I am working to exact this settlement.

For my part, I made three points about “believing in” evolution:  one empirical, one political, and one philosophical. (Slides here.)

1. The empirical point was that what people "believe" about evolution doesn’t measure what they know about science but rather expresses who they are, culturally speaking. 

Not a new point for me; I relied primarily on data from the Measurement Problem study to illustrate it.

Whipping out my bewildering array of multi-colored item response profiles, I showed that the probability of correctly responding to the NSF Science Indicators Evolution item—“human beings evolved from an earlier species of animals—true or false?”—doesn’t vary in relation to people’s scores on the Ordinary Science Intelligence (OSI) assessment. Instead the probability of responding correctly depends on the religiosity of the test taker.

Indeed, using factor analysis, one can see that the Evolution item doesn’t share the covariance structure of the items that indicate OSI but instead shares that of the items that indicate religiosity.

Finally, I showed how it’s possible to unconfound the Evolution item’s measurement of identity from its measurement of “science literacy” by introducing it with the phrase, “According to the theory of evolution . . . .”

At that point, religious test takers don’t have to give a response that misrepresents who they are in order to demonstrate that they know science’s understanding of the natural history of human beings.  As a result, the gap between the item responses of non-religious and religious respondents, conditional on their OSI scores, essentially disappears.

Unconfounding identity and knowledge, I noted, is essential not only to assessing understanding of evolutionary science but also to imparting it. The classic work of Lawson and Worsnop (1992; see also Lawson 1999), I told the audience, demonstrates that kids who say they “don’t believe in” evolution can learn the essential elements of the modern synthesis just as readily as kids who say they “do believe it” (and who are otherwise not any more likely to be able to give a cogent account of natural selection, genetic variance and random mutation).

But because what one says one “believes” about evolution is in fact not an indicator of knowledge but an indicator of identity, teaching religiously inclined students how the theory of evolution actually works doesn’t make them any more likely to profess “acceptance” of it.

Indeed, Lawson stresses that the one way to assure that more religiously inclined students won’t learn the essential elements of evolutionary science is to make them perceive that the point of the instruction is to change their “beliefs”: when people are put in the position of having to choose between being who they are and knowing what’s known by science, they will predictably choose being who they are, and will devote all of their formidable reasoning proficiencies to that.

The solution to the measurement problem posed by people's "beliefs in" evolution, then, is the science communication disentanglement principle: “Don’t make reasoning, free people choose between knowing what’s known & being who they are.”

2.  The political point I made was the imperative to enforce the science communication disentanglement principle in every domain in which citizens acquire and make use of scientific information.

Liberal market democracies are the form of society distinctively suited both to the generation of scientific knowledge and to the protection of free and reasoning individuals' formation of their own understandings of the best way to live.

In my view, the citizens of such states have the individual right to enjoy both of these benefits without having to trade off one for the other.   To secure that right, liberal democratic societies must use the science of science communication to repel the dynamics that conspire to make what science knows a focal point for cultural status competition (Kahan in press).

Here I focused on the public controversy over climate change.

Drawing on Measurement Problem and other CCP studies (Kahan, Peters, et al. 2012), I showed that what “belief in” human-caused climate change measures, too, is not what people know but who they are.

The typical opinion poll item on “belief in” climate change, this evidence suggests, is not a valid measure of knowledge either, but rather an indicator of the sort of latent cultural identity indicated variously by cultural cognition worldview items and conventional “right-left” political outlook ones.

People with those identities don’t converge but rather polarize as their OSI scores increase.

Using techniques derived from unconfounding identity and knowledge in the assessment of what people understand about evolution, one can fashion an assessment instrument—the “Ordinary Climate Science Intelligence” (OCSI) test—that unconfounds identity from what people understand about the causes and consequences of climate change.

They don’t understand very much, it turns out, but they get the basic message that climate scientists are conveying: human activity is causing climate change and putting all of us at immense risk.

Nevertheless, those who score the highest on the OCSI are still the most politically polarized on whether they “believe in” human-caused climate change—because the question they are answering when they respond to a survey item on that is “who are you, whose side are you on?”

To enable people to acquire and make use of the knowledge that climate scientists are generating, science communication researchers are going to have to do the same sort of hard & honest work that education researchers did to figure out how to disentangle knowledge of evolutionary science from identity.

But they're going to need to figure out how to do that not only in the classroom but also in the democratic political realm.  The science communication environment is now filled with toxic meanings that force people in their capacity as democratic citizens to choose between knowing what’s known about climate and being who they are.

Because individuals forced to make that choice will predictably -- rationally -- use their reasoning proficiencies to express their identities, culturally diverse citizens will be unable to make collective decisions informed by what science knows about climate change until the disentanglement project is extended to our public discourse.

Indeed, conflict entrepreneurs (posing as each other's enemy as they symbiotically feed off one another's noxious efforts to stimulate a self-reinforcing atmosphere of contempt among rival groups) continue to pollute our science communication environment with antagonistic cultural meanings on evolution as well. 

Those who actually care about making it possible for diverse citizens to be able to know what’s known by science without having to pay the tax of acquiescing in others' denigration of their cultural identities are obliged to oppose these tapeworms of cognitive illiberalism no matter “whose side” they purport to be on in the dignity-annihilating, reason-enervating cultural status competition in which positions on climate change & evolution have been rendered into tribal totems.

3. The philosophical point was the significance of cognitive dualism.

Actually, cognitive dualism is not, as I see it, a philosophical concept or doctrine. 

It is a conjecture, to be investigated by empirical means, about what is “going on in the heads” of those who—like the Pakistani Dr and the Kentucky Farmer—both “believe” and “disbelieve” in facts like human evolution and human-caused climate change.

But what the tentative and still very formative nature of the conjecture shows us, in my view, is just how much the disentanglement project is in need of philosophers' help.

In the study of “beliefs” in evolution, cases like these are typically assumed to involve a profound cognitive misfire. 

The strategies skillful science teachers use to disentangle knowledge from identity in the classroom, far from being treated as a solution to a practical science communication dilemma, are understood to present us with another “problem”—that of the student who “understands” what he or she is taught but who will not “accept” it as true.

In my view, the work that reflects this stance is failing to engage meaningfully with the question of what it means to "believe in" evolution, climate change etc.

The work I have in mind simply assumes that “beliefs” are atomistic propositional stances identified by reference to the states of affairs (“natural history of humans,” “rising temperature of the globe”) that are their objects.

In this literature, there is no cognizance of an alternative view—one with a rich tradition in philosophy (Peirce 1877; Braithwaite 1932, 1946; Hetherington 2011)—of “beliefs” as dispositions to action.

Haven't figured out yet what to get Kentucky Farmer for X-mas? Here's a hint!

On this account, beliefs as mental objects always inhere in clusters of intentional states (emotions, values, desires, and the like) that are distinctively suited for doing particular things.

The Pakistani Dr’s belief in evolution is integral to the mental routines that enable him to be (and take pride in being) a Dr; his disbelief in it is part of a discrete set of mental routines that he uses to be a member of a particular religious community (Everhart & Hameed 2013).  The Kentucky Farmer disbelieves in “human caused climate change” in order to be a hierarchical individualist but believes in it—indeed, excitedly downloads onto his iPad custom-tailored predictions based on the same "major climate-change models ... under constant assault by doubters"—in order to be a successful farmer.

If as mental objects “beliefs” exist only as components of more elaborate ensembles of action-enabling mental states, then explanations of the self-contradiction or "self-deception" of the Pakistani Dr, the Kentucky Farmer—or of the creationist high school student who wants to be a veterinarian but "loves animals too much" to simply "forget" what she has learned about natural selection in her AP biology course—are imposing a psychologically false criterion of identity on the contents of their minds.

So long as there is no conflict in the things that these actors are enabled to do with the clusters of mental states in which their opposing stances toward evolution or toward climate change inhere, there is no "inconsistency" to explain.

There is also no “problem” to "solve" when actors who use their acceptance of what science knows to do what scientific knowledge is uniquely suited for don't "accept" it in order to do something on which science has nothing to say.  

Unless the "problem" is really that what they are doing with nonacceptance is being the kind of person whose behavior or politics or understandings of the best way to live bother or offend us.  But if so, say that -- & don't confuse matters by suggesting that one's goals have anything to do with effecitvely communciating science.

Or at least that is what the upshot of cognitive dualism would be if in fact it is the right account of the Pakistani Dr, and the Kentucky Farmer, and the many many many other people in whose mental lives such "antinomies" coexist.

Of course, it doesn’t bother me that cognitive dualism is not now the dominant explanation of “who believes what” about evolution or climate change and “why.”

But what does bother me is the innocence of those who are studying these phenomena of the very possibility that the account of "belief" of which cognitive dualism is a part might account for what they are investigating—a state of inattention that assures that they will fail to conduct valid empirical research, and fail to reflect consciously on the moral significance of their prescriptions.

This is exactly the sort of misadventure that philosophers ought to protect empirical researchers from experiencing, I told the roomful of curious and reflective people who did us the honor of attending our session and sharing their views on our research.

And for the first time in all my experiences introducing people to the Pakistani Dr and the Kentucky Farmer, no one seemed to disagree with me . . . .

References 

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hetherington, S.C. How to Know: A Practicalist Conception of Knowledge (J. Wiley, Chichester, West Sussex, U.K.; Malden, MA, 2011).


Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. What is the science of science communication? J. Sci. Comm. (in press).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Lawson, A.E. A scientific approach to teaching about evolution & special creation. The American Biology Teacher, 266-274 (1999).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).

Monday
Jul062015

In Montreal, asking philosophers (& others) to help make sense of Pakistani Drs & Kentucky Farmers

As the 1.4 x 10^10 regular readers of this blog know, I've committed this summer to visiting every country in the world to introduce people to the Pakistani Dr and the Kentucky Farmer.  

So I've done UK (England, Wales) & France.  

Today is Canada.  

Next week Macao (CCP headquarters-- ironic that I haven't even done a talk there on this topic!).  

Then N. Korea. Then Netherlands Antilles, & after that Las Vegas.  

Then I'll be all done!

I'll be giving today's talk as my contribution to this really cool panel:

This is a great venue for discussing the Pakistani Dr & Kentucky Farmer b/c I think philosophers really need to get in on this issue.  I'm convinced the empirical study of "belief/disbelief in" both evolution & climate change is being impeded by a failure to engage reflectively with the concept of "belief," & that philosophers are best situated to help empiricists here see that.

Maybe I'll have something more to say about this event "tomorrow."

Anyway, turns out Manny & Krista are attending this conference, too!

Total coincidence--they just came b/c Krista really "likes learning about this stuff" and because Manny had nothing else to do, having refused to get a job for the summer to protest the failure of the U.S. to furnish free universal college education.

 

Friday
Jul032015

Ambivalence about "messaging"

State of the art "messaging" 2008From correspondence with a reflective person & friend who asked my opinion on how one might use "message framing" to promote public engagement with specific climate-mitigation policies:

A couple of things occur to me; I hope they are not completely unhelpful.

1. I think one has to be cautious about both the external & operational validity of "messaging" & "framing" studies in this area.  

The external validity concern goes to the usual problem w/ measuring public opinion on any particularly specific public policy proposal: there's likely no opinion to measure.  

People have a general affective orientation toward climate change. You'll know you are measuring it if the responses they give to what you are asking them are highly correlated with what they say they "believe" about climate change. 

But people know essentially nothing about climate change in particular.  For or against it (as it were), they will say things like "human carbon emissions are expected to kill plants in greenhouses." Seriously.

Accordingly, if you start asking them specific things about policy, very soon you'll no longer be measuring the "thing" inside them that is their only true attitude toward climate change.  This is what makes it possible [for some researchers] to say ridiculous things like "70% of Republicans want to regulate carbon emissions!" when only 25% of Republicans say "yes" to the question "are human beings causing climate change."  What’s being measured with the policy questions is a non-opinion.

In sum, the point is, as soon as you get into specifics about policy, you'll be very uncertain what you are measuring, & as a result whether you are learning something about how opinion works in the real world.

I'm not saying that it's impossible to do studies like the one you are proposing, only that it's much easier to do invalid than valid ones.  Likely you are nodding your head saying "yes, yes, I know..."

The "operational validity" point has to do with the translation of externally valid lab studies of how people process information on these issues into real-world communication materials that will effectively make use of that knowledge.  

To pick on myself for a change, I'm positive that our framing study on "geoengineering" & open-minded assessment of climate science has "zero" operational validity.  

I do think it was internally & externally valid: that is, I think the design supported the inference we were drawing about the results we were observing in the experiment, and that the experiment was in turn modeling a mechanism of information-processing that matters for climate-science communication outside the lab.

But I don't think that anything we learned in the study supports any concrete form of "messaging." For sure it would be ridiculous, e.g., to send our study stimulus to every white hierarchical individualist male & expect climate skepticism to disappear!  

There almost certainly is something one can do in the real world that will reproduce the effects that we observed in the lab.  But what that is is something one would have to use empirical methods, conducted in the field & not the lab, to figure out.

Knowing you, you are likely planning to test communication materials that will actually be used in the real world, and in a way that will give you & others more or less confidence that one or another plausible strategy will work (that's what valid studies do, of course!).

But I feel compelled to say all of this just b/c I know so many people don't think the way you do -- & b/c I am genuinely outraged at how many people who study climate-science communication refuse to admit what I just said, and go around making empirically insupportable pronouncements about "what to do" (here’s what they need to do: get off their lazy asses & do some field research).

[Image: Definitely a PR coup for the organization that dreamed up this plan, but what is the "message" people get when they read (or are told about) a NY Times story that applauds a clever strategy to "message" them?]

2.  I myself have become convinced that "messaging" is not relevant to climate-change science communication.  Or at least that the sort of "messaging" people have in mind when they do framing studies, & then propose extravagant social marketing campaigns based on them, is not.

For "messaging" to work, we have to imagine either one of 2 things to be true.  The first is that there is some piece of information that people are getting "wrong" about climate change & will get right if it is "framed" properly.

But we know that there is zero correlation between people's positions on climate change & any information relating to it.  Or any information relating to it other than "this is my side's position, & this theirs."  And they aren't wrong at all, sadly, about that information.

[Image: State of the art 2014...]

The second thing we might imagine, then, is that a "messaging" campaign featuring appropriately selected “messengers” could change people's assessment of what "their side's" position is.

I don't believe it.  

I don't believe it, first, because people aren't that gullible: they know people are trying to shape that understanding via "messaging" (in part b/c the people doing it are foolish enough to discuss their plans within earshot of those whose beliefs they are trying to “manage” in this way).

I don't believe it, second, b/c it's been tried already & flopped big time.

There have been multiple "social marketing campaigns" that say, "see? even Republicans like you believe in climate change & want to do something! Therefore you should feel that way or you'll be off the team!"

There has been zero purchase.  Probably b/c people just aren't gullible enough to believe stuff like that when they live in a world filled with accurate information about what "their side" "believes."

To make progress, then, you have to go into their world & show them something that's true but obscured by the pollution that pervades our science communication environment: that "their side" already is engaging climate change in a way that evinces belief in the science & a resolve to do something.

That's the lesson of SE Fla "climate political science ..."  I've seen that in action.  It really really really does work.

But it really really really doesn't satisfy the motivations of those who want to use the climate change controversy to gratify their appetite to condemn those who have different cultural values from theirs as evil and selfish.  So its successes get ignored, its power to reconfigure the political economy of climate change in the U.S. never tapped.

As always, & as you know, this is what I think for now.  One knows nothing unless one knows it provisionally w/ a commitment to revising based on new evidence. You are the sort of person I know full well will produce evidence, on a variety of things, that will enable me to update & move closer to truth.

But for now, I think the truth is that "messaging" (as normally understood) isn't the answer.

Thursday
Jul022015

For the 10^6th time: GM foods are *not* a polarizing issue in the U.S., plus an initial note on Pew's latest analysis of its "public-vs.-scientists" survey

Keith Kloor asked me whether a set of interesting reflections by Mark Lynas on social and cultural groundings of conflict over GM food risks in Europe generalize to the U.S.

The answer, in my view, is: no.

In Europe, GM food risk is a matter of bitter public controversy, of the sort that splinters people of opposing cultural outlooks (Finucane 2002).

But as scholars of risk perception are fully aware (Finucane & Holup 2005), that ain't so in the U.S.

Consider:

These data come from the study reported in Climate-Science Communication and the Measurement Problem, Advances in Pol. Psych. (2015).

But there are tons more where this came from.  And billions of additional blog posts in which I've addressed this question!

I'm pretttttttttty sure, in fact, that Keith was "setting me up," "throwing me a softball," "yanking my chain" etc-- he knows all of this stuff inside & out.

One of the things he knows is that general population surveys of GM food risks in the US are not valid.

Ordinary Americans don't have any opinions on GM foods; they just eat them in humongous quantities.

Accordingly, if one surveys them on whether they are "afraid" of "genetically modified X" -- something they are likely chomping on as they are being interviewed but in fact don't even realize exists-- one ends up not with a sample of real public opinion but with the results of a weird experiment in which ordinary Americans are abducted by pollsters and probed w/ weird survey items being inserted into places other than where their genuine risk perceptions reside.

Pollsters who don't acknowledge this limitation on public opinion surveys -- that surveys presuppose that there is a public attitude to be measured & generate garbage otherwise (Bishop 2005) -- are to legitimate public opinion researchers what tabloid reporters are to real science journalists.

A while back, I criticized Pew, which is not a tabloid pollster operation, for resorting to tabloid-like marketing of its own research findings after it made a big deal out of the "discrepancy" between "public" and "scientist" (i.e., AAAS member) perceptions of GM food risks.

So now I'm happy to note that Pew is doing its part to try to disabuse people of the persistent misconception that there is meaningful public conflict over GM foods in the U.S.

It issued a supplementary analysis of its public-vs.-AAAS-member survey, in which it examined how the public's responses related to individual characteristics of various sorts:

As this graphic shows, neither "political ideology" nor "religion" -- two characteristics that Lynas identifies as important for explaining conflict over GM foods in Europe -- is meaningfully related to variance in perceptions of GM food risks in the U.S.

Pew treats "education or science knowledge" as having a "strong effect." 

I'm curious about this.

I know from my own analyses of GM food risks that even when one throws every conceivable individual predictor at them, only the tiniest amount of variance is explained.

In other words, variation is mainly noise.

[Image: click for regression analysis of GM food risk perceptions ... yum!]

One can see from my own data above that science comprehension, as measured by the "ordinary science intelligence test," reduces risk perceptions (for both right-leaning and left-leaning respondents).

But the pct of variance explained (R^2) is less than 2% of the total variance in the sample. It's a "statistically significant" effect but for sure I wouldn't characterize it as "strong"!
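
For the code-minded: here's what that point looks like in a minimal simulation (Python; the sample size & coefficient are invented for illustration, not the actual CCP or Pew numbers):

```python
# Synthetic illustration only: the n and the coefficient are invented,
# not the actual CCP or Pew data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
osi = rng.standard_normal(n)          # stand-in for science comprehension
noise = rng.standard_normal(n)
# construct a risk-perception score correlating ~ -0.14 with osi
risk = -0.14 * osi + np.sqrt(1 - 0.14**2) * noise

r, p = stats.pearsonr(osi, risk)
print(f"r = {r:.2f}, p = {p:.2g}")          # p lands far below .05 at this n
print(f"variance explained = {r**2:.1%}")   # ~2%: "significant" but weak
```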

I looked at Pew's own account of how it determined its characterizations of effects as "strong" & have to admit I couldn't understand it.

But with its characteristic commitment to helping curious and reflective people learn, Pew indicates that it will furnish more information on these analyses on request.

So I'll make a request, & figure out what they did.  Wouldn't be surprised if they figured out something I don't know!

Stay tuned...

Refs

Bishop, G.F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Finucane, M.L. Mad cows, mad corn and mad communities: the role of socio-cultural factors in the perceived risk of genetically-modified food. P Nutr Soc 61, 31-37 (2002). 

Finucane, M.L. & Holup, J.L. Psychosocial and cultural factors affecting the perceived risk of genetically modified food: an overview of the literature. Soc Sci Med 60, 1603-1612 (2005).

 

Wednesday
Jul012015

Two publics, two modes of reasoning, two forms of information in science communication: a fragment . . .

From something I'm working on . . .

Members of the public vary in the mode of reasoning they use to engage information on decision-relevant science. To be sure, many—including not just official decisionmakers but leaders of important stakeholder groups, media professionals, and also ordinary citizens of high civic engagement—apply their reason to making informed judgments about science content.  Evidence-based methods (Kahan 2014; Han & Stenhouse 2014) are essential to anticipating how affect, numeracy, and cultural cognition interact when these "proximate information evaluators" assess scientific information (Peters, Burraston & Mertz 2004; Dieckmann, Peters & Gregory 2015; Slovic, Finucane et al. 2004; Kahan, Peters et al. 2012).

Most members of the public, however, use a different reasoning strategy to assess the validity and consequence of decision-relevant science. Because everyone (even scientists, outside of their own domain) must accept as known by science much more than they could possibly comprehend on their own, individuals—all of them—become experts at using social cues to recognize valid science of consequence to their lives (Baron 1993).

The primary cue that these "remote information evaluators" use consists not in anything communicated directly by scientists or other experts. Instead, it consists in the confidence that other ordinary members of the public evince in scientific knowledge through their own words and actions. The practical endorsement of science-informed practices and policies by others with whom individuals have contact in their everyday lives and whom they regard as socially competent and informed furnishes ordinary members of the public with a reliable signal that relying on the underlying science is “the sensible, normal thing to do” (Kahan 2015).

Much of the success of the Southeast Florida Regional Climate Compact in generating widespread public support for the initiatives outlined in its Regional Climate Action Plan reflects the Compact’s success in engaging this mode of public science communication. Because so many diverse private actors—from business owners to leaders of prominent civic organizations to officers in neighborhood resident associations—participated in the planning and decisionmaking that produced the RCAP, the process the Compact used created a science communication environment amply stocked with actors who play this certifying role in the diverse opinion-formation communities in which "remote evaluators" exercise this rational form of information processing (Kahan 2015).

As was so in Southeast Florida, evidence-based methods are essential for effective transmission of information to "remote evaluators." In particular, communicators must take steps to protect the science communication environment from contamination by antagonistic cultural meanings, which predictably disable the rational faculties ordinary citizens use to recognize the best available evidence (Kahan 2012). . . .

References

Baron, J. Why Teach Thinking? An Essay. Applied Psychology 42, 191-214 (1993).

Dieckmann, N.F., Peters, E. & Gregory, R. At Home on the Range? Lay Interpretations of Numerical Uncertainty Ranges. Risk Analysis (2015).

Han, H. & Stenhouse, N. Bridging the Research-Practice Gap in Climate Communication Lessons From One Academic-Practitioner Collaboration. Science Communication, 1075547014560828 (2014).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Making Climate-Science Communication Evidence-Based—All the Way Down. in Culture, Politics and Climate Change (ed. M. Boykoff & D. Crow) 203-220 (Routledge Press, New York, 2014).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility. Risk Analysis 24, 1349-1367 (2004).

Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

Tuesday
Jun302015

Self-deception at L'université Toulouse: an encore!

I offered a report on my presentation at the fun "self-deception" symposium sponsored by the Institute for Advanced Study at L'université Toulouse Capitole (UT Capitole). I also described my ambivalence toward characterizing identity-protective cognition--the species of motivated reasoning that is at work in public conflict over societal risks & like facts-- as a form of "self-deception."

These reflections have now inspired/provoked a report from another of the symposium participants, Joël Van der Weele, who presented really cool study results on the dynamics of self-deception in job interviewing.  In addition to summarizing the study highlights, Joël's post widens the lens to take in how "self-deception" has figured more generally in the study of behavioral economics.  Having read & reflected on the post, I would definitely now qualify my own ambivalence. I think "self-deception" fits more comfortably when the "self" is the object as well as the subject of the asserted "deception" than it does when the objects are societal risks.... But I'm perplexed, which is good!

Strategic self-deception

Joël van der Weele

 (with thanks to Peter Schwardmann for input)

Like Dan, I attended the workshop on self-deception in Toulouse, and like Dan, I will focus on my own talk. Unlike Dan, my viewpoint is that of a behavioral economist, with associated convictions and blind spots, of which I am happy to be reminded.

[Image: Joël van der Weele, steely resisting self-deception]

Most of the empirical literature on motivated cognition and self-deception is focused on establishing the existence of this phenomenon. Social psychologists in particular have made great progress in showing that people will systematically bias their beliefs and their information processing in a self-serving manner, and end up believing that they are smarter, nicer and more beautiful than they really are, and that the world is a safer, more just and more manageable place than it really is.

As usual, behavioral economists arrived in this research area a few decades after the psychologists, and are now confirming some of these results in economic contexts, using their own experimental and theoretical paradigms. While they have questioned whether some of the overconfidence evidence is really inconsistent with rationality (Benoît and Dubra, 2011), they also find that much of it seems to be a truly self-serving bias.

At the workshop, several talks were dedicated to summarizing or adding to the evidence of when and where this kind of motivated cognition may occur, for example in the domain of information seeking about stock performance (George Loewenstein), scientific but politicized beliefs about gun control and climate change (Dan Kahan), trust in others (Roberto Weber), and self-inferences from test scores (Zoë Chance).

At the same time, economic studies are showing that overconfidence is expensive. Both in real-world data (Barber and Odean, 2000) and experiments (Biais et al. 2005), traders who are overconfident tend to trade too much and make less money. There is more anecdotal evidence from other domains: I am sure that you all know people who think they are really good at something they really are not that good at, with embarrassing or painful results.

Given these costs, why would people deceive themselves? A popular account in both psychology and economics is that people simply like to think well of themselves, or like to think that things turn out well for them in the future, but this is not a very satisfactory explanation. Why wouldn’t evolution or the market take care of those sentimental souls in favor of more hard-boiled types? Where, in other words, are the material benefits that self-deception can bring?

The answer to this question is still mostly in the hands of theorists. Roland Bénabou, who gave the opening talk at the conference, has, together with his co-author Jean Tirole, proposed an explanation in terms of motivation (Bénabou and Tirole, 2002). If people suffer from laziness or have other difficulties in seeing through their plans, overconfidence may be a helpful `anti-bias’ that gets them out of their seat and into action. I don’t know of experiments testing this idea, but if you can help me out I am happy to hear of some. 

Another influential idea has been put forward by a biologist, Robert Trivers, in several publications since the mid ‘80s (most prominently Von Hippel and Trivers, 2011), including this book. Trivers argues that self-deception enables you to better deceive others and thus achieve social gain. If you truly believe you are great, you will do a much better job at convincing others that you are. This will help you impress potential sexual partners, achieve sales, land jobs, etc. Self-deception is useful because if you are not aware of lying about being great, you’ll be less likely to feel bad about your deception, give yourself away or face retribution in case of subsequent failure to live up to your proclaimed greatness.

This hypothesis is strikingly consistent with the folk wisdom peddled in the popular self-help literature. Just search for “success” and “confidence” on amazon, and you will find a score of books telling you that if you just believe in yourself (no matter the evidence), riches will soon be yours. While this may be true of the authors of these books, the kind of evidence that is cited in this literature is not very convincing to someone trained in scientific inference (“Look at Person X, she’s confident and rich. So if you become as confident as X, you’ll sure be rich.”).

So my co-author Peter Schwardmann and I decided to subject the folk wisdom to a proper experimental test. We got about 300 people to the lab to perform a cognitively challenging task.  We then split the group in two. Our treatment group was told that they would be able to earn about 15 euros ($17) if they could persuade others in a face-to-face “interview” that they were amongst the top performers in the task. The control group was not told anything.

Before actually conducting the interviews, participants in both groups then privately reported their beliefs about the likelihood of being in the top half of performers on the task, where we paid them for submitting an accurate belief. We find that treatment and control groups are both overconfident on average, with the average belief of being in the top half being 60%, i.e. 10 percentage points higher than the true number.
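
One standard incentive-compatible way to pay subjects for “an accurate belief” of this kind is a quadratic (Brier) scoring rule, under which reporting one’s true subjective probability maximizes expected payment. A minimal sketch, with an invented 3-euro stake (the experiment’s actual mechanism and stakes may differ):

```python
def quadratic_score_payment(reported_p: float, in_top_half: bool,
                            stake: float = 3.0) -> float:
    """Pay more the closer the reported probability is to the outcome.

    This is a 'proper' scoring rule: truthful reporting maximizes
    expected payment. The 3-euro stake is invented for illustration.
    """
    outcome = 1.0 if in_top_half else 0.0
    return stake * (1.0 - (reported_p - outcome) ** 2)

# A subject who reports a 60% chance of being in the top half:
print(quadratic_score_payment(0.60, in_top_half=True))   # 2.52
print(quadratic_score_payment(0.60, in_top_half=False))  # 1.92
```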

In line with Trivers’ hypothesis, the shadow of future interactions increases overconfidence by about 50%, from 8% to 12%. This effect does not go away after we give participants some noisy information about their actual performance, as the prospect of future deception responsibilities also reduces responsiveness to new information about performance. Thus, anticipation of future deception opportunities indeed causes a more optimistic self-assessment amongst our participants, a case of strategic self-deception.

Our next question was whether self-deception paid off in the interview phase, i.e. whether increased confidence made a participant more likely to be flagged as a good performer, conditional on real performance. The interactions followed a speed-dating protocol, where we promoted the control group to interviewers, tasked with assessing the performance of the treatment group.

The results in this phase of the experiment crucially depend on the details of the environment. We had given some of the interviewers a short tutorial in lie-detection. It turned out that these interviewers were pretty good at spotting the true good performers, and the self-deceptive strategies of the interviewees were ineffective. Against untrained interviewers, however, the average level of self-deception in our experiment (i.e. the increase in overconfidence of our treatment group) led to a substantial increase in the chance of being flagged as a top performer and the associated earnings.

All of this is somewhat preliminary, as we are currently refining results and putting them on paper. As far as we know, there are no other studies showing causal evidence for strategic self-deception in social contexts, although some are suggestive of it (Burks et al. 2013, Charness et al. 2014). If this finding holds up in a wider array of settings, we may find that the pop psychology literature is not that wrong after all.

References

Barber, B. M. and T. Odean. 2000. "Trading is hazardous to your wealth: Common stock investment performance of individual investors", Journal of Finance 55, 773-806.

Bénabou, Roland and Jean Tirole. 2002. “Self-confidence and Personal Motivation”, Quarterly Journal of Economics, 117:3, 871-915.

Benoît, J.P. and J. Dubra. 2011. “Apparent Overconfidence”, Econometrica, 79:5, 1591-1625.

Biais, B., D. Hilton, K. Mazurier and S. Pouget. 2005. “Judgemental overconfidence, self-monitoring, and trading performance in an experimental financial market”, Review of Economic Studies, 72:2, 287-311.

Burks, S. V., J. P. Carpenter, L. Goette and A. Rustichini. 2013. “Overconfidence and Social Signaling”, Review of Economic Studies, 80:3, 949-983.

Charness, G., A. Rustichini, and J. van de Ven. 2014. “Self-confidence and strategic behavior”, Amsterdam University mimeo.

Von Hippel, W. and R. Trivers. 2011. “The evolution and psychology of self-deception”, Behavioral and Brain Sciences, 34:1, 1-16.

 

Monday
Jun292015

On the provisionality & conjectural status of claims about Pakistani Drs & Kentucky Farmers

This is a response to a friend & scholar who wrote to me with some reactions to "yesterday's" post on identity-protective reasoning & self-deception.  In the response, I found myself being clearer than I usually am in my posts about the tentative & conjectural status of the views I have been advancing about "cognitive dualism"--the state in which an actor appears to entertain opposing states of belief within bundles or ensembles of action-enabling mental routines that are summoned for discrete activities.

So I'm posting this portion of my response, both to remedy the failure to be as consistently clear as I should be that "cognitive dualism" is a conjecture and to create a "location" for this qualification when I have occasion to discuss this concept in the future & wish to emphasize what my attitude actually is about its status as an explanation for certain intriguing phenomena.

* * *

Thanks for the feedback & by all means feel free to share any portions of the post with others who you think might find the ideas expressed & arguments advanced to be of value. 

On the "belief/disbelief" issue: I should start by saying that my views on this are certainly very provisional. This is always true, at least for anyone who knows how empirical proof works and is committed to treating it as his or her guide for enlarging knowledge.  But in this case, my intuitions are way out in front of my evidence; I am eager to lessen the gap.

I am drawn to this by two types of observations. The first is the results of a study in which I tried to develop a climate-change knowledge assessment that unconfounded the "affective identity" measured by most questions about "belief in" climate change from genuine knowledge.  The results of that study suggested, not surprisingly, that there is essentially no correlation between understanding of the basic mechanisms of climate science (ones relating to causes or consequences) and "beliefs in" it (whether it is happening, human caused, etc.); the latter are simply indicators of identity of the same nature as responses to political outlook questions.

The thing that disoriented me was what to make of the finding that the individuals who scored highest on the assessment (& who also scored highest on a general science knowledge assessment) were also the most polarized. They obviously "know" what the best evidence is & yet say they "believe" or "disbelieve" in a manner that indicates their political identity.  What is going on in their heads? I asked myself this & was asked the question over & over again by many curious & reflective people.

So I tried to come up with a taxonomy of explanations, one of which was the "cognitive dualism" explanation.

This account -- which is based on various general sources on the nature of belief & action but also on specific investigations of "disbelief in" evolution among people who use such knowledge professionally -- starts with a psychological conception of "beliefs" as "dispositions to action."  It then proceeds to the proposition that beliefs of opposing valences can be bundled into discrete complexes of intentional states suited for doing distinct things--like being a good Muslim & a Dr; or being a good Hierarchical Individualist & a good farmer; or being a good cosmologist & a good mother.  Yes, the "beliefs" that are elements of the discrete bundles "conflict" as propositional assertions; but as mental objects, they don't exist independently of the action-enabling ensembles of mental states of which they are a part.  If those don't conflict, then there is no practical, experienced contradiction.  The criterion of identity that is used to individuate the "beliefs" & find contradiction in them is one that is alien to the psychology of the actor & likely to confuse us about how that person's reason works.

You ask about what happens when the actions that are enabled do conflict.  I want to say that is in fact an entirely different sort of phenomenon or set of mental dynamics.  In the taxonomy, it would be "compartmentalization," which refers to the conscious, effortful separation of contradictory action-enabling beliefs & associated mental states in the mind of the same actor.  Think, e.g., of the closeted gay person who belongs to a religious group that persecutes gays.  This is a form of dissonance avoidance.  It is distinct from what happens with "cognitive dualism."  It is not what is going on, I think, in the case of the Pakistani Dr or the Kentucky Farmer (or his prospective veterinarian daughter).

It is also not what is going on, in my view, in South East Florida.  My experience there in doing field-based science communication studies is the second source of my interest in this issue.

There I see people who "don't believe in" climate change when they are being who they are as members of cultural groups, but who do when they are deliberating as citizens about what to do in their local political communities to try to protect their way of life from impending climate impacts.  I think they are enabled to do this by cognitive dualism.  But I think they are enabled to pursue the cognitive dualism strategy only as a result of astute leaders who create an environment in which there isn't conflict in being who they are and using what they know in their local political life...  This is a very profound accomplishment in my view, one I discuss in the same paper that presents the results of the climate-science comprehension assessment instrument.

I am now in the course of designing studies that bear down more on this phenomenon, that try to conjure the observations that would give us more or less reason to credit one or another of the candidate accounts (which are not limited to "cognitive dualism" & "compartmentalization") of what is "going on in their heads."

And am eager for feedback-- even if quite critical, since I agree that there is more than one plausible account of what is going on & those who are drawn to accounts different from the one I find most consistent with what I've already seen can help me to identify what sorts of observations it would be helpful to make to decide the relative strength of the competing explanations.

Thursday
Jun252015

Travel report: Self-deception at L'université Toulouse

I attended a great conference on "self-deception" sponsored by the Institute for Advanced Study at L'université Toulouse Capitole (UT Capitole).

The concept of "self-deception" encompasses forms of information-processing that predictably bias individuals' beliefs toward some self-serving end or goal.

The main theoretical/scholarly issues are two: first, whether "self-deception" is at least under some circumstances "rational" or in any case beneficial to those who engage in it; and second, whether there is a cogent psychological mechanism that could explain the feasibility of this sort of rational or "adaptive" self-deception, given that presumably it is self-defeating to pursue such a state consciously (b/c if one knows one is deceiving oneself, one will not be deceived into subscribing to the false belief).

We heard many interesting takes on these questions.

I myself gave a talk on "Motivated System 2 Reasoning." 

Slides here.

I made two principal points. 

First, contrary to the dominant decision-science and political science accounts, identity-protective cognition--the species of motivated reasoning that generates political polarization on decision-relevant science--is not a consequence of over-reliance on heuristic or "system 1" information processing; indeed, it is magnified by proficiency in one or another of the reasoning dispositions associated with the conscious, effortful information processing of "System 2."

Or so I argued on the basis of various CCP study results.

To me this suggests it is not tenable to see identity-protective reasoning as a "cognitive bias."

It is individually rational to process information on societal risks in this manner when one's own exposure to that risk is not materially affected by the correctness of one's views but where one's status in one's cultural group is very much affected by the congruity of one's beliefs with those that predominate in the group.

This is so for climate change, gun control, fracking, etc.

Of course, if everyone engages in this individually rational mode of information processing at the same time, the results can be collectively disastrous.  Under these conditions, culturally diverse citizens will fail to converge on the best currently available evidence essential to enactment of democratic laws that protect the welfare of all.

That consequence, though, won't change anyone's individual psychic incentives to process information in the personally beneficial manner associated with identity-protective cognition.  This is, as I've described it before, the "tragedy of the science communications commons."

This point aligned me pretty squarely with the economist contingent at the conference, which was mainly intent on demonstrating that "self-deception" is "rational" in the sense of welfare-maximizing at the individual level.

My second point was less in line with the views of the economists but likely more in line with those of at least some members of the psychologist contingent at the conference (& I think with Richard Holton, the lone philosopher on the program, who gave a very insightful & helpful talk).

The point was that I didn't really think it was theoretically cogent or psychologically realistic to describe identity-protective reasoning as a form of self-deception.

It's true that this mode of information processing systematically promotes formation of beliefs that aren't aligned to the best currently available evidence. (There was some pushback on this along the predictable "but that's perfectly consistent with Bayesianism..." lines.  It never ceases to astonish me how many economists & political scientists have trouble grasping the conceptual distinction between truth-convergent Bayesian updating, in which one's priors are updated on the basis of evidence the likelihood ratio or weight of which is determined on the basis of independent truth-convergent criteria; and confirmation bias, in which one uses one's priors to determine the likelihood ratio assigned to new evidence.)
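
For concreteness, here is that distinction in a minimal sketch (Python; all the priors & likelihood ratios are invented for illustration):

```python
# All priors & likelihood ratios invented for illustration.
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x likelihood ratio (Bayes' rule)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Truth-convergent updating: the likelihood ratio of the new evidence is
# fixed by criteria independent of the agent's priors.
lr = 3.0  # evidence deemed 3x likelier if the hypothesis is true
print(bayes_update(prior=0.2, likelihood_ratio=lr))       # 0.43: belief moves

# Confirmation bias: the prior polices the likelihood ratio, so evidence
# that cuts against what the agent already believes gets discounted.
def biased_lr(prior: float, lr: float) -> float:
    return lr if prior >= 0.5 else 1 / lr  # congenial? credit it; else flip it

print(bayes_update(prior=0.2, likelihood_ratio=biased_lr(0.2, lr)))  # 0.08
# With priors setting the weight of the evidence, opposing priors never
# converge, no matter how much evidence accumulates.
```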

But I don't really see why this makes identity-protective cognition an instance of "self-deception."

People do things with information other than use it to form "accurate beliefs."  One of those other things they use information for is to cultivate dispositions that evince their commitment to values that unite them with other members of affinity groups important to their identity.

Sometimes the way to evince such commitments is by holding certain beliefs about risks or other related facts that, by virtue of one or another socially and historically contingent set of events, have come to be understood as badges of membership in a particular cultural group.

If the person has no other purpose for the belief in question, then someone who forms beliefs using this style of information processing is not deceiving him- or herself at all, any more than such a person would be if the person used this form of information processing, say, to form the disposition to leave a tip at a restaurant (Frank 1988).

Or so it seems to me.

I think the reason so many scholars regard this form of information processing as "self-deception" is rooted in a psychologically implausible view of "beliefs" as isolated states of assent or nonassent to factual propositions.

The mind is not a registry of atomistic propositional stances.

It comprises a wide array of mental routines, which themselves consist of bundles of intentional states--desires, emotions, moral evaluations--each of which is suited for doing something.

As elements of these action-enabling ensembles, beliefs are dispositions to action (Peirce 1877; Braithwaite 1946).

If someone is using a style of information processing to form clusters of intentional states that reliably alert and motivate him or her to display identity-congruent societal risk perceptions in appropriate circumstances, then that person is doing with his or her reason something akin to what someone does when internalizing a disposition to conform to norms that signify being a socially competent actor.

In this sense, "beliefs" in "climate change," "evolution," "the deterrent effect of gun control laws" & the like are more akin to action-promoting attitudes than bare states of assent or non-assent to context-free factual propositions.  

If one accepts this view, none of the puzzles that vex "self-deception" need arise.  

A person who forms "beliefs" on these issues in the course of cultivating affective states that express his or her identity (Akerlof & Kranton 2000; Anderson 1993) is not "deceiving" him- or herself -- or anyone else -- about anything.

This assumes, of course, that this is what a person is doing with information relevant to forming a "belief" on a risk or like fact.

Sometimes people do other things with such beliefs-- like be good "doctors," or "farmers," or "judges" or other types of professionals.  

In that case, we might see "cognitive dualism," the condition in which the actor forms opposing states of beliefs as part of separate and discrete action-enabling ensembles of intentional states.

The Pakistani Dr "disbelieves in" evolution at home to be a good Muslim, but "believes in" it at work to be a good Dr.

The Kentucky Farmer, likewise, "disbelieves in" climate change to be a good Hierarch Individualist, in the settings where that is what he is doing; but "believes in" it when he is atop his tractor engaged in "zero tillage" or like practices that he knows will help him master the challenges that global warming is going to create for success in his occupation.

The propositional stances in the disbelief-belief couplings are indeed inconsistent if we abstract them from the action-enabling ensemble of mental states of which they are a part.  

But doing that is not faithful to the agent's psychology.  The opposing "beliefs" and "disbeliefs" don't exist apart from the action-enabling bundles of intentional states they reside in.  If those actions aren't inconsistent, then there's no "conflict" between any meaningful mental object that resides in the agent's mind.

Introduced with a discussion of the Pakistani Dr & the Kentucky Farmer, this last point -- about cognitive dualism -- predictably dominated discussion.  

I'm not sure how I feel about that.

It's interesting and fun to see people struggle with the point (especially when one invokes Kantian dualism & adds a Laplacian cosmologist who is proud of his or her children to the mix).

But if that point isn't really the point of the presentation, it can end up being a bit of a show stealer and ultimately a distraction.

That doesn't make me doubt "cognitive dualism," of course.  If anything, it strengthens my resolve to investigate it; that it bothers and disorients people so much means something, I suspect.

But "cognitive dualism" is severable from "motivated system 2" reasoning, certainly, and I don't want to leave anyone with any misimpressions about that.

Better to address difficult issues one at a time.

But here is something that can be figured out w/o any great difficulty at all: L'université Toulouse is really cool!  I was awed at the number of talented scholars engaged both in high-level investigations of human behavior and high-level scholarly exchange w/ one another across disciplines.

Refs 

Akerlof, G.A. & Kranton, R.E. Economics and identity. The Quarterly Journal of Economics 115, 715-753 (2000).

Anderson, E. Value in Ethics and Economics (Harvard University Press, Cambridge, Mass., 1993).

Frank, R.H. Passions within reason : the strategic role of the emotions (Norton, New York, 1988).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).


 

 

 

Friday
Jun192015

MAPKIA #73 part IV: Revenge of the disgust skeptics! Does *disgust* really play any role in vaccine & GM-food risk perceptions?

[Image: CCP blog subscriber special offer: get this paper *now*, so you can be smarter than others for at least several weeks!!!]

So I’ve spent a day or so reflecting on the really great Wendell & Clifford guest post, along with their fantastic “in press” paper, on disgust sensibilities and vaccine-risk and GM-food risk perceptions.  I learned a ton from doing so.

I have some questions, certainly.

But in my experience, the best studies are always the ones that make you pay for the solution to a vexing puzzle by obliging you to see multiple additional ones that you now feel impelled to find an explanation for.  That's the way I feel about W&C's post & paper.

I’ve divided my reactions into two parts.  The first set addresses W&C’s own data, the second their “alternative interpretation” of the data analyses that earned @Mw her now-disputed 5th straight MAPKIA! crown (the Chair of the CCP Gaming Commission has stripped her of the synthetic biology giganto E. coli first prize . . . heartbreaking . . .).

A. W&C's data

1. High or low, disgust sensitivities predict a high level of support for vaccines, no? Unlike a lot of researchers, W&C don’t hang their hat on disembodied correlation coefficients with long strings of asterisks. They get that a “statistically significant” correlation is not equivalent to a practically meaningful influence.  They respect the reason of readers by showing them the raw data, so that readers can meaningfully reflect on whether they agree the relationship expressed in the correlation bears the interpretation—because that’s inevitably what it is!—assigned to it.

I certainly respect and value the account they give to support their conclusion.

But when I look at the cool W&C data, I infer that people who vary in “pathogen” disgust are not in much disagreement: childhood vaccines are a good idea. 

W&C don’t describe the wording of the individual survey items used to form the “opposition to vaccines” scale, but their scatterplot does make it possible for us to see that all the subjects in their sample are heavily concentrated at the lowest values of “opposition.” In other words, across the items, the sample was highly skewed toward responses that evince “support” for vaccines.

[Image: from W&C post]

Even the individuals who scored high on the “pathogen disgust sensibilities” (PDS) scale were many times more likely to hold a positive than a negative attitude toward vaccines.  The “r = 0.15” (students) and “r = 0.20” (M Turk) coefficients, then, don’t bear out the inference that high-PDS subjects were afraid of or against vaccines; they imply only that the high degree of support that those subjects had for vaccines wasn’t quite as high as that of subjects low in PDS.
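
A minimal simulation shows the logic (synthetic data with invented parameters, not W&C’s actual measures): with an attitude measure skewed toward support, an r of roughly 0.2 still leaves the high-PDS group overwhelmingly pro-vaccine.

```python
# Synthetic data; parameters invented, not W&C's actual measures.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
pds = rng.standard_normal(n)                     # pathogen-disgust score
attitude = 0.2 * pds + np.sqrt(1 - 0.2**2) * rng.standard_normal(n)
opposed = attitude > 1.5     # skewed scale: only the far tail "opposes"

high_pds = pds > 1.0
print(f"r = {np.corrcoef(pds, attitude)[0, 1]:.2f}")
print(f"opposed overall:    {opposed.mean():.1%}")
print(f"opposed, high PDS:  {opposed[high_pds].mean():.1%}")
# Both figures stay small: high-PDS subjects come out a bit less
# enthusiastic about vaccines, not anti-vaccine.
```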

Just to try to add some perspective to the admirably concrete picture W&C show us, consider these data from the  CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication Report:

These are the sort of data that make it possible to see that those who think that there is meaningful ideological contestation over vaccine risks are uninformed (to put it politely).  Yes, subjects who are more left-leaning in their outlooks love vaccines a smidgen more than those who are right-leaning. But it is clear enough that those who are “right-leaning” love them too!

The correlation between this item and left-right ideology (r = -0.14) is about the same one that W&C report in their student sample.

The correlation that W&C report for their M Turk subjects—r = 0.20—is a bit higher.

But here is what an "r = 0.20" relationship looks like in raw data relating the Industrial Strength Risk Perception measures for childhood vaccines, and in comparison to perceptions of a bunch of other putative risks (again from the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication Report):

The point of showing the data that stand behind disembodied “statistically significant” correlations is to see whether they support the inferences that people draw from them.

Just as I think it would be unreasonable for someone to treat these CCP data as saying “conservative ideology predicts fear of” or “opposition to” to childhood vaccines, so I think it is not persuasive to treat W&C’s data as suggesting that high pathogen-disgust sensitivities predict any sort of opposition to or concern about childhood vaccines in either their M Turk or student samples.

Indeed, in their excellent paper, W&C characterize the relationship between PDS and the perception that vaccines cause autism as "weak and not statistically significant” (p. 26) for their student subjects.

2. Inferential sufficiency? W&C show us that pathogen-disgust sensitivities are correlated, but not very strongly, with both GM-food and vaccine risk perceptions.  But that’s not actually enough information for us to assess whether either, much less both, of these risk perceptions is meaningfully explained by variance in disgust sensitivity.

Before we can draw that inference, we'd need to be shown, first, that the relationship between PDS and both GM-food and vaccine risk perceptions is comparable to what we’d expect to see between PDS and the perceived risks of other putative risk sources that we are already confident do provoke pathogen-disgust reactions. If the relationship is smaller, then that’s a reason for thinking that disgust sensitivities aren’t that important in the case of GM-food and vaccine-risk perceptions.

Second, we’d need to be shown what the relationship is between PDS and other putative risk sources that we have good reason to believe don’t provoke meaningful pathogen disgust sensitivities.  If those relationships are comparable in size to those between PDS and either GM-food or vaccine-risk perceptions, that would be reason, too, for discounting the inference that GM-food and vaccine-risk perceptions are meaningfully “explained” by differences in pathogen-disgust sensitivities.
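
This two-step check is just convergent & discriminant validation by correlation. A sketch of how one might run it (pandas; the column names are hypothetical stand-ins, not variables from either W&C’s or CCP’s datasets):

```python
# Hypothetical column names; neither W&C's nor CCP's actual variables.
import pandas as pd

def pds_validity_check(df: pd.DataFrame) -> pd.Series:
    """Correlate PDS with benchmark & target risk perceptions."""
    disgust_benchmarks = ["RAWMILK", "CIGARETTES"]  # should track PDS
    non_disgust_benchmarks = ["DRONES", "NUKES"]    # shouldn't, in theory
    targets = ["GMFOOD", "VACCINES"]
    cols = disgust_benchmarks + non_disgust_benchmarks + targets
    return df[cols].corrwith(df["PDS"])

# If r(PDS, GMFOOD) is no bigger than r(PDS, DRONES), the "disgust
# explains GM-food fears" inference loses its footing.
```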

This was the nub of @Mw’s case against treating disgust sensitivities as linking GM-food and vaccine-risk perceptions.  The relationship between the two was the same as the one between each of those risks and myriad other risk perceptions of putative risk sources, like drones and nuclear power, that didn’t seem to have much to do with disgust.

W&C don’t present this sort of info—the equivalent of what one would need to fill in a 2x2 covariance matrix—in the blog post, but they do have some data on other risk perceptions in their excellent paper.

Others should look and see what they think, but I found these data somewhat puzzling.

E.g., they report that neither drugs nor cigarettes, which they say are recognized in the literature as exciting pathogen-disgust sensitivities, seemed to have meaningful relationships with PDS in their sample.  Indeed, they reported that sexual-disgust sensitivities were more meaningfully associated with anti-drug attitudes in their sample than pathogen-disgust ones!

If the disgust scale didn’t perform as expected on risk perceptions that we think are related to disgust, then I’m left confused about what to make of the (pretty modest) relationships that they report between the scale and attitudes toward vaccines and GM foods.

Perhaps this is something W&C can clarify in a follow up or in fact do address in a revised version of the paper.

3. Why aren’t conservatives disgust sensitive? I found it remarkable that there was no meaningful correlation between PDS and ideology in the W&C sample. The idea that conservatives are “disgust sensitive” is a big theme in the moral psychology literature; the claim is made about “pathogen” as well as “sexual disgust” sensitivities.

I’d surmise that the atypicality of the M Turk subjects, whose ideologies (W&C report) were heavily skewed toward liberalism, might have something to do with the explanation, except that on Twitter, Clifford supplied data showing that PDS had no meaningful relation with ideology in a YouGov sample, which I presume was drawn from a sample recruited and stratified for national representativeness.

I gather that “sex disgust sensitivities” (SDS) are generally understood to have a higher correlation with conservatism than PDS ones.  But the two are supposed to be correlated.  That, plus the W&C results on the relationship between SDS and drug laws, and the very modest relationships reported in studies that do seem to show an ideological-disgust relationship, have now made me wonder whether the relationship between disgust and conservatism is as meaningful as it is made out to be by many commentators.

I’m sure moral psychologists will sort all this out!

B. @Mw's "factor 1"

1. Who sees what as a “pathogen” and why?  I myself was not entirely persuaded that the loading of GM food risks on @Mw’s “factor 1” supports W&C's inference that variance in GM food risk perceptions is explained by PDS.

For one thing, it seems ad hoc to treat the eclectic assortment of risks that happened to load on “factor 1” as evincing a latent PDS sensibility.

[Image: @Mw's factor analysis from disputed MAPKIA #73 episode]

Why did “residential exposure to magnetic field of high-voltage power lines” (POWER) and “user exposure to radio waves from cell phones” (CELL) load on factor 1?

[Image: click here to see the cool ISRPMs!]

I suppose the explanation would be that high-PDS subjects are prone to see even invisible electronic waves travelling through the air as “pathogens” penetrating their bodies.

But then why didn’t nuclear power load on that factor? The idea that nuclear power plant radiation is hazardous is in fact a much more conspicuous, much more contentious matter in our society than the idea that either cell phones or high-voltage power lines harm anyone.

Why didn’t “fracking”—which involves injecting noxious chemicals into bedrock, where it can leach into the groundwater—load on “factor 1” if it is measuring a latent PDS sensibility?

Again, drug use is generally understood in the literature to excite PDS.  So why didn’t marijuana legalization load on “factor 1”?

What about "drinking raw milk (milk that has not been pasteurized)" (RAWMILK)? That stuff is brimming with delicious E. coli, salmonella & other pathogens.  Shouldn't it load on Factor 1 if Factor 1 is about "pathogen disgust" sensibilities?

“Private operation of drones in U.S. airspace” (DRONES) correlates more strongly with “Factor 1” (r = 0.20, p < 0.01) than does raw milk (r = 0.09, p < 0.01).  That’s weird, I think, if the factor is supposed to be measuring some generic anxiety about bodily invasion by foreign agents (there are some really small drones--they’re adorable!--but none will make it very easily into your bloodstream!).

I suggested that “factor 1” is a catchall: there isn’t much concern in the U.S. general public about any of the risks that load on it, including consumption of GM foods. What explains variance in them is just some unobserved disposition to worry about things not many other people do.

But I accept for sure that there might be more to it.

Indeed, one possibility that occurs to me is a weak form of “environmental risk” sensitivity that is associated with being culturally egalitarian.

Actually, I don’t have cultural outlook scores in this dataset!

But I do have right-left ideology, which is correlated with being egalitarian and communitarian and definitely is an indicator of environmental-risk concern.

I also have the Ordinary Science Intelligence scale.

[Image: Click on this regression. It's a cool 1970s-era motif computer output]

When I regress “factor 1” on those two variables and their interaction, it turns out that being more “left-leaning” predicts a higher level of the “factor 1” latent risk concern.

Moreover, the disposition to worry about the Factor 1 risks becomes even more politically polarized as science comprehension increases—a sign that identity-protective reasoning played a role in the formation of the relevant risk perceptions.
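
For those who want to see the machinery, here is a sketch of that sort of regression (statsmodels formula API; the variable names & coefficients are invented stand-ins, not the CCP dataset’s actual columns or estimates):

```python
# Synthetic stand-in for the dataset; names & coefficients invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1800
df = pd.DataFrame({"conserv": rng.standard_normal(n),   # right-left outlook
                   "osi": rng.standard_normal(n)})      # science comprehension
df["factor1"] = (-0.15 * df.conserv                     # left-leaning -> higher
                 - 0.10 * df.conserv * df.osi           # gap widens w/ osi
                 + rng.standard_normal(n))

# "conserv * osi" expands to both main effects plus the interaction term.
model = smf.ols("factor1 ~ conserv * osi", data=df).fit()
print(model.params)
# A conserv:osi coefficient with the same sign as conserv's main effect
# means the left-right gap in factor-1 concern widens as science
# comprehension rises -- the polarization signature described above.
```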

So there’s an explanation that competes with the catchall: an environmental risk concern that is characteristic of an egalitarian-communitarian identity but that is less proximate to that identity than concerns about the more culturally freighted risks that figure in “factor 3.”

The effects are not big at all. But given that “conservatives” supposedly have greater PDS, it’s hard to reconcile these data with the proposition that “factor 1” is measuring a risk sensitivity related to pathogen disgust sensitivities.

Unless, of course, “disgust sensitivities” are themselves programmed by cultural outlooks, in which case, contrary to “moral foundations theory,” we’d expect disgust sensitivities to be symmetric with respect to cultural outlooks or political ideologies but to attach to different putative risk sources in patterns that reflect the cultural meanings that the sources in question have for the types involved.

I find that very plausible—even with respect to drones. 

(A last point: the @Mw “factors” were rotated so that they would be, or be close to, orthogonal.  Accordingly, it is not really useful to compare the correlations of the factors to one another, as @W&C had helpfully suggested.  Nevertheless, if we do that, it turns out that “factor 1” is in fact more strongly correlated (r = 0.13, p < 0.01) with “factor 3,” the “white hierarchical male” risk-skepticism group, than with “factor 2” (r = 0.05, p = 0.02), the social-deviancy “disgust” factor.)
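
The rotation point can be seen in a toy example (synthetic data; 12 hypothetical risk items): orthogonally rotated factor scores come out close to uncorrelated by construction, so small cross-factor correlations carry little information.

```python
# Toy example with synthetic data; 12 hypothetical risk items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
latent = rng.standard_normal((1000, 3))
# sparse, positive loadings of 3 latent factors onto 12 items
loadings = rng.uniform(0.4, 0.9, size=(3, 12)) * (rng.random((3, 12)) > 0.6)
items = latent @ loadings + rng.standard_normal((1000, 12))

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(items)
print(np.round(np.corrcoef(scores.T), 2))
# The off-diagonal entries hover near zero whatever the underlying risk
# perceptions share -- the orthogonal rotation puts them there.
```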

2. No one sees vaccines as a “pathogen.” In any case, as @W&C note, vaccine risk perceptions do not load on “factor 1.”  So if “factor 1” is a latent PDS sensibility, concern over vaccines isn’t associated meaningfully with PDS.

[Image: click on this cool graphic that shows the "affect" heuristic at work for vaccine risk/benefit perceptions]

W&C suggest that maybe vaccines, because they confer health benefits as well as risks, might not excite PDS.  That sounds like a reason for thinking the hypothesis—that people who are vaccine hesitant are motivated by their disgust with needles in their veins—is false, not a reason to think the industrial strength risk perception measure for vaccine risks isn’t a valid measure of vaccine risk perceptions.

For sure the industrial strength measure is a valid indicator of the general affective orientation that people have toward vaccines, one that informs all manner of assessments they make about vaccine risks and benefits. That's another of the findings from the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication Report.

* * *

So those are some of the thoughts & questions that occur to me.  Thanks a ton to W&C for making me both better informed and more perplexed!

[Note: I'm closing off comments here so that the discussion of W&C's own analysis occurs in 1 place-- after their post.]