Wednesday
Jul 27, 2016

"Mirror mirror on the wall ... who is the most partisan of all?!" MAPKIA Episode No. 978!

Hey, everybody, I think you know what it's time for . . . .

That’s right-- another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

To get the technicalities out of the way, here's the posting of the "official statement of contest terms & conditions,"  as mandated by the Gaming Commission:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)

Okay, this is a tricky one!

It’s going to take (a) a Feynmanite/Selbstian level of analytical thought, (b) a Fredrickian resistance to the seductive tug of WEKS, plus (c) a Barry-Bonds-sized dose of political-psychology HGH (& yes former Freud expert & current stats legend Andrew Gelman and Josh " 'Hot Hand Fallacy' Fallacy" Miller both remain eligible for this MAPKIA pending their appeals for testing positive in the aftermath of their stunning post “CCP-APPC Political Polarization IQ Test”™ victories).

Let’s start by creating a “political partisanship index.”  The recipe for that is as follows:

  1. Take a left-right political outlook scale formed by standardizing the sum of responses to conventional 7-point political-party identification and 5-point liberal-conservative ideology survey items. A very nice feature of this approach when one uses it with a nationally representative sample is that “0” is “moderate Independent,” while -1 and +1 SD are “liberal Democrat” and “conservative Republican,” respectively. Scores in the vicinity of -1.8 and +1.8 will be “Extremely liberal, Strong Democrat” and “Extremely conservative, Strong Republican,” respectively. In case you’ve forgotten how nicely this simple scale performs in picking up partisan polarization on contested issues, check out the policy-polarization figure below or watch a re-run of the wildly popular episode on the “CCP-APPC PPQ IQ Test”™.

  2. Then take the absolute value of the scores on this Left_right scale. The result is a “Partisanship Index” (PI), one that registers the intensity of one’s left-right outlooks without regard to their valence. Thus, if one is either a “liberal Democrat” or a “conservative Republican,” one gets a PI score of “1.0.” If one is either an “Extremely liberal, Strong Democrat” or an “Extremely conservative, Strong Republican,” one gets a PI score of 1.8. A milquetoast political sissy who is a “moderate Independent” will get a score of “0.”

Okay, got that?  Good. (If you are curious about what the relationship between Left_right and PI looks like without smoothing--and why the intercept on the y-axis is slightly above zero--good for you! Click here).
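
For anyone who wants to play along at home, here is a minimal sketch of that recipe in Python (pandas). The column names ("pid7", "ideo5") and their coding (higher = more conservative/Republican) are hypothetical stand-ins, not the actual CCP variable names.

```python
import numpy as np
import pandas as pd

def partisanship_index(df):
    """Build the Left_right scale and Partisanship Index (PI) described above.

    Assumes hypothetical columns 'pid7' (7-point party identification) and
    'ideo5' (5-point liberal-conservative ideology), both coded so that higher
    values mean more conservative/Republican.
    """
    # Standardize the summed responses so that 0 lands on "moderate Independent"
    # in a nationally representative sample
    raw = df["pid7"] + df["ideo5"]
    left_right = (raw - raw.mean()) / raw.std()

    # PI is the absolute value: intensity of partisanship, ignoring direction
    return left_right, left_right.abs()

# toy usage with made-up responses
d = pd.DataFrame({"pid7": np.random.randint(1, 8, 500),
                  "ideo5": np.random.randint(1, 6, 500)})
d["Left_right"], d["PI"] = partisanship_index(d)
print(d[["Left_right", "PI"]].describe())
```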

Now here is the MAPKIA question:

What is the profile of a “super partisan”? On the basis of characteristics like (a) gender, (b) race, (c) income, (d) education, (e) science comprehension (measured by OSI), (f) science curiosity (measured with SCS), (g) religiosity, (h) cultural worldviews (measured with the CCW scales), etc., or appropriate combinations thereof, who is the most partisan “type” of person (i.e., gets the highest PI score) in U.S. society????

You know the rules: don’t just gesture toward an answer in some vague discursive way; be specific, both about what your conjecture is and why, and tell me how to test it using the sort of data that typically appears in a CCP data set.

Realize that basically the question is, What's the relationship between the specified characteristics and partisanship? If you want to specify simple correlations between partisanship and one or more of these attributes or (better still) combinations of them, that's fine!

But if you have some more clever way to specify how the characteristics should be combined into some latent-variable "identity" variable, or how the characteristics (individually or in combination) should be related to the Partisanship Index (in that regard, you might want to check out "yesterday's" post on how science curiosity and science comprehension relate to each other), go for it!

Now, an important proviso: Do not tell me to just jam every one of these characteristics onto the right-hand side of a goddam linear regression and “see what comes out statistically significant.” The reason is that the results of such an analysis will be gibberish.

Actually, the R² will be fine & might be interesting if you want to get an idea of the upper limit of the possibilities for explaining PI. But the parameter estimates will be meaningless in relation to our task, which is to identify the sorts of real-world people who are super partisans.
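
To illustrate the point, here is a toy simulation (my own illustration, not CCP data): when the right-hand-side characteristics are strongly correlated with one another, each coefficient answers an "all else held constant" question that may describe no actual type of person, even though each characteristic considered on its own plainly travels with partisanship.

```python
import numpy as np
import statsmodels.api as sm

# Toy illustration only: simulated data, not a CCP dataset.
rng = np.random.default_rng(3)
n = 2000

# Two highly correlated "characteristics" (think religiosity and a cultural
# worldview score that largely travel together)
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.standard_normal(n)

# Partisanship driven by the component the two characteristics share
pi = 0.25 * x1 + 0.25 * x2 + rng.standard_normal(n)

# Each characteristic, taken alone, clearly "goes with" partisanship ...
print(np.corrcoef(x1, pi)[0, 1], np.corrcoef(x2, pi)[0, 1])

# ... but jam both into one regression and each "all else held constant"
# coefficient is roughly half the marginal association, with inflated standard
# errors: a partial effect that describes no recognizable type of person.
X = sm.add_constant(np.column_stack([x1, x2]))
print(sm.OLS(pi, X).fit().summary())
```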

And with that . . . mark, get set, MAPKIA! 

Monday
Jul 25, 2016

What is the relationship between science curiosity & science comprehension? A fragment . . .

From something I’m working on (and a useful refinement of this discussion of how to think about the size of individual differences in one or another reasoning disposition). . .

c. Compared to ordinary science intelligence. Science curiosity—generally or as measured here—ought to have some relationship to science comprehension. It is difficult to experience the pleasure of contemplating scientific insight if one is utterly devoid of any capacity for making sense of scientific evidence. Similarly, if one is aggressively uncurious about scientific insights, one is less likely to acquire the knowledge or the experience-based habits of mind needed to reason well about them.

Yet the two dispositions shouldn’t be viewed as one and the same.  Many people who can detect covariances and successfully compute conditional probabilities—analytical tasks essential to making sense of empirical evidence—are nevertheless uninterested in science for its own sake.  Even more obviously, many people who are only modestly proficient in these technical aspects of assessing empirical evidence are interested—passionate, even—about science. In sum, one would expect a science-curiosity measure, if valid, to be modestly correlated with, but definitely not equivalent to, a valid science-comprehension measure.

SCS, the science-curiosity measure we formed (Kahan, Landrum & Carpenter 2015), has these properties.  The association between SCS and the Ordinary Science Intelligence (OSI) assessment (Kahan 2016) was r = 0.26 in our two data collections. To make this effect more practically meaningful, the relationship between these measures implies that individuals in the top quartile of SCS are over four times more likely than those in the bottom quartile to score in the 90th percentile or above on the OSI assessment (Figure 6).  This is a degree of association consistent with the expectation that higher science curiosity contributes materially to higher science comprehension. Nevertheless, in both studies science comprehension lacked meaningful predictive power in relation to engagement with the three science videos featured in our two studies (Figure 7). In other words, SCS measures a disposition that is apparently integral to the kind of proficiency in scientific reasoning measured by OSI, yet generates a form of behavior—the self-motivated consumption of science information for its own sake—that is unassociated with science comprehension by itself.
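
For readers who want to see how a correlation of that size cashes out in quartile terms, here is a rough simulation sketch in Python. It treats SCS and OSI as bivariate standard normal with r = 0.26; that distributional assumption is mine rather than the paper's, so the simulated ratio should land in the general neighborhood of the reported figure rather than reproduce it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
r = 0.26  # reported SCS-OSI correlation

# Treat SCS and OSI as standard bivariate normal with correlation r
scs = rng.standard_normal(n)
osi = r * scs + np.sqrt(1 - r**2) * rng.standard_normal(n)

top_q = scs >= np.quantile(scs, 0.75)      # top quartile of science curiosity
bottom_q = scs <= np.quantile(scs, 0.25)   # bottom quartile
high_osi = osi >= np.quantile(osi, 0.90)   # 90th percentile or above on OSI

p_top = high_osi[top_q].mean()
p_bottom = high_osi[bottom_q].mean()
print(f"P(OSI >= 90th pct | top SCS quartile)    = {p_top:.3f}")
print(f"P(OSI >= 90th pct | bottom SCS quartile) = {p_bottom:.3f}")
print(f"ratio: {p_top / p_bottom:.1f}x")
```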

References

Kahan, D.M. ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J Risk Res. (2016), advance on line at http://www.tandfonline.com/doi/pdf/10.1080/13669877.2016.1148067. 

Kahan, D., Landrum, A. & Carpenter C. Evidence-based Science Filmmaking Initiative, Study No. 1 (2015), at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2713563.

 

 

 

Thursday
Jul 21, 2016

What antagonistic memes look like: the case of the HPV Vaccine

From the new APPC/CCP Working Paper, Culturally Antagonistic Memes & the Zika Virus

2.1. In general

* * *

“Memes” refer to ideas and practices that enjoy wide circulation and arouse self-reinforcing forms of attention as well as spontaneous adaptation and elaboration (Balkin 1998; Blackmore 1999). A small subset of these self-replicating ideas and practices, the ones we call “culturally antagonistic memes,” consists of highly evocative, highly inflammatory argumentative tropes used by members of one group to stigmatize another.

When they figure in debates over risk, these contempt-pervaded tropes invest positions on them with affective resonances symbolic of opposing groups’ values or identities.  In the resulting discourse climate, individuals will come to perceive risk regulation as “express[ing] the public worth of one subculture’s norms relative to those of others, demonstrating which cultures have legitimacy and public domination” and thereby “enhanc[ing] the social status of groups carrying the affirmed culture and degrad[ing] groups carrying that which is condemned as deviant” (Gusfield 1968, p. 59). Conducted in the idiom of instrumental consequences, the stances diverse citizens adopt on which activities genuinely threaten society and which policies truly mitigate the attendant dangers become rhetorical subterfuges in an “ongoing debate about the ideal society” (Douglas & Wildavsky 1982, p. 36).

This process is effected through a decisive switch in the sort of information processing that is characteristic of the AH-CCT model. From a reliable and consensus-generating guide to valid decision-relevant science, the affect heuristic and cultural cognition at this point combine to generate a divisive, nontruth-convergent source of identity-protective cognition (Sherman & Cohen 2002; Kahan 2010).

By fusing contending positions on a risk or like facts to opposing group identities, antagonistic memes effectively transform positions on them into badges of membership in, and loyalty to, competing groups. Because this state of affairs pits opposing groups’ knowledge-certification systems against one another, the forms of information-processing associated with cultural cognition and the affect heuristic will under these conditions necessarily lose their power to generate truth-convergent forms of consensus across them.

This switch will not cause such information processing to abate, however.  There is rarely any personal action that an individual can take that will affect the level of danger that a societal risk poses to him or anyone he cares about; his decisions as a consumer, voter, or participant in public debate won’t matter enough, for example, to affect the course of climate change, or the regulation of fracking, or the siting of a nuclear waste facility.  In contrast, such an individual’s personal behavior, including the attitudes he evinces on issues infused with social meanings, will typically have tremendous significance for the impressions that others form of his character (Sherman & Cohen 2002; Lessig 1996).  As a result, it will be individually rational, if collectively disastrous, for individuals to form habits of mind that reliably produce identity-affirming rather than accurate perceptions when societal risks become infused with meanings that divide their groups from others (Kahan 2015b).

Indeed, these habits of mind will become seamlessly interwoven into the capacities essential for assessing scientific information. “Motivated system 2 reasoning” refers to the tendency of individuals to use their proficiency in Numeracy, cognitive reflection, and science comprehension to ferret out and credit identity-congruent evidence and explain away the rest (Kahan in press_b).  Much as a virus does to the genetic material of an otherwise healthy cell, identity-protective cognition effectively insinuates itself into reasoning dispositions essential to recognizing the best available evidence (Kahan 2013; Kahan, Peters et al. 2013).  Their cognitive faculties having been redirected in this fashion, the individuals most adept in these forms of reasoning will end up the most polarized on culturally contentious risks (Hamilton 2011, 2012; Kahan, Peters et al. 2012).

Identity-protective cognition is thus not a natural outgrowth of, but rather a pathological deformation of, the processes associated with the AH-CCT model. The trigger of this pathology, moreover, is the advent of culturally antagonistic memes (Figure 1).

2.2. A concrete illustration

Many persistently contested science issues fit this pattern.  But we will focus on one that we believe is particularly well suited for illustration: the U.S. experience with the HPV vaccine.


The HPV vaccine confers (near-perfect) immunity to the human papillomavirus, an extremely common sexually transmitted disease that causes cervical cancer.  The vaccine also has the distinction of being the only childhood immunization recommended for universal administration by the U.S. Centers for Disease Control that is not now on the schedule of mandatory school-enrollment immunizations in the United States.  Legislative proposals to add it were defeated in dozens of states in the years from 2007 to 2008 as a result of intense political controversy over the safety and effectiveness of the vaccine (Kahan 2013).

Although the proposal to add the HPV vaccine to the list of mandatory vaccinations divided the public along predictable lines, the conflict over it was in fact not inevitable.  Only a few years before, nearly every state had endorsed the CDC’s proposal for universal administration of the HBV vaccine, which likewise confers immunity for a sexually transmitted disease, hepatitis B, that causes cancer (of the liver).  The HBV vaccine is now given in infancy, but at that time it was an adolescent shot, just like the HPV vaccine.  During the years in which legislative battles were raging over the latter vaccine, nationwide vaccination rates for the former were well over 90% (ibid.).

Like every other childhood vaccine that preceded it, the HBV vaccine was considered and approved for inclusion in state universal-immunization schedules by non-political public health agencies delegated this expert task by state legislatures.  The vast majority of parents thus learned of the vaccine for the first time when consent to administer it was sought from their pediatricians, trusted experts who advised them the vaccine was a safe addition to the array of prophylactic treatments for keeping their children healthy.  Just as important, regardless of who these parents were—Republican or Democrat, devout evangelical or atheist—they were all afforded ample evidence that parents just like them were getting their kids vaccinated for HBV.  This is a science communication environment in which the AH-CCT model can be expected to generate largely convergent affective reactions across all groups—exactly the outcome that was observed.

The HPV vaccine’s path to public awareness, in contrast, was much more treacherous. Seeking to establish a dominant position in the market before the approval of a competing shot, the manufacturer of the HPV vaccine orchestrated a nationwide campaign to establish immunization mandates by statutes enacted by state legislatures.  What was normally a routine, nonpolitical decision—the administrative updating of states’ mandatory-vaccination immunization schedules—thus became a high-profile, highly partisan dispute.  People became acquainted with the vaccine not during visits to their pediatricians’ office but while viewing Fox News, MSNBC, and other political news outlets. There they were bombarded with reports on the “slut shot” (Taormino 2006) and “virgin vaccine” (Page 2006) for school girls, a framing enabled by the manufacturer’s decision to seek fast-track FDA approval of a women’s-only shot as part of the company’s plan to vault over the conventional, less speedy, depoliticized administrative-approval process (Gollust, LoRusso et al. 2015).

These media stories and resulting social media reaction were replete with what we are referring to as “culturally antagonistic memes.”  “Trust us: Vioxx, Now Gardasil,” declared a viral internet feature that mocked the manufacturer’s own advertising campaign (Figure 2). “HPV vaccine: Republicans prove themselves morons once again,” sneered liberal commentators (2011). “They value your virginity more than your life,” another righteously intoned; “there was a time when only the loony left believed that the loony right favored death over sex; not any more” (Goodman 2005).  Individualist-oriented commentators retorted: “Let’s use teenage girls as lab rats for a monopoly” (Erickson 2011).

These are exactly the conditions one would expect to fuse a risk issue to antagonistic social meanings, thereby triggering identity-protective cognition on the vaccine’s risks and benefits (Fowler & Gollust 2015; Bolsen, Druckman & Cook 2013).  Studies confirmed that exactly that happened (Gollust, Dempsey et al.  2010; Kahan et al.  2010).

References

“HPV Vaccine: Republicans Prove Themselves Morons Once Again.” Why Evolution Is True. (Sept. 14, 2011).

Bolsen, T., Druckman, J.  & Cook, F.L.  The effects of the politicization of science on public support for emergent technologies.  Institute for Policy Research Northwestern University Working Paper Series (2013).

Bolsen, T., Druckman, J.N.  & Cook, F.L.  The influence of partisan motivated reasoning on public opinion.  Political Behav. 36, 235-262 (2014).

Bolsen, T., Druckman, J.N. & Cook, F.L. Citizens’, scientists’, and policy advisors’ beliefs about global warming. The ANNALS of the American Academy of Political and Social Science 658, 271-295 (2015).

Douglas, M. & Wildavsky, A.B. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers (University of California Press, Berkeley, 1982).

Douglas, M. Purity and Danger: An Analysis of Concepts of Pollution and Taboo (1966).

Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011).

Erickson, Erick. Let’s Use Teenage Girls as Lab Rats for a Monopoly. RedState (Aug. 17, 2011), at http://www.redstate.com/erick/2011/08/17/lets-use-teenage-girls-as-lab-rats-for-a-monopoly/

Fowler, E.F. & Gollust, S.E. The content and effect of politicized health controversies. The ANNALS of the American Academy of Political and Social Science 658, 155-171 (2015).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy undermines support for state mandates on the human papillomavirus vaccine. Health Affair 29, 2041-2046 (2010).

Gollust, S.E., LoRusso, S.M., Nagler, R.H. & Fowler, E.F. Understanding the role of the news media in HPV vaccine uptake in the United States: Synthesis and commentary. Human Vaccines & Immunotherapeutics, 1-5 (2015).

Goodman, Ellen. Abstinence-only crowd laments cancer breakthrough. Boston Globe (Nov. 14, 2005), at http://articles.baltimoresun.com/2005-11-14/news/0511140054_1_abstinence-papilloma-virus-vaccine.

Gusfield, J.R. On Legislating Morals: The Symbolic Process of Designating Deviance. Cal. L. Rev. 56, 54 (1968).

Hamilton, L.C. Education, politics and opinions about climate change evidence for interaction effects. Climatic Change 104, 231-242 (2011).

Hamilton, L.C., Cutler, M.J. & Schaefer, A. Public knowledge and concern about polar-region warming. Polar Geography 35, 155-168 (2012). 


Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013b).

Page, Christina. The Virgin Vaccine. Nerve. (June 28, 2006), at http://www.nerve.com/dispatches/cpage/virginvaccine.

Sherman, D.K. & Cohen, G.L. Accepting threatening information: Self-affirmation and the reduction of defensive biases. Current Directions in Psychological Science 11, 119-123 (2002).

Taormino, Tristan. The Slut Shot. Village Voice (Aug. 15, 2006), at http://www.villagevoice.com/news/the-slut-shot-6427195.

 

 

 

Wednesday
Jul 20, 2016

What antagonistic memes look like: the case of the Zika virus

From the new APPC/CCP Working Paper, Culturally Antagonistic Memes & the Zika Virus:

3.1. Why Zika

The focus of the study was the impact of culturally antagonistic-meme generating communications on the perceived risks of the Zika virus.

We selected the Zika virus for two reasons. The first is that we are confident there isn’t meaningful cultural dissensus on Zika at the current time. For over five months, the Annenberg Public Policy Center (2016a) has been tracking U.S. public opinion on the disease. Attention spiked early on and then leveled off, and is now rising again; knowledge about the health effects of the virus and about effective means of self-protection has proven uneven; certain misunderstandings about the link between the virus and microcephaly have persisted, albeit at modest levels (Annenberg Public Policy Center 2016b).

But nothing in this mix varies meaningfully with ideology, religion, or like forms of cultural identity.  There is reason to be apprehensive about the speed with which members of the public are progressing in their understanding of key facts about the virus. But the evidence suggests that culturally diverse members of the public are progressing in unison, much in the manner one would expect under the “normal,” nonpathological process contemplated by the AH-CCT Model (Figure 1).

At the same time, there has been a steady accumulation of communications tying the Zika health threat to already culturally charged issues (Figure 3).  The voice of public health officials furnishing the public with precautionary advice is only one in a chorus, whose other members include a collection of advocacy groups all seeking to leverage public anxiety over Zika into greater attention to their special cause.

Among these are anti-immigrant groups. These actors suggest that the spread of Zika is likely to be accelerated by undocumented aliens as well as lawful immigrants from Zika-affected regions. “Latin America’s Zika virus is the latest undocumented immigrant to hit our shores,” one commentator caustically notes (Malkin 2016). It’s obvious from the “available evidence” that “open borders contribute to the vulnerability of the United States to the virus” (Corsi 2016). “People from Central and South America, ground zero for Zika and other infectious diseases including tuberculosis, dengue, Chagas, Chikungunya and schistosomiasis, make up nearly 15 percent of the illegal-immigrant population in the U.S.” (Malkin 2016). “[A] drain on our economy, a peril to our national security, and a drag on our souls,” illegal immigrants are now “hazardous to our health, thanks to sloppy U.S. immigration laws acting as incubators for diseases once foreign to North America — like the untreatable Zika virus” (Abruzzo 2016).

Climate change advocates have also latched onto Zika. “Zika is the kind of thing we’ve been ranting about for 20 years,” one observes. “We should’ve anticipated it. Whenever the planet has faced a major climate change event, man-made or not, species have moved around and their pathogens have come into contact with species with no resistance” (Milman 2016). Now “thanks to climate change” Zika could “soon enjoy a greater reach” (Mercer 2016), “spread[ing] deeper” into now secure areas of the U.S. (Gillis 2016).  Of all the “tragedies stemming from global warming,” including the “floods and droughts and storms, the failed harvests and forced migrations, . . . no single item on the list seems any more horrible than the emerging news from South America about the newly prominent Zika disease” (McKibben 2016). “We need to face up to the fact that pushing the limits of the planet’s ecology has become dangerous in novel ways.”  “The Republicans are in denial about climate change, but in the real world, we can feel it . . . . It’s also an invitation for breeding mosquitoes and putting Americans at risk all across the United States” (Johnson 2016).

The situation, then, furnishes an ideal one in which to extend previous research.  The advocacy material linking Zika to other culturally contested issues is replete with the accusatory, resentment-focusing tropes featured in highly polarized risk disputes. Yet in no previous study has there been an opportunity to test the impact of such tropes in relation to an issue not already the subject of at least modest contestation.

It is possible, of course, that the explanation for the patchwork of contestation and tranquility that forms the fabric of public risk perception is some as-yet undetected factor intrinsic to particular risk sources. It is perfectly plausible to believe, too, that deeper, historical influences render a particular risk source either impervious or distinctly amenable to controversy of a particular form, in particular societies. But through an appropriately constructed study, one can test the alternative hypothesis that it is the contingent advent of exposure to culturally antagonistic memes that triggers such conflict, and accounts for its complexion and intensity.  The study we conducted was aimed at furnishing evidence relevant to assessing the relative plausibility of these alternative conjectures.

References

Abruzzo, S.  Illegals, not American travelers, may be bringing Zika to our shores.  Brooklyn Daily (Feb.  5, 2016), available at http://www.brooklyndaily.com/stories/2016/6/all-britview-zika-virus-2016-02-05-bd.html.

Annenberg Public Policy Center. Annenberg Science Knowledge Survey (2016a). Available at http://www.annenbergpublicpolicycenter.org/science-communication/ask/.

Annenberg Public Policy Center. More than 4 in 10 Mistakenly Think Zika is Fatal, Symptoms are Noticeable. Annenberg Science Knowledge Survey (Mar. 10, 2016b). At http://www.annenbergpublicpolicycenter.org/more-than-4-in-10-mistakenly-think-zika-is-fatal-and-symptoms-are-noticeable/.

Corsi, J.  Zika Virus Joins List of Diseases Brought by Illegals.  WND (Feb.  1, 2016), available at http://www.wnd.com/2016/02/zika-virus-joins-list-of-diseases-brought-by-illegals/#

Gillis, J. In Zika Epidemic, a Warning on Climate Change. N.Y. Times, A6 (2016).

Johnson, B. Dem Leaders: Climate change stoking Zika, which could be ‘greater threat’ than Ebola. PJ Media (Apr. 26, 2016), at https://pjmedia.com/news-and-politics/2016/04/26/dem-leaders-climate-change-stoking-zika-which-could-be-greater-threat-than-ebola/

Malkin, M. Chicken Little Chuckie Schumer: America's Disease-Fighting Phony. National Review (Feb. 3, 2016), available at www.nationalreview.com/article/430713/zika-virus-illegal-immigration-connection.

Mercer, G. The Link Between Zika and Climate Change. Atlantic (2016).

Milman, O. Climate change may have helped spread Zika virus, according to WHO Scientists. Guardian (Feb. 11, 2016), at https://www.theguardian.com/world/2016/feb/11/climate-change-zika-virus-south-central-america-mosquitos

Tuesday
Jul 19, 2016

New paper: Zika risk perceptions & culturally antagonistic memes!

Latest from the APPC/CCP "Science of science communication initiative working paper series" (previous installments include Kahan, D.M. ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change, J. Risk Res, 1-22 (2016), & Kahan, D.M. Climate-Science Communication and the Measurement Problem, Advances in Political Psychology 36, 1-43 (2015)). More on this paper "tomorrow."

 

Culturally Antagonistic Memes and the Zika Virus: An Experimental Test

Abstract

This paper examines a remedy for a defect in existing accounts of public risk perceptions. The accounts in question feature two dynamics: the affect heuristic, which emphasizes the impact of visceral feelings on information processing; and the cultural cognition thesis, which describes the tendency of individuals to form beliefs that reflect and reinforce their group commitments. The defect is the failure of these two dynamics, when combined, to explain the peculiar selectivity of public risk controversies: despite their intensity and disruptiveness, such controversies occur less frequently than the affect heuristic and the cultural cognition thesis seem to predict. To account for this aspect of public risk perceptions, the paper describes a model that adds the phenomenon of culturally antagonistic memes—argumentative tropes that fuse positions on risk with contested visions of the best life. Arising adventitiously, antagonistic memes transform affect and cultural cognition from consensus-generating, truth-convergent influences on information processing into conflictual, identity-protective ones. The paper supports this model with experimental results involving perceptions of the risk of the Zika virus: a general sample of U.S. subjects, whose members were not polarized when exposed to neutral information, formed culturally polarized affective reactions when exposed to information that was pervaded with antagonistic memes linking Zika to global warming; when exposed to comparable information linking Zika to unlawful immigration, the opposed affective stances of the subjects flipped in direction. Normative and prescriptive implications of these results are discussed.  

 

Download it, read it, improve it by comments!

 

 

Saturday
Jul 16, 2016

Weekend update: Could Trump really win? What people do & don't *fear* says, "sure, why not?"

Thursday
Jul 7, 2016

Why don't we have more gun control given that there is such overwhelming bipartisan public consensus in favor of that policy? WEKS strikes again . . .

So there has been a rash of news commentaries recently about “why” we don’t have more gun control given that there is overwhelming “public support” for it.

I myself have offered explanations for this in the past.

But I’m wondering: is the premise really true? Is there really overwhelming public support for more gun control?

Or is this (like the “astonishing change in societal norms on gay marriage”) another instance of “WEKS” – “ ‘what everyone knows [is true]' syndrome,” the condition in which people with like-minded cultural outlooks convince themselves that “everyone” agrees with them on some issue that is in fact highly contested as a cultural matter?

Well, here’s some evidence for WEKS:

As the captions indicate, the data come from two separate APPC/CCP studies, one just concluded and the other from Jan.

They both show that gun control is not only massively polarizing but is among the most polarizing issues in American political life—right up there with climate change and affirmative action.

The left-hand panel uses the tried-and-true Industrial Strength Risk Perception Measure, which, as a result of the “affect heuristic” (Finucane, Alhakami, Slovic & Johnson 2000), magically encapsulates in one simple item the same level of covariance one would see when one relates the variable on the x-axis (here political outlooks) to any other, more specific question that individuals would recognize as having to do with the risk in question (e.g., on global warming, “is it happening,” “are humans causing it,” “are we all going to suffer horrendous harm as a result of it,” etc.).

It helps to show, then, that there is as much polarization on guns—whether one frames the issue as one of the risks of allowing or not allowing people to have them—as there is on climate change, which is pretty much the most polarizing issue today (maybe ever) in American politics (there’s definitely a lot of “WEKS” on that, btw, although there is also the disturbing influence of attempts to “message” people with invalid surveys; maybe I’ll talk about that “tomorrow”).

I put the right-hand panel in to help show what these sorts of polarized affective responses look like when one cashes them out in terms of “policy positions.”

It shows, again, that proposals for stricter gun control laws have the same political-polarization profile as many of the issues we recognize as benchmarks of left-right factionalization.  I’ve also put in a couple of “non-polarized” issues just as a reference point (if you didn’t know vaccines were non-polarizing, you need to get out—of the WEKS bubble—more often).

Those data, again, are from Jan.

But another reason for putting in the left-hand ISRPM panel is to help answer the question whether “something might have changed” given recent mass shootings.  Because the covariance from the ISRPM will always be nearly identical to the covariance on policy issues like this (for a miraculous proof of that proposition, check this out), we can be confident that if we are seeing the sort of ISRPM profile displayed in the left-hand panel, then we’d still see today the sort of division on “policy preferences,” or any other gun control question we might ask that people could actually understand.

So . . .

Why do so many people (but not all! there are plenty of people, it should be pointed out, who recognize gun control is polarizing) think there is consensus in the public for more gun control?

Like I said, I’ve definitely formed and expressed this impression myself!

But I do think it is almost certainly WEKS at work.  The people who say there is consensus for "more control" are on the “left” or at least tend to be inside the left’s political-communication bubble. Actually, people on the "right" think there is consensus against gun control; they live in their own bubble!

But there might be other explanations, too. . .

What do you think?

Reference

Finucane, Melissa L., Ali Alhakami, Paul Slovic, and Stephen M. Johnson. 2000. "The Affect Heuristic in Judgments of Risks and Benefits."  Journal of Behavioral Decision Making 13 (1):1-17.

 

Tuesday
Jul 5, 2016

Travel report: Has liberal democracy lost its power to motivate?

This is a belated postcard from a stop on a recent around-the-world tour (I know it was around the world because it included both Cambridges—UK and US—with lots of stops in between). . . . It reports on one of two talks I gave at the annual Breakthrough Institute Dialogue series.  This one was part of a panel on “Progress Problems,” in which the question that I and the other panelists, who included Max Roser and Lydia Powell, addressed was “why are so many of the richest and most privileged people on earth, despite reaping such extraordinary benefits, convinced that progress is a mirage and modernity must inevitably end badly?” My remarks, as best as I can recall them, were as follows (slides here) . . . .

So the question as I understand it is --

Have liberal democratic ideals lost the power to motivate the citizens of liberal democracies?

Can we summon their attention to the common challenges they face by invoking their shared commitment to self-government, civil liberties, and free markets? Or are the animating ideals of liberal democracy now themselves a source of estrangement and division that enervates public spiritedness?

My answer to these questions will take a dialectical form. That is, like Clint Eastwood at the Republican Convention of 2012, I will treat you to a disagreeable dialogue with myself, in which I will radically change direction at least twice.

But insofar as I will get the last word, I’m confident that I’ll ultimately come out on top in the exchange.

So to start, with . . .

                Thesis: “Sure, those ideals can motivate! I’ll show you . . . .”

I’ll show you, that is, an experiment (Kahan, Jenkins-Smith et al. 2015), one in which invoking the spirit of liberal democratic institutions sharpened apprehension of, and magnified the will to address, a collective challenge –namely the one posed by human-caused global warming.

In the experiment, we measured the willingness of subjects (members of two separate nationally representative samples, one from America and the other from England) to engage open-mindedly with a climate-change study.

A composite of two real studies (Allen et al. 2009; Solomon et al. 2009), the one featured in our experiment—call it the “Nature-Science study”—told a bleak story. Scientists, it reported, had overestimated the speed with which carbon dioxide dissipates. As a result, even if human beings were to cease generating all greenhouse gasses tomorrow, past emissions would guarantee continued increases in global temperatures along with devastating consequences—from catastrophic flooding of coastal regions, to production-ending droughts in agricultural ones—for decades to come.

We also measured our subjects’ cultural worldviews along the two dimensions—hierarchy-egalitarianism and individualism-communitarianism—featured in studies of the cultural cognition of risk (Kahan 2012).

As we anticipated, experiment subjects of a hierarchical, individualistic orientation—the ones most predisposed to climate skepticism—were preemptively dismissive of the Nature-Science study results.

This, however, was in a control condition, in which subjects, before grappling with the Nature-Science study, read a news article about a town meeting over a proposal to install traffic lights in the vicinity of a new residential development.

Subjects in two other conditions were assigned to read different news articles: in one, a story about how a national association of preeminent scientists had issued a statement calling for increased limits on human CO2 emissions to combat global warming; and in the other, a story about how the same association had issued a statement calling for research on geoengineering to offset the effects of past and future emissions. We labeled these the “anti-pollution” and “geoengineering” conditions.

Logically, there’s no reason why subjects assigned to either of these conditions should have formed different views on the validity of the Nature-Science study: the validity of the evidence for an asserted problem doesn’t turn on whether someone approves or disapproves of any particular solution for it.

But psychologically, the solution might well matter.

The “anti-pollution” and “geoengineering” stories embed the problem of climate change in different narratives and thus invest it with alternative social meanings (Lessig 1995).

The former narrative is about the inevitable limits on technological ingenuity and the consequences for having too long ignored them. Against the background of the “anti-pollution” story, the message of the Nature-Science study is “game over” and “I told you so.”

Individuals of a hierarchical, individualistic cultural outlook revere commerce and industry, not just for what they do but for what they signify about human resourcefulness and the welfare-enhancing consequences of spontaneous private orderings and the stratified systems of authority that they spawn.  They are motivated, unconsciously, to resist evidence of the existence and impact of human-caused climate change precisely because they know that if society credits such evidence it will call into question the premises of their preferred way of life.

The social meanings in the anti-pollution story reinforce that perception, and hence amplify the motivation to resist the evidence.

But the message of the “geoengineering story” is very different.

We are not the stupid animal, this narrative goes, who when it reaches the top of the Malthusian curve comes crashing down ass over tincups.  We shift the f***ing curve!

Drinking your own shit, you say? No problem! Try modern sanitation & you can increase the density of cities 10 fold relative to what a bunch of tight-sphinctered naysayers once told us was the “natural limit,” enforced by the dreaded penalty of cholera outbreaks.

Well, it’s time to shift the curve again! This time with mirror-coated nanotechnology flying saucers that magically—hell, not magically; by rational intention & design—spontaneously assemble at just the right altitude to cool the atmosphere to a predetermined, geo-thermostatically determined level.

Not “game over” but more of the same!

“I told you so”? Unh uh! Try, yes we can!

Whereas the social meanings implicit in the “anti-pollution” story narrative threaten and denigrate the identity of the hierarch individualist, the meanings implicit in “geoengineering” affirm and gratify his vision of the best life and its prospects.

The result is an abatement of the unconscious, reflexive resistance to evidence that there is in fact a problem to be addressed—by one means or another.

In any case, that was the conjecture we wanted to test in our experiment.

And it was the result that we in fact observed.

Relative to the control condition, hierarchical individualists assigned to read the “anti-pollution” news story first became even more skeptical, even more dismissive of the validity of the Nature-Science study, increasing polarization within the study sample.

But those who read the “geoengineering” story first were decidedly more receptive to the evidence in the Nature-Science study; they didn’t dismiss its findings out of hand. 

As a result, polarization over both the validity of the study and the reality of human-caused climate change decreased.
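
For concreteness, here is a hedged sketch of the kind of model one could fit to data of this sort to look for that pattern. The variable names, the toy data, and the OLS-with-interactions specification are illustrative assumptions of mine, not necessarily the analysis reported in Kahan, Jenkins-Smith et al. (2015).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the real study: one row per subject.
rng = np.random.default_rng(1)
n = 900
d = pd.DataFrame({
    "hi": rng.standard_normal(n),  # hierarchy-individualism worldview score (standardized)
    "condition": rng.choice(["control", "antipollution", "geoengineering"], n),
})
# Simulated outcome: higher 'hi' predicts more skepticism about the study's
# validity, more strongly in 'antipollution', less strongly in 'geoengineering'
# (the pattern described above).
slope = d["condition"].map({"control": -0.5, "antipollution": -0.8, "geoengineering": -0.2})
d["validity"] = slope * d["hi"] + rng.standard_normal(n)

# The simple slope of 'hi' in each condition indexes polarization; the
# interaction terms tell you how that slope differs from the control condition.
m = smf.ols("validity ~ hi * C(condition, Treatment(reference='control'))",
            data=d).fit()
print(m.summary())
```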

So there you go!

The ideals of liberal democracy include the confidence that people have that technology, human ingenuity, private orderings, and individual strivings can, in the course of freeing us from the limits of nature, devise effective solutions for problems of their own making.

Invoking these ideals, narratively, can inspire, can summon attention to common problems and the will to address them!

I’ve shown you!

 Antithesis: “No they can't! Take a closer look.”

Hold on.

Sorry.

What you’ve shown us is that liberal democratic ideals can’t genuinely motivate the citizens of liberal democratic regimes.  Just take a closer look at your own data, and you’ll see.

Yes, relative to their counterparts in the “antipollution” condition subjects in the “geoengineering” one became more open-minded about climate change.

But egalitarian communitarian subjects—people of the sort who normally are “climate concerned”—became less so.

They are the citizens who bridle at the self-centered acquisitiveness implicit in market institutions and in liberal conceptions of individual rights.

For them, the meanings of boundless individual ingenuity and permanent technological progress that pervade the narrative implicit in the “geoengineering” condition threaten and denigrate their identity.

If the meaning of climate change is “yes, we can” and “more of the same!,” then they want none of it.

Or at least they want less.  Things aren’t that bad, egalitarian communitarian subjects assigned to the “geoengineering” condition said after reading the Nature-Science study.  We can still make “progress” by shutting down industry, turning off modern agricultural techniques, and simply retreating into a pre-modern style of economic life.

The scientists who wrote this study are biased, are relying on unproven computer models, are furnishing us with evidence that it would be precipitous to use in policymaking without a lot more corroboration etc etc.

Sound familiar? These are the tropes of skepticism—now from the mouths of those most inclined to embrace climate change.

Why? Because what they really care about, what motivates them, is not  “evidence” (they definitely lack the science literacy to understand it) but the social meaning of “I told you!” and “game over!” (“This changes everything” blah blah) that informs the default narrative on climate change.

Change the narrative and they change their tune.

Just look at your data: They show that, relative to their counterparts in the anti-pollution condition, egalitarian communitarians became skeptical about climate change science in the geoengineering condition.

That, plus the greater receptivity to the Nature-Science study data on the part of hierarch individualists, was why there was less polarization in the “geoengineering” condition!

The motivation generated by invoking liberal democratic ideals, in the form of narratives of limitless technological progress and the self-corrective, self-redemptive power of private orderings and markets, is offset by the resistance that doing so provokes in that portion of our factionalized body politic that has come to despise individual striving, technology, and markets.

You can’t inspire with these ideals!

Invoking them on behalf of one cause or another is a zero-sum game.

Synthesis: “Liberal democratic ideals can indeed inspire--if you just stop obsessively looking at them.”

I’m sorry.

You—both of you—are just playing a game.

What is this angst over the loss of the inspirational force of liberal democratic ideals, private markets included?  I mean really, what are you talking about?

Or better why are you focusing so much on talk—by such a small, small group of people who bother to theorize about these things?

Just look at the behavior of people—hierarch individualists, egalitarian communitarians, demoKrats/RepubliKans, “liberals,” “conservatives” or whatever.

Ambivalence about technology? Disaffection with consumption?

Ask Apple or Netflix or Amazon if that’s what their bottom lines tell them.

Yes, your buddy’s new “environmental studies major” girlfriend is telling you about how environmentally destructive new information technologies are. But she’s showing you on her iPad, which has a “Green Party for Bernie Sanders” iPad “skin”!

Against capitalism, Naomi Klein? Seriously? (Any chance you’ll show us your tax returns?)

Look.  This is a fashion statement:

And so is this:

Just as these opinions are:

 

Now here’s a worldview:

It’s real.

It’s anti-liberal.

It’s anti-market.

It’s anti-democratic.

And it’s just not on the table, in this society.

No one around here finds this genuine repudiation of liberal democratic ideals the least bit inspiring.

The only things on the table here are the tokens of a demeaning, petty symbolic status competition driven by intellectually juvenile, self-promoting conflict entrepreneurs. . . .

(Actually, you two guys both have garishly expensive but ridiculously dated sensibilities about fashion; in-your-face black is so 1999! You ain't no Johnny Cash, that's for sure.)

So don’t play that stupid game.

Stop looking & looking & looking at it.

“Messaging”/arguing liberal democracy doesn’t motivate people.

Living it does.


References:

Allen, M.R., Frame, D.J., Huntingford, C., Jones, C.D., Lowe, J.A., Meinshausen, M. & Meinshausen, N. Warming caused by cumulative carbon emissions towards the trillionth tonne. Nature 458, 1163-1166 (2009).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science 658, 192-222 (2015).

Lessig, L. The Regulation of Social Meaning. U. Chi. L. Rev. 62, 943-1045 (1995).

 Solomon, S., Plattner, G.-K., Knutti, R. & Friedlingstein, P. Irreversible climate change due to carbon dioxide emissions. Proceedings of the National Academy of Sciences 106, 1704-1709 (2009).

 

 

Sunday
Jul 3, 2016

Weekend update: Scarier than Nanotechnology? Episode # 532

ROBOMANDER!

He or she not only emulates the facial expression of a real salamander but also "slithers [locomotes] just like the real thing"!

So is it cute or scary?

Will it disgust those who fear guns or those who fear drones?!

Only time will tell ... & only time will tell whether it grows/morphs into . . .

ROBOFROG!

 

But for now, keep locomoting, little fellah!

 

Friday
Jul 1, 2016

The "Science Communication Problem": Two (of four) false starts

From On the Sources of Ordinary Science Knowledge--and Extraordinary Science Ignorance (in press):

2.2. A related false start blames scientists [for the Science Communication Problem--the failure of valid, compelling, widely disseminated science to dispel public disagreement over policy-relevant facts].

If members of the public aren’t converging on some policy-relevant facts despite the clarity of the evidence, the reason must be that scientists are failing to convey the evidence clearly enough (e.g., Brownell, Price & Steinman 2013). Or maybe they are speaking out too clearly, crossing the line from factfinder to policy advocate in a manner that compromises their credibility. Or perhaps what is compromising their credibility is how cagily they are hiding their advocacy by implausibly asserting that the facts uniquely determine particular policy outcomes (e.g., Fischhoff 2007).

While one can make a compelling normative case for either clearer (Olson 2009; Dean 2009) or less opinionated speech (Lempert, Groves & Fischbach 2013) by scientists, the idea that how scientists talk is the cause of the Science Communication Problem is palpably unconvincing. 

Again, all one has to do is look at science issues that don’t provoke persistent controversy. How about raw milk (Sci., Media, & Public Res. Group 2016)? Is there some reason to believe biologists have been doing a better job explaining pasteurization than climate scientists have been doing explaining the greenhouse effect? What folksy idioms or tropes did the former use that were so effective in quieting political polarization?  Or was it that they just were more genuinely neutral on whether people should drink their milk straight up from the cow’s udder?

Here, obviously, I’m relying on a pile of rhetorical questions in lieu of evidence. But the absence of evidence is my evidence.

No one has ever thought it worthwhile to “regress” the difference in public acceptance of, say, the scientific consensus on the dangers of ozone depletion and the scientific consensus on human-caused climate change on the clarity and policy-neutrality of the National Academy of Sciences’ respective reports on those issues (e.g., National Research Council 1976, 1982, 2008, 2011); or the difference between how rapidly and near-universally states adopted the proposed addition of the adolescent HBV vaccination and how persistently they have resisted adoption of the HPV vaccine (Kahan 2013) on the clarity and policy-neutrality of the American Academy of Pediatrics’ endorsements of both (American Academy of Pediatrics 1992, 2007).

Likely no one has because it’s clear to the naked ear that what these groups of scientists had to say on the uncontested members of these societal-risk pairs was no less obscure and no less opinionated than what they had to say about the contested members of them. But whatever the source of the omission, the inclusion of only contested cases in the “sample” necessarily defeats any valid inference from the “obscurity” or “partisanship” of how scientists speak to why any particular policy-relevant fact is affected by the Science Communication Problem.

2.3.  Another false-start account of the Science Communication Problem attributes it to growing resistance to the authority of science itself. Along with widespread disbelief in evolution, political conflict over global warming or other issues is variously depicted as evidence of either the “anti-science” sensibilities of a particular segment of the public or of a creeping anti-science strain in American culture generally (e.g., Frank 2013). 

Anyone who manages to divert his gaze from the Science Communication Problem for even an instant is sure to spy evidence massively out of keeping with this account. In its biennial Science Indicators series, for example, the National Science Foundation  (2016) includes survey measures that consistently evince effusive degrees of confidence in and support for science (Figure 2). These levels of support do not vary meaningfully across groups defined by their political outlooks or degrees of religiosity (Figure 3). Indeed, the levels of support are so high that it would be impossible for them to harbor practically significant levels of variance across groups of any substantial size.

For behavioral validation of these sensibilities, all one has to do is look up from one’s desk (away from one’s monitor) to see the care-free confidence individuals evince in science when making decisions both mundane (the ingestion of a pill to preempt hair loss) and vital (submission to radiation therapy for cancer).

 

Because this evidence is so obvious, it’s less likely proponents of the “age of denial” thesis don’t see it than that they see it as irrelevant. On this view, confusion over or outright rejection of the admittedly authoritative evidence that science has collected on human-caused climate change or human evolution just is evidence of a deficit in the cultural authority of science.

Fine. But at that point what started out as an explanation for the Science Communication Problem has transmuted, ironically, into a piece of evidence-impervious dogma that rules out contrary proof  by definitional fiat. 

References

American Academy of Pediatrics, Committee on Infectious Diseases. Universal hepatitis B immunization. Pediatrics 8, 795-800 (1992).

American Academy of Pediatrics. HPV Vaccine Does Not Lead to Increased Sexual Activity (2012), http://tinyurl.com/jrjx37o

American Academy of Pediatrics. Prevention of Human Papillomavirus Infection: Provisional Recommendations for Immunization of Girls and Women With Quadrivalent Human Papillomavirus Vaccine. Pediatrics 120, 666-668 (2007).

Brownell, S.E., Price, J.V. & Steinman, L. Science communication to the general public: why we need to teach undergraduate and graduate students this skill as part of their formal scientific training. J. Undergraduate Neuroscience Educ. 12, E6 (2013).

Dean, C. Am I making myself clear? : a scientist's guide to talking to the public (Harvard University Press, Cambridge, Mass., 2009).

Fischhoff, B. Nonpersuasive Communication about Matters of Greatest Urgency: Climate Change. Environmental Science & Technology 41, 7204-7208 (2007).

Frank, A. Welcome to the age of denial. N.Y. Times (Aug. 21, 2013), available at http://www.nytimes.com/2013/08/22/opinion/welcome-to-the-age-of-denial.html?_r=0.

“Green Goo: Nanotechnology Comes Alive.” ETCgroup.org (Feb. 2003), at http://www.etcgroup.org/content/green-goo-nanotechnology-comes-alive.

Lempert, R.J., Groves, D.G. & Fischbach, J.R. Is it Ethical to Use a Single Probability Density Function? (Santa Monica, CA: RAND Corporation, 2013).

National Research Council (U.S.). Board on Atmospheric Sciences and Climate. & National Research Council (U.S.). Committee on Stabilization Targets for Atmospheric Greenhouse Gas Concentrations. Climate stabilization targets : emissions, concentrations, and impacts over decades to millennia (National Academies Press, Washington, D.C., 2011).

National Research Council (U.S.). Committee on Chemistry and Physics of Ozone Depletion. Causes and effects of stratospheric ozone reduction, an update : a report (National Academy Press, Washington, D.C., 1982).

National Research Council (U.S.). Committee on Ecological impacts of Climate Change. Ecological impacts of climate change (National Academies Press, Washington, D.C., 2008).

National Research Council (U.S.). Panel on Atmospheric Chemistry. Halocarbons, environmental effects of chlorofluoromethane release (National Academy of Sciences, Washington, 1976).

National Science Foundation. Science and Engineering Indicators 2016 (Wash. D.C., 2016).

Olson, R. Don't be such a scientist : talking substance in an age of style (Island Press, Washington, DC, 2009).

Science, Media, and the Public Research Group (SCIMEP). Exploring Public Opinion and Risk Perceptions of Food in Wisconsin. University of Wisconsin-Madison. Madison, WI: Department of Life Sciences Communication (2016). Available from http://scimep.wisc.edu/projects/reports/

Vance, M.E., Kuiken, T., Vejerano, E.P., McGinnis, S.P., Hochella, M.F., Jr., Rejeski, D. & Hull, M.S. Nanotechnology in the real world: Redeveloping the nanomaterial consumer products inventory. Beilstein Journal of Nanotechnology (2015).

Thursday
Jun232016

Travel report: Is even the vote (not to mention all the voting) on Brexit irrational?...

So I was in Cambridge for the outstanding 7th Annual Risk Studies Summit at Cambridge University's world-class Centre for Risk Studies this week (now am in San Fran for a couple of days).

Will write a separate report on that event.  But today on this Brexit business ...

Boy were folks in Cambridge depressed!

People there at least seem to be of one mind on the issue --

But what really seems to bug them is that the issue is up for a vote, and it looks like it will be close (we'll find out presently if so). How can it be that so many people don't see things the way we do?, they are asking in consternation; what does that mean about the competence of our democratic electorate?

Sound familiar?

Well here's something familiar too:

The Online Privacy Foundation adapted the CCP Motivated Numeracy study to views on Brexit and in a study released yesterday observed the same perverse interactions between predispositions and reasoning proficiency.

I haven't had time to do more than casually peruse the results. But my sense is that there was in fact a finding of symmetry in the effect--that is, that higher numeracy magnified biased information processing regardless of the subjects' Brexit predispositions.

That doesn't mean that the occasion of the vote isn't entangled in reason-effacing pathologies--on the contrary, reason is being effaced--but it does mean (if I'm right about the results; I reserve the right to reassess on closer examination & will tell you if I do change my view on this) that it's not just one side being affected by them.

They have some pretty nice graphics, too, I'll say that:

But that's all I have time for right now! Others might want to chime in on what they think of the OPF study or generally about Brexit and rationality.

Of course, I'll be tuning in along w/ the 14 billion readers of this blog to get the results of the vote later today ...

 

Monday
Jun202016

Travel report: On unpolluted & polluted public health science communication environments--the cases of the HBV & HPV vaccines (presentation summary & slides)

A rational reconstruction of the talk I gave—in 15' 22" [I can talk 9x faster than the average man, woman, or trained circus animal can read; and I pride myself on the 30-min sentence]—at the truly amazing "How We Can Improve Health Science Communication" conference at U of Mich.'s amazing Center for Pol. Studies last week. Pretty sure the talks will be, or perhaps already are, online. Now am in the UK for the 7th Annual Cambridge Centre for Risk Studies Conference--will write a postcard on that soon! Slides for the U of M talk here.

1. As you know, my paper (in press) is on what I call the Science Communication Problem—the failure of valid, compelling, widely available scientific evidence to quiet public dispute over risks and like facts to which that evidence directly speaks.

The paper argues that our understanding of the Science Communication Problem is being distorted by fixation on conspicuous and spectacular instances of it—particularly the conflict over climate change.

Obviously,  empirical researchers should be focusing on how to decipher and ultimately dispel the Science Communication Problem. My claim, however, is that we won’t achieve these goals if we focus on instances of public dissensus to the near-total disregard of public consensus, which is far and away the norm on decision-relevant science.

A research program that never diverts its gaze from climate change and other instances of the Science Communication Problem distracts our attention from evidence that would reveal the falsity of many popular accounts of why we have the Problem. It also steers us toward prescriptions that won’t repair the dynamics that ordinarily generate public convergence on the best available evidence and could even, perversely, inflict more damage on them.

I won’t rehearse my argument in detail, though. Instead I will try to illustrate it with a specific example not discussed in the paper: public conflict over the HPV vaccine.

2. As I’m sure y’all know, the HPV vaccine confers an imperfect but still important degree of immunity to the human papillomavirus, an extremely common sexually transmitted disease that causes cervical cancer. 

The HPV vaccine also has the distinction of being the only childhood shot recommended for universal administration by the CDC that is not now on the schedule of mandatory school-enrollment immunizations in US states.  Legislative proposals to add it were defeated in dozens of states in the years from 2007 to 2012 as a result of deep, pervasive political controversy over the safety and effectiveness of the vaccine (Kahan 2013).

It’s tempting to think this outcome was inevitable. The vaccine is for a sexually transmitted disease and was to be administered, initially, to pre-pubescent girls as a condition of their eligibility to attend public schools. Of course, such a proposal would provoke controversy between groups that subscribe to opposing understandings of sexual morality, of parental sovereignty, and of the role of the state in securing individual well-being.

But that conclusion—that the HPV-vaccine conflict was inevitable—reflects exactly the tunnel vision I’m attacking.

The HPV vaccine was not the first vaccine aimed at a sexually transmitted disease to be recommended for universal administration to children. The HBV vaccine was.

The HBV vaccine confers immunity to the hepatitis B virus, which also causes cancer—of the liver.

The CDC proposed it be administered universally to adolescents (now to infants) just a few years before it proposed the same for the HPV vaccine. With no significant controversy, the HBV vaccine was incorporated into the mandatory school-enrollment immunization lists of nearly every U.S. state in a wave of approvals that crested just as the HPV-vaccine controversy began. At the time the HPV-vaccine controversy was raging, the HBV vaccine had a national uptake rate of over 90%--compared to the anemic 30% for the HPV vaccine today (Kahan 2013).

Thus, the HBV vaccine is in the “denominator”—the vast class of decision-relevant science issues on which there could be public controversy but isn’t. What it shares with all the other members of that class is the benefit of having become known to the public in an unpolluted science communication environment.

The science communication environment, I explain in the paper, consists of the sum total of processes and conventions generative of the cues that normally guide diverse individuals in aligning their behavior with the best available evidence.

The HBV vaccine, like every universal childhood immunization before it, traveled safely through these processes and conventions to the destination of overwhelming public confidence. The vaccine was considered and approved for inclusion in state universal-immunization schedules by non-political public health agencies that have been delegated this expert task by state legislatures. The vast majority of parents thus had occasion to learn of the vaccine for the first time when their consent to administer it was sought by their pediatricians, individuals they had selected b/c they trusted them, who advised that the vaccine was safe and a useful addition to the array of prophylactic practices that keep children healthy. Just as important, regardless of who they were—Republican or Democrat, devout evangelical or atheist or agnostic—all were afforded ample evidence that parents just like them were getting their kids vaccinated for HBV (Kahan 2013).

The decision to follow suit was a no brainer!

In contrast, parents and other citizens learned about the HPV vaccine in what I characterized as a polluted science communication environment. A polluted science communication environment is one in which some risk or other fact has become entangled in antagonistic social meanings that transform positions on it into badges of membership in and loyalty to opposing cultural groups. In those conditions, the same cues that normally guide diverse citizens into convergence on the best available evidence—including what others in their situation are doing and saying—instead drive them apart.

That’s what happened with the HPV vaccine. To try to establish a dominant position in the market before the approval of a competing HPV vaccine manufactured by its rival GlaxoSmithKline, Merck—manufacturer of Gardasil, the HPV shot approved by the FDA in 2006—orchestrated a nationwide campaign to add the vaccine to state mandatory school-enrollment schedules by statutes enacted by state legislatures.

What was normally a nonpolitical decision—the updating of state school-enrollment immunization lists—necessarily became hyper-politicized. People first learned of the vaccine not from their pediatricians but from Fox News, MSNBC, and other political news outlets, which hyped the repressive, in-your-face religious right vs. cosmopolitan, communism-of-women-and-children left showdown over the “STD shot for school girls,” a framing facilitated by Merck’s decision to seek fast-track FDA approval of a girls-only shot as part of its market-driven plan to sidestep the slower, less politicized approval process.

The result was the entanglement of the HPV vaccine in the sort of antagonistic meanings productive of the most debilitating of all known science-communication pathologies—identity-protective cognition (Kahan 2013).

3. Sarah Gollust and her collaborators (2010, 2013, 2014, 2015) have done a lot of outstanding work to identify and quantify the indicators of this entanglement and its disruptive impact on how ordinary members of the public recognize valid science.

The CCP research group did a study on this too back in 2007, just as the process that resulted in this disaster began to unfold. In it we tried to model how different “science communication environments”—unpolluted and polluted—could affect engagement with information on the vaccine’s risks and benefits.

The study (Kahan, Braman, Cohen, Gastil & Slovic 2010) examined how cultural cognition could shape perceptions of the HPV vaccine. Cultural cognition refers to the tendency of people, in effect, to conform their  own perceptions of risk and like facts to the ones that predominate among others who share their cultural identities.

We measured individuals’ cultural identities with two orthogonal attitudinal scales, hierarchy-egalitarianism and individualism-communitarianism, which can be viewed as forming four types of cultural “affinity groups.”

Next, we exposed them to competing arguments on the balance of risks and benefits of the HPV vaccine from fictional “public health experts.” The experts were ones we had determined in separate pretests would be tacitly identified by the experimental subjects as having the cultural identities featured by the cultural cognition worldview scheme.

By crossing the two arguments with the four advocates, we had 12 HPV “expert-argument matchups.” To assess their impact, we modeled how the proximity of the subjects’ actual cultural outlooks to the experts’ tacitly perceived ones affected subjects’ HPV-vaccine risk perceptions.

Our goal in simultaneously manipulating the array of experts and arguments, on the one hand, and the proximity of the experts’ cultural outlooks to the subjects’, on the other, was to simulate the impact of learning about the vaccine under conditions that would themselves vary in how readily they suggested the presence or absence of division of opinion between subjects’ own cultural groups and a rival one.
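For readers who want a concrete picture of this kind of modeling strategy, here is a minimal, purely illustrative sketch—simulated data and hypothetical variable names, not the CCP dataset or the study’s actual statistical model—of one way to regress risk perceptions on subjects’ cultural predispositions and their proximity to the culturally identifiable expert advancing the “risk” argument.

```python
# Illustrative sketch only: simulated data and hypothetical variable names,
# not the CCP dataset or the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000

# Subjects' scores on the two orthogonal worldview scales (standardized)
hierarchy = rng.normal(size=n)        # hierarchy-egalitarianism
individualism = rng.normal(size=n)    # individualism-communitarianism

# Hypothetical perceived worldview of the expert advancing the "risk" argument
# in the matchup each subject saw (one of four cultural types)
expert_h = rng.choice([-1.0, 1.0], size=n)
expert_i = rng.choice([-1.0, 1.0], size=n)

# Proximity of each subject's outlook to that expert (negative Euclidean distance)
proximity = -np.sqrt((hierarchy - expert_h) ** 2 + (individualism - expert_i) ** 2)

# Simulated risk perception: modest predispositions plus a proximity effect
risk = 0.3 * hierarchy - 0.2 * individualism + 0.5 * proximity + rng.normal(size=n)

df = pd.DataFrame(dict(risk=risk, hierarchy=hierarchy,
                       individualism=individualism, proximity=proximity))

# Regress risk perception on predispositions and subject-expert proximity
model = smf.ols("risk ~ hierarchy + individualism + proximity", data=df).fit()
print(model.params)
```

In a design like the one described above, the quantity of interest is how the alignment of experts and arguments moderates the relationship between subjects’ own outlooks and their risk perceptions.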

The simulation suggested that the impact was yuuuuuuuugely consequential.

In effect, there was a continuous polarization effect that tracked subjects' tacit impressions of group conflict.  

We knew, again  from pretesting, that subjects with particular identities had modest predispositions to form one or another impression of the safety and efficacy of the vaccine.

But under the condition least likely to suggest group conflict—the one in which subjects saw an alignment of culturally identifiable experts and arguments contrary to the one they would have expected to see if the issue were in fact dividing groups consistent with subjects’ own predispositions—polarization essentially disappeared.

Where in contrast they saw the alignment most suggestive of such conflict—the one in which an expert with their identity took “their side’s” position and one with opposing identity “the other side’s”—polarization dramatically escalated relative to the level predicted by the subjects’ predispositions alone.

These two points on the continuum reflect a pristine and a polluted science communication environment, respectively. The first was the environment in which American parents learned of the HBV vaccine; the latter the one in which they learned of the proposal to add the HPV vaccine to the schedule of mandatory school-enrollment immunizations.

4. This outcome likely could have been avoided. Lots of physicians and others were worried that the manner in which the HPV vaccine was being introduced to the public risked generating a political controversy (Kahan 2013).

But the question now is whether anyone is going to learn from this experience and from research on it, including our study and the penetrating set of studies by Gollust and her collaborators.

The answer, I think, will depend largely on whether members of the public health establishment avoid the mistake of “ignoring the denominator”—the relatively large number of cases in which the public doesn’t polarize but rather converges on the best available scientific evidence. Frankly, I think many of the proposals on how to overcome the continuing public ambivalence on the HPV vaccine reflect exactly that mistake.

One prominent proposal is to conduct a large-scale social marketing campaign promoting the vaccine. Thrusting the HPV vaccine back into the limelight in this way would risk exciting the very sorts of sensibilities—and more importantly reigniting the same sort of interest group activity —that bred the initial conflict. Indeed, this idea sounds more or less like a proposal to take out of mothballs the very advertisements that Merck bankrolled during its disastrous campaign to secure legislative mandates.

Just look at the denominator!

There wasn’t any social marketing campaign on the HBV vaccine, just as there wasn’t any on the myriad other science issues—from medical x-rays to nanotechnology—on which diverse members of the public have now aligned their behavior appropriately with science.

The mechanism, moreover, in those cases hasn't been the public's reflective processing of detailed bits of medical or other scientific information. It has been their attention to the cues emitted by the words and deeds of others, which evince confidence in the underlying science.

An unpolluted science communication environment is not bustling with broadcast messages. On the contrary, it comprises a host of persistent, low-key signals that assure people that doing things that rely on what is in fact the best available evidence is mundane, banal, normal.

The question is how to promote this sort of normality in people's engagement with the HPV vaccine.

I’ll give you a hint on the answer.

The one state, Rhode Island, that has adopted an HPV-vaccine school-enrollment mandate in the years since the initial political firestorm over this proposal abated did so without particular fanfare—by resort to the nonpolitical administrative process that is actually the norm for updating state mandatory vaccination regimes.

Parents in RI aren't now learning about the HPV vaccine from media reports on a contested legislative mandate for an STD shot for their pre-teen children; they didn’t learn about it from a weird and very likely counterproductive (Nyhan et al. 2014) social marketing campaign. 

Rather they are getting the information in the normal way—from talking to their pediatricians about it at the same time they discuss other immunizations that their children are required to get, and from seeing that other parents just like them, after having done the same, are making decisions to get their kids vaccinated for HPV—just as they are making the decision (at rates well over 90%) to do the same for HBV, MMR and all the other childhood diseases from which their kids and lots of others too are protected by universal immunizations.

It's a no brainer!

It's no big deal.

Refs

Gollust, S.E. & Cappella, J.N. Understanding public resistance to messages about health disparities. Journal of health communication 19, 493-510 (2014).

Gollust, S.E., Attanasio, L., Dempsey, A., Benson, A.M. & Fowler, E.F. Political and News Media Factors Shaping Public Awareness of the HPV Vaccine. Women's Health Issues 23, e143-e151 (2013).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy undermines support for state mandates on the human papillomavirus vaccine. Health Affair 29, 2041-2046 (2010).

Gollust, S.E., LoRusso, S.M., Nagler, R.H. & Fowler, E.F. Understanding the role of the news media in HPV vaccine uptake in the United States: Synthesis and commentary. Human vaccines & immunotherapeutics, 1-5 (2015).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. On the sources of ordinary science intelligence and ignorance. Oxford Handbook on the Science of Science Communication (in press).

Nyhan, B., Reifler, J., Richey, S. & Freed, G.L. Effective messages in vaccine promotion: A Randomized Trial. Pediatrics 133, e835-e842 (2014).

Thursday
Jun162016

The fourth of "four theses on ordinary science intelligence" ... a fragment

I posted the first two "yesterday"; if you want to read the third, then just download   On the Sources of Ordinary Science Knowledge and Ignorance . . .

IV. “The recognition problem [that generates conflict over decision relevant science] is a polluted science communication environment.”

The species of pattern recognition that ordinary members of the public normally use to recognize valid science enables them to get the benefit of substantially more scientific insight than any of them could possibly hope to genuinely comprehend. The evidence I described in the last section, however, evinces the disablement of this critical capacity. The final “thesis of ordinary science knowledge” identifies the source of this disablement: a polluted science communication environment.

Popper (1962b), I noted, attributes the acquisition and exercise of the capacity for science-recognition to immersion in a set of social processes and conventions. When I refer to the science communication environment, I mean to refer to the sum total of the processes and conventions that enable recognition of valid science in this way (Kahan 2015b). Any influence that impairs or impedes the operation of these social practices will necessarily degrade the power of free, reasoning citizens to recognize valid science, and hence to realize the full benefits of it. As a result, we may understand any such influence to be a form of pollution in the science communication environment.

The sorts of influences that can generate such disablement are no doubt numerous and diverse. But I will focus on one, which degrades an especially consequential cue of science validity.

Of all the sources of ordinary science knowledge, by far the most significant will be individuals’ interactions with others with whom they share cultural commitments or a basic understanding of the best way to live. The suggestion that direct communication with scientists is more consequential reflects either the First or Second False Start or both: individuals have neither the time nor the capacity to extract information directly from scientists. Much more accessible, and much more readily subject to meaningful interpretation, are the words and actions of other ordinary people, whose use of DRS vouches for their confidence in it as a basis for decision.

Indeed, it vouches as effectively when nothing is said about it as it does when something is. Nothing—including a new National Academy of Sciences expert consensus report (National Research Council 2016) that few members of the public will even be dimly aware exists—will assure an ordinary person that it is safe to eat GM corn chips as effectively as watching his best friend and his brother-in-law and his officemate eating them without giving the matter a second’s thought, the “all clear” signal that obviates the need for the vast majority of Americans even to bother learning that the corn chips they are eating contain GM foods (Hallman, Cuite & Morin 2013).

Of course, ordinary citizens don’t interact only with those with whom they share important cultural commitments. But they interact with them much more than they interact with others, for the simple reason that they find their company more congenial and more productive of all manner of profitable intercourse. They are also less likely to waste time squabbling with these people, and can also read them more reliably, distinguishing who really does know what science knows and who is only a poser. It is perfectly rational for them consciously to seek out guidance from such individuals, then, or to form unconscious habits of mind that privilege them as sources of guidance on what science knows (Kahan 2015b).

Figure 5 ... click on it: it will increase your proportion of fast-twitch to slow-twitch muscle fibers by 23.6%!

This process is admittedly insular, but it clearly works in the main. All of the major cultural groups in which this process operates are amply stocked both with members high in science comprehension and with intact social processes for transmitting what they know. No group that lacked these qualities—and that as a result regularly misled its members on the content of valid DRS—would last very long! On issues that don’t display the profile of the Science Communication Paradox, moreover, individuals highest in science proficiency do tend to converge on the best available evidence, and no doubt pull the other members of their groups along in their wake (Figure 5).

But such a system is vulnerable to a distinctive pathology: identity-protective cognition (IPC). IPC occurs when a policy-relevant fact that admits of empirical inquiry becomes entangled in antagonistic social meanings that transform positions on it into badges of identity in, and loyalty to, competing cultural groups (Kahan 2010, 2012). The cost under those conditions of forming factually incorrect beliefs on matters like whether humans are heating up the earth or whether fracking will deplete or contaminate drinking water sources is essentially zero: individuals’ personal views and actions are not consequential enough to affect the level of risk they face, or the likely adoption of ameliorating (or simply pointless or even perverse) regulatory responses. But given what beliefs on these subjects (correct or incorrect) have come to signify about the kind of person one is—about whose side one is on, in what has become a struggle for status among competing cultural groups—the personal cost of forming the wrong ones in relation to one’s own cultural identity could be high indeed (Kahan, Peters et al. 2012).

In such circumstances, individuals can be expected to use their reason to form and persist in beliefs that reliably vouch for their group identities regardless of whether those beliefs are factually accurate. This conclusion is consistent with numerous studies, observational (Bolsen & Druckman 2015; Bolsen, Druckman & Cook 2014; Gollust, LoRusso et al. 2015; Gollust, Dempsey et al. 2010) and experimental (Kahan, Braman et al. 2009, 2010), that link IPC to the Science Communication Problem’s signature forms of polarization. Indeed, individuals who enjoy the highest level of proficiency will display this form of motivated reasoning to the greatest extent, precisely because they will be the most adept at using their reasoning proficiency to secure the interest they share in forming identity-expressive beliefs (Kahan in press).

In sum, the antagonistic social meanings that trigger IPC are a toxic form of pollution in the science communication environment of culturally pluralistic societies. They disable individuals’ science-recognition capacities, not by degrading their reason but by conscripting it into the service of advancing their group’s cause in a demeaning form of cultural status competition. IPC does not create the role that social influences play in popular recognition of what science knows. Rather it corrupts them, transforming spontaneous, everyday social interactions from an engine of convergence on the best available evidence into a relentlessly aggressive agent of public dissensus over what scientific consensus really is.

References

Bolsen, T. & Druckman, J.N. Counteracting the Politicization of Science. Journal of Communication 65, 745-769 (2015).

Bolsen, T., Druckman, J.N. & Cook, F.L. The influence of partisan motivated reasoning on public opinion. Polit Behav 36, 235-262 (2014).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy undermines support for state mandates on the human papillomavirus vaccine. Health Affair 29, 2041-2046 (2010).

Gollust, S.E., LoRusso, S.M., Nagler, R.H. & Fowler, E.F. Understanding the role of the news media in HPV vaccine uptake in the United States: Synthesis and commentary. Human vaccines & immunotherapeutics, 1-5 (2015).

Hallman, W., Cuite, C. & Morin, X. Public Perceptions of Labeling Genetically Modified Foods. Rutgers School of Environ. Sci. Working Paper 2013-2001, available at http://humeco.rutgers.edu/documents_PDF/news/GMlabelingperceptions.pdf.

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

National Research Council (U.S.). Committee on Genetically Engineered Crops. Genetically Engineered Crops: Experiences and Prospects  (National Academies Press, Washington, D.C., 2016).

Popper, K.R. On the Sources of Knowledge and of Ignorance. in Conjectures and Refutations 3-40 (Oxford University Press London, 1962b).

 

 

Tuesday
Jun142016

Two of "Four Theses on Ordinary Science Knowledge" . . . a fragment

From On the Sources of Ordinary Science Knowledge and Ignorance . . .

I. “Individuals must accept as known more decision relevant science (DRS) than they can possibly understand or verify for themselves.”

The motto of the Royal Society is Nullius in verba, which translates literally into “take no one’s word for it.” But something—namely, any pretense of being a helpful guide to getting the benefits of scientific knowledge—is definitely lost in a translation that literal.

If you aren’t nodding your head violently up and down, then consider this possibility. You learn next week that you have an endocrinological deficit that can be effectively treated but only if you submit to a regimen of daily medications. You certainly will do enough research to satisfy yourself—to satisfy any reasonable person in your situation—that this recommendation is sound before you undertake such treatment.

But what will you do? Will you carefully read and evaluate all the studies that inform your physician’s recommendation? If those studies refer, as they inevitably will, to previous ones the methods of which aren’t reproduced in those papers, will you read those, too? If the studies you read refer to concepts with which you aren’t familiar, or use methods with which you have no current facility, will you enroll in a professional training program to acquire the necessary knowledge and skills? And once you’ve done that, will you redo the experiments—all of them; not just the ones reported in the papers that support the prescribed treatment but also those reported in any studies they relied on and extended—so you can avoid taking anyone’s word on what the results of such studies actually were as well?

Of course not. Because by the time you do those things, you’ll be dead. To live well—or just to live—individuals (including scientists) must accept much more DRS than they can ever hope to make sense of on their own.

Science’s way of knowing involves crediting as true only inferences rationally drawn from observation. This was—still is—a radical alternative to other ways of knowing that feature truths revealed by some mystic source to a privileged few, who alone enjoy the authority to certify the veracity of such insights. That system is what the founders of the Royal Society had in mind when they boldly formulated their injunction to “take no one’s word for it.” But it remains the case that to get the benefits of the distinctive, and distinctively penetrating, mode of ascertaining knowledge they devised, we must take the word of those who know what’s been ascertained by those means—while being sure not to take the word of anyone else (Shapin 1994).

II. “Individuals acquire the insights of DRS by reliably recognizing it.”

But how exactly does one do that? How do reasonable, reasoning people who need to use science for an important decision but who cannot plausibly figure out what science knows for themselves figure out who does know what science knows?

We can rule out one possibility right away: that members of the public figure out who genuinely possesses knowledge of what science knows by evaluating the correctness of what putative experts believe. To do that, members of the public would have to become experts in the relevant domain of knowledge themselves. We have already determined (by simply acknowledging the undeniable) that they lack both the capacity and time to do that.

Instead they have to become experts at something else: recognizing valid sources of science. They become experts at that, moreover, in the same way they become experts at recognizing anything else: by using a conglomeration of cues, which operate not as necessary and sufficient conditions, but as elements of prototypical representations (“cat,” “advantageous chess position,” “ice cream sandwich,” “expert”) that are summoned to mind by mental processes, largely unconscious, that rapidly assimilate the case at hand to a large inventory of prototypes acquired through experience.  In a word (or two words), they use pattern recognition (Margolis 1993).

This is equivalent to the answer that Popper gave (in an essay the title of which, and much more, inspired this one) to the near-identical question of how we come to know what is known by science. Popper’s target was a cultural trope of sensory empiricism that treated as “scientific knowledge” only that which one has observed for oneself. After impaling this view on the spear tips of a series of reductios, Popper explains that “most things we know”—i.e., know to be known to science—“we have learnt by example, by being told.” In appraising the conformity of any such piece of information to the qualities that invest it with the status of scientific knowledge, moreover, an individual must rely on “his knowledge of persons, places, things, linguistic usages, social conventions, and so on” (Popper 1962b, p. 30).

To be sure, powers of critical reasoning play a role. We must calibrate this faculty of recognition by “learning how to criticize, how to take and to accept criticism, how to respect truth” (Popper 1962b, p. 36), a view Baron (1993) and Keil (2012) both develop systematically.

But the objects of the resulting power to discern valid science are not the qualities that make it valid: those are simply far too “complex,” far too “difficult for the average person to understand” (Baron 1993, p. 193). What this faculty attends to instead are the signifiers of validity implicit in informal, everyday social processes that vouch for the good sense of relying on the relevant information in making important decisions (Keil 2010, 2012). Popper characterizes the aggregation of these processes as “tradition,” which he describes as “by far the most important source of our knowledge” (1962b, p. 36).

It is worth noting that although Popper here is referring to the process by which ordinary science knowledge disseminates to nonscientists, there is no reason to think that scientists are any less in need of a valid-knowledge recognition capacity, or that they acquire or exercise it in a fundamentally different way. Indeed, there is ample reason to think that it couldn’t possibly differ from the faculty that members of the public use to recognize valid science (Shapin 1994) aside from its being more finely calibrated to the particular insights and  methods needed to be competent in the production of the same (Margolis 1987, 1996).

“How do we gain our knowledge about how to analyze data?” ask Andrew Gelman and Keith O’Rourke (2015, pp. 161-62). By “informal heuristic reasoning,” they reply, of the sort that enables those immersed in a set of practices to see the correctness of an answer to a problem before, and often without ever being able to give a fully cogent account of, why.

References

Baron, J. Why Teach Thinking? An Essay. Applied Psychology 42, 191-214 (1993).

Gelman, A. & O’Rourke, K. Convincing Evidence. in Roles, Trust, and Reputation in Social Media Knowledge Markets: Theory and Methods (ed. E. Bertino & A.S. Matei) 161-165 (Springer International Publishing, Cham, 2015).

Keil, F.C. Running on Empty? How Folk Science Gets By With Less. Current Directions in Psychological Science 21, 329-334 (2012).

Keil, F.C. The feasibility of folk science. Cognitive science 34, 826-862 (2010).

Margolis, H. Patterns, thinking, and cognition : a theory of judgment (University of Chicago Press, Chicago, 1987).

Margolis, H. Paradigms and Barriers (University of Chicago Press, Chicago, 1993).

Margolis, H. Dealing with risk : why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, IL, 1996).

Popper, K.R. On the Sources of Knowledge and of Ignorance. in Conjectures and Refutations 3-40 (Oxford University Press London, 1962b). 

Shapin, S. A social history of truth : civility and science in seventeenth-century England (University of Chicago Press, Chicago, 1994).

 

 

Monday
Jun132016

On the Sources of Ordinary Science Knowledge and Ignorance (new paper)

A short little conference paper ...

 for this really cool conference:

 

First person to figure out the significance of the title before reading the paper wins a cool prize.

Wednesday
Jun082016

Slovic elected to NAS!! 

An inspiration to billions for his pathbreaking studies of public risk perceptions!

And to hundreds of millions of those same billions for his messy desk & office -- if he can overcome that handicap, so can we dammit!

UofO issued a press statement that focuses on the messy desk. Understandable, because that's the obvious human-interest angle on the story.

But here are some other, more scholarship-focused details on Slovic's career. 

Although Slovic proved an exception to the rule that scholars of public risk perception tend to do their best work by age 15, his early hybrid lab-field study strategies remain high on his list of achievements-- and of course generated the $millions that underwrote his later, even more influential research efforts:

 "No one checked my id," Slovic explained in connection with one of his early papers, a jr-high final project on the peculiar risk perceptions of Las Vegas gamblers.

Researcher Sarah Lichtenstein, pictured here with Slovic, opened a hedge fund based on early team research insights, only a fraction of which have ever been revealed to the public. She was quoted recently as saying, "Black & Scholes were real chumps. I've made 10^6 times as much money by keeping my behavioral-economics-based insights into derivative pricing to myself as they got for winning the Nobel Prize in economics..."

It should also be noted that Slovic kicked UO great Steve Prefontaine's ass in a marathon once. "He burned himself out by running the first 10K in 26:59," Slovic explained. "Of course, it also helped that I supplemented my diet with reindeer milk," he added with a wink.

Slovic thereafter published the data underlying his own training regimen, which was derived from study of billions of Boston Marathon entrants.

Wednesday
Jun082016

Cognitive dualism and beliefs as "dispositions to action" ... a fragment

From something I'm working--and working and working and working--on . . .

4.1. Beliefs as action-enabling dispositions

Imagine an astrophysicist who is also a mother and a member of a particular close-knit community. Like any other competent scientist (or at least any who examines macro- as opposed to quantum-physical processes!), she adopts a Laplacian orientation toward the objects of her professional study. The current state of the universe, she’ll tell you, is simply the penultimate state plus all the laws of nature; the penultimate state, in turn, is nothing more than the antepenultimate one plus all those same laws—and so forth and so on all the way back to the big bang (Laplace 1814). This understanding is gospel for her when she sets out to investigate one or another cosmic anomaly. She hunts for an explanation that fits this picture, for example, in trying to solve the mystery of massive black holes, the size of which defies known principles about the age of the universe (Armitage & Natarajan 2002). Nothing under the heavens—or above or within them—enjoys any special exemption from the all-encompassing and deterministic laws of nature.

In her personal life, however, she takes a very different view—at least of human beings. She explains—and judges—them on the assumption that they are the authors of their own actions. She attributes her children’s success at school, for example, to their hard work, and is filled with pride. She learns of the marital infidelity of a friend’s spouse and is outraged.

By viewing everything as determined by immutable, mechanistic laws of nature, on the one hand, and by judging people for the choices they make, on the other, is the scientist guilty of self-contradiction? Is she displaying a cognitive bias or some related defect in rationality?

Definitely not. She is using alternative forms of information-processing rationally suited to her ends.  One of her goals is to make sense of how the universe works: the view that everything, human behavior included, is subject to immutable, deterministic natural laws reliably guides her professional investigations. Another of her goals is to live a meaningful life in that part of the universe she inhabits. The form of information processing that attributes agency to persons is indispensable to her capacity to  experience the moral sensibilities integral to being a good parent and a friend.

The question whether there is a contradiction in her stances toward deterministic natural laws and self-determining people is ill-posed. As mental objects at least, these opposing stances don’t exist independently of clusters of mental states—emotions, moral judgments, desires, and the like—geared to doing the things she does with them. There is no contradiction in how she is using her reason if the activities that these forms of information processing enable are themselves consistent with one another—as they surely are.

The individual in this example is engaged in cognitive dualism. That is, she is rationally applying to one and the same object—the self-determining power of human beings—alternative beliefs, and corresponding forms of information processing, suited to achieving diverse but compatible goals.

We start with this example for two reasons. One is to emphasize the lineal descent of cognitive dualism from an older position—the philosophical dualism of Kant (1785, 1787, 1788). The two “beliefs” about human autonomy we attributed to the astrophysicist are the two perspectives toward the self—the phenomenal and noumenal—that Kant identified as action-enabling perspectives suited to bringing reason to bear on understanding how the world works, on the one hand, and living a meaningful life within it, on the other. Kant saw puzzling over the consistency of these two self-perspectives as obtuse because in fact the opposing orientations they embody don’t exist independently of the actions they enable—which clearly are fully compatible.

The other reason for starting with the astrophysicist was to remark on the ubiquity of this phenomenon. The opposing perspectives that we attributed to her—of the all-encompassing status of deterministic natural laws, on the one hand, and the uniquely self-governing power of human beings, on the other—are commonplace in modern, liberal democratic societies, whose members use the opposing “beliefs” these perspectives embody to do exactly the same things the astrophysicist does with them: make sense of the world and live in it.

Our astrophysicist both does and doesn’t exist.  She’s no one in particular but is in fact everyone in general.

There’s no need to confine ourselves to composites, however.  Decision scientists, it’s true, have paid remarkably little attention to cognitive dualism, misattributing to bounded rationality forms of information processing that aren’t suited for accurate perceptions of particular facts but that are for cultivating identity-expressive affective dispositions (Kahan in press).  In other scholarly domains, however, one can find a richly elaborated chronicle of the existence and rationality of the two forms of information processing that cognitive dualism comprises.

Developmental psychologists, for example, are very familiar with them. Children, they’ve shown, not only devote considerable cognitive effort to internalizing confidence- and trust-invoking forms of social competence. They also frequently privilege this form of information processing over ones that feature “factual accuracy.” E.g., a child will often choose to defer to an information source with whom she shares some form of social affinity over one whom she recognizes has more knowledge—not because she is biased (cognitively or otherwise) but because she has assimilated the kind of decision she is making in that situation to the stake she has in forging and protecting her connections with members of a social group (Elashi & Mills 2014; MacDonald, Schug, Chase & Barth 2013; Landrum, Mills & Johnston 2013).

Researchers have also documented the effect of cognitive dualism in studying how people who “disbelieve” in evolution can both comprehend and use what science knows about the natural history of human beings (Long 2011). Religiously oriented students, e.g., who don’t “believe in” evolution can learn it just as readily as those who do (Lawson & Worsnop 1992). The vast majority of them will make use of that knowledge simply to pass their school exams and then have nothing more to do with it (Hermann 2012); but that’s true for the vast majority of their fellow students who say they “believe” in evolution, too (Bishop & Anderson 1990).

Some small fraction of the latter (the evolution believers) will go on to do something in their life—like become a scientist or a physician—where they will use that knowledge professionally. But so will a small fraction of the former—the students who “don’t believe in” evolution (Hameed 2015; Everhart & Hameed 2013; Hermann 2012). 

These latter individuals—let us call them “science-accepting disbelievers”—are displaying cognitive dualism. Science-accepting disbelievers are professing—but not just professing, using—disbelief of evolution in their personal lives, where it is a component of a complex of mental states that reliably summons affect-driven behavior signifying their commitment to a particular community. But in addition to being people of that sort, they are, or aspire to become, science professionals who use belief in evolution to achieve their ends as such (Everhart & Hameed 2013).

When queried about the “contradiction,” science-accepting disbelievers respond in a way that evinces—affectively, if not intellectually—the same attitude Kant had about the contradiction between the phenomenal and noumenal selves. That is, they variously stare blankly at the interviewer, shrug their shoulders in bemusement, or explain—some patiently, others exasperatedly—that the evolution they “disbelieve in” at home and the one they “believe in” at work are, despite having the same referent, “entirely different things” because in fact they have no existence, in their lives, apart from the things that they do with them, which are indeed “entirely different” from one another (Everhart & Hameed 2013; Hermann 2012). In a word, they see the idea that there is a contradiction in their opposing states of belief and disbelief in evolution as obtuse.

References

Armitage, P.J. & Natarajan, P. Accretion during the merger of supermassive black holes. The Astrophysical Journal Letters 567, L9 (2002).

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Elashi, F.B. & Mills, C.M. Do children trust based on group membership or prior accuracy? The role of novel group membership in children’s trust decisions. Journal of experimental child psychology 128, 88-104 (2014).

Hameed, S. Making sense of Islamic creationism in Europe. Public Understanding of Science 24, 388-399 (2015).

Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without Believing in evolution. Evo Edu Outreach 5, 619-628 (2012).

Kahan, D.M. The Expressive Rationality of Inaccurate Perceptions. Behavioral & Brain Sciences (in press).

Kant, I. & Gregor, M.J. Groundwork of the metaphysics of morals (1785).

Kant, I. Critique of pure reason (1787).

Kant, I. Critique of practical reason (1788).

Landrum, A.R., Mills, C.M. & Johnston, A.M. When do children trust the expert? Benevolence information influences children's trust more than expertise. Developmental Science 16, 622-638 (2013).

Laplace, P. A Philosophical Essay on Probabilities (1814).

Long, D.E. Evolution and religion in American education : an ethnography (Springer, Dordrecht, 2011).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

MacDonald, K., Schug, M., Chase, E. & Barth, H. My people, right or wrong? Minimal group membership disrupts preschoolers’ selective trust. Cognitive Development 28, 247-259 (2013).

 

Friday
Jun032016

What does "believing/disbelieving in" add to what one knows is known by science? ... a fragment

From something I'm working on (and related to "yesterday's" post) . . .

4.3. “Believing in” what one knows is known by science

People who use their reason to form identity-expressive beliefs can also use it to acquire and reveal knowledge of what science knows. A bright “evolution disbelieving” high school student intent on being admitted to an undergraduate veterinary program, for example, might readily get a perfect score on an Advanced Placement biology exam (Hermann 2012).

It’s tempting, of course, to say that the “knowledge” one evinces in a standardized science test is analytically independent of one's “belief” in the propositions that one “knows.”  This claim isn’t necessarily wrong, but it is highly likely to reflect confusion.  

Imagine a test-taker who says, “I know science’s position on the natural history of human beings: that they evolved from an earlier species of animal. And I’ll tell you something else: I believe it, too.”  What exactly is added by that person’s profession of belief?

The answer “his assent to a factual proposition about the origin of our species” reflects confusion. There is no plausible psychological picture of the contents of the human mind that sees it as containing a belief registry stocked with bare empirical propositions set to “on-off,” or even probabilistic “pr=0.x,” states. Minds consist of routines—clusters of affective orientations, conscious evaluations, desires, recollections, inferential abilities, and the like—suited for doing things. Beliefs are elements of such clusters. They are usefully understood as action-enabling states—affective stances toward factual propositions that reliably summon the mental routine geared toward acting in some way that depends on the truth of those propositions (Peirce 1877; Braithwaite 1932, 1946; Hetherington 2011).

In the case of our imagined test-taker, a mental state answering to exactly this description contributed to his supplying the correct response to the assessment item. If that’s the mental object the test-taker had in mind when he said, “and I believe it, too!,” then his profession of belief furnished no insight into the contents of his mind that we didn’t already have by virtue of his answering the question correctly. So “nothing” is one plausible answer to the question of what was added when he told us he “believed” in evolution.

It’s possible, though, that the statement did add something. But for the reasons just set forth, the added information would have to relate to some additional action that is enabled by his holding such a belief. One such thing enabled by belief in evolution is being a particular kind of person. Assent to science’s account of the natural history of human beings has a social meaning that marks a person out as holding certain sorts of attitudes and commitments; a belief in evolution reliably summons behavior evincing such assent on occasions in which a person has a stake in experiencing that identity or enabling others to discern that he does.

Indeed, for the overwhelming majority of people who believe in evolution, having that sort of identity is the only thing they are conveying to us when they profess their belief. They certainly aren’t revealing to us that they possess the mental capacities and motivations necessary to answer even a basic high-school biology exam question on evolution correctly: there is zero correlation between professions of belief and even a rudimentary understanding of random mutation, genetic variance, and natural selection (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990).

Precisely because one test-taker’s profession of “belief” adds nothing to any assessment of knowledge of what science knows, another’s profession of “disbelief” doesn’t subtract anything. One who correctly answers the exam question has evinced not only knowledge but also her possession of the mental capacities and motivations necessary to convey such knowledge.

When a test-taker says “I know what science thinks about the natural history of human beings—but you better realize, I don’t believe it,” then it is pretty obvious what she is doing: expressing her identity as a member of a community for whom disbelief is a defining attribute. The very occasion for doing so might well be that she was put in a position where revealing her knowledge of what science knows generated doubt about who she is.

But it remains the case that the mental states and motivations that she used to learn and convey what science knows, on the one hand, and the mental states and motivations she is using to experience a particular cultural identity, on the other, are entirely different things (Everhart & Hameed 2013; cf. DiSessa 1982). Neither tells us whether she will use what science knows about evolution to do other things that can be done only with such knowledge—like become a veterinarian, say, or enjoy a science documentary on evolution (CCP 2016). To figure out if she believes in evolution for those purposes—despite her not believing in it to be who she is—we’d have to observe what she does in those settings.

All of these same points apply to the responses that study subjects give to a valid measure of their comprehension of climate science. That is, their professions of “belief” and “disbelief” in the propositions that figure in the assessment items neither add to nor subtract from the inference that they have (or don’t have) the capacities and motivations necessary to answer the questions correctly. Their respective professions tell us only who they are.

As expressions of their identities, moreover, their respective professions of “belief” and “disbelief” don’t tell us anything about whether they possess the “beliefs” in human-caused climate change requisite to action informed by what science knows. To figure out if a climate change “skeptic” possesses the action-enabling belief in climate change that figures, say, in using scientific knowledge to protect herself from the harm of human-caused climate change, or in voting for a member of Congress (Republican or Democrat) who will in fact expend even one ounce of political capital pursuing climate-change mitigation policies, we must observe what that skeptical individual does in those settings. Likewise, only by seeing what a self-proclaimed climate-change believer does in those same settings can we see if he possesses the sort of action-enabling belief in human-caused climate change that using science knowledge for those purposes depends on.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Braithwaite, R.B. The nature of believing. Proceedings of the Aristotelian Society 33, 129-146 (1932).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

CCP, Evidence-based Science Filmmaking Initiative, Study No. 1 (2016).

Demastes, S.S., Settlage, J. & Good, R. Students' conceptions of natural selection and its role in evolution: Cases of replication and comparison. Journal of Research in Science Teaching 32, 535-550 (1995).

DiSessa, A.A. Unlearning Aristotelian Physics: A Study of Knowledge-Based Learning. Cognitive science 6, 37-75 (1982).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without Believing in evolution. Evo Edu Outreach 5, 619-628 (2012).

Hetherington, S.C. How to know : a practicalist conception of knowledge (J. Wiley, Chichester, West Sussex, U.K. ; Malden, MA, 2011).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).
Tuesday
May312016

"According to climate scientists ..." -- WTF?! (presentation summary, slides)

Gave a talk at the annual Association for Psychological Science convention on Sat.  Was on a panel that featured great presentations by Leaf Van Boven, Rick Larrick & Ed O'Brien. Maybe I'll be able to induce them to do short guest posts on their presentations, although, understandably, they might be shy about becoming instant worldwide celebrities by introducing their work to this site's 14 billion readers.

Anyway, my talk was on the perplexing, paradoxical effect of the "according to climate scientists," or ACS, prefix (slides here).

As 6 billion of the readers of this blog know--the other 8 billion have by now forgotten b/c of all the other cool things that have been featured on the blog since the last time I mentioned this--attributing positions on the contribution of human beings to global warming, and the consequences thereof, to "climate scientists" magically dispels polarization in responses to climate science literacy questions.

Here's what happens when "test takers" (members of a large, nationally representative sample) respond to two such items that lack the magic ACS prefix:

[Figure: responses to the two items without the ACS prefix, by political outlook]
Now, compare what happens with the ACS prefix:

[Figure: responses to the same two items with the ACS prefix, by political outlook]
Does this make sense?

Sure. Questions that solicit respondents’ understanding of what scientists believe about the causes and consequences of human-caused global warming avoid forcing individuals to choose between answers that reveal what they know about what science knows, on the one hand, and ones that express who they are as members of cultural groups, on the other.

Here's a cool ACS prefix corollary:

Notice that the "Nuclear power" question was a lot "harder" than the "Flooding" one once the ACS prefix nuked (as it were) the identity-knowledge confound.  Not surprisingly, only respondents who scored the highest on the Ordinary Science Intelligence assessment were likely to get it right.

But notice too that those same respondents--the ones highest in OSI--were also the most likely to furnish the incorrect identity-expressive responses when the ACS prefix was removed.

Of course! They are the best at supplying both identity-expressive and  science-knowledge-revealing answers.  Which one they supply depends on what they are doing: revealing what they know or being who they are. 

The ACS prefix is the switch that determines which of those things they use their reason for.
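
For the statistically inclined, here is a minimal sketch--purely illustrative, not the actual CCP analysis--of how one might model that "switch." The data are simulated, and the variable names (osi, conserv, acs) are made up for the example; the point is just that the identity-expressive pattern shows up as an OSI × political-outlook interaction that the ACS prefix turns off.

```python
# Minimal illustrative sketch (simulated data, not CCP's): the ACS prefix as a
# "switch" that turns off the OSI x political-outlook interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
osi = rng.normal(0, 1, n)        # Ordinary Science Intelligence (z-score); hypothetical
conserv = rng.normal(0, 1, n)    # left-right outlook, + = more conservative; hypothetical
acs = rng.integers(0, 2, n)      # 1 = item carries the "according to climate scientists" prefix

# Assumed data-generating process: without the prefix (acs = 0), higher OSI
# magnifies the identity-expressive (incorrect) response among conservatives;
# with the prefix (acs = 1), OSI simply raises the chance of a correct answer.
logit_p = 0.5 * osi + acs * osi - (1 - acs) * (0.8 * conserv + 0.6 * osi * conserv)
correct = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame(dict(correct=correct, osi=osi, conserv=conserv, acs=acs))
model = smf.logit("correct ~ osi * conserv * acs", data=df).fit(disp=False)
print(model.summary())  # the osi:conserv and osi:conserv:acs terms carry the "switch"
```

If you fit something like this to real item responses, a negative osi:conserv coefficient (no prefix) offset by the three-way interaction (with prefix) is just the regression-table version of the story told above.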

Okay, but what about this: do respondents of opposing political orientations agree on whether climate scientists agree on whether human-caused climate change is happening?

Of course not!

[Figure: perceived scientific consensus on human-caused climate change, by political outlook]
In modern liberal democratic societies, holding beliefs contrary to the best available scientific evidence is universally understood to be a sign of stupidity. The cultural cognition of scientific consensus describes the psychic pressure that members of all cultural groups experience, then, to form and persist in the belief that their group’s position on a culturally contested issue is consistent with the best available scientific evidence.

But that's what creates the "WTF moment"-- also known as a "paradox":

[Slide: the paradox--partisans converge on what "climate scientists" say when the ACS prefix is supplied, yet remain polarized over whether climate scientists agree that human-caused climate change is happening. How can both be true?]
Um ... I dunno!

That's what I asked the participants--my fellow panelists and the audience members (there were only about 50,000 people, because we were scheduled against some other pretty cool panels)--to help me figure out!

They had lots of good conjectures.

How about you?

Wednesday
May252016

"Repugnance" & reasoned deciscionmaking ... a fragment

From something I'm working on ...

Disgust-motivated cognition of costs and benefits

“Repugnance” can figure in an agent’s instrumental reasoning in a number of ways. One would be as an argument in his or her utility function: repugnant states of affairs are ones worth incurring a cost to avoid; the repugnance of an act is a cost that must be balanced against the value of the otherwise desirable states of affairs that the action might help to promote (e.g., Becker 2013). Alternatively, repugnance might be viewed as investing acts or states of affairs with some “taboo” quality that makes them inappropriate objects of cost-benefit calculation (Fiske & Tetlock 1997). I will address a third possibility: that repugnance might unconsciously shape how actors appraise the consequences of actions or states of affairs.  Wholly apart from whatever disutility an agent might assign an act or state of affairs on account of its being repugnant, an agent is likely to conform his or her assessment of information about its risks and benefits to the aversion that it excites in him or her (Finucane, Alhakami, Slovic & Johnson 2000; Douglas 1966). I will survey the psychological mechanisms for this form of “disgust-motivated” reasoning and assess its implications for rational decisionmaking, individual and collective.
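
A stylized way to see the difference between these framings (notation mine, purely illustrative--not drawn from the works cited): in the first, repugnance is a cost term inside the utility calculation; in the second, repugnant options are simply struck from the choice set; in the third, the aversion leaks into the perceived benefits and costs themselves.

```latex
% Illustrative only; T denotes the set of repugnant ("taboo") acts.
% (1) Repugnance as a cost term in the utility function:
\[ U(a) \;=\; B(a) - C(a) - \rho\,\mathbf{1}[a \in T] \]
% (2) Repugnance as a constraint on the choice set:
\[ a^{*} \;=\; \arg\max_{a \,\in\, A \setminus T} \bigl(B(a) - C(a)\bigr) \]
% (3) Disgust-motivated cognition: the aversion biases the appraisals themselves,
%     e.g. \hat{B}(a) = B(a) - \delta(a), \quad \hat{C}(a) = C(a) + \delta(a).
\]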

Refs

Becker, G.S. The economic approach to human behavior (University of Chicago Press, 2013).

Douglas, M. Purity and Danger: An Analysis of Concepts of Pollution and Taboo (1966).

Finucane, M.L., Alhakami, A., Slovic, P. & Johnson, S.M. The Affect Heuristic in Judgments of Risks and Benefits. Journal of Behavioral Decision Making 13, 1-17 (2000).

Fiske, A.P. & Tetlock, P.E. Taboo Trade-offs: Reactions to Transactions That Transgress the Spheres of Justice. Political Psychology 18, 255-297 (1997).