
Friday, Nov 17, 2017

Where am I?... Part 2

Ummmm... this is a typical view of the podium when I give a talk...

  But you can watch/listen at https://www.youtube.com/watch?v=ktHtLIF8R6Q&feature=youtu.be.

Friday, Nov 17, 2017

Where am I?... Part 1

Just wanted to reassure the 14 billion readers of this blog that I haven't been kidnapped by aliens; I'm simply busy preparing for this --

Drop by if you get a chance!

Tuesday, Nov 14, 2017

Science curiosity, not science literacy, is prime virtue in Liberal Republic of Science (here are my slides; see any glitches or mistakes?) 

Talking in a few hours here at Northwestern University. Basic message/title of presentation: "Comprehension without curiosity is no virtue, and curiosity without comprehension no vice." Sums up the quadrillions of studies finding that cognitive proficiency magnifies political polarization and the less-than-a-year-old research suggesting that science curiosity helps to offset this perverse dynamic.

If you hurry & look through, you can still advise me on what to say up until about noon US eastern time!

Watch out for your ears-- we're ready for a fookin good show!

Wednesday, Nov 8, 2017

Midweek update: teaching criminal law--voluntary manslaughter

I usually start class (sessions of which are 120 mins. this semester at Harvard Law) with a mini-lecture that synthesizes the material and discussion from the immediately preceding class. The one below recaps voluntary manslaughter:

Voluntary manslaughter.  Last time we looked at voluntary manslaughter.  There are two formulations.  The common law version mitigates murder to manslaughter when an offender who intentionally kills does so in the heat of passion brought on by adequate provocation and without “cooling time.”  The Model Penal Code, in contrast, mitigates when a homicide that would be murder is committed as a result of an extreme emotional or mental disturbance for which there is a “reasonable excuse.”

On the first day of this course, I made the point that disputes about what the law means are frequently disputes about two things: (1) what it ought to mean; and (2) who ought to say what it means.  Our discussion of the common law voluntary manslaughter doctrine yesterday nicely illustrated this.

What, for example, does “adequate provocation” mean?  Is adultery adequate provocation?  How about a same-sex overture?  The answer can’t be found in the plain meaning of the doctrine.  Rather, it must be constructed according to some theory about what the doctrine is all about.  And because it must be constructed someone must do the constructing.  So what ought the law mean and who ought to say?

We considered a number of specific theories about why the voluntary manslaughter doctrine exists.  I suggested that we call one the voluntarist view: impassioned killers are treated leniently, on this account, because passion compromises their volition, and thus reduces culpability for their acts.  The problem with this hypothesis, though, is that it can’t explain why there is a provocation requirement at all, much less why the provocation must be adequate.  As cases like Anderson illustrate, people don’t experience uncontrollable, homicidal impulses only when provoked.


 

Sunday, Nov 5, 2017

Weekend update: does transparency help with this overplotting problem?

Another example of how to use the transparency functionality of Stata 15.

Compare this ...

 ... with this:

Which one is better? Why? Other ideas?
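For the "14 billion readers" who want to experiment but don't have Stata 15, here is a minimal sketch of the same idea in Python/matplotlib, where the alpha argument plays the role of Stata's transparency setting (the data are simulated, not from any CCP study):

import numpy as np
import matplotlib.pyplot as plt

# Simulate heavily overplotted data: thousands of points crowd the same region
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 5000)
y = x + rng.normal(0, 0.5, 5000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)

# Opaque markers: dense regions collapse into one solid blob
ax1.scatter(x, y, s=10, color="navy")
ax1.set_title("opaque (alpha = 1)")

# Semi-transparent markers: point density shows through as darker shading
ax2.scatter(x, y, s=10, color="navy", alpha=0.1)
ax2.set_title("semi-transparent (alpha = 0.1)")

plt.show()

The transparent version encodes an extra variable for free: how dark a region is tells you how many observations are stacked there.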

 

Friday, Nov 3, 2017

Next stop (not counting weekly trips to Cambridge, MA) 

Northwestern University, Evanston, Ill., Nov. 14:

 

Wednesday, Nov 1, 2017

How many talks did I give last yr? And how about yr before that, & yr before that ...

Huh... Well just think of how many more I would have done if I weren't so shy.

 

Tuesday, Oct 31, 2017

#scicomm question: what communicates essential information more effectively--unfilled overlapping PDFs or filled/transparent ones?

Been having more fun with Stata 15's new transparency feature, but was wondering whether maybe I'm neglecting communication effectiveness in favor of some other aesthetic consideration.

So tell me: Which looks better--this

 or this?

 

Both convey the same info on how "high numeracy" & "low numeracy" study subjects do on a covariance problem, the numbers of which are manipulated to make the right answers either identity-affirming or identity-threatening.  What they are both illustrating, then, is that high numeracy subjects lose nearly all their accuracy edge when they analyze covariance data that contradicts their political presuppositions and thus threatens their cultural identity.

So assume an attentive reader comes across this point in the text and is directed to look at the Figures to make the point even more vivid.  Does one of these graphic reporting methods work better than the other?
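For concreteness, here is a rough Python/matplotlib analogue of the two options (the two normal curves are made-up stand-ins for the high- and low-numeracy score distributions, not the actual study data):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-4, 4, 400)
hi = norm.pdf(x, loc=0.5, scale=0.9)    # stand-in for "high numeracy" scores
lo = norm.pdf(x, loc=-0.5, scale=1.1)   # stand-in for "low numeracy" scores

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Option 1: unfilled outlines -- the overlap region stays uncluttered
ax1.plot(x, hi, color="crimson", label="high numeracy")
ax1.plot(x, lo, color="steelblue", label="low numeracy")
ax1.set_title("unfilled outlines")
ax1.legend()

# Option 2: filled with transparency -- the overlap shows as a blended color
ax2.fill_between(x, hi, color="crimson", alpha=0.4, label="high numeracy")
ax2.fill_between(x, lo, color="steelblue", alpha=0.4, label="low numeracy")
ax2.set_title("filled + transparency")
ax2.legend()

plt.show()

One design consideration: filling makes each distribution's mass easier to grasp at a glance, but the blended overlap is a third color readers must decode; the outline version avoids that at the cost of looking sparse.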

Monday, Oct 30, 2017

More evidence of AOT's failure to counteract politically motivated reasoning 

Notice 2 things about this Figure:

1st, Stata 15 can now do transparencies!

2nd, this is even more evidence that “Actively open-minded thinking,” as commonly measured, furnishes no meaningful protection against politically motivated reasoning.

The results here are based on the same experimental design featured in the CCP Motivated Numeracy paper (Kahan, Peters et al. 2017). Subjects were asked what inference was supported by data presented in a 2x2 contingency table.  In one condition, the data were described as results of an experiment to test a new skin-rash cream.  In another, the data were described as results of an experiment to determine whether banning the carrying of concealed handguns in public increased or decreased crime.
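For readers who haven't seen the task, here is the logic of the covariance problem in a few lines of Python. The cell counts below are hypothetical, chosen only to illustrate the trap; they are not the figures used in the study:

# Hypothetical 2x2 contingency table (NOT the study's actual cell counts):
# rows = condition, columns = outcome
table = {
    "used cream": {"rash better": 200, "rash worse": 100},
    "no cream":   {"rash better": 80,  "rash worse": 20},
}

for condition, cells in table.items():
    better, worse = cells["rash better"], cells["rash worse"]
    # The correct move is to compare the *proportion* improving in each row...
    print(f"{condition}: {better / (better + worse):.0%} improved")

# ...not the raw counts. More cream users got better in absolute terms
# (200 vs. 80), but a smaller share of them improved (67% vs. 80%), so these
# data actually support the inference that the cream made things worse.

The intuitive response is to seize on the big number in the top-left cell; getting the right answer requires the effortful ratio comparison.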

In Motivated Numeracy, we found that individuals of opposing ideological orientations were substantially more likely to get the correct answer in the gun-control version if the data, properly interpreted, supported (or “affirmed”) the position associated with their ideology; when the data, properly interpreted, did not support their ideological group's position, individuals were more likely to select the wrong answer.

What’s more, the effect was strongest among the subjects highest in numeracy, an aptitude to reason well with quantitative information.

The data here are pretty similar to those in Motivated Numeracy, except now it's “Actively Open-minded Thinking” (AOT) that is being shown to interact with ideology.  On the effectiveness of the new skin cream, individuals who score highest on a standard measure of AOT do better than those who score low, regardless of their political outlooks.

In the “gun control” condition, those who score highest on AOT do only slightly better on the version of the problem that presents ideologically congenial data. 

In the version that presents threatening or ideologically uncongenial evidence, however, those who score highest on AOT do no better than those who score the lowest.

This is not what you’d expect.

AOT is supposed to counteract ideologically motivated reasoning along with kindred forms of “my side bias” (e.g., Stanovich 2013; Baron 1995). Accordingly, in the "identity threatened" condition, one would expect those highest in AOT to do just as well as their high-scoring counterparts in the "identity affirmed" condition. One would expect, too, that the performance of those high in AOT would not show a level of degradation (-30%, +/- 14%) comparable to the degradation in performance shown by low-scoring AOT subjects (-23%, +/- 10%).

But it didn’t work this way here.

It also didn’t work that way in a study that Jon Corbin and I did last year, in which we showed that those highest in AOT, far from converging, were even more politically polarized on the danger posed by climate change (Kahan & Corbin 2016).

What to make of this?

Well, again, one possibility is that the version of AOT we are using simply is not valid.  I don’t buy that, really, because the measure has been validated in various settings (e.g., Baron et al. 2015).

The other possibility, which I think is more plausible, is that AOT--like Numeracy  (Kahan, Peters et al. 2017), Cognitive Reflection (Kahan 2013), and Ordinary Science Intelligence (Kahan 2016)—magnifies identity-protective reasoning where certain policy-relevant facts have become entangled with group-based identities (Kahan 2015).  Basically, where that’s the case, people use their critical reasoning proficiencies, of which AOT is clearly one, not to figure out the truth but rather to cement their status and relations with other group members (Stanovich & West 2007, 2008; Kahan & Stanovich 2016).

But I don’t want to be closed-minded toward other possibilities. 

So what do you think?

Refs

Baron, J. Myside bias in thinking about abortion. Thinking & Reasoning 1, 221-235 (1995).

Baron, J., Scott, S., Fincher, K. & Emlen Metz, S. Why does the Cognitive Reflection Test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition, 265-284 (2015).

Kahan, D. & Stanovich, K. Rationality and Belief in Human Evolution (2016), CCP/APPC Working paper available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2838668.

Kahan, D.M. & Corbin, J.C. A note on the perverse effects of actively open-minded thinking on climate-change polarization. Research & Politics 3 (2016).

Kahan, D.M. ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J Risk Res, 1-22 (2016).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017).

Stanovich, K.E. Why humans are (sometimes) less rational than other animals: Cognitive complexity and the axioms of rational choice. Thinking & Reasoning 19, 1-26 (2013).

Stanovich, K. & West, R. On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning 14, 129-167 (2008).

Stanovich, K.E. & West, R.F. Natural myside bias is independent of cognitive ability. Thinking & Reasoning 13, 225-247 (2007).

 

Sunday, Oct 29, 2017

Weekend update--it's baaaaaack! Our paper explaining why N=55, 95% liberal, is not a valid sample for "replicating" our "motivated numeracy" study

After a brief hiatus (primarily so we could reanalyze the data after using multiple imputation to handle missing data), our working paper responding to Ballarini & Sloman (2017) is back up at SSRN.

As you likely will recall, B&S reported their "failure to replicate" our motivated numeracy study. Our response points out that B&S's N=55 student sample, which was 95% liberal (not a joke), had inadequate statistical power to replicate our study, which, in addition to employing a design very different from B&S's, used a large (N = 1,111), nationally representative sample.

In addition to our paper, you can (re)read Mark Brandt's very reflective blog post on our paper and B&S's.

I'm still baffled about B&S's motivations for making such a weakly supported claim.  Very weird . . . .

 

 

 

Friday, Oct 27, 2017

In Cambridge, MA w/ nothing to do this afternoon? Come see cool panel discussion


Thursday, Oct 19, 2017

How & how not to do replications--guest post by someone who knows what he is talking about

Getting the Most Out of Replication Studies

by Mark Brandt

Ok. At this point, I think most people know that replications are important and necessary for science to proceed. This is what tells us if a finding is robust to different samples, different lab groups, and minor differences in procedure. If a finding is found but never replicated, is it really a finding? Most working scientists would say no (I hope).

But not all replications are created equal. What makes a convincing replication? A few years ago, with a lot of help from collaborators, we sat down to figure it out (at least for now; see the open access paper). A convincing replication is rigorously conducted by independent researchers, but there are also five more ingredients.

1. Carefully defining the effects and methods that the researcher intends to replicate: If you don’t know what effect you are exactly trying to replicate, it is difficult to carefully plan the study and evaluate the replication attempt. This ingredient determines nearly all that follow.

2. Following as exactly as possible the methods of the original study (including participant recruitment, instructions, stimuli, measures, procedures, and analyses): The closer the replication is to the original attempt, the easier it is to infer if the original finding is confirmed (or not). Although replications that are less close or even just conceptually similar help establish the generalizability of an effect (see this nice paper), the differences make it impossible to tell if differences in results are due to the instability of the underlying effect or to differences in the design.

3. Having high statistical power: Statistical power is basically an indicator of whether your study has a chance of detecting the effect you plan to study. Statisticians will give you more precise definitions and some branches of statistics (e.g., Bayesian) don’t really have the concept. Putting these things aside, the general idea is that you should be able to collect enough data to have precise enough estimates to make strong conclusions about the effect you’re interested in. In most of the domains I work in, power is most easily increased by including more people in the sample; however, it’s also possible to increase power by increasing the number of observations in other ways (e.g., using a within-subjects design with multiple observations per person). The best way to ensure high statistical power in a replication will depend on the precise design of the original study.

4. Making complete details about the replication available, so that interested experts can fully evaluate the replication attempt (or attempt another replication themselves): To best evaluate whether a replication is a close replication attempt, it is useful to make all of the details available for external evaluation. This transparency can illuminate potential problems with either the replication attempt or the original study (or both). It is also beneficial to pre-register the replication study, including the criteria that will be used to evaluate the replication attempt.

5. Evaluating replication results, and comparing them critically to the results of the original study: Don’t just put the results out there. Interpret them too! How are the results similar to the original study and how are they different? Are they statistically similar or different? And what could possibly explain the differences? How to evaluate replication results has become its own industry, with a lot of food for thought (see this paper).

This is all fine, you might say. But how does this work in practice? Well, for one thing we’ve developed a form to help people plan and pre-register replication studies. It’s available in our paper, it’s available here (and in French!), and it’s built into the Open Science Framework. It’s also useful to examine how it doesn’t work in practice.

Here we turn to a paper that Ballarini and Sloman (B&S) presented at the meeting of the Cognitive Science Society (paper is here). B&S were testing out a debiasing strategy and in that context state that they “failed to replicate Kahan et al.’s ‘motivated numeracy effect’.” To evaluate this claim we need to know what the motivated numeracy effect is and if the B&S study is a convincing replication of it.

A quick summary of the original Kahan et al paper (paper is here): a large, representative sample of Americans evaluated a math problem incorrectly when it conflicted with their prior beliefs and this was the case primarily for people high in numeracy (the people who are good at math). The design is entirely between subjects, with participants completing a scale of political beliefs, a numeracy scale, and a word problem that did or did not conflict with their beliefs. There is more to the paper; go read it.

B&S wanted to see how they could debias people within the context of the Kahan paradigm by presenting people with competing interpretations of the data in the math problem. They found that highly numerate people were more likely to adjust their interpretation based on this competing information. This is interesting. They also did not find any evidence that highly numerate people are more likely to misinterpret a belief-contradicting math problem.

It is important to state that this study was conducted by independent scholars and appears to have been conducted rigorously. This is a step in the right direction, as it provides evidence relevant to the motivated numeracy effect that is independent of the Kahan et al. group. But did they fail to replicate?

It is actually hard to say. The first problem is that B&S used a within-subjects paradigm where participants repeatedly received math problems of the sort used by Kahan (and a few other types). This is different from the between-subjects design of the original study and so a problem with Ingredient #2. Although within- and between-subject designs can tap into similar processes, it is up to the replication authors to show that this procedural change does not affect the psychological processes at work.

But I do not think this is the biggest problem; if it’s powerful then the motivated numeracy effect should be able to overcome some of these design changes.

The second and more consequential problem is that whereas the original study used a very large sample (N = 1111) representative of Americans, B&S use a small sample (N = 66) of students (that is further reduced for procedural reasons). This smaller sample of students makes it less likely that they will have participants with diverse political views (1% were conservative) and a range of numeracy scores. In designs with measured predictors it is necessary to have adequate range or else there won’t be enough people who are truly low numerate or conservative to test hypotheses about these subpopulations.

The small sample size also makes it impossible to estimate the size and the direction of these effects with confidence (a problem with Ingredient #3). B&S point to the within-subjects part of their design as evidence of its statistical power, but that part of the design does not address the low power for the between-subjects part of the design. That is, although they might have the necessary power to detect differences between the math problems (the within part of the design), they do not have enough people to make strong inferences about the between part of the design (numeracy and politics).
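To put rough numbers on that point, here is a back-of-the-envelope power calculation in Python using statsmodels. The effect size is a generic "medium" one (Cohen's d = 0.5), chosen for illustration rather than estimated from either paper, and the simple two-group comparison is a simplification of both actual designs:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (d = 0.5)
# with 80% power at alpha = .05 in a simple two-group comparison:
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group needed: {n_needed:.0f}")             # ~64 per group

# Power actually available with ~33 per group (roughly what an N = 66
# sample split between two conditions provides):
power_have = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=33)
print(f"power with n = 33 per group: {power_have:.2f}")  # ~0.52

And that is before carving the sample further by measured predictors such as ideology and numeracy, which is what the motivated numeracy hypothesis requires; detecting interactions of that kind typically demands far larger samples still.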

So, at the end of this, what does the B&S study tell us about the motivated numeracy effect? Not much. The sample isn’t big enough or diverse enough for these research questions (and the difference in design is an additional complication). If B&S are just interested in the debiasing aspect, then I think that these data are useful, but they should not be framed as a replication of Kahan et al; the study is not set up to convincingly replicate the motivated numeracy effect. To their credit, B&S are more circumspect in interpreting the replication aspect of their study in the discussion (in contrast to their summary in the abstract). Hopefully most readers will go beyond the abstract…

Why do I care and why should you? Replications are important, but poor replications, just like poor original studies, pollute the literature. I don’t want to discourage people from replicating Kahan et al’s work, but when it is replicated it is important for researchers to carefully recreate the conditions of the study so that we can be confident in the evidence obtained in the study. A representative sample of America is expensive, but there are other ways of recruiting participants with diverse political backgrounds (e.g., collect data from other university campuses). We need a literature of high quality studies so that we can make informed theoretical and practical decisions. Without this it will be difficult to know where to begin.

Self-replicating otters!

Wednesday, Oct 18, 2017

Are smart people ruining our democracy? What about curious ones? ... You tell me!

Well, what are your answers?  Extra credit, too, if you can guess what mine are based on the attached slides.

Extra extra credit if you can guess the answers of the Yale psychology students (undergrad) to whom I gave a lecture yesterday.  The lecture featured three CCP studies (as reported in the slides), which were presented in this order:

1. Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government, Behavioural Public Policy 1, 54-86 (2017). This paper reports experimental results showing that subjects high in numeracy use that aptitude to selectively credit and dismiss complex data depending on whether those data support or challenge their cultural group’s position on disputed empirical claims (e.g., permitting individuals to carry concealed guns in public makes crime rates go up—or down). 

The study illustrates motivated system 2 reasoning (MS2R), a dynamic analyzed in this forum “yesterday.”™

2. Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012). Again supportive of MS2R, this study presents observational (survey) data suggesting that individuals high in science comprehension are more likely than individuals of modest comprehension to use that capacity to reinforce beliefs congenial to their membership in identity-defining cultural groups.

3. Kahan, D.M., Landrum, A., Carpenter, K., Helft, L. & Hall Jamieson, K., Science Curiosity and Political Information Processing, Political Psychology 38, 179-199 (2017).  The study reported on in this paper does three things.  First, it walks readers through the development of a science curiosity scale created to predict individual engagement (or lack thereof) with high-quality science documentaries. Second, it shows that increases in science curiosity tend to stifle rather than exaggerate partisan differences on societal risk assessments.   Finally, it presents experimental data that suggest science curiosity creates an appetite to expose oneself to novel evidence that runs contrary to one’s political predispositions—an unusual characteristic that could account for the brake that science curiosity applies to cultural polarization.

There were also cameo appearances by two other papers: first, Kahan, D.M., Climate-Science Communication and the Measurement Problem, Advances in Political Psychology 36, 1-43 (2015), which shows that high science comprehension promotes polarization on some policy-relevant facts (e.g., ones relating to the risks of climate change, gun control, and fracking) but convergence on others (e.g., ones relating to nanotechnology and GM foods); and second, Kahan, D.M., Ideology, Motivated Reasoning, and Cognitive Reflection, Judgment and Decision Making 8, 407-424 (2013), which uses experimental results to show that individuals high in cognitive reflection are more likely than individuals of modest cognitive reflection to react in a closed-minded way to evidence that a rival group’s members are more open-minded than are members of one’s own group.

So there you go. Now answer the questions! 

Monday, Oct 16, 2017

Motivated System 2 Reasoning (MS2R): a Research Program


1. MS2R in general.  “Motivated System 2 Reasoning” (MS2R) refers to the affinity between cultural cognition and conscious, effortful information processing. 

In psychology, “dual process” theories distinguish between two styles of reasoning.  The first, often denoted as “System 1,” is rapid, intuitive, and emotion-pervaded. The other—typically denoted as “System 2”—is deliberate, conscious, and analytical.

The core of an exceedingly successful research program, this conception of dual process reasoning has been shown to explain the prevalence of myriad reasoning biases. From hindsight bias to confirmation bias; from the gambler’s fallacy to the sunk-cost fallacy; from probability neglect to the availability effect—all are positively correlated with over-reliance on heuristic, System 1 reasoning.  By the same token, an ability and disposition to rely instead on the conscious, effortful style associated with System 2 predicts less vulnerability to these cognitive miscues.

A species of motivated reasoning, cultural cognition refers to the tendency of individuals to selectively seek out and credit evidence in patterns that conform their perceptions of risk and other policy-relevant facts to the positions associated with membership in their cultural group. Cultural cognition can generate intense and enduring forms of cultural polarization where such groups subscribe to conflicting positions.

Because in such cases cultural cognition is not a truth-convergent form of information processing, it is perfectly plausible to suspect that it is just another form of bias driven by overreliance on heuristic, System 1 information processing.

But this conjecture turns out to be incorrect.

It’s incorrect not because cultural cognition has no connection to System 1 styles of reasoning; it does, among individuals who are accustomed to that form of heuristic information processing.

Rather, it is wrong (demonstrably so) because cultural cognition does not abate as the ability and disposition to use System 2 styles of reasoning increase.  On the contrary, those members of the public who are most proficient at System 2 reasoning are the most culturally polarized on societal risks such as the reality of climate change, the efficacy of gun control, the hazards of fracking, the safety of nuclear power generation, etc.

MS2R comprises the cognitive mechanisms that account for this startling result.

2. First generation MS2R studies. Supported by a National Science Foundation grant (SES-0922714), the existence and dynamics of MS2R were established principally through three studies:

  • Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012). The study reported in this paper directly tested the competing hypotheses that polarization over climate change risks was associated with over-reliance on heuristic System 1 information processing and that such polarization was associated instead with science literacy and numeracy.  The first conjecture implied that as those aptitudes, which are associated with basic scientific reasoning proficiency, increased, polarization among competing groups should abate.  In fact, exactly the opposite occurred, a result consistent with the second conjecture, which predicted that those individuals most adept at System 2 information processing could be expected to use this reasoning proficiency to ferret out information supportive of their group’s respective positions and to rationalize rejection of the rest. These effects, moreover, were strongest among the subjects who achieved the highest scores on the science literacy and numeracy measures.

  • Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013). The experimental study in this paper demonstrated how proficiency in cognitive reflection, the aptitude most commonly associated with use of System 2 information processing, magnified polarization over the validity of evidence of the relative closed-mindedness of individuals who took one or another position on the reality of human-caused climate change: where scores on the Cognitive Reflection Test were asserted to be higher among “climate skeptics,” ideologically right-leaning subjects found the evidence that the CRT predicts open-mindedness much more convincing than did individuals who were left-leaning in their political outlooks; where, in contrast, CRT scores were represented as being higher among “climate believers,” left-leaning subjects found the evidence of the validity of the CRT more convincing than did Republicans.

  • Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017). This paper reports an experimental study on how numeracy interacts with cultural cognition.  Numeracy is an aptitude to reason well with quantitative data and to draw appropriate inferences from such information.  In the study, it was shown that individuals who scored highest on a numeracy assessment test were again the most polarized, this time on the inferences to be drawn from data from a study on the impact of gun control: where the data, reported in a standard 2x2 contingency table, supported the position associated with their ideologies (either that gun control reduces crime or that it increases crime), subjects high in numeracy far outperformed their low-numeracy counterparts. But where the data supported an inference contrary to the position associated with subjects’ political predispositions, those highest in numeracy performed no better than their low-numeracy counterparts on the very same covariance-detection task.

3. Second generation studies.  The studies described above have given rise to multiple additional studies seeking to probe and extend their results.  Some of these studies include:

4. Secondary sources describing MS2R

 

 

Saturday, Oct 14, 2017

Curious post-docs sought for studies of science curiosity

Great opportunity for budding science of science communication scientists!

Wednesday, Oct 11, 2017

Toward a taxonomy of "fake news" types

Likely this has occurred to others, but as I was putting together my umpteenth conference paper (Kahan 2017b) on this topic it occurred to me that the phrase “fake news” conjures different pictures in the minds of different people. To avoid misunderstanding, then, it is essential, I now realize, for someone addressing this topic to be really clear about what sort of “fake news” he or she has in mind.

Just to get things started, I’m going to describe four distinct kinds of communications that are typically conflated when people talk of “fake news”:

1. “Fake news” proper

2. Counterfeit news

3. Mistaken news

4. Propaganda

1. What I principally had in mind as “fake news” when I wrote my conference papers was the sort of goofy “Pope endorses Trump,” “Hillary linked to sexual slavery trade” stuff.  My argument (Kahan 2017a) was that this sort of “fake news” likely has no impact on election outcomes because only those already predisposed—predestined even—to vote for Trump were involved in meaningful trafficking of such things.  (Most of the bogus news reports were pro-Trump).

These forms of fake news were being put out by a group of clever Macedonians, who were paid commissions for clicks on the commercial advertisements that ringed their made-up stories. Rather than causing people to support Trump, support for Trump was causing people to get value from reading bogus materials that either trumped up Trump or defamed Hillary.   Because support for Trump was in this sense emotionally and cognitively prior to enjoyment and distribution of these stories, the result in the election would have been no different had the stories not existed.

2. But there are additional species of “fake news” out there.  Consider the fake advertisements purchased by Russia on Facebook, Twitter, Google etc. These were no doubt designed in a manner to avoid giving away their provenance, and no doubt were professionally crafted to affect the election outcome.  I’m inclined to think they didn’t, but all I have to go on are my priors; I haven’t seen any studies that disentangle the impact of these forms of “fake news” from the Macedonian specials.

I would call this class “counterfeit news” because of its attempt to purchase the attention and credibility that real news commands.

3. Next we should have a category for what might be called “mistaken news.”  The category consists of stories that are produced by legitimate news sources but that happen to contain a material misstatement.

Consider, e.g., the report by Dan Rather near the end of the 2004 presidential campaign that he was in possession of a letter that suggested candidate George W. Bush had arranged for a draft deferment to avoid military service in the Vietnam War. Rather had been played by an election dirty trickster.  This error (for which Rather was exiled to retirement) was likely a product of sloppy reporting compounded by wishful thinking.  At least when they are promptly corrected, instances of “mistaken news” like this, I’m guessing, are unlikely to have any real impact (but see Capon & Hulbert 1973; Hovland & Weiss 1951-52; Nyhan & Reifler 2010).

4. Finally, there is out-and-out propaganda. The aim of this practice is not merely to falsify the news of the day but to utterly annihilate citizens’ capacity to know what is true and what is not about their collective life (cf. Stanley 2015).  If Trump hasn’t reached this point yet, he is certainly well on his way.

So this is my proposal: that we use “fake news,” “counterfeit news,” “mistaken news,” and “propaganda” to refer, respectively, to the four types of deception that I’ve canvassed.

If someone comes up with a better set of names or even a better way to divide these misleading types of news, that’s great.

The only point I’m trying to make is that we do need to draw these kinds of distinctions. We need them, in part, to enable empirical researchers to figure out what they want to measure and to communicate the same to others.

Just as important, we need distinctions like these to help citizens recognize what species of non-news they are encountering, and to deliberate about the appropriate government response to each.

 References

Capon, N. & Hulbert, J. The sleeper effect: an awakening. Public Opin Quart 37, 333-358 (1973).

Hovland, C.I. & Weiss, W. The Influence of Source Credibility on Communication Effectiveness. Public Opin Quart 15, 635-650 (1951-52).

Kahan, D.M. Misconceptions, Misinformation & the Logic of Identity Protective Cognition. CCP Working Paper No. 164 (2017a), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2973067.

Kahan, D.M. & Peters, E. Misinformation and Identity Protective Cognition. CCP Working Paper (2017b), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046603.

Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).

Stanley, J. How Propaganda Works (Princeton University Press, 2015).

Tuesday, Oct 10, 2017

Where am I this time? ... The National Academy of Sciences Decadal Survey of Social and Behavioral Sciences for Applications to National Security

What will I be saying? The usual ...

Watch for slides & talk summary.

Monday, Oct 9, 2017

Experts & politically motivated reasoning (in domain & out)

The impact of identity-protective cognition & like forms of motivated reasoning on experts, particularly when those experts are making in-domain judgments, is a big open question deserving more research.

Here's a recent study addressing this question:

Eager to know what the 14 billion readers of this blog think about it.

Saturday, Oct 7, 2017

Weekend up(back) date: What is the American gun debate about?

From Kahan, D.M. & Braman, D. More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions. U. Pa. L. Rev. 151, 1291-1327 (2003) pp. 1291-92:

Few issues divide the American polity as dramatically as gun control. Framed by assassinations, mass shootings, and violent crime, the gun debate feeds on our deepest national anxieties. Pitting women against men, blacks against whites, suburban against rural, Northeast against South and West, Protestants against Catholics and Jews, the gun question reinforces the most volatile sources of factionalization in our political life. Pro- and anti-control forces spend millions of dollars to influence the votes of legislators and the outcomes of popular elections. Yet we are no closer to achieving consensus on the major issues today than we were ten, thirty, or even eighty years ago.

Admirably, economists and other empirical social scientists have dedicated themselves to freeing us from this state of perpetual contestation. Shorn of its emotional trappings, the gun debate, they reason, comes down to a straightforward question of fact: do more guns make society less safe or more? Control supporters take the position that the ready availability of guns diminishes public safety by facilitating violent crimes and accidental shootings; opponents take the position that such availability enhances public safety by enabling potential crime victims to ward off violent predation. Both sides believe that “only empirical research can hope to resolve which of the[se] . . . possible effects . . . dominate[s].” Accordingly, social scientists have attacked the gun issue with a variety of empirical methods—from multivariate regression models to contingent valuation studies to public-health risk-factor analyses.

Evaluated in its own idiom, however, this prodigious investment of intellectual capital has yielded only meager practical dividends. As high-quality studies of the consequences of gun control accumulate in number, gun control politics rage on with unabated intensity. Indeed, in the 2000 election, their respective support for and opposition to gun control may well have cost Democrats the White House and Republicans control of the U.S. Senate.

Perhaps empirical social science has failed to quiet public disagreement over gun control because empirical social scientists have not yet reached their own consensus on what the consequences of gun control really are. If so, then the right course for academics who want to make a positive contribution to resolving the gun control debate would be to stay the course—to continue devoting their energy, time, and creativity to the project of quantifying the impact of various gun control measures.

But another possibility is that by focusing on consequences narrowly conceived, empirical social scientists just aren’t addressing what members of the public really care about. Guns, historians and sociologists tell us, are not just “weapons, [or] pieces of sporting equipment”; they are also symbols “positively or negatively associated with Daniel Boone, the Civil War, the elemental lifestyles [of] the frontier, war in general, crime, masculinity in the abstract, adventure, civic responsibility or irresponsibility, [and] slavery or freedom.”  It stands to reason, then, that how an individual feels about gun control will depend a lot on the social meanings that she thinks guns and gun control express, and not just on the consequences she believes they impose.  As one southern Democratic senator recently put it, the gun debate is “about values”—“about who you are and who you aren’t.”  Or in the even more pithy formulation of another group of politically minded commentators, “It’s the Culture, Stupid!”

Tuesday, Oct 3, 2017

Nano-size examination of misinformation & identity protective reasoning

Another invited conference paper, this short, 1700-word version of "Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition" (3000 words) is perfect for those readers in your family who are on a constrained "time budget" . . .

From left to right: "Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition"; "Misinformation and Identity-protective Cognition"; Flynn, D.J., Nyhan, B. & Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Advances in Political Psychology 38, 127-150 (2017).