Here's another! (session 7 was on "science of science filmmaking"; session 8 on "climate, part 2" ... I'll post those "tomorrow"™.
Any surprises here? (In case you don't remember, relatively religious people have more "confidence" in "those running" the "science community" than in "those running" "organized religion.")
Here's the model on which the 2nd figure is based.
Careful now . . . .
Like “yesterday's”™ item (WHICHSCI), this one (SCIIMP1) made a one-time-only appearance in the 2006 GSS.
Companion items asked whether it was important that "the people who do [science] have advanced degrees in their field"; that "conclusions [be] based on solid evidence"; that "researchers carefully examine different interpretations, even ones they disagree with"; and that "the results are consistent with religious beliefs." Responses were all skewed in patterns that reflected a pro-science sensibility. Check out the GSS codebook if you are curious about the toplines on those items.
Here is the regression model, in case anyone is interested.
I'm still stuck in the GSS can. Actually, it's more like a bag of potato chips; you can't stop munching until you've emptied the thing.
But anyway, the 2006 GSS had an item that solicited respondents' attitudes toward "industry" vs. "university scientists."
Well, "we all know" that conservatives hold university scientists in contempt for their effeminate, elitist ways & that liberals regard industry scientists as shills.
But here's what GSS says about partisanship & industry vs. university scientists . . . .
Maybe this ...
is more informative? Or will people w/ different priors just disagree about the practical significance of this difference in the probability of finding industry scientists less reliable than university ones?...
p.s. Here's the ordered logistic regression model that was used to generate the probability density distributions reported in 2nd figure.
Never fails! My posts from “yesterday”™ and “the day before yesterday”™ have lured a real, bona fide expert to come forward. The expert in this case is William Hallman, the Chair of the Department of Human Ecology and faculty member of the Department of Nutritional Sciences and of the Bloustein School of Planning and Public Policy at Rutgers University. He is also currently a Resident Scholar in the Science of Science Communication initiative at the Annenberg Public Policy Center.
As you probably suspect, I am sympathetic to your argument that because so few Americans really know anything about them, asking people about the safety of GM crops is problematic in general. So, starting with the premise that most Americans are unlikely to have a pre-formed opinion about the safety of GM crops before being asked to think about the issue in the survey, I think that we should assume that most of the answers given to the question are impressionistic, and likely influenced by the wording of the question itself. Which is:
“Do you think that modifying the genes of certain crops is: ‘Extremely dangerous for the environment . . . Not dangerous at all for the environment.’”
I agree with the idea suggested by @Joshua, that because the risk targeted is “danger to the environment,” it is plausible that the differences seen are because conservative Republicans may be less likely to endorse the idea that anything is dangerous for the environment. If you were to ask about risks to human health, you might get a different pattern of responses.
But that’s not all. The root of the question refers to crops. That is, to plants/agriculture, and not to food. So, are conservative Republicans also less likely to view crops/agriculture as a threat to the environment in general? My guess is ‘probably,’ but I don’t have good data to back up that assertion.
But wait, there’s more. . . The question doesn’t actually refer to GMOs. It asks whether modifying the genes of crops is dangerous. I don’t know where the specific question falls in the overall line of questioning. Were there questions about GMOs preceding this? If not, participants may not have grasped that the question was really about Genetic Engineering. Technically, you can “modify the genes of certain crops” through standard crossbreeding/hybridization methods. It is, in part, why the FDA has never liked the broad term “Genetic Modification.” If the question had asked, “Do you think the genetic engineering of crops is dangerous for the environment,” I think you would get a different pattern of responses. As a side note, I have ancient data that shows that more than a decade ago, Americans were as likely to approve of foods produced through crossbreeding as they were of foods produced through genetic engineering.
Here are some simple data analyses that reflect how a wider range of GSS science-attitude variables relate to perceptions that GM crops harm the environment, and how that relationship is affected by partisanship.
I’d say they tell basically the same story as my initial analysis of CONSCI, the item that measures “confidence” in “those running” the “scientific community”: basically, that higher, pro-science scores on these measures are associated with less concern about GM crops. This is so particularly among right-leaning respondents; indeed, left-leaning ones don't really move at all when one looks at risk perceptions in relation to the composite "proscience" scale.
There is also a small zero-order correlation (r(1189) = -0.12, p < 0.01) between GENEGEN—the GSS’s 2010 GM risk perception item—and the composite left-right scale that I constructed, which is coded so that higher scores denote greater conservatism.
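For anyone who wants to check a zero-order correlation like this against their own GSS extract, here is a minimal sketch in Python. The variable names and the simulated data are mine, not the GSS's; I'm using a made-up slope of -0.12 as a stand-in for the reported effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1191  # "r(1189)" reports n - 2 degrees of freedom

# Simulated stand-ins for the GSS variables (names are illustrative):
# left_right -- composite conservatism scale; genegen -- latent GM-risk index
left_right = rng.standard_normal(n)
genegen = -0.12 * left_right + rng.standard_normal(n)

# zero-order (Pearson) correlation between the two
r = np.corrcoef(left_right, genegen)[0, 1]
```

With real data you would substitute the extracted GSS columns for the simulated arrays; the sign convention (higher = more conservative) has to match the scale coding described above.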
All of this is out of keeping with the usual finding of a lack of partisan influence on GM food risks. I have reported many times that there is no partisan effect when GM food risks are measured with the Industrial Strength Risk Perception measure. Surveys conducted by other opinion analysts using different measures have shown the same thing.
So what’s going on?
One possibility, suggested by loyal listener @Joshua, is that the GSS’s GM-concern item looks at people’s anxiety about the impact of GM crops on “the environment” as opposed to the safety of consuming GM foods. The “environmental risk” cue is enough information for the public—which is otherwise pretty clueless (“cueless”?) about GM risks—to recognize how the issue ought to cohere with their political outlooks.
Seems persuasive to me . . . but what do you—the 14 billion daily readers of this blog—think?!
Oh, one more thing: I did a quick search and found only one paper that addresses partisanship and the GSS’s “GENEGEN” item. If others know of additional ones, please let me & all the readers know.
Oh, one more "one more" thing. Here are the raw data:
Because of the number of observations (i.e., people) in the "Confidence in science community" & the "Proscience scale" graphs, it's difficult to discern the relative proportions of "< avg" & "> avg" (i.e., below the mean on left_right scale & above it) along various points on the x- & y axes.
One way to try to deal with that is by using transparencies, which vary the intensity of the colors as observations pile up on top of each other. These differences convey information on the density of observations of right- and left-leaning respondents at different x-/y-axis coordinates.
I did that for "Confidence in science community" & "Proscience." (I also jittered the observations—added a little random noise—to spread them out a bit, a technique I used yesterday, too.)
That's a little better, right? But the proportions of red and blue at 1.0 and 0.0 are still hard to detect, so I would for sure still include the locally weighted regression lines, as I did here.
Indeed, one might reasonably argue for dropping the scatterplot & going only with loess plots. Locally weighted regression, in my view, is definitely the best way to enable observation of the "raw" data, particularly when there is an over-plotting problem of this sort.
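For anyone who wants to reproduce this kind of display, here is a minimal sketch of the two ideas (jittering and locally weighted smoothing) in Python with numpy. The smoother is a toy tricube-weighted local average, not the full loess algorithm, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def jitter(x, amount=0.05):
    """Add small uniform noise so over-plotted points separate visually."""
    x = np.asarray(x, dtype=float)
    return x + rng.uniform(-amount, amount, size=x.shape)

def local_mean(x, y, grid, bandwidth=0.5):
    """Crude locally weighted average with tricube weights -- a toy
    stand-in for real loess/lowess, which also fits local polynomials."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = []
    for g in grid:
        w = np.clip(1 - (np.abs(x - g) / bandwidth) ** 3, 0, 1) ** 3
        out.append(float(np.sum(w * y) / np.sum(w)))
    return np.array(out)
```

In practice you would plot `jitter(x)` against `jitter(y)` with semi-transparent markers (e.g., `alpha=0.2` in matplotlib) and overlay the smoothed curve evaluated on a fine grid.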
So here’s something fun.
I found it while scraping the bottom of the barrel of a can of GSS data that I had consulted to prepare remarks on the role of “trust in science/of scientists” that I gave at a National Academy of Sciences event a couple of weeks ago.
The GSS has a variety of measures that could be construed as measuring “trust” of this sort. The most famous one is its “Confidence in Institutions” query. It solicits how much “confidence” respondents have in “those running” various institutions, including the “science community.” The item permits one of three responses: “hardly any,” “only some,” and “a great deal.”
The wording is kind of odd, but the item is a classic, having been included in every GSS study conducted since 1974. One can find dozens of studies that use it for one thing or another, including proofs of partisan differences in trust of science.
Well, it turns out that in 2010, the GSS asked this question:
Do you think that modifying the genes of certain crops is:
1. Extremely dangerous for the environment
2. Very dangerous
3. Somewhat dangerous
4. Not very dangerous
5. Not dangerous at all for the environment.
So I decided to see what would happen when one uses trust in science, as measured by the institutional confidence item, to predict responses to the genetically modified crops item. I also included a political orientation measure formed by aggregating responses to the GSS’s 7-point liberal-conservative ideology item and its 7-point party orientation item.
In my analysis, I measured the probability that a respondent would select a response from 1-3—the ones that evince a perception of danger.
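For readers curious how one gets from an ordered-logit fit to that probability: Pr(response in 1-3) is the cumulative logistic evaluated at the third cutpoint minus the linear predictor. A hedged sketch, in which every coefficient is made up for illustration rather than taken from the actual fit:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical ordered-logit estimates (illustrative only, not the GSS fit).
# Four cutpoints partition the latent "not dangerous" index into 5 categories.
cutpoints = [-2.0, -0.5, 1.0, 2.5]
b_conf = 0.4      # confidence in science community pushes toward "not dangerous"
b_conserv = 0.3   # conservatism pushes the same way, per the pattern above

def pr_danger(confidence, conserv):
    """Pr(response in 1-3), i.e. latent index below the third cutpoint."""
    xb = b_conf * confidence + b_conserv * conserv
    return inv_logit(cutpoints[2] - xb)
```

With these made-up coefficients, `pr_danger(2, 1)` (high confidence) comes out lower than `pr_danger(0, 1)` (low confidence), matching the direction of the effect described below.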
Here’s what I found:
I hadn’t expected partisan identity to matter at all, given that surveys now typically find no meaningful correlation between attitudes toward GM foods and party identity. You can see, though, that there is a bit of a partisan effect here, with right-leaning respondents inclined to find less danger in GM crops as their “confidence” in the “scientific community” increases. For a "conservative Republican," the estimated difference in the probability of finding GM crops to be environmentally dangerous at the "great deal" vs. "hardly any" response levels is -18% (+/- 15% at the 0.95 level of confidence).
Left-leaning respondents, in contrast, don't budge a centimeter as their science-community “confidence” increases (-3%, +/- 12%).
What should we make of this, if anything?
I’m not sure, actually. I still am inclined to see responses to GM food questions as meaningless—a survey artifact—given how few people are actually aware of what GM foods are. Obviously, here, the level of concern expressed is way out of line with people’s behavior in consuming prodigious amounts of GM food.
We also don’t have any decent validation of the “confidence in science” measure: I’ve never encountered it being used to explain other attitudes in a way that would give one confidence that it really measures trust in science. The same goes, moreover, for all the other “trust” measures in the GSS, which consistently find high levels of trust in science among politically diverse citizens.
But maybe this finding should nudge me in the other direction?
You tell me what you think & maybe I’ll revise my view!
In the raw data, left-leaning subjects budge at least a little as they become more "confident" in "those running" the "science community." Checking the fit of the ordered logistic regression model, it turns out that the model violated the "parallel lines" test-- that is, the impact of moving from one increment to the next on the "confidence" variable was not uniform across the levels of the outcome variable (perceived risk of GM crops).
So here's a multinomial model -- one that fits the predictor variable to each level of the outcome variable separately. It at least looks like a better representation of the raw data.
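To see what relaxing the parallel-lines constraint buys, here is a sketch of how a multinomial logit produces category probabilities. Every number below is invented for illustration; the point is that each response category gets its own slope on the predictor, which the proportional-odds model forbids:

```python
import math

# Hypothetical multinomial-logit coefficients (illustrative, not the fit):
# categories 1-4 each get their own intercept and confidence slope,
# with category 5 ("not dangerous at all") as the reference.
intercepts = {1: -1.5, 2: -1.0, 3: 0.2, 4: 0.5}
slopes     = {1: -0.6, 2: -0.3, 3: -0.1, 4: 0.05}  # note: not uniform

def category_probs(confidence):
    """Predicted probability of each response category at a given
    level of science-community confidence."""
    scores = {k: math.exp(intercepts[k] + slopes[k] * confidence)
              for k in intercepts}
    scores[5] = 1.0  # reference category has linear predictor 0
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}
```

Because the slopes differ across categories, the effect of a one-unit increase in "confidence" can be large for "extremely dangerous" and near zero for "not very dangerous," which is exactly the kind of non-uniformity the parallel-lines test flagged.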
Digging deeper w/ more GSS science-attitude items here.
This is pretty cool. I'm going to go out on a limb & predict it will eventually be bought out by one of the on-line dating services, which will then offer one-stop shopping for scholars looking for professional & personal matches.
Maybe you can get the gist of the experiment in pictures? If not, you can always read the (open-access) paper (Kahan, D.M., Jamieson, K.H., Landrum, A. & Winneg, K., Culturally antagonistic memes and the Zika virus: an experimental test, J. Risk Res. 20, 1-40 (2017)).
Toxic memes ...
Affective impacts ...
From something I'm working on . . . .
It is almost surely a mistake to think that highly divisive conflicts over science are attributable to general distrust of science or scientists. Most Americans—regardless of their cultural identities—hold scientists in high regard, and don’t give a second’s thought to whether they should rely on what science knows when making important decisions. The sorts of disagreements we see over climate change and a small number of additional factual issues stem from considerations particular to those issues (National Research Council 2016). The most consequential of these considerations are toxic memes, which have transformed positions on these issues into badges of membership in and loyalty to competing cultural groups (Kahan et al 2017; Stanovich & West 2008).
We will call this position the “particularity thesis.” We will distinguish it from competing accounts of how “attitudes toward science” relate to controversy on policy-relevant facts. We’ve already adverted to two related ones: the “public ambivalence” thesis, which posits a widespread public unease toward science or scientists; and the “right-wing anti-science” thesis, which asserts that distrust of science is a consequence of holding a conservative political orientation or like cultural disposition. . . .
National Research Council 2016. Science Literacy: Concepts, Contexts and Consequences. A Report of the National Academies of Sciences, Engineering, and Medicine. Washington DC: National Academies Press.
Stanovich, K. & R. West, 2008. On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning, 14, 129-67.
As I mentioned, in putting together a show for the National Academy of Sciences, I took a look at the 2014 GSS data.
Here's a bit more of what's in there:
Actually, the left-hand panel is based on GSS 2010 data. But I hadn't looked at that particular item before.
The right-hand panel is based on GSS 2008, 2010, 2012, & 2014. It is an update of a data display I created before the 2014 data (the most recent that has been released by the GSS) were available.
If, as is reasonable, you want confirmation that the underlying scales I've constructed are reliably measuring the disposition that we independently have good reason to associate with religiosity, here is how these survey respondents answer the GSS's "evolution" item:
I still find it astonishing that there isn't a more meaningful difference in the attitudes of religious & non-religious respondents on the "science attitude" measures. Guess I had a case of WEKS on this.
There are much more serious destructive forces to worry about . . . .
The use of likelihood ratios here -- "climate change made maximum temperatures like those seen in January and February at least 10 times more likely than a century ago" -- makes this pretty good #scicomm, in my view.
Climate-science communicators typically get tied in knots when they address the issue of whether a particular event was “caused” by global warming. The most conspicuous, & conspicuously unenlightening, instance of this occurred in the aftermath of Hurricane Sandy.
Likelihood ratios (LRs) are a productive alternative to these linguistic entanglements—because LRs invite and enable critical judgment while the linguistic formulations attempt to evade it.
Obviously, LRs are only as good as the models that generated them.
But if those models reflect the best available evidence, then a practical person or group can make informed decisions based on how LRs quantify the risk involved (Lempert et al. 2013). That’s what is effaced by linguistic tests that purport to treat causation as binary rather than probabilistic (Nordgaard & Rasmusson 2012; Dollaghan 2004).
LRs also spare communicators from coming off as confabulators when an independent-minded person asks “what does it mean to say indirectly/proximately/systematically caused?”
The statement “this event was 10x more consistent with the hypothesis that mean global temperatures have increased by this amount rather than having remained constant” in relation to a specified period conveys exactly what the communicator means and in terms that ordinarily intelligent people can understand (Hansen et al. 2012).
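To make the arithmetic concrete, here is a toy likelihood-ratio calculation in Python. The normal densities and all the numbers are stand-ins I've invented; a real attribution study would derive the two distributions from climate models:

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density -- a stand-in for whatever distribution the
    climate models assign to the observable under each hypothesis."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Made-up numbers: a temperature anomaly of +2.5 (in historical sd units),
# compared under "warmed" (mean shifted up 1 sd) vs. "stationary" climates.
obs = 2.5
lr = normal_pdf(obs, 1.0, 1.0) / normal_pdf(obs, 0.0, 1.0)
# lr = exp(2), i.e. the event is about 7.4x more likely under warming
```

The communicator's claim then takes exactly the form quoted above: "this event was about 7 times more consistent with the warming hypothesis than with the stationary one," with no binary "caused/not caused" verdict required.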
Or in any case, that is my hypothesis. While science communicators are doing the best they can to enlighten people in real time, science-of-science-communication researchers can help by empirically assessing the methods they are using.
Dollaghan, C.A., 2004. Evidence-based practice in communication disorders: what do we know, and when do we know it? Journal of Communication Disorders, 37(5), 391-400.
Hansen, J., M. Sato & R. Ruedy, 2012. Perception of climate change. Proceedings of the National Academy of Sciences, 109(37), E2415-E23.
Lempert, R.J., D.G. Groves & J.R. Fischbach, 2013. Is it Ethical to Use a Single Probability Density Function?, Santa Monica, CA: RAND Corporation.
Nordgaard, A. & B. Rasmusson, 2012. The likelihood ratio as value of evidence—more than a question of numbers. Law, Probability and Risk, 11(4), 303-15.
So I popped open a can of data—General Social Survey 2014 (the latest available)—a couple of days ago in anticipation of the talk I’m doing on Wednesday & I found out something pretty cool.
The thing had to do with responses to the GSS’s “confidence in institutions” module. The module, which has now been part of the Survey for over 40 years, asks respondents to indicate “how much confidence”—“hardly any,” “only some,” or “a great deal”—they have in the “people running” 13 institutions:
a. Banks and Financial Institutions
b. Major Companies
c. Organized Religion
e. Executive Branch of the Federal Government
f. Organized Labor
j. U.S. Supreme Court
k. Scientific Community
Over the life of the measure, ratings for nearly every one of these institutions have declined “with one exception” (Smith 2013). “The exception is . . . the Scientific Community,” in whom confidence “has varied little and shown no decline.” So much for Americans’ “growing distrust” of science.
In fact, over that entire period, “the people running” the “Scientific community” have ranked second, initially to those “running” medicine, but in more recent years to the “people running” the “military.” One can see that in this graphic, which I generated with the 1972-2014 dataset:
But what about those supposedly “antiscience” groups like conservatives and religious folks?
Turns out that they have displayed a remarkably high and consistent degree of confidence in those “running” the “Scientific community,” too. Across the life of the measure, they both have consistently ranked the “Scientific community” as second or (in the case of religious folks for one time interval) third in confidence-worthiness.
Indeed, conservatives ranked the “people running” the “Scientific community” higher than the “people running” the “Executive branch” of the federal government during the presidency of Ronald Reagan.
Citizens who are above average in religiosity have consistently ranked the “people running” the “Scientific community” ahead of the “people running” the “institution” of “Organized religion.”
So cheer up: there is no shortage of trust in and respect for science in our pluralistic liberal democracy.
Probably the only Americans who today don’t share this high regard for science are the “people now running” the “Executive branch.”
They are the true “enemy of the people”--all of them-- in the Liberal Republic of Science.
Smith, T.W. Trends in Public Attitudes About Confidence in Institutions (NORC, Chicago, IL, 2013).
My remarks, rationally reconstructed, at the AAAS Panel on “Fake News and Social Media: Impacts on Science Communication and Education” (slides here).
1. Putting the bottom line on top. If one is trying to assess the current health of science communication in our society, then he or she should likely regard the case of “fake news” as akin to a bad head cold.
The systematic propagation of false information that President Trump is engaged in, on the other hand, is a cancer on the body politic of enlightened self-government.
2. Conjectures inviting refutation. I’ll tell you why I see the “alternative facts presidency” as so much more serious than “fake news.” But before I continue, I want to issue a proviso: namely, that everything I think on these matters is in the nature of informed conjecture.
I will be drawing on the dynamic of identity-protective reasoning to advance my claims (Flynn et al. 2017; Kahan 2010). Because we have learned so much about mass opinion from studies featuring this dynamic, it makes perfect sense to suspect this form of information processing will determine how people react to fake news and to the stream of falsehoods that flow continuously from the Trump administration.
But we should recognize that these phenomena are different from the ones that have supplied the focus for the study of identity-protective reasoning.
Other dynamics—including ones that also reflect well-established mechanisms of cognition—might support competing hypotheses.
Accordingly, it’s not appropriate to stand up in front of you and say “here is what social science tells us about fake news and presidential misinformation . . . .” Social science hasn’t spoken yet. Unless he or she has data that directly address these phenomena, anyone who tells you that “social science says” this or that about “fake news” is engaged in story-telling, a practice that can itself mislead the public and distort scholarly inquiry.
I will, for purposes of exposition, speak with a tone of conviction. But I’m willing to do that only because I can now be confident that you’ll understand my position to be a provisional one, reflecting how things look to me at the Bayesian periphery of a frontier that warrants (demands) empirical exploration. Once valid studies start to accumulate, I am prepared to pull up stakes and move in the direction they prescribe, should it turn out that the ground I’m standing on now is insecure.
3. Models. I’m going to use two simple models to guide my exposition. I’ll call one the “passive aggregator theory” (PAT). PAT envisions a credulous public that is pushed around by misinformation emanating from powerful economic and political interest groups.
That model, I will contend, is simply wrong.
The truth is something closer to the second model I want you to consider. This one can be called the “motivated public theory” (MPT). According to MPT, members of the public are unconsciously impelled to seek out information that supports the view of the identity-defining group they belong to and to dismiss as non-credible any information that challenges that position.
Where the public is motivated to see things in an identity-reinforcing way, it will be very profitable to create misinformation that gives members of the public what they want—namely, corroboration that their group’s positions are right, and those of their benighted rival wrong.
In my view, that’s what the fake news we saw during the election was all about. Some smart people in Macedonia or wherever set up sites with scandalous—in fact, outright incredible—headlines to direct traffic to websites that had agreed to pay them to do exactly that. Indeed, every fake news story was ringed with classic click bait features on overcoming baldness, restoring wrinkled skin, curing erectile dysfunction, and the like.
On the MPT account, the only people who’d be enticed to read such material would be people already predisposed to believe (or maybe fantasize) that the subjects of the stories (Hillary Clinton and Donald Trump, for the most part) were evil or stupid enough to engage in the behavior the stories describe. The incremental effect of these stories in shaping their opinions would be nil.
Same for those predisposed not to believe the stories. They’d be unlikely to see most of them because of the insularity of political-news networks in social media. But even if they saw them, they’d dismiss them out of hand as noncredible.
On net, no one’s view of the world would change in any meaningful way.
4. Empirics. Consider some data that makes a conjecture like this plausible.
a. In the study (Kahan et al., in press), ordinary members of the public were instructed to determine the results of an experiment by looking at a two-by-two contingency table. The right way to interpret information presented in this form (a common one for presenting experimental research) is to look at the ratios of positive to negative impacts conditional on the treatment. The subjects who did this would get the correct answer.
But most people don’t correctly interpret 2x2 contingency tables or alternative formulations that convey the same information. Instead they simply compare the number of positive and negative results in the cells for the treatment condition. Or, if they are a little smarter, they do that and also compare the number of positive results in the treatment and untreated control conditions.
Anyone following that strategy would get the “wrong” answer.
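The contrast between the two strategies can be sketched in a few lines of Python. The cell counts below are illustrative figures in the spirit of the study's skin-treatment condition, not necessarily the exact stimulus:

```python
# Cell counts for a 2x2 result (rows = condition, cols = outcome);
# illustrative numbers, not necessarily the study's stimulus.
treated   = {"better": 223, "worse": 75}
untreated = {"better": 107, "worse": 21}

def success_rate(row):
    return row["better"] / (row["better"] + row["worse"])

# Correct reading: compare outcome rates *conditional on* treatment
treatment_helps = success_rate(treated) > success_rate(untreated)

# Naive reading: "more treated patients got better than got worse"
naive_says_helps = treated["better"] > treated["worse"]
```

With these numbers the naive strategy says the treatment works (223 > 75), while the conditional comparison says the opposite (about 75% improved with treatment vs. about 84% without) -- exactly the trap the design exploits.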
The design also had an experimental component. Half the subjects were told that the 2x2 summarized results—better or worse complexions—for a new skin-rash treatment. The other half were told that it reflected the results—violent crime up versus violent crime down—of a law that permitted citizens to carry concealed weapons in public.
In the skin-rash condition, the likelihood of getting the answer right turned only on the Numeracy (quantitative-reasoning proficiency) of the subjects, regardless of whether they were right-leaning or left-.
But in the gun-control condition, high-numeracy subjects were likely to get the answer right only when the data, properly interpreted, supported the position that was dominant in their ideological group. When the data, properly interpreted, supported their ideological rival’s position, the subjects highest in Numeracy were no more likely to get the answer correct than those who were low in Numeracy. Essentially they used their reasoning proficiencies to pry open a confabulatory escape hatch from the logic trap they found themselves in.
As a result, the highest Numeracy subjects were the most divided on what the data signified.
This is a result consistent with MPT. If it captures the way that people reason outside the lab, then we should expect to see not only that members of opposing affinity groups are polarized on contentious empirical issues. We should expect to see the degree of polarization between their members increasing in lockstep with diverse citizens’ science comprehension capacities.
And indeed, that is what we see (Kahan 2016).
b. Now consider the significance of this for fake news.
From this simple model, we can see how identity-protective reasoning can profoundly divide opposing cultural groups. Yet no one was being misled about the relevant information. Instead, the subjects were misleading themselves—to avoid the dissonance of reaching a conclusion contrary to their political identities.
Nor was the effect a result of credulity or any like weakness in critical reasoning.
On the contrary, the very best reasoners—the ones best situated to make sense of the evidence—were the ones who displayed the strongest tendency toward identity-protective reasoning.
Because biased information-search is also a consequence of identity-protective cognition, we should expect that people who reason this way will be much more likely to encounter information that reinforces rather than undermines their predispositions.
Of course, people might now and again stumble across “fake news” that goes against their predispositions, too. But because we know such people are already disposed to bend even non-misleading information into a shape that affirms rather than threatens their identities, there is little reason to expect them to credit “fake news” when the gist of it defies their political preconceptions.
These are inferences that support MPT over PAT.
5. As I stated at the outset, we shouldn’t equate the Trump Administration’s persistent propagation of misinformation with the misinformation of the cartoonish “fake news” providers. The latter, I’ve just explained, are likely to have only a small or no effect on the science communication environment; the former, however, fills that environment with toxins that enervate human reason.
Return to the “motivated public theory.” We shouldn’t be satisfied to treat a “motivated public” as exogenous. How do people become motivated, identity-protective reasoners?
They aren’t, after all, on myriad issues (e.g., GM foods) on which we could easily imagine conflict—indeed, on which there actually is conflict in other places (e.g., GM foods in Europe).
Memes are self-propagating ideas or practices that enjoy wide circulation by virtue of their salience.
Culturally toxic memes are ones that fuse positions on risks or similar policy-relevant facts to individual identities. They operate primarily by stigmatizing those who hold such positions as stupid and evil.
When that happens, people gravitate toward habits of mind that reinforce their commitment to their groups’ positions. They do that because holding a position consistent with others in their groups is more important to them—more consequential for their well-being—than is holding a position that is correct.
What an ordinary member of the public thinks about climate change, e.g., will not affect the risk that it poses to her or to anyone she cares about. The impact she has as an individual consumer or an individual voter will be too small to make any real difference.
But given what holding such a position has come to signify about who one is—whose side one is on in a vicious struggle between competing groups for cultural ascendency—forming a belief (an attitude, really) that estranges her from her peers could have devastating psychic and material consequences.
Of course, when everyone resorts to this form of reasoning simultaneously, we’re screwed. Under these conditions, citizens of pluralistic democratic society will fail to converge, or converge as quickly as they should, on valid empirical evidence about the dangers they face and how to avert them (Kahan et al. 2012).
The study we conducted modeled how exposure to toxic memes (ones linking the spread of Zika to global warming or to illegal immigrants) could rapidly polarize cultural groups that are now largely in agreement about the dangers posed by the Zika virus.
This is why we should worry about Trump: his form of misinformation, combined with the office that he holds, makes him a toxic-meme propagator of unparalleled influence.
When Trump spews forth with lies, the media can’t simply ignore him, as they would a run-of-the-mill crank. What the President of the United States says always compels coverage.
Such coverage, in turn, impels those who want to defend the truth to attack Trump in order to try to undo the influence his lies could have on public opinion.
But because the ascendency of Trump is itself a symbol of the status of the cultural groups that propelled him to the White House, any attack on him for lying is likely to invest his position with the form of symbolic significance that generates identity-protective cognition: the fight communicates a social meaning—this is what our group believes, and that what our enemies believe—that drowns out the facts (Nyhan et al 2010, 2013).
We aren’t polarized today on the safety of universal childhood immunization (Kahan 2013; CCP 2014). But we could easily become so if Trump continues to lie about the connection between vaccinations and autism.
We aren’t polarized today on the means appropriate to counteract the threat of the Zika virus (Kahan et al. 2017). But if Trump tries to leverage public fear of Zika into support for tightening immigration laws, we could become politically polarized—and cognitively impeded from recognizing the best scientific evidence on spread of this disease.
Trump is uniquely situated, and apparently emotionally or strategically driven, to enlarge the domain of issues on which this reason-effacing dynamic degrades our society’s capacity to recognize and give proper effect to decision-relevant science.
6. Trump, in sum, is our nation’s science-communication environment polluter-in-chief. We shouldn’t let concern over “fake news” on Facebook distract us from the threat he uniquely poses to enlightened self-government or from identifying the means by which the threat his style of political discourse poses can be repelled.
CCP, Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Experimental Investigation (Jan. 27, 2014).
Flynn, D.J., Nyhan, B. & Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology 38, 127-150 (2017).
Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).
Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).
Kahan, D.M. Culturally antagonistic memes and the Zika virus: an experimental test. J Risk Res 20, 1-40 (2017).
Kahan, D.M. The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. in Emerging Trends in the Social and Behavioral Sciences (John Wiley & Sons, Inc., 2016).
Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Behavioural Public Policy (in press).
Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).
Nyhan, B., Reifler, J. & Ubel, P.A. The Hazards of Correcting Myths About Health Care Reform. Medical Care 51, 127-132 (2013).