From slides in a talk I'm about to give at a biotech conference in Syracuse. Political differences (or lack thereof) in the top slide & "science comprehension" magnification of the same (or lack thereof) in the bottom.
More later -- but if anyone wants to offer their own views in the meantime, feel free!
So having been freaked out to discover how pervasively polarized members of the public appear to be about fracking despite knowing nothing about it, I resolved to do a little experiment.
In the previous data collection, I had measured perceptions of fracking risks using the "industrial strength measure," which solicits a rating of how "serious" a societal risk some activity poses to "human health, safety, or prosperity."
My thought was that maybe what had generated such a strong degree of polarization might be the wording of the item, which asked subjects to supply such a rating for "fracking (extraction of natural gas by hydraulic fracturing)."
I figured maybe this language--the "dirty"-sounding word "fracking," the reference to "extraction" (sounds like a painful and invasive procedure to subject Mother Nature to), & the reference to "natural gas" ("boo," if you have an egalitarian, "game over, capitalists!" sensibility; "yay," if you have an individualist, "yes we can, forever & ever & ever!" one)--would be sufficient to alert the ordinary Americans who made up the sample (most of whom likely wouldn't have been able to define fracking without this clue) that this was an "environmental" issue. That would be enough to enable most of them to locate the issue's position on the "cultural theory of risk" map, particularly if they were above average in science comprehension and thus especially skilled at fitting information to their cultural identities.
So I thought I'd try an experiment. Administer the same measure but vary the description of the putative risk source: in one condition, it would be called simply "fracking"; in another, it would be referred to as "shale oil gas production"; and in a third, the risk source would be identified as it was in the earlier survey--"fracking (extraction of natural gas by hydraulic fracturing)."
I figured that relative to the third group, those in the first (plain old "fracking") would be less polarized, and those in the second ("shale oil gas production"; sounds harmless!) would be the least agitated of all.
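To make the design concrete, here's a minimal sketch of the wording manipulation in Python. The three condition labels come from the description above, but the item wording, response scale, and all names are illustrative assumptions rather than the actual instrument:

```python
import random

# The three wording conditions; only the label of the putative
# risk source varies across them.
SOURCES = [
    "fracking",
    "shale oil gas production",
    "fracking (extraction of natural gas by hydraulic fracturing)",
]

# Paraphrase of the "industrial strength measure" item; the exact
# wording and response scale here are assumptions for illustration.
ITEM = ("How serious a risk does {src} pose to human health, safety, "
        "or prosperity? (0 = no risk at all; 7 = very high risk)")

def assign_items(subject_ids):
    """Randomly assign each subject to one wording condition and
    return the item text that subject would see."""
    return {sid: ITEM.format(src=random.choice(SOURCES))
            for sid in subject_ids}

print(assign_items(["s1", "s2", "s3"]))
```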
Actually, I was modeling this experiment loosely on Sinaceur, M., Heath, C. & Cole, S., Emotional and Deliberative Reactions to a Public Crisis: Mad Cow Disease in France, Psychol. Sci. 16, 247-254 (2005), a great study in which the investigators showed that lab subjects formed affect- or emotion-pervaded judgments when evaluating risk information relating to "Mad Cow disease" but formed more analytical, calculative ones when the information referred to either "bovine spongiform encephalopathy (BSE)" or "a variant of Creutzfeldt-Jakob disease (CJD)" instead.
Well, here's what I found:
Click on the image for a closer inspection, but basically, the difference in effect associated with the variation in wording, while "in the direction" hypothesized, was way too small for anyone to think it was practically meaningful.
Same thing for the influence of the wording on the interaction between political outlooks (measured with a right-left scale) and science comprehension (measured with a cool composite of substantive knowledge & critical reasoning measures; more on that "tomorrow"):
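In analysis terms, the null result means the wording terms in a model like the following stay near zero. This is only a sketch with synthetic data, not the study's actual code; the polarization pattern is built in by hand to mimic the reported finding:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 900

# Synthetic stand-in data; in the real study these would be survey
# responses. 'conserv' = right-left outlook, 'scicomp' = science
# comprehension, 'cond' = wording condition, 'risk' = ISM rating.
df = pd.DataFrame({
    "cond": rng.choice(["fracking", "shale", "combined"], n),
    "conserv": rng.normal(size=n),
    "scicomp": rng.normal(size=n),
})
# Build in polarization that grows with science comprehension but,
# per the null result, does not vary across wording conditions.
df["risk"] = 4 - df.conserv * (1 + df.scicomp) + rng.normal(size=n)

# The wording hypothesis predicts the conserv and conserv:scicomp
# effects should differ by condition; here (as in the study) the
# C(cond) interaction terms should come out near zero.
model = smf.ols("risk ~ C(cond) * conserv * scicomp", data=df).fit()
print(model.summary())
```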
So much for that theory.
But I have another one!
All this agitation about fracking, I'm convinced, is really a battle between those who do & those who don't recognize the supreme value of local democratic decisionmaking!
A "CCP journal club!" report from D. Evans:
"Aporia" is a mode of reasoning that shows the author comprehends “an issue’s intractable complexity.”
Too often, judicial opinions addressing complex value questions are anything but aporetic. While the public is deeply divided over the issue, judicial opinions often “effect a posture of unqualified, untroubled confidence” in the outcome. This “[h]yperbolic certitude” might undermine the legitimacy of the opinion with the losing side, making it seem as though the decisionmaker was biased or unwilling to recognize the strength of arguments supporting the losing side’s position.
In addressing how courts can assure citizens of the law's neutrality, my CCP colleagues and I have conjectured that judicial decisions might reduce cultural polarization and garner acceptance from the losing side by abandoning the norm of reasoning as if the answer is obvious, indisputable, and certain.
Instead, if a court were to recognize (a) the difficulty (even intractability) of the problem, and (b) the strength of the losing side’s case, perhaps the losers would be more likely to perceive the opinion as a legitimate one; one that took their concerns and arguments deeply into account. If the losing side sees its concerns and arguments were thoroughly considered in the decision, it might also be more open to accepting the arguments that prevailed in the outcome. I have long thought about testing this hypothesis that aporetic reasoning would reduce cultural polarization over a controversial ruling.
So I was really excited to read Rob Robinson’s empirical study on exactly this point: It’s How You Say It – Ameliorating Cultural Cognition of Judicial Rulings Through Aporetic Reasoning.
Robinson's study follows a few others with promising implications for the aporia hypothesis: Tom Tyler's research, described here, finds that public views about the legitimacy of legal authority are influenced by procedural justice and by the distributive justice of outcomes, but are less affected by the favorability of the outcome. Dan Simon and Nicholas Scurich, in Lay Judgments of Judicial Decision-Making, have found that people tend to agree more with decisions recognizing that good reasons support either side of a case than with decisions that recognize the value of only one side's position. They also find that an opinion giving no reasons is more persuasive than one including a single, curt reason. (Simon and Scurich's findings rebuffed a preexisting hypothesis called ‘placebic reasoning’ – that people are more likely to credit decisions or actions when backed by reasons, even if those reasons are entirely redundant (i.e., asking to cut in line for a copy machine was less credible than asking to cut in line for a copy machine and providing a redundant reason, “because I have to make copies.”)).
While these studies support the aporia hypothesis, Robinson is the first (to my knowledge) to frame his testing in terms of aporia, specifically.
Robinson conducted an experiment designed to test how members of the public would react to more and less aporetic versions of a judicial decision contrary to their own position on gay marriage.
The study subjects, 619 individuals representing a mix of university students and Amazon MTurk workers, were assigned to one of three mirror-image conditions.
In the “control” condition, subjects read a newspaper article describing a judicial decision that examined whether homosexuality should be recognized as an “immutable” (i.e., unchosen and unalterable) trait. The story reported the court’s conclusion—either “no,” if subjects said they supported gay marriage; or “yes,” if they said they opposed it—and nothing more.
In the “monolithic” condition, the article includes a quote from the court’s opinion in which the court defends its reasoning by remarking that an “objective reading of the evidence leads to no other conclusion.” The court explains that it is obliged to reject the position supported by the study subject—either that homosexuality is “immutable,” in the version of the article shown to gay-marriage supporters; or that it is not, in the version shown to gay-marriage opponents—on the ground that there is “no clear scientific consensus” in favor of that view.
In the “aporetic” condition, the news story quotes language from the opinion evincing a more nuanced stance. The quoted language chides one side or the other—either “those who believe homosexuality is a choice” for “often ignor[ing] evidence [to the contrary]” or “those who argue sexual orientation is fixed or unchanging” for “often overstat[ing] their case.” The court nevertheless justifies a ruling in favor of the scolded side on the ground that a court is powerless to deem matters otherwise in the face of uncertain evidence.
Robinson reports that subjects found the court’s reasoning more persuasive in both the “monolithic” and “aporetic” conditions than in the control. In other words, the subjects were least disappointed by the decision when they were told the court had given an explanation for rejecting their position.
Among the subjects who oppose gay marriage, the aporetic opinion was even more persuasive than the monolithic one.
But for those who support gay marriage, the persuasiveness of the decision did not differ significantly among those assigned to “aporetic” and “monolithic” conditions, respectively.
Opponents of same-sex marriage rated their mean disagreement with the three forms of the pro-same-sex-marriage decision on a scale of 1 ("extremely agree") to 6 ("extremely disagree") as follows: Control 4.16; Monolithic 4.10; Aporetic 3.53. For opponents of same-sex marriage, then, the monolithic opinion was about .06 less disagreeable than the control, and the aporetic one about .63 less disagreeable. Supporters of same-sex marriage rated their mean disagreement with the three forms of the anti-same-sex-marriage decision as follows: Control 4.58; Monolithic 4.46; Aporetic 4.36. Among supporters of same-sex marriage, the monolithic opinion was about .12 less disagreeable than the control, and the aporetic one about .22 less disagreeable.
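Because the raw means invite arithmetic slips, here is the subtraction spelled out, using only the condition means reported above:

```python
# Condition means reported above (1 = extremely agree ... 6 = extremely disagree).
means = {
    "opponents":  {"control": 4.16, "monolithic": 4.10, "aporetic": 3.53},
    "supporters": {"control": 4.58, "monolithic": 4.46, "aporetic": 4.36},
}

for group, m in means.items():
    for cond in ("monolithic", "aporetic"):
        print(f"{group}: {cond} vs. control = {m[cond] - m['control']:+.2f}")

# opponents:  monolithic -0.06, aporetic -0.63
# supporters: monolithic -0.12, aporetic -0.22
```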
This is a super valuable study!
I particularly liked the way in which Robinson distilled the aporetic reasoning into a few quotes set within the framework of a newspaper article. There is much that is innovative about his design, and his study makes me eager to design a follow-up study along these lines. In thinking about how to do so, I have been pondering several questions about the design of this study:
- One puzzling aspect of his findings is that supporters of same-sex marriage were overall more negative about all three forms of the opinion ruling against it, and they found the aporetic version only slightly less disagreeable, whereas the aporetic opinion significantly reduced the extent to which opponents of same-sex marriage disagreed with a pro-same-sex-marriage decision. (The effect of the aporetic treatment on the anti-same-sex-marriage group's disagreement was -0.592, while the effect of the aporetic treatment on the pro-same-sex-marriage group's disagreement was only -0.150.)
Why were supporters of same-sex marriage overall more resistant to crediting the contrary opinion, and why was their disagreement less mitigated by aporia? Robinson states this might be caused by the sample of those who favor same-sex marriage being larger (N pro-same-sex marriage=496, N anti-same-sex= 161). (But the larger sample should supply the more significant result if the phenomenon exists, not the less significant one.) He also posits that the difference in reaction may result from "those who favor gay marriage simply having a stronger reaction to empirical claims regarding immutability than those who are opposed." P. 18.
It could be the case that supporters of same-sex marriage are categorically more rigid in their position, and less willing to credit a contrary ruling regardless of its reasoning.
But I'd posit another possible explanation. Perhaps the pro-same-sex-marriage group's rigid disagreement relates to their views on the relevance of whether homosexuality is immutable, as opposed to an extra-strong belief that same-sex marriage should be allowed. It seems there may be many egalitarian individuals like me who think that same-sex marriage should be allowed regardless of whether homosexuality is immutable. I think the exercise of any constitutionally protected individual liberty should be an impermissible basis for discrimination, regardless of whether it is immutable. (Indeed, I'm offended by the notion that protection is limited to traits that are predetermined rather than chosen pursuant to constitutionally guaranteed autonomy.) I would be much more persuaded to support regulation of same-sex relationships if it were shown that they caused harm to public welfare, such as to the stability of marriage or childrearing.
Hence, I wonder whether the extra-strong disagreement with the opinion finding homosexuals are not a protected class may represent disdain of the idea that immutability determines the degree of constitutional protection. This is frustration with the legal standard as opposed to ideology-based cognitive rigidity. For this reason, one of my overarching questions about Robinson's study is whether immutability is the best empirical issue for measuring cultural effects in the same sex marriage debate. I would be inclined to focus on welfare-related empirical questions, such as how same-sex marriage impacts childrearing, a question on which strong cultural effects have been observed.
Furthermore, because these welfare concerns seem to be more often cited in the public debate as a reason for prohibiting same sex marriage, it seems cultural identity may be more strongly tied to one’s beliefs about these questions than one’s belief about immutability. (While certainly part of the debate about the morality of homosexuality, immutability seems to be cited less often as the public reason for prohibiting same sex marriage.) It seems some might oppose same sex marriage for purported public welfare consequences, regardless of whether sexual orientation is immutable. And as I have described above, some proponents of same-sex marriage might be particularly resentful of a decision based on immutability, as they do not believe this should be a relevant factor. This group might also, while cognitively motivated to support a pro-same-sex marriage ruling, be disinclined to support a ruling that homosexuality is immutable.
My other questions pertain to specific elements of the study's design:
- Asking for views about same-sex marriage: I wonder whether first asking subjects about their stance on same-sex marriage makes them less susceptible to being persuaded by the aporetic reasoning we are testing. Because people don’t want to be inconsistent—either internally or as perceived by those conducting the survey—they might resist crediting the ruling after reporting disagreement with its conclusion at the outset of the study, regardless of whether they find the aporetic or monolithic reasoning persuasive. It seems the cultural measures provide enough information to predict a subject’s likely orientation on same-sex marriage, making it unnecessary to ask subjects about the very issue being studied.
- Assignment to conditions with which subjects are inclined to disagree: I also question the decision to show subjects only opinions with which they are inclined to disagree. It seems to me that a study of this nature should measure the reasoning’s persuasiveness both to those inclined to disagree with it and to those inclined to agree with it. It may be that while an aporetic opinion is more persuasive to those inclined to disagree, it is less persuasive to those inclined to agree. This, too, would be a noteworthy finding. The question should be whether opposing cultural groups converge on the persuasiveness of an aporetic opinion more than they do on a monolithic one.
- Focus on whether the opinion is persuasive rather than correct: I would not focus on asking subjects whether the court’s conclusion is correct or accurately reflects scientific findings, but whether they find the opinion persuasive. Subjects might agree with the court’s conclusion or believe that it accurately states scientific research, but find its reasoning unpersuasive. Or to the contrary, they might disagree with the court’s scientific conclusion, but find the reasoning persuasive.
- More detailed reasoning: I might consider including a few more sentences so that the court’s reasoning more clearly pronounces three elements that I associate with aporia: (a) noting that this is a difficult, perhaps intractable, question on which there may be no correct answer; (b) saying that the evidence is unclear, and presenting the strongest points in favor of each side; and (c) giving reasons for crediting one side’s position despite this empirical uncertainty. (I think this last point is the most contentious aspect of aporia – a court must justify its conclusion after admitting that it is uncertain as to the evidence – and it would be particularly interesting to test.) The monolithic condition would do the opposite--e.g., (a) state that the question is simple, with a clear right answer; (b) say the evidence is clear or unequivocal; and (c) hold that there is no way one could reach a different result based on the evidence before the court.
- Singling out one side: The aporetic versions in Robinson's study single out one side. (The unprotected-class version begins: “Those who believe homosexuality is a choice often ignore evidence [to the contrary]”; the protected-class version begins: “Those who argue sexual orientation is fixed or unchanging often overstate their case.”) In contrast, the monolithic condition does not single out one side in this way, but states: “There is no scientific consensus. . . .” I wonder whether statements that the winning parties “overstate” their case or “ignore” evidence are necessary to the aporetic reasoning. It seems that, for the sake of maintaining the highest degree of similarity between conditions, the aporetic opinion should simply say “The evidence is uncertain as to whether. . . .” Aside from uniformity, my concern is that these words might be read as accusing the prevailing side of being disingenuous. One party’s overstating its case has nothing to do with the court’s aporetic reasoning, but it could heighten the losing side’s suspicion of the winning side’s claims.
- Explaining what’s at stake before the aporia manipulation: The prompt in this survey tells subjects that immutability determines the degree of constitutional protection afforded same-sex couples, but it does not explicitly say that the degree of constitutional protection determines whether laws prohibiting same-sex marriage are constitutional. It seems this connection—immutability effectively determines the constitutionality of laws prohibiting same-sex marriage—should be made explicit before the aporetic statement about immutability. It seems that priming readers with the cultural significance of the court’s reasoning about immutability would enhance the tendency to engage in motivated reasoning, and this would increase the effects we’d expect to see.
In raising these questions, I do not mean to undermine the value of Robinson’s study. To the contrary, I find it very valuable. Not only is it encouraging in that it suggests this question is worth studying further, it also supplies an inspiring baseline for designing another study on this subject.
The fractal nature of the "knowledge deficit" hypothesis: Biases & heuristics, system 1 & 2, and cultural cognition
I often get asked—in correspondence, in Q&A after talks, in chance encounters with strangers while using one or another mode of public transportation—what the connection is between “cultural cognition” and “all that heuristics and biases stuff” or some equivalent characterization of the work, most prominently associated with Nobelist Daniel Kahneman, on the contribution that automatic, largely unconscious mechanisms of cognition make to risk perception.
This excerpt, from Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition, Law & Human Behavior 34, 501-516, (2010), furnishes half the answer.
The basic idea is that cultural cognition is not an alternative to the “heuristics and biases” position but a supplement that helps explain how one and the same mechanism—“the availability effect,” “biased assimilation,” “probability neglect” etc.—can generate systematically opposing risk perceptions in identifiable groups of people.
But as I said, this is only half the answer. At the time that CCP researchers did this study, they were carrying out a research project to examine how cultural cognition interacts with heuristic or “System 1” information processing, which, as I indicated, features automatic, unconscious mechanisms of cognition.
In a project that we started thereafter, we’ve been examining the connection between cultural cognition and “System 2” reasoning, which involves conscious, analytic forms of information processing. In particular, we’ve been empirically testing the popular conjecture that disputes over climate change and other politically contested risks reflect the public’s over-reliance on heuristic reasoning.
That conjecture doesn't hold up. Tragically, people use their quantitative and critical-reasoning dispositions to fit empirical data and other technically complex forms of evidence to the positions that affirm their identities. As a result, those who are most disposed to use System 2 reasoning are the most polarized.
If you are wandering the internet preaching that the climate change controversy is a consequence of public’s over-reliance on “emotion” or “fast, intuitive heuristics” etc etc you are ignoring evidence. It was a very reasonable hypothesis, but you need to update your understanding of what’s going on as new evidence emerges—just as climate scientists do!
Sometimes I think this account—that the climate change controversy is a consequence of “public irrationality”—is a kind of pernicious story-telling virus that is impervious to treatment with evidence.
Makes me realize, too, the irony that I am implicitly affirming my adherence to the “knowledge deficit” hypothesis by continually trying to overcome a version of it by simply bombarding propagators of the "System 1 vs. system 2" (or "bounded rationality," "experiential reasoning," "public irrationality" etc.) explanation of conflict over climate change with more and more and more and more empirical evidence that their account is way too simple.
Life is weird. And interesting.
Theoretical Background: Heuristics, Culture, and Risk
The study of risk perception addresses a puzzle. How do people—particularly ordinary citizens who lack not only experience with myriad hazards but also the time and expertise necessary to make sense of complex technical data—form positions on the dangers they face and what they should do about them?
Social psychology has made well-known progress toward answering this question. People (not just lay persons, but quite often experts too) rely on heuristic reasoning to deal with risk and uncertainty generally. They thus employ a range of “mental shortcuts”: when gauging the danger of a putatively hazardous activity (the possession, say, of a handgun, or the use of nuclear power generation), they consult a mental inventory of recalled instances of misfortunes involving it, give special weight to perceived authorities, and steer clear of options that could improve their situation but that also involve the potential to make them worse off than they are at present (“better safe than sorry”) (Kahneman, Slovic, & Tversky, 1982; Slovic, 2000; Margolis, 1996). They also employ faculties and styles of reasoning—most conspicuously affective ones informed by feelings such as hope and dread, admiration and disgust—that make it possible for them to respond rapidly to perceived exigency (Slovic, Finucane, Peters & MacGregor, 2004).
To be sure, heuristic reasoning of this sort can lead to mistakes, particularly when it crowds out more considered, systematic forms of reasoning (Sunstein, 2005). But such heuristics are adaptive in the main (Slovic et al., 2004).
As much as this account has enlarged our knowledge, it remains incomplete. In particular, a theory that focuses only on heuristic reasoning fails to supply a cogent account of the nature of political conflict over risk (Kahan, Slovic, Braman & Gastil, 2006). Citizens disagree, intensely, over a wide range of personal and societal hazards. If the imprecision of heuristic reasoning accounted for such variance, we might expect such disagreements to be randomly distributed across the population or correlated with personal characteristics (education, income, community type, exposure to news of particular hazards, and the like) that either plausibly related to one or another heuristic or that made heuristic reasoning less necessary altogether. By and large, however, this is not the case. Instead, a large portion of the variance in risk perception coheres with membership in groups integral to personal identity, such as race, gender, political party membership, and religious affiliation (e.g. Slovic, 2000, p. 390; Kahan & Braman, 2006). Whether the planet is overheating; whether nuclear wastes can be safely disposed of; whether genetically modified foods are bad for human health—these are cultural issues in American society every bit as much as whether women should be allowed to have abortions and men should be allowed to marry other men (Kahan, 2007). Indeed, as unmistakably cultural in nature as these latter disputes are, public debate over them often features competing claims about societal risks and benefits, and not merely competing values (e.g. Siegel, 2007; Pollock, 2005).
This is the part of the risk-perception puzzle that the cultural theory of risk is distinctively concerned with (Douglas & Wildavsky, 1982). According to that theory, individuals conform their perceptions of risk to their cultural evaluations of putatively dangerous activities and the policies for regulating them. Thus, persons who subscribe to an “individualist” worldview react dismissively to claims of environmental and technological risks, societal recognition of which would threaten markets and other forms of private ordering. Persons attracted to “egalitarian” and “communitarian” worldviews, in contrast, readily credit claims of environmental risk: they find it congenial to believe that commerce and industry, activities they associate with inequity and selfishness, cause societal harm. Precisely because the assertion that such activities cause harm impugns the authority of social elites, individuals of a “hierarchical” worldview are (in this case, like individualists) risk skeptical (Rayner, 1992).
Researchers have furnished a considerable body of empirical support for these patterns of risk perception (Dake, 1991; Jenkins-Smith, 2001; Ellis & Thompson, 1997; Peters & Slovic, 1996; Peters, Burriston & Mertz, 2004; Kahan, Braman, Gastil, Slovic & Mertz, 2007). Such studies have found that cultural worldviews explain variance more powerfully than myriad other characteristics, including socio-economic status, education, and political ideology, and can interact with and reinforce the effect of related sources of identity such as race and gender.
Although one could see a rivalry between culture theory and the heuristic model (Marris, Langford & O’Riordan, 1998; Douglas, 1997), it is unnecessary to view them as mutually exclusive. Indeed, one conception of the cultural theory—which we will call the cultural cognition thesis (Kahan, Braman, Monahan, Callahan & Peters, in press; Kahan, Slovic, Braman & Gastil, 2006)—seeks to integrate them. Culture theorists have had relatively little to say about exactly how culture shapes perceptions of risk.[i] Cultural cognition posits that the connection is supplied by conventional heuristic processes, or at least some subset of them (DiMaggio, 1997). On this account, heuristic mechanisms interact with cultural values: People notice, assign significance to, and recall the instances of misfortune that fit their values; they trust the experts whose cultural outlooks match their own; they define the contingencies that make them worse off, or count as losses, with reference to culturally valued states of affairs; they react affectively toward risk on the basis of emotions that are themselves conditioned by cultural appraisals—and so forth. By supplying this account of the mechanisms through which culture shapes risk perceptions, cultural cognition not only helps to fill a lacuna in the cultural theory of risk. It also helps to complete the heuristic model by showing how one and the same heuristic process (whether availability, credibility, loss aversion, or affect) can generate different perceptions of risk in people with opposing outlooks.
The proposition that moral evaluations of conduct shape the perceived consequences of such conduct is not unique to the cultural cognition thesis. Experimental study, for example, shows that negative affective responses mediate between moral condemnation of “taboo” behaviors and perceptions that those behaviors are harmful (Gutierrez & Giner-Sorolla, 2007). The same conclusion is also supported by a number of correlational studies (Horvath & Giner-Sorolla, 2007; Haidt & Hersh, 2001). The point of contact that the cultural cognition thesis, if demonstrated, would establish between cultural theory and these other works in morally motivated cognition would also lend strength to the psychological foundation of the former’s account of the origins of risk perceptions.
[i] For functionalist accounts, in which individuals are seen as forming risk perceptions congenial to their ways of life precisely because holding those beliefs about risk cohere with and promote their ways of life, see Douglas (1986) and Thompson, Ellis & Wildavsky (1990).
More or less what I said at a really great NSF-sponsored "trust" workshop at the University of Nebraska this weekend. Slides here.
1. What public distrust of science?
I want to address the relationship of trust to the science communication problem.
As I use the term, “the science communication problem” refers to the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent cultural conflict over risks or other policy-relevant facts to which that evidence directly speaks.
The climate change debate is the most spectacular current example, but it is not the only instance of the science communication problem. Historically, public controversy over the safety of nuclear power fit this description. Another contemporary example is the political dispute over the risks and benefits of the HPV vaccine.
Distrust of science is a common explanation for the science communication problem. The authority of science, it is asserted, is in decline, particularly among individuals of a relatively “conservative” political outlook.
This is an empirical claim. What evidence is there for believing that the public trusts scientists or scientific knowledge less today than it once did?
The NSF, which is sponsoring this very informative conference, has been compiling evidence on public attitudes toward science for quite some time as part of its biennial Science Indicators series.
One measure of how the public regards science is its expressed support for federal funding of scientific research. In 1985, the public supported federal science funding by a margin of about 80% to 20%. Today the margin is the same—as it was at every point between then and now.
Back in 1981, the proportion of the public who thought that the government was spending too little to support scientific research outnumbered the proportion who thought that the government was spending too much by a margin of 3:2.
Today around four times as many people say the government is spending too little on scientific research than say it is spending too much.
Yes, there is mounting congressional resistance to funding science in the U.S.--but that's not because of any creeping "anti-science" sensibility in the U.S. public.
Still aren't sure about that?
Well, how would you feel if your child told you he or she was marrying a scientist? About 70% of the public in 1983 said that would make them happy. The proportion who said that grew to 80% by 2001, and grew another 5% or so in the last decade.
Are “scientists … helping to solve challenging problems”? Are they “dedicated people who work for the good of humanity”?
About 90% of Americans say yes.
Do you think you can squeeze the 75% of Republicans who say they “don’t believe in human-caused climate change” into that remainder? Better double-check your math.
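A back-of-the-envelope check makes the point. The 90% figure is from the survey items above; the Republican share of the adult public is my own rough assumption, for illustration only:

```python
# All quantities are shares of the adult public.
trust_scientists = 0.90        # say scientists "work for the good of humanity"
distrusting_pool = 1 - trust_scientists      # at most 0.10 left to "distrust"

republican_share = 0.40        # assumed for illustration only
gop_disbelievers = 0.75 * republican_share   # 0.30 of the public

# The "creeping distrust" story needs every GOP climate disbeliever to
# come from the distrusting 10% -- but 0.30 > 0.10, so they can't all fit.
print(gop_disbelievers > distrusting_pool)   # True
```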
In sum, there isn’t any evidence that creeping distrust in science explains the science communication problem, because there’s no evidence either that Americans don’t trust scientists or that fewer of them trust them now than in the past.
Of course, if you like, you can treat the science communication problem itself as proof of such distrust. Necessarily, you might say, the public distrusts scientists if members of the public are in conflict over matters on which scientists aren’t.
But then the “public distrust in science” explanation becomes analytic rather than empirical. It becomes, in other words, not an explanation for the science communication problem but a restatement of it.
If we want to identify the source of the science communication problem, simply defining the problem as a form of “public distrust” in science—on top of being a weird thing to do, given the abundant evidence that the American public reveres science and scientists—necessarily fails to tell us what we are interested in figuring out, and confuses a lot of people who want to make things better.
2. The impact of cultural distrust on perceptions of what scientists believe
So rather than define the science communication problem as evincing “public distrust in science,” I’m going to offer an evidence-based assessment of its cause.
A premise of this explanation, in fact, is that the public does trust science.
As reflected in the sorts of attitudinal items in the NSF indicators and other sources, members of the public in the U.S. overwhelmingly recognize the authority of science and agree that individual and collective decisionmaking should be informed by the best available scientific evidence.
But diverse members of the public, I’ll argue, distrust one another when they perceive that the status of the cultural groups they belong to is being adjudicated by the state’s adoption of a policy or law premised on a disputed risk or comparable fact.
When risks and other facts that admit of scientific investigation become the focus of cultural status competition, members of opposing groups will be unconsciously motivated to construe all manner of evidence in a manner that reinforces their commitment to the positions that predominate within their respective groups.
One source of evidence—indeed, the most important one—will be the weight of opinion among expert scientists.
As a result, culturally diverse people, all of whom trust scientists but who distrust one another’s intentions on policy issues that have come to symbolize clashing worldviews, will end up culturally polarized over what scientists believe about the factual presuppositions of each other’s positions.
That is the science communication problem.
I will present evidence from two (NSF-funded!) studies that support this account.
3. Cultural cognition of scientific consensus
The first was an experiment on how cultural cognition influences perceptions of scientific consensus on climate change, nuclear waste disposal, and the effect of “concealed carry” laws.
The cultural cognition thesis holds that individuals can be expected to form perceptions of risk and like facts that reflect and reinforce their commitment to identity-defining affinity groups.
For the most part, individuals have a bigger stake in forming identity-congruent beliefs on societal risks than they have in forming best-evidence-congruent ones. If a person makes a mistake about the best evidence on climate change, for example, that won’t affect the risk that that individual or anyone he or she cares about faces: as a solitary individual, that person’s behavior (as consumer, voter, etc.) is too inconsequential to have an impact.
But if that person makes a “mistake” in relation to the view that dominates in his or her affinity group, the consequences could be quite dire indeed. Given what climate change beliefs now signify about one’s group membership and loyalties, someone who forms a culturally non-conforming view risks estrangement from those on whose good opinion that person’s welfare—material and emotional—depends.
It is perfectly rational, in these circumstances, for individuals to engage information in a manner that reliably connects their beliefs to their cultural identities rather than to the best scientific evidence. Indeed, experimental evidence suggests that the more proficient a person’s critical reasoning capacities, the more successful he or she will be in fitting all manner of evidence to the position that expresses his or her group identity.
What most scientists in a particular field believe is one such form of evidence. So we hypothesized that culturally diverse individuals would construe evidence of what experts believe in a biased fashion supportive of the position that predominates in their respective groups.
In the experiment, we showed study subjects the pictures and resumes of three highly credentialed scientists and asked whether they were “experts” (as one could reasonably have inferred from their training and academic posts) in the domains of climate change, nuclear power, and gun control.
Half the subjects were shown a book excerpt in which the featured scientist took the “high risk” position on the relevant issue (“scientific consensus that humans are causing climate change”; “deep geologic isolation of nuclear wastes is extremely hazardous”; “permitting citizens to carry concealed guns in public increases crime”), and half a book excerpt in which the same scientist took the “low risk” position (“evidence on climate change inconclusive”; “deep geologic isolation of nuclear wastes poses no serious hazards”; “allowing citizens to carry concealed guns reduces crime”).
If the featured scientist’s view matched the one dominant in a subject’s cultural group, the subject was highly likely to deem that scientist an “expert” whose views a reasonable citizen would take into account.
But if that same scientist was depicted as taking the position contrary to the one dominant in the subject’s group, then the subject was highly likely to perceive that the scientist lacked expertise on the topic in question.
This result was consistent with our hypotheses.
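If one had the raw data, the pattern would show up in a simple cross-tabulation. Here is a sketch with synthetic stand-in data (the proportions are invented to mimic the finding, not taken from the study):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 600

# Synthetic stand-in data: one row per subject, recording whether the
# featured scientist's book-excerpt position matched the stance
# dominant in the subject's cultural group, and whether the subject
# rated the scientist an "expert."
match = rng.random(n) < 0.5
rated_expert = np.where(match, rng.random(n) < 0.85, rng.random(n) < 0.30)
df = pd.DataFrame({"position_matches_group": match,
                   "rated_expert": rated_expert})

# Cultural cognition predicts "expert" ratings should be far more
# frequent in the matched rows than in the mismatched ones.
print(pd.crosstab(df["position_matches_group"], df["rated_expert"],
                  normalize="index"))
```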
If individuals in the real world selectively credit or discredit evidence on “what experts believe” in this manner, then individuals of diverse cultural outlooks will end up polarized on what scientific consensus is.
And this is exactly the case. In an observational component of the study, we found that the vast majority of subjects perceived “scientific consensus” to be consistent with the position that was dominant among members of their respective cultural groups.
Judged in relation to National Academy of Sciences “expert consensus” reports, moreover, all of the opposing cultural groups turned out to be equally bad in discerning what the weight of scientific opinion was across these three issues.
In sum, they all agreed that policy should be informed by the weight of expert scientific opinion.
But because the policies in question turned on disputed facts symbolically associated with membership in opposing groups, they were motivated by identity-protective cognition to assess evidence of what scientists believe in a biased fashion.
4. The cultural credibility heuristic
The second study involved perceptions of the risks and benefits of the HPV vaccine.
The CDC’s 2006 recommendation that the vaccine be added to the schedule of immunizations required as a condition of middle school enrollment, although only for girls, provoked intense political controversy across the U.S. in the years immediately thereafter.
In our study, we found that there was very mild cultural polarization on the safety of the HPV vaccine among subjects whose views were solicited in a survey.
The degree of cultural polarization was substantially more pronounced, however, among subjects who were first supplied with balanced information on the vaccine’s potential risks and expected benefits. Consistent with the cultural cognition thesis, the subjects were selectively crediting and discrediting the information we supplied in patterns that reflected their stake in forming identity-supportive beliefs.
But still another group of subjects assessed the risks and benefits of the HPV vaccine after being furnished the same information from debating “public health experts.” These “experts” were ones whose appearances and backgrounds, a separate pretest had shown, would induce study subjects to attribute competing cultural identities to them.
In this experimental condition, subjects’ assessments of the risks and benefits of the HPV vaccine turned decisively on the degree of affinity between the perceived cultural identities of the experts and the study subjects’ own identities.
If subjects observed the position that they were culturally predisposed to accept being advanced by the “expert” they were likely to perceive as having values akin to theirs, and the position they were predisposed to reject being advanced by the “expert” they were likely to perceive as having values alien to their own, then polarization was amplified all the more.
But where subjects saw the expert they were likely to perceive as sharing their values advancing the position they were predisposed to reject, and the expert they were likely to perceive as holding alien values advancing the position they were predisposed to accept, subjects of diverse cultural identities flipped positions entirely.
The subjects, then, trusted the scientific experts.
But the subjects remained predisposed to construe information in a manner protective of their cultural identities.
As a result, when they were furnished tacit cues that opposing positions on the HPV vaccine risks corresponded to membership in competing cultural groups, they credited the expert whose values they tacitly perceived as closest to their own—a result that intensified polarization when subjects' predispositions were reinforced by those cues.
5. A prescription
The practical upshot of these studies is straightforward.
To translate public trust in science into convergence on science-informed policy, it is essential to protect decision-relevant science from entanglement in culturally antagonistic meanings.
No risk issue is necessarily constrained to take on such meanings.
There was nothing inevitable, for example, about the HPV vaccine becoming a focus of cultural status conflict. It could easily, instead, have been assimilated uneventfully into public health practice in the same manner as the HBV vaccine. Like the HPV vaccine, the HBV vaccine immunizes recipients against a sexually transmitted disease (hepatitis B), was recommended for universal adolescent vaccination by the CDC, and thereafter was added to the school-enrollment schedules of nearly every state.
The HBV vaccine had uptake rates of over 90% during the years in which the safety of the HPV vaccine was a matter of intense, and intensely polarizing, political controversy in the U.S.
The reason the HPV vaccine ended up becoming suffused with antagonistic cultural meanings had to do with ill-advised decisions, pushed for by the vaccine’s manufacturer and acquiesced in without protest by the FDA, that made it certain that members of the public would learn about the vaccine for the first time not from their pediatricians, as they had with the HBV vaccine, but from news reports on the controversy occasioned by a high-profile, nationwide campaign to secure legislative enactments of a “girls’ only STD shot” as a condition of school enrollment.
The risks associated with introducing the HPV vaccine in this manner were not only foreseeable but foreseen and even empirically studied at the time.
Warnings about this danger were not so much rejected as never considered—because there is no mechanism in place in the regulatory process for assessing how science-informed policymaking interacts with cultural meanings.
The U.S. is a pro-science culture to its core.
But it lacks a commitment to evidence-based methods and procedures for assuring that what is known to science becomes known to those whose decisions, individual and collective, it can profitably inform.
The “declining trust in science” trope is itself a manifestation of our evidence-free science communication culture.
Those who want to solve the science communication problem should resist this & all the other just-so stories that are offered as explanations of it.
They should also steer clear of those drawn to the playground-quality political discourse that features competing tallies of whose “side” is “more anti-science.”
And they should instead devote their energies to the development of a new political science of science communication that reflects an appropriately evidence-based orientation toward the challenge of enabling the members of a pluralistic liberal society to reliably recognize what’s known by science.
Does this show scientists today are suffering from lack of public trust? See exchange in comments -- & add your interpretations of these and other data!
Still more evidence of my preternatural ability to change people's minds: my refutation of Krugman's critique of Klein's article convinces Klein that Krugman's critique was right
But I don't have time to go into this now (am busy w/ field experiments aimed at counteracting the motivated reasoning of cultural anti-cat zealots). Will write something on this "tomorrow."
More or less the remarks I delivered yesterday at Earthday "Climate teach in/out" at Yale University:
I study risk perception and science communication.
I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society.
What people “believe” about global warming doesn’t reflect what they know; it expresses who they are.
Accordingly, if you want to promote constructive public engagement with the best available evidence, you have to change the meaning of climate change.
You have to disentangle positions on it from opposing cultural identities, so that people aren't put to a choice between freely appraising the evidence and being loyal to their defining commitments.
I’ll elaborate, but for a second just forget climate change, and consider another culturally polarizing science issue: evolution.
About every two years, a major polling organization like Gallup issues a public opinion survey showing that approximately 50% of Americans “don’t believe in evolution.”
Pollsters issue these surveys at two-year intervals because apparently that’s how long it takes people to forget that they’ve already been told this dozens of times. Or in any case, every time such a poll is released, the media and blogosphere is filled with expressions of shock, incomprehension, and dismay.
“What the hell is wrong with our society’s science education system?,” the hand-wringing, hair-pulling commentators ask.
Well, no doubt a lot.
But if you think the proportion of survey respondents who say they “believe in evolution” is an indicator of the quality of the science education that people are receiving in the U.S., you are misinformed.
Do you know what the correlation is between saying “I believe in evolution” and possessing even a basic understanding of “natural selection,” “random mutation,” and “genetic variance”—the core elements of the modern synthesis in evolutionary science? Essentially zero.
In a controversial decision in 2010, the National Science Foundation in fact proposed removing from its standard science-literacy test the true-false question “human beings developed from an earlier species of animals.”
The reason is that giving the correct answer to that question doesn’t cohere with giving the right answer to the other questions in NSF’s science-literacy inventory.
What that tells you, if you understand test-question validity, is that the evolution item isn’t measuring the same thing as the other science-literacy items.
Answers to those other questions do cohere with one another, which is how one can be confident they are all validly and reliably measuring how much science knowledge that person has acquired.
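A concrete way to see what "cohere" means here is to compute each item's correlation with the score on the remaining items. This sketch uses synthetic data built so that one item tracks identity rather than knowledge; nothing in it comes from the actual NSF battery:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Synthetic 0/1-scored responses: most items load on a common
# "knowledge" factor; the "evolution" item instead tracks an
# unrelated "identity" variable.
knowledge = rng.normal(size=n)
identity = rng.normal(size=n)
items = pd.DataFrame({
    f"item_{k}": (knowledge + rng.normal(size=n) > 0).astype(int)
    for k in range(1, 6)
})
items["evolution"] = (identity > 0).astype(int)

for col in items.columns:
    rest = items.drop(columns=col).sum(axis=1)  # score on all other items
    r = np.corrcoef(items[col], rest)[0, 1]     # item-rest correlation
    print(f"{col}: item-rest r = {r:.2f}")

# The knowledge items show solidly positive item-rest correlations; the
# "evolution" item's is near zero -- it isn't measuring the same thing,
# which is the pattern that prompted the NSF proposal.
```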
But what the NSF “evolution” item is measuring, researchers have concluded, is test takers’ cultural identities, and in particular the significance of religiosity in their lives.
What’s more, the impact of science literacy on the likelihood that people will say they “believe in evolution” is in fact highly conditional on their identity: as their level of science comprehension increases, individuals with a highly secular identity become more likely to say “they believe” in evolution; but as those with a highly religious identity become more science literate, in contrast, they become even more likely to say they don’t.
What you “believe” about evolution, in sum, does not reflect what you know about science—in general, or in regard to the natural history of human beings.
Rather it expresses who you are.
Okay, well, exactly the same thing is true on climate change.
You’ve all seen the polls, I’m sure, showing the astonishing degree of political polarization on “belief” in “human-caused” global warming.
Well, a Pew Poll last spring asked a nationally representative sample, “What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it carbon dioxide, hydrogen, helium, or radon?”
Approximately 65% got the right answer to that question.
And there was zero correlation between getting it right and being a Democrat or Republican.
The percentage of Democrats who say they “believe” in global warming is substantially higher than 65%: it’s over 80%, which means that a good number of Democrats who say they “believe” in global warming don’t understand the most basic of all facts known to climate science.
The percentage of Republicans who say they believe in human-caused global warming is a lot lower than 65%. Only about 25% say they believe human beings have caused global temperatures to rise in recent decades, according to Pew and other researchers.
That means that a large fraction of the Republicans who tell pollsters they “don’t believe” in human-caused global warming do in fact know the most important thing there is to understand about climate change: that adding carbon to the atmosphere causes the temperature of the earth to increase.
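Both overlap claims follow from simple set arithmetic on the reported shares, assuming (per the zero correlation noted above) that the 65% knowledge rate holds within each party:

```python
know_co2 = 0.65      # share in each party who know CO2 is the gas at issue
dem_believe = 0.80   # Democrats who say they "believe" in global warming
rep_believe = 0.25   # Republicans who say humans caused recent warming

# By inclusion-exclusion, at least this share of Democrats "believe"
# without knowing the basics:
print(f"{max(0.0, dem_believe - know_co2):.2f}")   # 0.15
# ...and at least this share of Republicans "disbelieve" while knowing them:
print(f"{max(0.0, know_co2 - rep_believe):.2f}")   # 0.40
```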
Do you know what the correlation is between science literacy and “belief” in human-caused global warming?
You get half credit for saying zero.
That’s the right answer for a nationally representative sample as a whole.
But it’s a mistake to answer the question without dividing the sample up along cultural or comparable lines: as their score on one or another measure of science comprehension goes up, Democrats become more likely, and Republicans less, to say they “believe” in human-caused global warming.
Like saying “I do/don’t believe in evolution,” saying I “do/don’t believe in climate change” doesn’t convey what you know about science—generally, or in relation to the climate.
It expresses who you are.
Al Gore has described the climate change debate as a “struggle for the soul of America.”
But that’s exactly the problem. Because in “battles for the soul” of America, the stake that culturally diverse individuals have in forming beliefs consistent with their group identity dominates the stake they have in forming beliefs that fit the best available evidence.
In saying that, moreover, I’m not talking about whatever interest people have in securing comfortable accommodations in the afterlife. I’m focused entirely on the here and now.
Look: What an ordinary individual believes about the “facts” on climate change has no impact on the climate.
What he or she does as a consumer, as a voter, or as a participant in public debate is just too inconsequential to have an impact.
No mistake that individual makes about the science on climate change, then, is going to affect the risk posed by global warming for him or her or for anyone else that person cares about.
But if he or she takes the “wrong” position in relation to his or her cultural group, the result could be devastating for him or her, given what climate change now signifies about one’s membership in and loyalty to opposing cultural groups.
It could drive a wedge—material, emotional, and psychological—between the individual and the people whose support is indispensable to his or her well-being.
In these circumstances, we should expect a rational person to engage information in a manner geared to forming and persisting in positions that are dominant within their cultural groups. And the better they are at making sense of complex information—the more science-comprehending they are—the better they’ll do at that.
That’s what we see in lab experiments. And it’s why we see polarization on global warming intensifying in step with science literacy in the real world.
But while that’s the rational way for people to engage information as individuals, given what climate change signifies about their cultural identities, it’s a disaster for them collectively. Because if everyone does this at the same time, members of a culturally diverse democratic society are less likely to converge on scientific evidence that is crucial to the welfare of all of them.
And yet that by itself doesn’t make it any less rational for individuals to attend to information in a manner that reliably connects them to the position that is dominant in their group.
This is a tragedy of the commons problem—a tragedy of the science communications commons.
If we want to overcome it, then we must disentangle competing positions on climate change from opposing cultural identities, so that culturally pluralistic citizens aren’t put in the position of having to choose between knowing what’s known to science and being who they are.
Only that will dissolve the conflict citizens now face between their personal incentive to form identity-consistent beliefs and the collective one they have in recognizing and giving effect to the best available evidence.
Science educators, by the way, have already figured this out about evolution. They’ve shown you can in fact teach the elements of the modern synthesis--random mutation, genetic variance, and natural selection--just as readily to students whose identities cohere with saying they “don’t believe” in evolution as you can to students whose identities cohere with saying they do. You just can’t expect the former to say “I believe in evolution” afterwards.
Indeed, you must take pains not to confuse understanding evolutionary science with the “pledge of cultural allegiance” that “I believe in evolution” has become.
You must remove from the education environment the toxic cultural meanings that make answers to that question badges of membership in and loyalty to one’s cultural group. The meanings that fuel the pathetic spectacle of hand-wringing and hair-pulling that occurs every time Gallup or another organization issues its “do you believe in evolution” survey results.
All the diverse groups that make up our pluralistic democracy are amply stocked with science knowledge.
They are amply stocked with public spirit too.
That means you, as a science communicator, can enable these citizens to converge on the best available evidence on climate change.
But to do it, you must banish from the science communication environment the culturally antagonistic meanings with which positions on that issue have become entangled—so that citizens can think and reason for themselves free of the distorting impact of identity-protective cognition.
If you want to know what that sort of science communication environment looks like, I can tell you where you can see it: in Florida, where all 7 members of the Palm Beach County Board of Commissioners -- 4 Democrats, 3 Republicans -- voted unanimously to join Broward County (predominantly Democratic), Monroe County (predominantly Republican), and Miami-Dade County (politically mixed) in approving the Southeast Climate Compact Action plan, which, I quote from the Palm Beach County Board summary, “includes 110 adaptation and mitigation strategies for addressing sea-level rise and other climate issues within the region.”
I’ll tell you another thing about what you’ll see if you make this trip: the culturally pluralistic and effective form of science communication happening in southeast Florida doesn’t look anything like the culturally assaultive "us-vs-them" YouTube videos and prefabricated internet comments with which Climate Reality and Organizing for America are flooding national discourse.
And if you want to improve public engagement with climate science in the United States, the fact that advocates as high-profile and highly funded as those still haven’t figured out the single most important lesson to be learned from the science of science communication should make you very sad.
No, I don't think "cultural cognition is a bad thing"; I think a *polluted science communication environment* is & we should be using genuine evidence-based field communication to address the problem
Stenton Benjamin Danielson has a characteristically thoughtful post, 95% of which I agree with, on cultural cognition, "public opinion," and promoting constructive public engagement with climate science. But of course the 5%-- which has to do with whether I think "cultural cognition" is a "bad thing" that is to be overcome rather than a dynamic to be deployed to promote such engagement -- sticks in my craw! Maybe this response will get us closer to 100% agreement--if not by moving him a full 5% in my direction, then maybe by provoking him to elaborate & thereby move me some fraction of the remainder toward his point of view.
So read what he says. Then read this:
Part of the problem, I'm sure, is that I'm an imperfect communicator.
Another is the infeasibility of saying everything one believes every time one says anything.
But it is simply not the case that I view cultural cognition as unreservedly bad -- “a sort of disease or pollution in our debate about an issue, something to be prevented or neutralized whenever possible so that we can make rational assessments of the evidence.”
On the contrary, I view it as an indispensable element of rational thought, one that contributes in a fundamental way to the capacity of individuals to participate in, and thus extend, collective knowledge. See generally:
- Nullius in verba? Surely you are joking, Mr. Hooke! (or Why cultural cognition is not a bias, part 1)
- The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)
Cultural cognition conduces to persistent states of public controversy over what's known only in a polluted science communication environment: one in which antagonistic cultural meanings become attached to positions on risk and policy-relevant facts, and transform them into badges of membership in opposing cultural groups.
- The Liberal Republic of Science, part 4: "A new political science ..."
- Democracy and the science communication environment (lecture synopsis and slides)
I also agree, by the way, that "messaging" campaigns aimed at influencing "public opinion" generally are an absurd waste of time, not to mention a waste of the money of those eager to support climate-science communication efforts. This approach to "science communication" not only reflects a psychologically unrealistic account of how people come to know what's known by science but betrays an elementary-school level of comprehension of basic principles of political economy.
Don't "message" people with "struggle for the soul of America" appeals.
Show them that engaging climate science is "normal" by enabling them to see that people they recognize as competent and informed are using it to guide their practical decisions. That is how ordinary people -- very rationally -- recognize how to orient themselves appropriately with the best available evidence on all manner of issues.
Understanding the contribution that cultural cognition makes to individuals' rational apprehension of what is known is, I believe, indispensable to that strategy for promoting constructive public engagement with climate science. I'm glad to see that you agree with me on that -- even if you hadn't discerned that I agree with you!
Those "risk experts" who want to contribute, moreover, should stop telling just-so stories-- give up the facile "take-'biases'-&-'heuristics'-literature-add-water-&-stir" form of "instant decision science"-- and go to the places where real people are trying to figure out how to use climate science to make their lives better.
Go there and genuinely help them by systematically testing their experience-informed hypotheses about how to reproduce in the world the sorts of things that experimental methods using cultural cognition and other theories suggest will improve public engagement with climate science.
We don't need more stylized lab experiments that try to convince us that things that real-world evidence manifestly shows won't work actually will if we just keep doing them (followed, when they don't, by whinging about "the forces of evil" who -- as was perfectly foreseeable -- told members of the public whom you were targeting not to believe your "message").
Climate scientists update their models to reflect ten years of data. Climate advocates should too.
Want to improve climate-science communication (I mean really, seriously)? Stop telling just-so stories & conducting "messaging" experiments on MTurk workers & female NYU undergraduates & use genuine evidence-based methods in field settings instead
From Kahan, D., "Making Climate Science Communication Evidence-based—All the Way Down," in Culture, Politics and Climate Change, eds. M. Boykoff & D. Crow, pp. 203-21. (Routledge Press, 2014):
a. Methods. In my view, both making use of and enlarging our knowledge of climate science communication requires making a transition from lab models to field experiments. The research that I adverted to on strategies for counteracting motivated reasoning consists of simplified and stylized experiments administered face-to-face or on-line to general population samples. The best studies build explicitly on previous research—much of it also consisting in stylized experiments—that has generated information about the nature of the motivating group dispositions and the specific cognitive mechanisms through which they operate. They then formulate and test conjectures about how devices already familiar to decision science—including message framing, in-group information sources, identity-affirmation, and narrative—might be adapted to avoid triggering these mechanisms when communicating with these groups.
But such studies do not in themselves generate useable communication materials. They are only models of how materials that reflect their essential characteristics might work. Experimental models of this type play a critical role in the advancement of science communication knowledge: by silencing the cacophony of real-world influences that operate independently of anyone’s control, they make it possible for researchers to isolate and manipulate mechanisms of interest, and thus draw confident inferences about their significance, or lack thereof. They are thus ideally suited to reducing the class of the merely plausible strategies to ones that communicators can have an empirically justified conviction are likely to have an impact. But one can’t then take the stimulus materials used in such experiments and send them to people in the mail or show them on television and imagine that they will have an effect.
Communicators are relying on a bad model if they expect lab researchers to supply them with a bounty of ready-to-use strategies. The researchers have furnished them something else: a reliable map of where to look for them. Such a map will (it is hoped) spare the communicators from wasting their time searching for nonexistent buried treasure. But the communicators will still have to dig, making and acting on informed judgments about what sorts of real materials they believe might reproduce these effects outside the lab in the real-world contexts in which they are working.
The communicators, moreover, are the only ones who can competently direct this reproduction effort. The science communication researchers who constructed the models can’t just tell them what to do because they don’t know enough about the critical details of the communication environment: who the relevant players are, what their stakes and interests might be, how they talk to each other, and whom they listen to. If researchers nevertheless accept the invitation to give “how to” advice, the best they will be able to manage are banalities—“Know your audience!”; “Grab the audience’s attention!”—along with Goldilocks admonitions such as, “Use vivid images, because people engage information with their emotions. . . but beware of appealing too much to emotion, because people become numb and shut down when they are overwhelmed with alarming images!”
Communicators possess knowledge of all the messy particulars that researchers not only didn’t need to understand but were obliged to abstract away from in constructing their models. Indeed, like all smart and practical people, the communicators are filled with many plausible ideas about how to proceed—more than they have the time and resources to implement, and many of which are not compatible with one another anyway. What experimental models—if constructed appropriately—can tell them is which of their surmises rest on empirically sound presuppositions and which do not. Exposure to the information such modeling yields will activate experience-informed imagination on the communicators’ part, and enable them to make evidence-based judgments about which strategies they believe are most likely to work for their particular problem.
At that point, it is time for the scientist of science communication to step back in—or to join alongside the communicator. The communicator’s informed conjecture is now a hypothesis to be tested. In advising field communicators, science of science communication researchers should treat what the communicators do as experiments. Science communication researchers should work with the communicator to structure their communication strategies in a manner that yields valid observations that can be measured and analyzed.
Indeed, communicators, with or without the advice of science of science communication researchers, should not just go on blind instinct. They shouldn’t just read a few studies, translate them into a plausible-sounding plan of action, and then wing it. Their plausible surmises about what will work will be more plausible, more likely to work, than any that the laboratory researchers, indulging their own experience-free imaginations, concoct. But they will still be only plausible surmises. Still be only hypotheses. Without evidence, we will not learn whether policies based on such surmises did or didn’t work. If we don’t learn that, we won’t learn how to do even better.
Genuinely evidence-based science communication must be based on evidence all the way down. Communicators should make themselves aware of the existing empirical information that science communication researchers have generated (and steer clear of the myriad stories that department-store consumers of decision science work tell) about why the public is divided on climate science. They should formulate strategies that seek to reproduce in the world effects that have been shown to help counter the dynamics of motivated reasoning responsible for such division. Then, working with empirical researchers, they should observe and measure. They should collect appropriate forms of pretest or preliminary data to try to corroborate that the basis for expecting a strategy to work is sound and to calibrate and refine its elements to maximize its expected effect. They should also collect and analyze data on the actual impact of their strategies once they’ve been deployed.
Finally, they should make the information that they have generated at every step of this process available to others so that they can learn from it too. Every exercise in evidence-based science communication itself generates knowledge. Every such exercise itself furnishes an instructive model of how that knowledge can be intelligently used. The failure to extract and share the intelligence latent in doing science communication perpetuates the dissipation of collective knowledge that it is the mission of the science of science communication to staunch.
Unrepresentative convenience samples are unlikely to generate valid insights on how to counteract motivated reasoning. Samples of college undergraduates are perfectly valid when there is reason to believe the cognitive dynamics involved operate uniformly across the population. But the mechanisms through which motivated reasoning generates polarization on climate change don’t; they interact with diverse characteristics—worldviews and values, but also gender, race, religiosity, and even region of residence. It is known, for example, that white males who are highly hierarchical and individualistic in their worldviews or conservative in their political ideologies, and who are likely to live in the South and Far West, tend to react dismissively to information about climate change (McCright & Dunlap 2013, 2012, 2011; Kahan, Braman, Gastil, Slovic & Mertz 2007). Are they likely to respond to a “framing” strategy in the same way that a sample of predominantly female undergraduates attending a school in New York City does (Feygina, Jost & Goldsmith 2010)? If not, that’s a good reason to avoid using such a sample in a framing study, and not to base practical decisions on any study that did.
From CCP's "Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment" report:
II. Summary conclusions
1. There is deep and widespread public consensus, even among groups strongly divided on other issues such as climate change and evolution, that childhood vaccinations make an essential contribution to public health. A very large supermajority believes that the benefits of childhood vaccinations outweigh their risks and that public health generally would suffer were vaccination rates to fall short of the goals set by public health authorities.
2. In contrast to other disputed science issues, public opinion on the safety and efficacy of childhood vaccines is not meaningfully affected by differences in either science comprehension or religiosity. Public controversies over science, including those over evolution and climate change, often feature conflict among individuals of varying levels of religiosity, whose differences of opinion intensify in proportion to their level of science comprehension. There is no such division over vaccine risks and benefits.
3. The public’s perception of the risks and benefits of vaccines bears the signature of a generalized affective evaluation, which is positive in a very high proportion of the population. The high degree of coherence in responses to items relating to the contribution that childhood vaccinations make to public health strongly implies that public assessments of vaccine risks and benefits reflect a unitary latent affective orientation. The distribution of that orientation is strongly skewed in a positive direction—indicating that a substantial majority of the population (in the vicinity of 75%) has a positive attitude toward childhood vaccines.
4. Among the manifestations of the public’s positive orientation toward childhood vaccines is the perception that vaccine benefits predominate over vaccine risks and a high degree of confidence in the judgment of public health officials and experts. By large supermajorities, the survey participants endorsed the proposition that vaccine benefits outweigh their risks, and rejected claims that deterioration in vaccination coverage would pose no serious public health danger. They also expressed confidence in the judgment of officials who identify which vaccinations should be universally administered, and in the judgment of experts that vaccines are safe.
5. Perceptions of the relationship between vaccines and specified diseases reflect the same positive affective orientation that informs public perceptions of the contribution that childhood vaccines make to public health generally. Responses to items on the link between vaccines and autism, cancer, diabetes—as well as a fictional disease not asserted by anyone to be connected to childhood vaccinations—displayed the same pattern as the responses to all the other public-health items. Under these circumstances, responses to these items can confidently be viewed only as indicators of the same latent affective attitude reflected in the public’s assessments of the contribution childhood vaccines make to public health generally. Public health officials should resist the mistake of construing responses to survey items such as these as measuring public knowledge about or beliefs on specific issues relating to childhood vaccinations.
6. The demographic characteristics and political outlooks typically associated with group conflict over risk and related aspects of decision-relevant science are not meaningfully associated with disagreement about childhood-vaccination risks. Members of all such groups believe that vaccine risks are low, vaccine benefits high, and mandatory vaccination policies appropriate. Those who believe otherwise are outliers in every one of these groups.
7. There is no meaningful association between concern over vaccine risks and the sharp cultural cleavage that characterizes perceptions of either “public safety risks,” a cluster of putative hazards associated with environmental issues and gun control, or “social deviancy risks,” a cluster associated with legalization of marijuana and prostitution and with teaching high school students about birth control. The opposing cultural allegiances that are associated with disputed societal and public health risks do not generate meaningful disagreement over vaccine risks and benefits. At most, such dispositions mildly influence the intensity with which culturally diverse members of the public approve of childhood vaccination.
8. Existing universal vaccination policies appear to enjoy widespread support, but proposals to restrict existing grounds for exemption divide the public along partisan lines. Despite support for universal vaccination policies and widespread disapproval of parents who refuse to permit vaccination of their children based on concerns about vaccine risks, proposals to restrict or eliminate moral or religious grounds for opting out of vaccination requirements provoke dissensus along largely partisan lines consistent with citizens’ general orientation toward government regulation.
9. The public generally underestimates vaccination rates and overestimates the rate of exemption. Only 9% of the survey respondents recognized that the vaccination rate among U.S. children aged 19-35 months for recommended childhood vaccinations has been over 90% in recent years; the median estimate was in the 70-79% range. The median estimate of the proportion of children receiving no vaccinations was in the 2-10% range; only 9% correctly indicated that fewer than 1% of children aged 19-35 months receive none of the recommended childhood vaccinations.
10. Communications that assert the existence of growing concern over vaccination risks and declining vaccination rates magnify misestimations of vaccination rates and of exemptions. Experiment subjects who read communications patterned on real media communications underestimated vaccine coverage by an even larger amount than subjects in the control.
11. Communications that connect “growing concern” over vaccine risks to disbelief in evolution and climate change generate cultural polarization. Relative to their counterparts in a control condition, experiment subjects exposed to such a communication divided along lines that reflected their predispositions toward currently disputed societal risks.
12. Factually accurate information on vaccine rates, when issued by the CDC, substantially corrects underestimation of vaccination rates. Exposure to a story patterned on the press statements that the CDC typically issues in connection with annual NIS updates resulted in a significant correction of experiment subjects’ underestimation of national vaccination coverage.
B. Normative and prescriptive conclusions
1. Risk communicators—including journalists, advocates, and public health professionals—should refrain from conveying the false impression that a substantial proportion of parents or of the public generally doubts vaccine safety. Such information risks creating anxiety rather than dispelling it. Moreover, by aggravating underestimation of vaccination rates, communications of this nature obscure a signal that conveys public confidence in vaccine safety and stimulates reciprocal motivations to contribute to the collective good of herd immunity.
2. Risk communicators should avoid resort to the factually unsupportable, polemical trope that links vaccine risk concerns to climate-change skepticism and to disbelief in evolution as evidence of growing societal distrust in science. Such rhetoric, in addition to being facile, risks generating an affective or symbolic link between vaccines and issues on which cultural polarization is currently a significant impediment to public science communication.
3. Risk communicators, including public health officials and professionals, should aggressively disseminate true information on the historically and continuing high rates of childhood vaccination in the U.S. The high levels of vaccination in the U.S. are a science communication resource. That resource should be exploited, not obscured or dissipated.
4. Because there is a chance that it would make mandatory vaccination policies a matter of partisan contestation, campaigns to promote legislative elimination or contraction of existing grounds for exemptions should be viewed with extreme caution. There is reason to believe—from real-world experience as well as the results of this study—that proposals to restrict nonmedical exemptions from existing mandates would generate partisan division in the public. As evidenced by the controversy over the HPV vaccine, such divisions disrupt the processes by which ordinary citizens recognize and orient themselves with respect to the best-available evidence on public-health and other risks. Accordingly, the potential for creating polarization over childhood vaccination risks is a cost that must be balanced against whatever benefit might be obtained from reforms in law aimed at reducing the already very low percentage of parents that exempt their children from mandatory vaccination.
5. Vaccine-risk assessments and communication should not be based on creative extrapolations from general theories. Because decision-science mechanisms can be imaginatively manipulated to support a wide variety of explanations and prescriptions, it is a mistake to present theoretical syntheses of work in this field as a guide for action. Instead, conjectures informed by decision-science frameworks should be treated as hypotheses for empirical investigation.
6. Hypotheses relating to vaccine-risk perceptions and vaccine-risk communication should be tested with valid empirical methods specifically suited to measuring matters of consequence. Opinion polls cannot be expected to generate significant insight into vaccine risk perceptions, either on the part of parents, whose responses are unreliable indicators of behavior, or the general public, in whom demographic and attitudinal measures fail to explain practically meaningful levels of variance. Rather, behavioral measures (including validated attitudinal indicators of behavior) should be used to gauge parental risk concern and fine-grained, local methods used to investigate the characteristics of enclaves of demonstrated vaccine hesitancy.
7. The public health establishment should take the initiative to develop comprehensive proposals for better integrating the science of science communication into its culture and practices. Procedures should be adopted, within government public health agencies and within the medical profession, for making use of the best available empirical methods for anticipating and averting influences that distort public risk perceptions. The public health establishment should also propagate professional norms geared to curbing ill-informed and ill-considered forms of ad hoc risk communication by the media and by individual members of the public-health establishment. The most effective step toward discouraging this form of feral risk communication is to populate the niche it now occupies with an empirically informed and systematically planned alternative.
More on "Krugman's symmetry proof": it's not whether one gets the answer right or wrong but how one reasons that counts
Okay, I've finally caught my breath after laughing myself into a state of hyperventilation as a result of reading Krugman's latest proof (this is actually a replication of an earlier empirical study on his part) that ideologically motivated reasoning is in fact perfectly symmetric with respect to right-left ideology.
Rather than just guffawing appreciatively, it's worth taking a moment to call attention to just how exquisitely self-refuting his "reasoning" is!
There's the great line, of course, about how his "lived experience" (see? I told you, he's doing empirical work!) confirms that motivated cognition "is not, in fact, symmetric between liberals and conservatives."
But what comes next is an even more subtle -- and thus an even more spectacular! -- illustration of what it looks like when one's reason is deformed by tribalism:
Yes, liberals are sometimes subject to bouts of wishful thinking. But can anyone point to a liberal equivalent of conservative denial of climate change, or the “unskewing” mania late in the 2012 campaign, or the frantic efforts to deny that Obamacare is in fact covering a lot of previously uninsured Americans?
Uh, no, PK. I mean seriously, no.
The test for motivated cognition is not whether someone gets the "right" answer but how someone assesses evidence.
A person displays ideologically motivated cognition when, instead of weighing evidence based on criteria related to its connection to the truth, he or she credits or dismisses it based on its conformity to his or her ideological predispositions.
Thus, if we want to use public opinion on some issue -- say, climate change -- to assess the symmetry of ideologically motivated reasoning, we can't just say, "hey, liberals are right, so they must be better reasoners."
Rather we must determine whether "liberals" who "believe" in climate change differ from "conservatives" who "don't" in how impartially they weigh evidence supportive of & contrary to their respective positions.
How might we do that?
Well, one way would be to conduct an experiment in which we manipulate the ideological motivation people with "liberal" & "conservative" values have to credit or dismiss one and the same piece of valid evidence on climate change.
If "liberals" (it makes me shudder to participate in the flattening of this term in contemporary political discourse) adjust the weight they give this evidence depending on its ideological congeniality, that would support the inference that they are assessing evidence in a politically motivated fashion.
If, in aggregate, in the real world, they happen to get the "right" answer, then they aren't to be commended for the high quality of their reasoning.
Rather, they are to be congratulated for being lucky that a position they unreasoningly subscribe to happens to be true.
And vice versa if the "truth" happens (on this issue or any other) to align with the position that "conservatives" unreasoningly affirm regardless of the quality of the evidence they are shown.
That Krugman is too thick to see that one can't infer anything about the quality of partisans' reasoning from the truth or falsity of their beliefs is ... another element of Krugman's proof that ideological reasoning is symmetric across right and left!
Those beliefs being, of course, ones that partisans don't revise in light of valid evidence but rather use in lieu of truth-related criteria to assess the validity of whatever evidence they see.
This proposition is supported by real, honest-to-god empirical evidence -- of the sort collected precisely because no one's personal "lived experience" is a reliable guide to truth.
That PK is innocent of this evidence is-- another element of his proof that ideological reasoning is symmetric across right and left!
As is his unfamiliarity with studies that use the design I just suggested to test whether "liberals" are forming their positions on climate change and other issues in a manner that is free of the influence of politically motivated reasoning. Not surprisingly, these studies suggest the answer is no.
But does that mean that all liberals who believe in climate change believe what they do because of ideologically motivated cognition? Or that only someone who is engaged in that particular form of defective reasoning would form that belief?
If you think so, then, despite your likely ideological differences, you & Paul Krugman have something in common: you are both very poor reasoners.
Well, much like the administrators of the Affordable Care Act, I’ve learned the hard way how difficult it can be to anticipate and manage an excited tidal wave of interest surging through the internet toward one’s web portal.
Yes, “tomorrow” has arrived, but because I’ve been inundated with so many 10^3’s of serious entries for the latest MAPKIA, I’ve been unable to process them all, even with the help of my CCP state-of-the-art “big data” MAPKIA automated processor [cut & paste: http://www.palantir.net/2001/tma1/wav/foolprf.wav]
So taking a page from the President’s playbook, I’m extending the deadline of “tomorrow” to “tomorrow,” which is when I’ll post the “results” of the “Where is Ludwick” MAPKIA. In the meantime, entries will continue to be accepted.
But while we wait, how about some related info relevant to an issue that came up in discussion of the ongoing MAPKIA?
In response to my observation that Ludwicks are “rare”—less than 3% of the U.S. population—@PaulMathews stated that “Ludwicks are not a rare species” in the UK but rather
are quite common. For example, two of our most prominent climate campaigners, Mark Lynas and George Monbiot, are pro-nuclear and pro-GMO.
Well it so happens that I have data that enables an estimation of the population frequency of Ludwicks—that is, individuals who are simultaneously (a) concerned about climate change risk but not much concerned about the risks of (b) nuclear power and (c) GM foods—in England.
Not the UK, certainly, but I think better evidence of the true frequency in the UK than a list of commentators (indeed, compiling lists of “how many of x” one can think of is clearly an invalid way to estimate such things, given the obvious sampling bias involved, not to mention the abundance of people with even very rare combinations of characteristics in countries with populations in the tens or hundreds of millions).
It turns out that Ludwicks are even rarer in England than in the U.S. Consider:
Again, a scatterplot of survey respondents (1,300 individuals from a nationally representative sample recruited to participate in CCP “cross-cultural cultural cognition” studies—including the one in our forthcoming paper “Geoengineering and Climate Change Polarization”) arrayed in relation to their perceptions of nuclear power and climate change risks.
I’ve defined a Ludwick as an individual whose scores on a 0-10 industrial strength risk perception measure (ISRPM10) are ≥ 9 for global warming, ≤ 2 for nuclear power, and ≤ 2 for GM foods.
Those numbers are pretty close equivalents for the scores I used to compute U.S. Ludwicks on the 0-7 industrial strength risk perception measure (≥ 6, ≤ 2, & ≤ 2, respectively) in the data set I used for the MAPKIA (I determined equivalence by comparing the z-scores on the respective ISRPM7 and ISRPM10 scales).
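For the 14 billion readers who want to check the mechanics, here's a minimal sketch in Python of the z-score equivalence and the Ludwick tally. Everything in it -- the data frames, the column names, the uniform random scores -- is a hypothetical stand-in for the actual survey data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two samples; column names and simulated
# scores are invented for illustration, not the actual CCP variables.
us = pd.DataFrame({"gw": rng.integers(0, 8, 2000)})    # 0-7 ISRPM
eng = pd.DataFrame({"gw": rng.integers(0, 11, 1300),   # 0-10 ISRPM
                    "nuke": rng.integers(0, 11, 1300),
                    "gm": rng.integers(0, 11, 1300)})

def z_equivalent(cutoff, source, target):
    """Map a cutoff from one ISRPM scale to another by matching z-scores."""
    z = (cutoff - source.mean()) / source.std()
    return z * target.std() + target.mean()

# E.g., roughly where the US "believer" cutoff (>= 6 on 0-7) falls on 0-10
print(z_equivalent(6, us["gw"], eng["gw"]))

# Ludwicks in the English sample, using the thresholds given in the text
ludwick = (eng["gw"] >= 9) & (eng["nuke"] <= 2) & (eng["gm"] <= 2)
print(f"Ludwick frequency: {ludwick.mean():.1%}")
```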
As I said, less than 3% of the US population holds the Ludwick combination of risk perceptions.
But in England, less than 2% do!
But @PaulMathews shouldn’t feel bad—it’s just not easy to gauge these things by personal observation! I trust my own intuitions, and those of any socially competent and informed observer (@PaulMathews certainly is one), but verify with empirical measurement to compensate for the inevitably partial perspective any individual is constrained to have.
There are some other cool things that can be gleaned from this cross-cultural comparison—ones, in fact, that definitely surprised me but might well have informed @PaulMathews’ conjecture.
One is that there’s not nearly as much of an affinity between climate change risk perceptions and nuclear ones in England (r = 0.26, p < 0.01) as there is in the U.S. (r = 0.47, p < 0.01).
The reason that this surprised me is that in our study of “cross-cultural cultural cognition,” we definitely found that climate change risk perceptions in England fit the cultural-polarization profile (“hierarch individualists, skeptical” vs. “egalitarian communitarians, concerned”) that is familiar here.
Another thing: while the population frequency of Ludwicks is lower in England than in the U.S., the probability of being a Ludwick conditional on holding the nonconformist pairing of high concern for climate and low for nuclear risks is higher in England.
In the scatterplot of English respondents, I’m defining the “Monbiot region” as the space occupied by survey respondents whose ISRPM10 scores were ≥ 9 for global warming and ≤ 2 for nuclear power.
The analogous neighborhood in the U.S. is the “Ropeik region” (global warming ISRPM7 ≥ 6 and nuclear power ISRPM7 ≤ 2).
Whereas about 33% of the residents of the U.S. Ropeik region are Ludwicks, over 60% of the residents of the Monbiot region are.
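That comparison is just a conditional frequency. Continuing the hypothetical sketch above:

```python
# Share of "Monbiot region" residents (climate-concerned, nuclear-
# skeptical) who are full Ludwicks, i.e., also GM food risk skeptics
monbiot = (eng["gw"] >= 9) & (eng["nuke"] <= 2)
print(f"P(Ludwick | Monbiot region): {ludwick[monbiot].mean():.1%}")
```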
What does this signify?
No doubt something interesting, but I’m not sure what!
Do others have views? Particularly people who have a better grasp of English cultural meanings and who would be more likely than I to venture sensible interpretations (ones that would, of course, still need to be empirically verified)?
Could this information be of any use in constructing a successful Ludwick profile in the US (or in England for that matter)?
In generating some data to respond to an interesting observation/query from @NIV, I discovered that I hadn't adjusted the "color coding" of the observations to reflect the difference between the 11-point English industrial strength risk perception measure & the 8-point US one for GM foods. As a result, the "GM food risk believers" (red) were underrepresented in the scatterplot, which also had the effect of visually concealing the strength of the correlation (r = 0.26) between nuclear risk and climate change risk perceptions in the (English) sample.
@NIV's observation was that the effect doesn't look very impressive. I'm guessing he is likely to think that it still doesn't -- and that's because it is in fact quite modest.
@NIV also wondered whether the effect reflected in the correlation was being driven by values at one or both extremes, obscuring that the effect is even closer to nil across most of the range. This is a good question -- and it illustrates how important it is for analysts to allow critical readers to observe the raw data rather than just report summary statistics that might in fact hide relationships of consequence (particularly nonlinear ones) in the data.
Likely he & others can see more clearly now whether the positive correlation obtains in the "middle" part of the plot. But just to enhance everyone's visual acuity, I've superimposed a lowess line rather than a fitted regression one in the version below:
A lowess plot reflects a "locally weighted" regression. In the family of regression "smoothers," it basically breaks the data into a series of tiny slices along the x-axis, fits a regression to each slice, and then connects the resulting series of plotted slopes. Obviously, it is "overfitting" in that sense. But one of the main values of lowess and related "smoothing" techniques is to make the "shape" of the distribution of the raw data even more apparent, thereby facilitating judgment about whether that shape is close enough to the one that a particular statistical model superimposes on the data to make that model a reasonable one for representing the relationships between variables of interest.
I think the lowess line here suggests that the "linear" model inherent in describing the relationship between nuclear and global warming risk perceptions as "r = 0.26" is defensible -- i.e., less wrong than the statistical characterization that would be generated by any alternative, nonlinear model.
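If you want to generate this sort of check yourself, here's a minimal sketch using the lowess implementation in statsmodels. The scores are simulated placeholders, not the survey data:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the English sample's 0-10 ISRPM scores
nuke = rng.integers(0, 11, 1300).astype(float)
gw = np.clip(0.3 * nuke + rng.normal(6.0, 2.5, 1300), 0, 10)

# Locally weighted regression: fit within a moving window covering
# `frac` of the data, then connect the fitted values
smoothed = lowess(gw, nuke, frac=2/3)

plt.scatter(nuke, gw, alpha=0.2)
plt.plot(smoothed[:, 0], smoothed[:, 1], color="red")
plt.xlabel("nuclear power ISRPM (0-10)")
plt.ylabel("global warming ISRPM (0-10)")
plt.show()
```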
But the point is you should be able to see the data & make judgments like this for yourself!
For the record, btw, the values I selected for "GMO risk neutral" and "GMO risk high" were 3-8 & ≥ 9 on the 0-10 ISRPM.
From Kahan, D.M., "The Cognitively Illiberal State," Stan. L. Rev. 60, 115-154 (2007):
The nature of political conflict in our society is deeply paradoxical. Despite our unprecedented knowledge of the workings of the natural and social world, we remain bitterly divided over the dangers we face and the efficacy of policies for abating them. The basis of our disagreement, moreover, is not differences in our material interests (that would make perfect sense) but divergences in our cultural worldviews. By virtue of the moderating effects of liberal market institutions, we no longer organize ourselves into sectarian factions for the purpose of imposing our opposing visions of the good on one another. Yet when we deliberate over how to secure our collective secular ends, we end up split along exactly those lines.
The explanation, I’ve argued, is the phenomenon of cultural cognition. Individual access to collective knowledge depends just as much today as it ever did on cultural cues. As a result, even as we become increasingly committed to confining law to attainment of goods accessible to persons of morally diverse persuasions, we remain prone to cultural polarization over the means of doing so. Indeed, the prospect of agreement on the consequences of law has diminished, not grown, with advancement in collective knowledge, precisely because we enjoy an unprecedented degree of cultural pluralism and hence an unprecedented number of competing cultural certifiers of truth.
If there’s a way to mitigate this condition of cognitive illiberalism, it is by reforming our political discourse. Liberal discourse norms enjoin us to suppress reference to partisan visions of the good when we engage in political advocacy. But this injunction does little to mitigate illiberal forms of status competition: because what we believe reflects who we are (culturally speaking), citizens readily perceive even value-denuded instrumental justifications for law as partisan affirmations of certain worldviews over others.
Rather than implausibly deny our cultural partiality, we should embrace it. The norm of expressive overdetermination would oblige political actors not just to seek affirmation of their worldviews in law, but to cooperate in forming policies that allow persons of opposing worldviews to do so at the same time. Under these circumstances, citizens of diverse cultural orientations are more likely to agree on the facts—and to get them right—because expressive overdetermination erases the status threats that make individuals resist accurate information. But even more importantly, participation in the framing of policies that bear diverse meanings can be expected to excite self-reinforcing, reciprocal motivations that make a culture of political pluralism sustainable.
Ought, it is said, implies can. Contrary to the central injunction of liberalism, we cannot, as a cognitive matter, justify laws on grounds that are genuinely free of our attachments to competing understandings of the good life. But through a more sophisticated understanding of social psychology, it remains possible to construct a form of political discourse that conveys genuine respect for our cultural diversity.
Nothing to say today that would be as interesting as the points people are making in response to the "MAPKIA!" challenge in "yesterday's" post. Join in the discussion -- & submit your entry! It's a little bit like doing presidential polls 2.5 yrs in advance of the next election, but @Jen is definitely the frontrunner at this stage.
MAPKIA! Episode 49: Where is Ludwick?! Or what *type* of person is worried about climate change but not about nuclear power or GM foods?
Time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!
By now all 14 billion regular readers of this blog can recite the rules of "MAPKIA!" by heart, but here they are for new subscribers (welcome, btw!):
I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)
Okay—we have a real treat for everybody: a really really really fun and really really really hard "MAPKIA!" challenge (much harder than the last one)!
The idea for it came from the convergence of a few seemingly unrelated influences.
One was an exchange I had with some curious folks about the relationship between perceptions of the risks of climate change, nuclear power, & GM foods.
Actually, that exchange already generated one post, in which I presented evidence (for about the umpteenth time) that GM food risk perceptions are not politically or culturally polarized in the U.S., and indeed, not even part of the same “risk perception family” (that was the new part of that post) as climate and nuclear.
Responding to this person’s (reasonable & common, although in fact incorrect) surmise that GM food risk perceptions cohere with climate and nuclear ones, I had replied that it would be more interesting to see if it were possible to “profile” individuals who are simultaneously (a) climate-change risk sensitive, and (b) nuclear-risk and (c) GM food risk skeptical.
Right away, Rachel Ludwick (aka @r3431) said, “That would be me.”
So I’m going to call this combination of risk perceptions the “Ludwick” profile.
Why should we be intrigued by a Ludwick?
Well, anyone who is simultaneously (a) and (b) is already unusual. That’s because climate change risks and nuclear ones do tend to cohere, and signify membership in one or another cultural group.
In addition, the co-occurrence of those positions with (c)—GM food risk skepticism—strikes me as indicating a fairly discerning and reflective orientation toward scientific evidence on risk.
Indeed, one doesn’t usually see discerning, reflective orientations that go against the grain, culturally speaking.
On the contrary, higher degrees of reflection—as featured in various critical reasoning measures—usually are associated with even greater cultural coherence in perceptions of politically contested risks and hence with even greater political polarization.
A Ludwick seems to be thoughtfully ordering a la carte in a world in which most people (including the most intelligent ones) are consistently making the same selection from the prix fixe menu.
That is the second thing that made me think this would be an interesting challenge. I am interested in (obsessed with) trying to identify dispositional indicators that suggest a person is likely to be a reflective cultural nonconformist.
Unreflective nonconformists aren’t hard to find. Indeed, being nonconformist is associated with being bumbling and clueless.
As I’ve explained 43 times before, it’s rational for people to fit their perceptions of risk to their cultural commitments, since their stake in fitting in with their group tends to dominate their stake in forming “correct” perceptions of societal risk on matters like climate change, where one’s personal views have no material effect on anyone’s exposure to the risk in question.
Accordingly, failing to display this pattern of information processing could be a sign that one is socially inept or obtuse. That’s one way to explain why people who are low in critical reasoning capacities tend to be the ones most likely to form group-nonconvergent beliefs on culturally contested risks (although even for them, the “nonconformity effect” isn’t large).
It would be more interesting, then, to find a set of characteristics that indicates a reflective disposition to form truth-convergent (or best-evidence-convergent) rather than group-convergent perceptions of such risks. I haven’t found any yet. On the contrary, the most reflective people tend to conform more, as one would expect if indeed this form of information processing rationally advances their personal interests.
As I said, though, the Ludwick combination of risk perceptions strikes me as evincing reflection. Because it is also non-conformist with respect to at least two of its elements (climate-risk concerned, nuclear-risk skeptical), being able to identify Ludwicks might lead to discovery of the elusive “reflective non-conformity profile”!
The last thing that influenced me to propose this challenge is another project I’ve been working on. It involves using latent risk dispositions to predict individual perceptions of risk. The various statistical techniques one can use for such a purpose furnish useful tools for identifying the Ludwick profile.
So everybody, here’s the MAPKIA:
What “risk profiling” (i.e., latent disposition) model would enable someone to accurately categorize individuals drawn from the general population as holding or not holding the Ludwick combination of risk preferences?
Let me furnish a little guidance on what a “successful” entry in this contest would have to look like and the criteria one (that one being me, in particular) might use to assess the same.
To begin with, realize that a Ludwick is extremely rare.
For purposes of illustration, here’s a scatter plot of the participants in an N = 2000 nationally representative survey arrayed with respect to their global warming and nuclear power risk perceptions, indicated by their responses to the “industrial strength risk perception measure” (ISRPM).
So where is @r3431, aka “Rachel Ludwick”?!
Presumably, she’s one of the blue observations within the dotted circle.
The circle marks the zone for “climate change risk sensitive” and “nuclear risk skeptical,” a space we’ll call the “Ropeik region.”
A “Ropeik,” who will be investigated in a future post, is a type who is very worried about climate change but regards the water used to cool nuclear reactor rods as a refreshing post-exercise drink. The Ropeik region is very thinly populated--not necessarily on account of radiation sickness but rather on account of the positive correlation (r = 0.47, p < 0.01) between global warming concerns and nuclear power ones.
The correlation between worrying about global warming & worrying about GM foods is quite modest (r = 0.26, p < 0.01).
But there definitely is one.
Accordingly, someone who is GM food risk skeptical is even less likely to be in the Ropeik region (where people are very concerned about climate change) than somewhere else.
Those are the Ludwicks. They exist, certainly, but they are uncommon.
Actually, if we define them as I have here in relation to the scores on the relevant ISRPMs, they make up about 3% of the population.
Maybe that is too narrow a specification of a Ludwick?
For sure, I’ll accept broader specifications in evaluating "MAPKIA!" entries—but only from entrants who offer good accounts, connected to cogent theories of who these Ludwicks are, for changing the relevant parameters.
Of course, such entrants, to be eligible to win the great prize (either this or something like it) to be awarded to the winner of this "MAPKIA!" would also need to supply corresponding “profiling” models that “accurately categorize” Ludwicks.
What do I have in mind by that?
Well, I’ll show you an example.
I start with a “theory” about “who fears global warming, who doesn’t, and why.” Based on the cultural theory of risk, that theory posits that people with egalitarian and communitarian outlooks will be more predisposed to credit evidence of climate change, and people—particularly white males—with hierarchical and individualistic outlooks more predisposed to dismiss it.
Because these predispositions reflect the rational processing of information in relation to the stake such individuals have in protecting their status within their cultural groups, my theory also posits that the influence of these predispositions will increase as individuals become more “science comprehending”—that is, more capable of making sense of empirical evidence and thus of acquiring scientific knowledge generally.
A linear regression model specified to reflect that theory explains over 60% of the variance in scores on the global warming ISRPM.
I can then use the same variables—the same model—in a logistic regression to predict the probability that someone is a “climate change believer” (global warming ISRPM ≥ 6) and the probability someone is a “climate change skeptic” (global warming ISRPM ≤ 2).
(Someone who read this essay before I posted it asked me a good question: what’s the difference between this classification strategy and the one reflected in the popular and very interesting “6 Americas” framework? The answer is that the “6 Americas scheme” doesn't predict who is skeptical, concerned, etc. Rather, it simply classifies people on the basis of what they say they believe about climate change. A latent-disposition model, in contrast, classifies people based on some independent basis like cultural identity that makes it possible to predict which global warming "America" members of the general population live in without having to ask them.)
Classifying someone as one or the other so long as he or she had a predicted probability > 0.5 of having the indicated risk perception, the model would enable me to determine whether someone drawn from the general population is either a "skeptic" or a "believer" (your choice!) with a success rate of around 86% for “skeptics” and 80% for “believers.”
How good is that?
Well, one way to answer that question is to see how much better I do with the model than I’d do if the only information I had was the population frequency of skeptics and believers.
“Skeptics” (ISRPM ≤ 2) make up 26% of my general population sample. Accordingly, if I were to just assume that people selected randomly from the population were not “skeptics” I’d be “predicting” correctly 74% of the time.
With the model, I’m up to 86%--which means I’m predicting correctly in about 46% of the cases in which I would have gotten the answer wrong by just assuming everyone was a nonskeptic.
“Believers” (global warming ISRPM ≥ 6) make up 35% of the sample. Because I can improve my “prediction” proficiency relative to just assuming everyone is a nonbeliever from 65% to 80%, the model is getting the right answer in 42% of the cases in which I’d have gotten the wrong one if the only guide I had was the “believer” population frequency.
Those measures -- 46% and 42% -- reflect the “adjusted count R2” measure of the “fit” of my classification model.
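For concreteness, the formula is just (correct - modal) / (N - modal), where "modal" is the number of cases you'd call correctly by always guessing the most common category. Here's a minimal sketch, in Python with simulated data, of fitting such a classification model and computing the statistic. The variable names, the data, and the effect sizes are all invented stand-ins, not the actual model or dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000

# Simulated stand-ins for the cultural worldview scales and the science
# comprehension measure; purely illustrative.
df = pd.DataFrame({
    "hier": rng.normal(size=n),     # hierarchy-egalitarianism
    "indiv": rng.normal(size=n),    # individualism-communitarianism
    "scicomp": rng.normal(size=n),  # science comprehension
})
# Generate "skeptics" so that worldviews matter more as scicomp rises
xb = 0.8 * df["hier"] + 0.8 * df["indiv"] + 0.6 * df["hier"] * df["scicomp"]
df["skeptic"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

# Logistic model with the worldview x science-comprehension interactions,
# classifying at predicted probability > 0.5
m = smf.logit("skeptic ~ hier * scicomp + indiv * scicomp", data=df).fit(disp=0)
pred = (m.predict(df) > 0.5).astype(int)

def adjusted_count_r2(y, yhat):
    """Share of the modal-category guess's errors that the model corrects."""
    correct = (y == yhat).sum()
    modal = y.value_counts().max()  # hits from always guessing the mode
    return (correct - modal) / (len(y) - modal)

print(f"accuracy: {(df['skeptic'] == pred).mean():.0%}, "
      f"adjusted count R2: {adjusted_count_r2(df['skeptic'], pred):.2f}")
```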
There are other interesting ways to assess the predictive performance of these models, too—and likely I’ll say more about that “tomorrow.”
But “how good” a predictive model is is a question that can be answered only with reference to the goals of the person who wants to use it. If it improves her ability relative to “chance,” does it improve it enough, & in the way one cares about (reducing false positives vs. reducing false negatives), to make using it worth her while?
But for now, consider GM food risk perceptions.
As I’ve explained a billion times, one won’t do a very good job “profiling” someone who is GM food risk sensitive or GM food risk-skeptical by assimilating GM food risks to the “climate change risk family.”
If I use the same latent predisposition model for GM food risk perceptions that I just applied for global warming risk perceptions, I explain only 10% of the variance in the GM food ISRPM (as opposed to over 60% for global warming ISRPM).
When I try to predict GM food risk “skeptics” (ISRPM ≤ 2) and GM food risk “believers” (ISRPM ≥ 6), I end up with correct-classification rates of 79% and 71%, respectively.
That might sound good—but it isn’t.
In fact, that sort of “predictive proficiency” sucks.
GM food “skeptics” make up 22% of the population—meaning that 78% of people are not skeptical. My 79% predictive accuracy rate has an adjusted count R2 of 0.03, and is likely to be regarded as pitiful by anyone who wants to do anything, or at least anyone who wants to do something besides publish a paper with “statistically significant” regression coefficients (I've got a bunch in my GM food "skeptic" model--BFD!), on the basis of which he or she misleadingly claims to be able to “explain” or “predict” who is a GM food risk skeptic!
For GM food “believers,” my 71% predictive accuracy compares with a 70% population frequency (30% of the sample are “believers,” defined as ISRPM ≥ 6). An adjusted count R2 of 0.02: Woo hoo! (Note again that my model has a big pile of “statistically significant” predictors—the problem is that the variables are predicting variance based on combinations of characteristics that don’t exist among real people).
In sum, we need a different theory, and a different model, of who fears what & why to explain GM food risk perceptions.
I don’t have a particularly good theory at this point.
But I do have a pile of hunches.
They are ones I can test, too, with potential indicators that I’ve featured in previous posts. These include
- the “public safety” and “social deviancy” interpretive community disposition measures;
- religiosity and science comprehension, as well as their interaction;
- and demographic characteristics such as race and gender.
In constructing their Ludwick models, "MAPKIA!" entrants might want to consult those posts, too.
I’ll say more about how I would use them to predict GM food risk perceptions “tomorrow,” when I post the report (or the first of the reports) on the MAPKIA entries.
So … on your marks … get set …
The threshold I used for risk "skeptics" -- GM food, climate change, & nuclear -- was ISRPM ≤ 2, not ISRPM "≤ 1" as I mistakenly wrote in a couple of places in the text (have corrected that). As indicated, for "believers," I used ISRPM ≥ 6.
On the 0-7 ISRPM scale used in this dataset, the scores are labeled as follows:
Spent a great couple of days at NCAR/UCAR last week, culminating in a lecture on "Communicating Climate Science in a Polluted Science Communication Environment."
There are 10^6 great things about NCAR/UCAR, of course.
But the one that really grabbed my attention on this visit is how much the scientists there are committed to the intrinsic value of communicating science.
They want people—decisionmakers, citizens, curious people, kids (dogs & cats, even; they are definitely a bit crazy!)—to know what they know, to see what they see, because they recognize the unique thrill that comes from contemplating what human beings, employing science’s signature methods of observation and inference, have been able to discern about the hidden workings of nature.
Yes, making use of what science knows is useful—indeed, essential—for individual & collective well-being.
That’s a very good reason, too, to want to communicate science under circumstances in which one has good justification (i.e., a theory consistent with plausible behavioral mechanisms and supported by evidence) to believe that not knowing what’s known is causing people to make bad decisions.
But if you think that “knowing what’s known” is how people manage to align their decisionmaking with the best available evidence in all the domains in which their well-being depends on that; that their “not knowing” is thus the explanation for persistent states of public conflict over the best evidence on matters like climate change or nuclear power or the HPV vaccine; and that communicating what’s known to science is thus the most effective way to dispel such disputes, then you actually have a very very weak grasp of the science of science communication.
And if you think, too, that what I just wrote implies there is “no point” in enabling people to know, then you have just revealed that you are merely posing—to others, & likely even to yourself!—when you claim to care about science communication and science education.
I spent hours exchanging ideas with NCAR scientists--including ideas about how to use empirical evidence to perfect climate-science communication--and not even for one second did I feel I was talking to someone like that.
From something I'm working on...
Problem statement. Our motivating premise is that advancement of enlightened conservation policymaking depends on addressing the science communication problem. That problem consists in the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent public conflict over policy-relevant facts to which that evidence directly speaks. As spectacular and admittedly consequential as instances of this problem are, states of entrenched public confusion about decision-relevant science are in fact quite rare. They are not a consequence of constraints on public science comprehension, a creeping “anti-science” sensibility in U.S. society, or the sinister acumen of professional misinformers. Rather they are the predictable result of a societal failure to integrate two bodies of scientific knowledge: that relating to the effective management of collective resources; and that relating to the effective management of the processes by which ordinary citizens reliably come to know what is known (Kahan 2010, 2012, 2013).
The study of public risk perception and risk communication dates back to the mid-1970s, when Paul Slovic, Sarah Lichtenstein, Daniel Kahneman, Amos Tversky, and Baruch Fischhoff began to apply the methods of cognitive psychology to investigate conflicts between lay and expert opinion on the safety of nuclear power generation and various other hazards (e.g., Slovic, Fischhoff & Lichtenstein 1977, 1979; Kahneman, Slovic & Tversky 1982). In the decades since, these scholars and others building on their research have constructed a vast and integrated system of insights into the mechanisms by which ordinary individuals form their understandings of risk and related facts. This body of knowledge details not merely the vulnerability of human reason to recurring biases, but also the numerous and robust processes that ordinarily steer individuals away from such hazards, the identifiable and recurring influences that can disrupt these processes, and the means by which risk-communication professionals (from public health administrators to public interest groups, from conflict mediators to government regulators) can anticipate and avoid such threats and attack and dissipate them when such preemptive strategies fail (e.g., Fischhoff & Scheufele 2013; Slovic 2010, 2000; Pidgeon, Kasperson & Slovic 2003; Gregory, McDaniels & Fields 2001; Gregory & Wellman 2001).
Astonishingly, however, the practice of science and science-informed policymaking has remained largely innocent of this work. The persistently uneven success of resource-conservation stakeholder proceedings, the sluggish response of local and national governments to the challenges posed by climate change, and the continuing emergence of new public controversies such as the one over fracking—all are testaments (as are myriad comparable misadventures in the domain of public health) to the persistent failure of government institutions, NGOs, and professional associations to incorporate the science of science communication into their efforts to promote constructive public engagement with the best available evidence on risk.
This disconnection can be attributed to two primary sources. The first is cultural: the actors most responsible for promoting public acceptance of evidence-based conservation policymaking do not possess a mature comprehension of the necessity of evidence-based practices in their own work. For many years, the work of conservation policymakers, analysts, and advocates has been distorted by the more general societal misconception that scientific truth is “manifest”—that because science treats empirical observation as the sole valid criterion for ascertaining truth, the truth (or validity) of insights gleaned by scientific methods is readily observable to all, making it unnecessary to acquire and use empirical methods to promote its public comprehension (Popper 1968).
Dispelled to some extent by the shock of persistent public conflict over climate change, this fallacy has now given way to a stubborn misapprehension about what it means for science communication to be truly evidence based. In investigating the dynamics of public risk perception, the decision sciences have amassed a deep inventory of highly diverse mechanisms (“availability cascades,” “probability neglect,” “framing effects,” “fast/slow information processing,” etc.). With these as expositional templates, any reasonably thoughtful person can construct a plausible-sounding “scientific” account of the challenges that constrain the communication of decision-relevant science (e.g., XXXX 2007, 2006, 2005). But because more surmises about the science communication problem are plausible than are true, this form of story-telling cannot produce insight into its causes and cures. Only gathering and testing empirical evidence can.
Sadly, some empirical researchers have contributed to the failure of practical communicators to appreciate this point. These scholars purport to treat general opinion surveys and highly stylized lab experiments as sources of concrete guidance for actors involved in communicating science relevant to risk-regulation or related policy issues (e.g., XXX 2009). Such methods have yielded indispensable insight into general mechanisms of consequence to science communication. But they do not—because they cannot—furnish insight into how to engage these mechanisms in particular settings in which science must be communicated. The number of plausible surmises about how to reproduce in the field results that have been observed in the lab likewise exceeds the number that are true. Again, empirical observation and testing are necessary—in the field, for this purpose. The dearth of researchers willing to engage in field-centered research, and the reluctance of many to acknowledge candidly the necessity of doing so, have stifled the emergence of a genuinely evidence-based approach to the promotion of public engagement with decision-relevant science (Kahan 2014).
The second source of the disconnect between the practice of science and science-informed policymaking, on the one hand, and the science of science communication, on the other, is practical: the integration of the two is constrained by a collective action problem. The generation of information relevant to the effective communication of decision-relevant science—including not only empirical evidence of what works and what does not but also practical knowledge of the processes for adapting and extending it in particular circumstances—is a public good. Its benefits are not confined to those who invest the time and resources to produce it but extend as well to any who thereafter have access to it. Under these circumstances, it is predictable that producers, constrained by their own limited resources and attentive only to their own particular needs, will not invest as much as would be socially desirable in producing such information, or in putting it in a form amenable to dissemination and exploitation by others. As a result, instead of progressively building on one another's efforts, initiatives that use evidence-based methods to promote effective public engagement with conservation-relevant science will be constrained to struggle anew with the same recurring problems.
This proposal would attack both of these sources of persistent inattention to the science of science communication....
Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).
Gregory, R. & McDaniels, T. Improving environmental decision processes. In Decision Making for the Environment: Social and Behavioral Science Research Priorities (eds. G.D. Brewer & P.C. Stern) 175-199 (National Academies Press, Washington, DC, 2005).
Gregory, R., McDaniels, T. & Fields, D. Decision aiding, not dispute resolution: Creating insights through structured environmental decisions. Journal of Policy Analysis and Management 20, 415-432 (2001).
Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change: How Information Shapes Our Common Future (eds. M. Boykoff & D. Crow) (Routledge Press, 2014).
Kahneman, D., Slovic, P. & Tversky, A. Judgment Under Uncertainty: Heuristics and Biases (Cambridge University Press, Cambridge; New York, 1982).
Pidgeon, N.F., Kasperson, R.E. & Slovic, P. The Social Amplification of Risk (Cambridge University Press, Cambridge; New York, 2003).
Popper, K.R. Conjectures and Refutations: The Growth of Scientific Knowledge (Harper & Row, New York, 1968).
Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).
Slovic, P. The Perception of Risk (Earthscan Publications, London; Sterling, VA, 2000).
Slovic, P., Fischhoff, B. & Lichtenstein, S. Behavioral decision theory. Annu Rev Psychol 28, 1-39 (1977).
Slovic, P., Fischhoff, B. & Lichtenstein, S. Rating the risks. Environment: Science and Policy for Sustainable Development 21, 14-39 (1979).
Science comprehension ("OSI") is a culturally random variable -- and don't let anyone experiencing motivated reasoning tell you otherwise!
Here I've simply plotted "science comprehension" score histograms for the four segments of a general population sample, formed by splitting respondents at the means of the "hierarchy-egalitarianism" & "individualism-communitarianism" cultural worldview scales.
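For anyone curious how such a figure gets built, here's a minimal sketch (simulated data and hypothetical column names, not the actual dataset); the key move is the pair of mean-splits that define the four cultural groups:

```python
# Sketch: histograms of a science-comprehension score for the four quadrants
# formed by mean-splitting two worldview scales. Data here are simulated.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "osi": rng.normal(size=n),            # standardized science-comprehension score
    "hierarchy": rng.normal(size=n),      # hierarchy-egalitarianism scale
    "individualism": rng.normal(size=n),  # individualism-communitarianism scale
})
df["h"] = np.where(df["hierarchy"] >= df["hierarchy"].mean(),
                   "Hierarchical", "Egalitarian")
df["i"] = np.where(df["individualism"] >= df["individualism"].mean(),
                   "Individualist", "Communitarian")

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for ax, ((h, i), grp) in zip(axes.flat, df.groupby(["h", "i"])):
    ax.hist(grp["osi"], bins=30, density=True)
    ax.set_title(f"{h} {i}")
fig.supxlabel("OSI (z-score)")
plt.show()
```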
I suppose the figure could itself be used to measure motivated reasoning: if you perceive that one of these groups differs meaningfully from the others in the disposition or aptitude that this particular scale measures, you might well be experiencing it!
But that won't make you any different from anyone else. If you manage to catch yourself displaying this tendency, you should be proud rather than embarrassed, for you'll be demonstrating a very unusual form of self-reflection--one much rarer than a "high" level of science comprehension.
The experience of catching yourself in this way will also likely fill you with apprehension over the number of times you've no doubt displayed this pattern of thinking without catching yourself. Cultivating that sort of anxiety can't hurt, either, if you are trying to sharpen your powers of self-reflection -- or just trying to avoid becoming a boorish cultural sectarian whose interest in promoting public engagement with science is just a mask you don as you gear up for illiberal forms of status competition...
BTW, this figure features the same "ordinary science intelligence" measure (I prefer that phrasing to "science literacy," which to me connotes an inventory of substantive bits of knowledge divorced from comprehension of & facility with the form of inferential reasoning needed to recognize valid science) that I've been futzing with for a while (despite its propensity to lead me into Alice-in-Wonderland style misadventures).
It combines the 11-item NSF indicator battery with a 10-item "long" cognitive reflection test. It has the qualities one would expect in/demand of a valid science comprehension measure, & it has produced some pretty interesting insights into when people who hold opposing cultural identities but share a demonstrable proficiency in critical reasoning are more likely to converge--or instead more likely to disagree--than less "science comprehending" members of their groups on facts that admit of scientific investigation (e.g., the natural history of human beings, or the reality and causes of climate change, or GM foods, fracking, or childhood vaccines).
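For concreteness, here's a crude sketch of what "combining" the two batteries into one composite could look like (simulated item responses; a real scale would use a proper psychometric model rather than a raw sum):

```python
# Crude sketch of scoring such a composite: 21 hypothetical 0/1 item responses
# (11 NSF-style + 10 CRT-style) summed and z-scored. A real scale would use
# something like item-response-theory scoring instead of a raw sum.
import numpy as np

rng = np.random.default_rng(2)
items = rng.integers(0, 2, size=(1000, 21))  # simulated right/wrong answers
raw = items.sum(axis=1)
osi = (raw - raw.mean()) / raw.std()         # standardized composite score
print(osi[:5])
```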
Maybe I'll write more "tomorrow" about the interesting psychometric properties of this OSI measure....