
Recent blog entries
Thursday
Mar 26, 2015

On self-deception & motivated reasoning: Who is fooling whom?

From something I'm working on....

Who is fooling whom?

Identity-protective cognition is a species of motivated reasoning that consists in the tendency of people to conform disputed facts (particularly ones relevant to political controversies) to positions associated with membership in one or another affinity group. I will present evidence—in the form of correlational studies, standardized assessment tests, and critical-reasoning experiments—that shows that identity-protective cognition is not a consequence of over-reliance on heuristic information processing. On the contrary, proficiency in one or another aspect of critical reasoning magnifies individuals’ tendency to selectively credit evidence in a manner that conforms to the position associated with their group identity. The question I want to frame is, Which of these two conclusions is more supportable: that individuals who engage in this form of information processing are using their reason to fool themselves; or that we (those who study them) are fooling ourselves about what these individuals are actually using their reason to do?

Wednesday
Mar 25, 2015

"You -- talking to me? Are *you* talking to *me?" Actually, no, I'm not; the data don't tell us how any individual thinks (or that your side is either "biased" or "right").

A thoughtful correspondent writes:

I am a physician . . .  I was reading an article on Vox debunking the theory which states that more information makes people smarter. This article referenced your study concluding that those with the most scientific literacy and technical reasoning ability were less likely to be concerned about climate change and the safety of nuclear energy.

I read the paper which shows this quite nicely.

I am confused about the conclusions. I scored a perfect score on the science literacy test and on a technical reasoning test as well. I do not believe climate change is a settled science and I believe nuclear power is the safest form of reliable energy available.

The conclusion that I am biased by my scientific knowledge is strange.

In medical experiments data are scientifically gathered and tabulated. Conclusions are used as a way to explain the data. Could an alternate conclusion be reached that scientific and reasonable people downplay the danger of climate change and nuclear power precisely because we are well informed and able to reason logically? It seems just as likely a conclusion as the one you reached yet it was never discussed.

My response:

Thanks for these thoughtful reflections. They deserve a reciprocally reflective and earnest response.

1st, I don't think the methods we use are useful for explaining individuals. In the study you described, the methods identify in large samples patterns that furnish more support than one would otherwise have for the inference that some group-related influence or dynamic is at work that helps to explain variance in general.

One can then do additional studies, experimental in nature (like this & this), that try to furnish even more support for the inference -- or less, since a valid study has to be in a position to do either.

But once one has done that, all one has is an explanation for some portion of the variance in groups of people. One doesn't have an explanation of all the variance (the practical & not merely "statistical" significance of which is what a reflective person must assess). One doesn't have an instrument that "diagnoses" or tells one why any particular individual believes what he or she does.

And most important of all, one doesn't have a basis for saying that anyone on any of the issues one is studying is "right" or "wrong": to figure that out, do a valid study on the issue on which people like this disagree; then do another & another & another & another. And compare your results w/ others doing the same thing.

2d, I don't believe the dynamic we are looking at is a "bias" per se. Things are more complicated than that, at least for me!

I'm inclined to think that the dynamics that we observe generating polarization in our studies are the very ones that normally enable people to figure out what is known by science.

They are also the very same processes that enable people to effectively use information for another of their aims, which is to form stances and positions on issues that evince commitments they care about and that connect them to others. That is a cognitively demanding matter as well -- & of course one that most people, even ones who don't get a perfect score on "science comprehension" tests, possess the reasoning proficiency to perform.

What to make of the situations, then, in which that same form of reasoning generates states of polarization on facts that admit of empirical inquiry is a challenging issue -- conceptually, morally & psychologically. This is very perplexing to me!

I suspect sometimes it reflects the experience of a kind of interference between or confounding of mental operations that serve one purpose and those that serve another.  That in effect, the "science communication environment" has become degraded by conflicts between the stake people have in knowing what's known & being who they are.  

At other times, it might simply be that nothing is amiss from the point of view of the people who are polarized; they are simply treating being who they are as the thing that matters most for them in processing information on the issue in question. . . .

3d, notwithstanding all this, I don't think our studies admit of your "alternate conclusion": that "scientific and reasonable people downplay the danger of climate change and nuclear power precisely because we are well informed and able to reason logically."

The reason is that that's not what the data show.   They show that those highest in one or another measure of science comprehension are the most polarized on a small subset of risk issues including climate change.  

That doesn't tell us which side is "right" & which "wrong."

But it tells us that we can't rely on what would otherwise be a sensible heuristic -- that the answer individuals with those proficiencies are converging on is most likely the right answer.  Because again, those very people aren't converging; on the contrary, they are the most polarized.

Many people write to me suggesting that an "alternative explanation" for our data is that "their side" is right.  

About 50% of the time they are part of the group that is "climate skeptical" & the other half of the time the one that is "climate nonskeptical" (I have no idea what terms I'm supposed to be using for these groups at this point; if they hold a convention and vote on a preferred label, I will abide by their decisions!).

I tell them every time that that can’t actually be what the data are showing—for all the reasons I’ve just spelled out.

Some fraction (only a small one, sadly), say "ah, yes, I see."  

 I can't draw any inferences, as I said, about the relationship between their "worldviews" & how they are thinking.  

I have no information about their scores on "science comprehension" or "critical reasoning" tests.

But at that point I can draw an inference about their intellectual character: that they possess the virtue of being able and willing to recognize complexity.

Tuesday
Mar 24, 2015

Science of Science Communication 2.0, Session 9.1: Emerging technologies part II -- synthetic biology!

We're back from spring break (learning things is hard & we need lots of rest & time to recover as we go along).

Time for  "Science of Science Communication 2.0" session 9. Reading list here, & study/discussion questions below.  

Have at it!

 

Monday
Mar 23, 2015

Professional judgment--in risk perception & in law: Dual process reasoning and science communication part 3...

This is from something I've been working on.  For a long time.  The paper of which it is a part will be posted soon.  

But for now I am treating it as the final installment of a 3-part series on the relevance of dual-process reasoning theories to science communication. As I'm sure all 14 billion regular readers of this blog recall, the first installment appeared on July 19, 2013, and the second on July 24, 2013.  

Even as of that period, I had been working on this project for a long time. . . .

II. Information Processing, Pattern Recognition, and Professional Judgment

Legal training and practice can reasonably be understood to cultivate proficiency in conscious, analytical forms of reasoning. Thus, the work on “motivated System 2 reasoning”—the tendency of conscious, effortful information processing to magnify identity-protective cognition—might in fact be regarded as supplying the strongest support for the conjecture that unconscious cultural partisanship can be expected to subvert judicial neutrality.

Nevertheless, when judges decide cases, they are not merely engaging in conscious, effortful information processing: they are exercising professional judgment. Professional judgment consists, essentially, in habits of mind—conscious and effortful to some degree, but just as much tacit and perceptive—that are distinctively fitted to reasoning tasks the nature of which falls outside ordinary experience. Indeed, it is characterized, in many fields, by resistance to all manner of errors, including ones founded on heuristic information processing, that would defeat the special form of decision that professional judgment facilitates.

The dominant scholarly account of professional judgment roots it in the dynamic of pattern recognition. Pattern recognition consists in the rapid, un- or pre-conscious matching of phenomena with mentally inventoried prototypes. A ubiquitous form of information processing, pattern recognition is the type of cognition that enables human beings reliably to recognize faces and read one another’s emotions. But it is also the basis for many highly specialized forms of expert decisionmaking. Highly proficient chess players, for example, outperform less proficient ones not by anticipating and consciously simulating a longer sequence of potential moves but by more reliably perceiving the relative value of different board positions based on their prototypical affinity to ones learned from experience to confer an advantage on one player or another. Likewise, the proficiency of aerial photography analysts consists in their tacit ability to discern prototypical clusters of subtle cues that allow them to cull from large masses of scanned images ones that profitably merit more fine-grained analysis. Forensic accountants must use the same form of facility as they comb through mountains of records in search of financial irregularities or fraud.

Expert medical judgment supplies an especially compelling and instructive example of the role of pattern recognition. Without question, competent medical diagnosis depends on the capacity to draw valid inferences from myriad sources of evidence that reflect the frequency of the occurrence of particular symptoms with one or another pathology—a form of critical reasoning that figures in System 2 information processing. But studies have shown that an appropriately attuned capacity for pattern recognition plays an indispensable role in expert medical diagnosis, for unless a physician is able to form an initial set of plausible conjectures—based on the match between a patient’s symptoms and an appropriately stocked inventory of disease prototypes—the probability that the physician will identify what conditions to test for in order to make a proper diagnosis will be unacceptably low.

The proposition that pattern recognition plays this role in professional judgment generally is most famously associated with Howard Margolis. Focusing on expert assessment of risk, Margolis described a form of information processing that differs markedly from the standard “System 1/System 2” conception of dual process reasoning. The latter attributes proficient risk assessment to an individual’s capacity and disposition to “override” his or her unconscious System 1 affective reactions with ones that reflect effortful System 2 assessments of evidence. Margolis, in contrast, suggests an integrated and reciprocal relationship between unconscious, perceptive forms of cognition, on the one hand, and conscious, analytical ones, on the other. Much as in the case of proficient medical diagnosis, expert risk assessment demands reliable, preconscious apprehension of the phenomena that merit valid analytical processing. Even then, the effective use of data generated by such means, Margolis maintains, will depend on the risk expert’s reliable assimilation of such evidence to an inventory of prototypical representations of cases in which the appropriate data were given proper effect. Of course, the quality of an expert’s pattern recognition capacity will depend heavily on his or her proficiency in conscious, analytical reasoning: that form of information processing, employed to assess and re-assess successes and failures over the course of the expert’s training and experience, is what calibrates the experts’ perceptive faculty.

To translate Margolis’s account of expert judgment back into the dominant conception of dual-process reasoning, then, we would say that System 2 by itself gets nowhere—because it is not reliably activated—without a discerning System 1 perception faculty. The reliability of that faculty of perception, however, depends on the quality of the contribution that System 2 information processing makes to the professional’s continuous interrogation and calibration of his or her judgments.

Karl Llewellyn suggested an account of the reasoning style of lawyers and judges very much akin to Margolis’s view of the professional judgment of risk experts. Although Llewellyn is often identified as emphasizing the indeterminacy of formal legal rules and doctrines, the aim of his most important works was to explain how there could be such a tremendously high degree of consensus among lawyers and judges on what those rules and doctrines entail. His answer was “situation sense”—a perceptive faculty, formed through professional training and experience, that enabled lawyers and judges to reliably assimilate controversies to “situation types” that include within them their own proper resolutions. Llewellyn discounted the emphasis on deductive logic featured in legal argumentation. But he did not dismiss such reasoning as mere confabulation; in his view, lawyers and judges (legislators, too, in drafting rules) employed formal reasoning to prime or activate the “situation sense” of other lawyers and judges—the same function that Margolis sees it as playing in professional discourse among risk experts and indeed in any setting in which human beings resort to it.

Margolis also invoked the role that pattern recognition plays in professional judgment to explain expert-public conflicts over risk. Lacking the experience and training of experts, and hence the stock of prototypes that reliably guide expert risk assessment, members of the public, he argued, were prone to one or another heuristic bias. By the same token, the experts’ access to those prototypes reliably fixes their attention on the pertinent features of risks and inures them to the features that excite cognitive biases on the part of the lay public.

Based on the role of pattern recognition in professional judgment, one might make an analogous claim about judicial and lay judgments in culturally contested legal disputes. On this account, lawyers’ and judges’ “situation sense” can be expected to reliably fix their attention on pertinent elements of case “situation types,” thereby immunizing them from the distorting influence that identity-protective cognition exerts on the judgments of legally untrained members of the public. It is thus possible that the professional judgment of the judge, as an expert neutral decisionmaker, embodies exactly the form of information processing most likely to counteract identity-protective reasoning, including the elements of it magnified by System 2 reasoning.

Friday
Mar 13, 2015

What can we learn from (a) studying public perceptions of the risks of technologies the public hasn't heard of & (b) studying studies that do that?

Tamar Willner has posted another very perceptive and provocative essay in reaction to the readings for Science of Science Communication 2.0, this time in relation to Session 8, on “emerging technologies.”  I’ve posted the first portion of it, plus a link to her site for continuation.

She also posed a very interesting question in the comments about an experiment that CCP did on nanotechnology risk perceptions.  I’ve posted my answer to her question below the excerpt from her own post.

1. Tamar Wilner on studying perceived risks of emerging technologies ...

read rest of post

2. Q&A on a CCP study of nanotechnology risk perceptions

Tamar's question:

[I]n your paper (Cultural Cognition of the Risks and Benefits of Nanotechnology) you say, “The ‘cultural cognition’ hypothesis holds that these same patterns [cultural polarization] are likely to emerge as members of the public come to learn more about nanotechnology.” But in your blog you repeatedly make the point that only a minority of public science topics end up getting polarized - that such polarization is “pathological” in its rarity. Why then did you hypothesize that such a pattern would be likely to emerge for nanotech?

I noticed that you start to address this later in the paper when you say, “At the same time, nothing in our study suggests that cultural polarization over nanotechnology is inevitable…” and point out that proper framing can help people to extract factual information. Does this indicate that the passages used in your study employed framing likely to encourage polarization? They seem to use pretty neutral language, to me. What about them makes them polarizing - and is it possible that some polarizing language is unavoidable? For example it seems like just talking about "risks of a new technology" taps into certain egalitarian/communitarian sensibilities, but since that's exactly what the topic of discussion is, I don't see how you would avoid it.

My response

This is a great question. It raises some important general issues & also gives me a chance to say some things about how my own views of the phenomenon of cultural contestation over risk have evolved since performing the study.

The main motivation for the study, actually, was a position that we characterized as the “familiarity hypothesis”: that as people learned more about nanotechnology, their views were likely to be positive.

This was an inference from a consistent survey finding that although only a small percentage of the public reports having heard of nanotechnology, those who say they have tend to express very favorable views about the ratio of benefits to risks that it is likely to involve.

That inference is specious: there is obviously something unusual about people who know about a technology 80% of the rest of the public is unfamiliar with; it reflects poor reasoning not to anticipate that whatever is causing them to become familiar with a novel technology might also dispose them to form a view that others who lack their interest in technology might fail to form when they eventually learn about a novel form of science.

Our hypothesis, largely corroborated by the study, was that those who were already familiar with nanotechnology (or actually, simply saying they were familiar; the surveys were using self-report measures) were likely people with a protechnology “individualist” cultural outlook, and that when individuals with anti-technology “egalitarian communitarian” ones were exposed to information on nanotechnology they would likely form more negative reactions.

Okay, fine.

But Tamar’s perceptive question is why did we expect people unfamiliar with a technology to react at all when exposed to such a small amount of info?

As she notes, only a small minority of potentially risky technologies excite polarization. People tend to overlook this fact b/c they understandably fixate on those and ignore the vast majority of noncontroversial ones.

My answer, basically, is that I don’t think the research team really had a good grasp of that point at the time we did the study.  I know I didn’t! 

I think, actually, that I really did mistakenly believe that culturally infused and hence opposing reactions to putative risk sources was “the norm,” and that it was therefore likely our subjects would polarize in the way they did.

Looking back, I’d say the reason it was reasonable to expect subjects would polarize is that the study was putting them in the position of consciously evaluating risks and benefits.

Check out these polarized & nonpolarized risks -- all you need to do is click! C'mon! It won't hurt!

On the vast majority of putative risk sources on which there isn’t any meaningful level of polarization—from pasteurization of milk to medical x-rays to cell phone radiation to high-power transmission lines etc.—people don’t consciously think anything; they just model their behavior on what they see other people like them doing; when they do so, it’s rare for them to observe signs that give them reason to think there is anything to worry about.

Perfectly sensible approach, in my view, given how much more information known to science it makes sense to use in our lives than we have time to make sense of on our own.

But as I said, the study subjects were being prompted to do conscious risk assessment.  Apparently, in doing that, they reliably extracted from the balanced risk-benefit information culturally affective resonances that enabled them to assimilate this novel putative risk source—nanotechnology—to a class of risks, environmental ones, on which members of their group are in fact culturally polarized. 

Being made to expect, in effect, that there would be an issue here, the subjects reliably anticipated too what position “people like them” culturally speaking would likely take.

This interpretation raises a second point on which my thinking has evolved: the external validity of public opinion studies of novel technologies.

This was (as Tamar’s excellent blog post on the readings as a whole discusses) a major theme of the readings.  Basically, when pollsters ask people their views on technological risks about which members of the public have never heard and don’t have discussions about in their daily lives, they aren’t genuinely measuring a real-world phenomenon. 

They are, in effect, modeling how people react to the strange experience of being asked questions about something they have not thought about.  To pretend that one can draw inferences from that to what actual people in the world are truly thinking is flat-out bogus. Serious social science researchers know this is a mistake; news-maker and advocacy pollsters either don’t or don’t care.

One can of course try to anticipate how people—including ones with different cultural outlooks—might react to an emerging technology when they do learn about it. Indeed, I think that is a very sensible thing to do; the failure to make the effort can result in disaster, as it did in the case of the HPV vaccine!

But to perform what amounts to a risk-perception forecasting study, one must use an experimental design that it is reasonable to think will induce in subjects the reaction that people in the real world will form when they learn about the technology—or could form depending on how they learn about it.  That is what one is trying to model.

A simple survey question—like the one Pew asked respondents about GM foods in its recent public attitudes study—cannot plausibly be viewed as doing that. The real-world conditions in which people learn things about a new technology will be much richer—much more dense with cues relating to the occasions for discussing an issue, the setting in which the discussion is being had, and the identity and perceived motivations of the information sources—than are accounted for in a simple survey question.

I think it is possible to do forecasting studies that reasonable people can reasonably rely on. I think our HPV vaccine risk study, e.g., which tried to model how people would likely react depending on whether they learned about the vaccine in conditions that exposed them to cues of group conflict or not, was like that.

But I think it is super hard to do it.

Frankly, I now don’t think our nanotechnology experiment design was sufficiently rich with the sorts of contextual background to model the likely circumstances in which people would form nanotechnology risk perceptions!

The study helped to show that the “familiarity hypothesis,” as we styled it, was simplistic. It also supported the inference that it was possible people might assimilate nanotechnology to the sorts of technological-risk controversies that now polarize members of different groups.

But the stimulus was too thin to be viewed as modeling the conditions in which that was actually likely to happen.

We should be mindful of hindsight bias, of course, but the fact that nanotechnology has not provoked any sort of cultural divisions in what is now approaching two decades of its use in commercial manufacturing helps show the limited strength of inferences on the likelihood of conflict that can be drawn from experiments like the one we did.

As Tamar notes, we were careful in our study to point out that the experimental result didn’t imply that conflict over nanotechnology was “inevitable” or necessarily even “likely.”

But I myself am very willing—eager even—to acknowledge that we viewed the design we used as more informative than it could have been expected to be about the likely career of nanotechnology.

I have acknowledged this before in fact. 

In doing so, too, I pointed out that that doesn’t mean studies like the ones we and other researchers did on nanotechnology risk perceptions weren’t or aren’t generally useful.  It just means that the value people can get from those studies depends on researchers and readers forming a valid understanding of what designs of that sort are modeling and what they are not.  

In order for that to happen, moreover, researchers must reflect on their own studies over time to see what the fit between them and experience tells them about what is involved in modeling real-world processes in a manner that is most supportive of real-world inferences.

Speaking for myself, at least, I acknowledge that, despite my best efforts, I cannot guarantee anyone I will always make the right assessment of the inferences that can be drawn from my studies.  I can promise, though, that when I figure out that I didn’t, I’ll say so—not just to set the record straight but also to help enlarge understanding of the phenomena that it is in fact my goal to make sense of.

Of course, if a cultural conflagration over nanotechnology ignites in the future, I suppose I’ll have to acknowledge that the “me” I was then had a better grasp of things than the “me” I am now; I doubt that will happen—but life, thank goodness, is filled with surprises!

Wednesday
Mar 11, 2015

Is "shaming" an effective way to counteract biased information processing? A preliminary investigation

So far, there's been no improvement in the subject's defective information processing.

But data collection involving this subject and others is ongoing.

 

Saturday
Mar 7, 2015

Submerged ... 

But will surface in near future -- w/ results of new study ....

Prize for anyone who correctly predicts what it is about; 2 prizes if you predict the result.

Tuesday
Mar 3, 2015

Science of Science Communication 2.0, Session 8.1: Emerging technologies part I

Wow--time flies, doesn't it? Especially when every other week of class is cancelled due to snow.

But in any case-- it's that time again!  "Science of Science Communication 2.0" session 8 reading list here, & study/discussion questions below.  

Have at it!



Sunday
Mar 1, 2015

Weekend update: Science of Science Communication 2.0: on CRED "how to" manual & more on 97% messaging

Here are some contributions to class discussion. The first is from Tamar Wilner, who offers reactions to session 7 readings. I've posted just the beginning of her essay and linked to her page for continuation. The second is from Kevin Tobia (by the way, I also recommend his great study on the quality of "lay" and "expert" moral intuitions), who addresses "97% messaging," a topic initially addressed in Session 6 but now brought into sharper focus with a recently published study that I myself commented on in my last post.

Health, ingenuity and ‘the American way of life’: how should we talk about climate?

Tamar Wilner

Imagine you were

  1. President Obama about to make a speech to the Nation in support of your proposal for a carbon tax;
  2. a zoning board member in Ft. Lauderdale, Florida, preparing to give a presentation at an open meeting (at which members of the public would be briefed and then allowed to give comments) defending a proposed set of guidelines on climate-impact “vulnerability reduction measures for all new construction, redevelopment and infrastructure such as additional hardening, higher floor elevations or incorporation of natural infrastructure for increased resilience”;
  3. a climate scientist invited to give a lecture on climate change to the local chapter of the Kiwanis in Springfield, Tennessee; or
  4. a “communications consultant” hired by a billionaire, to create a television advertisement, to be run during the Superbowl, that will promote constructive public engagement with the science on and issues posed by climate change.

Would the CRED manual be useful to you? Would the studies conducted by Feygina, et al., Meyers et al., or Kahan et al. be? How would you advise any one of these actors to proceed?

First, some thoughts on these four readings.

The CRED Manual: well-intentioned, but flawed


Source material: Center for Research on Environmental Decisions, Columbia University. “The Psychology of Climate Change Communication: A guide for scientists, journalists, educators, political aides, and the interested public.”

When I first read the CRED manual, it chimed well with my sensibilities. My initial reaction was that this was a valuable, well-prepared document. But on closer inspection, I have misgivings. I think a lot of that “chiming” comes from the manual’s references to well-known psychological phenomena that science communicators and the media have tossed around as potential culprits for climate change denialism. But for a lot of these psychological processes, there isn’t much empirical basis showing their relevance to climate change communication.

Of course, the CRED staff undoubtedly know the literature better than I do, so they could well know of empirical support that I’m not aware of. But the manual authors often don’t support their contentions with research citations. That’s a shame because much of the advice given is too surface-level for communications practitioners to directly apply to their work, and the missing citations would have helped practitioners to look more deeply into and understand particular tactics.

continue reading

 

“Gateway beliefs” and what effect 97% consensus messaging should have

Kevin Tobia

A new paper reports the effect of “consensus messaging” on beliefs about climate change. The media have begun covering both this new study and some recent criticism. While I agree with much of the critique, I want to focus on a different aspect of the paper, the idea of a “gateway belief.” Thinking closely about this suggests an interpretation of the paper that raises important questions for future research – and may offer a small vindication of the value of 97% scientific consensus reporting. You be the judge.

First, what is a “gateway” belief? In this context, what is it for dis/belief in scientific consensus on climate change to be a “gateway” belief? You might think this means that some perceived level of scientific consensus is necessary to have certain beliefs in climate change, worry about climate change, belief in human causation, and support for public action. You can’t have these beliefs unless you go – or have gone – “through the gate” of perceived scientific consensus.

But here’s what the researchers actually have to say about the “gateway” model prediction:

We posit that belief or disbelief in the scientific consensus on human-caused climate change plays an important role in the formation of public opinion on the issue. This is consistent with prior research, which has found that highlighting scientific consensus increases belief in human-caused climate change. More specifically, we posit perceived scientific agreement as a “gateway belief” that either supports or undermines other key beliefs about climate change, which in turn, influence support for public action. ... Specifically, we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue (H1). In turn, a change in these key beliefs is subsequently expected to lead to a change in respondents’ support for societal action on climate change (H2). Thus, while the model predicts that the perceived level of scientific agreement acts as a key psychological motivator, its effect on support for action is assumed to be fully mediated by key beliefs about climate change (H3). 

There are a couple things worth noting here. First, if the ultimate aim is increasing support for public action, you don’t have to go “through the gate” of the gateway belief model at all. That is, what is really doing the work is the set of (i) beliefs that climate change is happening, (ii) beliefs it is human-caused, and (iii) how much people worry about it. If we could find some other way to increase these, we could affect public support without needing to change the perceived level of science consensus (though, whether that change might itself affect the perceived level of scientific consensus is another interesting and open question).

Second – and more importantly – it is not the case that there is some “gateway belief” in perceived scientific agreement that is required to have certain beliefs in climate change or human causation, worry, or support for action. The model merely predicts that all of these are affected by changes to the perceived level of scientific agreement. Thus, it is incorrect to draw from this research that the “gateway belief” of perceived scientific consensus is (as analogy might suggest) some necessary or required belief that must be held in order to believe in human-caused/climate change, worry about it, or support for action. Instead, the gateway belief is something like a belief that tends to have or normally has some relation to these other climate beliefs.

When we put it this way, the “gateway belief” prediction might start to seem less interesting: there’s one belief about climate change (perceived consensus among scientists) that is a good indicator of other beliefs about climate change. But, what is more intriguing is the further claim the researchers make: this gateway belief isn’t just a good indicator of others, but increasing the gateway belief will cause greater belief in the others. Perceiving scientific non-consensus is the gateway drug to climate change denial!

This conception of the gateway belief illuminates a subtle feature of the researchers’ prediction. Recall:

we hypothesize that an experimentally induced change in the level of perceived consensus is causally associated with a subsequent change in the belief that climate change is (a) happening, (b) human-caused, and (c) how much people worry about the issue ....

The hypothesis here is that changing the level of perceived consensus causes changes in these other climate beliefs.

Some might worry about the relatively small effects found in the study. But if we recognize the full extent of the researchers’ prediction (that increased perceived consensus will raise the other beliefs AND that decreased perceived consensus will lower the other beliefs), one possibility is that some participants with very high pre-test consensus estimates (e.g. 99% or 100%) actually reduced their estimate in light of the consensus messaging – and that this affected their other beliefs. It might seem unlikely that many participants held an initial estimate of consensus upwards of 97% (even with a pre-test mean estimate of 66.98), but data on this would be useful.

There is a more plausible consideration that also weakens the worry about small effect size: would we really want participants (or people in the world) to change their beliefs about science in exact proportion to the most recent information they receive about expert consensus? To put it another way, the small effect size might not be evidence of the weakness of consensus messaging, but might rather be evidence of the measured fashion in which people weigh new evidence and update their beliefs.

What would be helpful is data on the number (or percent) of participants whose beliefs increased from pre to post test. This would help distinguish between two quite different possibilities: 

  1. very few participants greatly increased their beliefs in climate change after reading the consensus message
  2. many participants moderately increased their beliefs in climate change after reading the consensus message
There is no good way to distinguish between these from the data provided so far, but which of these is true is quite important. If (1) is true, consensus messaging can only be offered as a method to appeal to the idiosyncratic few. And we might worry that the few people responding in this extreme way are, in some sense, overreacting to the single piece of evidence they just received. If (2) is true, this might provide a small redemption of the “consensus messaging campaign.” A little consensus messaging increases people’s beliefs a little bit (and, since this is from just one message, the small belief change is quite reasonable).
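
If the respondent-level pre/post scores were available, distinguishing (1) from (2) would be a simple tabulation. Here is a minimal sketch of the kind of summary I have in mind, using simulated stand-in numbers (the data and column names are illustrative assumptions, not the study's dataset):

```python
# Hypothetical sketch: distinguishing "few large shifts" from "many small shifts"
# if per-respondent pre/post "belief certainty" scores were available.
# The data and column names below are simulated assumptions, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated stand-in for the (unavailable) respondent-level data
df = pd.DataFrame({"belief_pre": rng.uniform(0, 100, 1104)})
df["belief_post"] = np.clip(df["belief_pre"] + rng.normal(1.7, 5, len(df)), 0, 100)

delta = df["belief_post"] - df["belief_pre"]

summary = {
    "pct_increased": (delta > 0).mean() * 100,
    "pct_decreased": (delta < 0).mean() * 100,
    "mean_change_among_increasers": delta[delta > 0].mean(),
    "mean_change_overall": delta.mean(),
}
print(summary)
# Possibility (1) would show up as a small pct_increased with a large
# mean_change_among_increasers; possibility (2) as a large pct_increased
# with a modest mean_change_among_increasers.
```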
 

Of course, even if (2) is true, much more research will be required before deciding to launch an expensive messaging campaign. For instance, suppose we discover that the climate beliefs of people with very high initial beliefs in consensus (“97+% initial believers”) are actually weakened when they are exposed to 97% messaging. Even if people who are “sub-97% initial believers” change their beliefs in light of 97% messaging, we should ask whether this trade-off is beneficial. There are a number of relevant considerations here, but one thought is that perhaps reducing someone’s climate change belief from 100% to 99% is not worth an equivalent gain elsewhere from, say, 3% to 4%. For one, behaviors and dispositions may increase/decrease non-proportionally. 100 to 99 percent belief change might result in the loss of an ardent climate action supporter, while change from 3 to 4 percent might result in little practical consequence. These are all open questions!

Wednesday
Feb 25, 2015

"the strongest evidence to date" on effect of "97% consensus" messaging

There's a new study out on effect of "97% consensus" messaging.

Actually, it is a new analysis of data that were featured in an article published a few months ago in Climatic Change.

The earlier paper reported that after being told that 97% of scientists accept human-caused climate change, study subjects increased their estimate of the percentage of scientists who accept human-caused climate change.

The new paper reports results, not included in the earlier paper, on the effect of the study's "97% consensus msg" on subjects' acceptance of climate change, their climate change risk perceptions, and their support for responsive policy measures.

The design of the study was admirably simple: 

  1. Ask subjects to characterize on a 0-100 scale their "belief certainty" that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it;
  2. tell the subjects that “97% of climate scientists have concluded that human-caused climate change is happening”; and
  3. ask the subjects to characterize again their "belief certainty" that climate change is occurring, that it is caused by humans, that it is something to worry about, and that something should be done about it.

Administered to a group of 1,104 members of the US population, the experiment produced these results on the indicated attitudes:

So what does this signify?

According to the authors, 

Using pre and post measures from a national message test experiment, we found that all stated hypotheses were confirmed; increasing public perceptions of the scientific consensus causes a significant increase in the belief that climate change is (a) happening, (b) human-caused and (c) a worrisome problem. In turn, changes in these key beliefs lead to increased support for public action.

I gotta say, I just don't see any evidence in these results that the "97% consensus msg" meaningfully affected any of the outcome variables that the authors' new writeup focuses on (belief in climate change, perceived risk, support for policy).

It's hard to know exactly what to make of  the 0-100 "belief certainty" measures. They obviously aren't as easy to interpret as items that ask whether the respondent believes in human-caused climate change, supports a carbon tax etc.

(In fact, a reader could understandably mistake the "belief certainty" levels in the table as %'s of subjects who agreed with one or another concrete proposition. To find an explanation of what the "0-100" values are actually measurements of, one has to read the Climatic Change paper-- or actually, the on-line supplementary information for the Climatic Change paper. If the authors have data on %s who believed in climate change before & after etc, I'm sure readers would actually be more interested in those.)

But based on the "belief certainty" values in the table, it looks to me like the members of this particular sample were, on average, somewhere between ambivalent and moderately certain about these propositions before they got the "97% consensus msg."

After they got the message, I'd say they were, on average ... somewhere between ambivalent and moderately certain about these propositions.

From "75.19" to "76.88" in "belief certainty": yes, that's "increased support for policy action," but it sure doesn't look like anything that would justify continuing to spend milions & millions of dollars on a social marketing campaign that has been more or less continuously in gear for over a decade with nothing but the partisan branding of climate science to show for it.

The authors repeatedly stress that the results are "statistically significant."

But that's definitely not a thing significant enough to warrant stressing.

Knowing that the difference between something and zero is "statistically significant" doesn't tell you whether what's being measured is of any practical consequence.  

Indeed, w/ N = 1,104, even quantities that differ from zero by only a very small amount will be "statistically significant."
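
To see the point concretely, here is a minimal back-of-the-envelope calculation. The sample size and the reported pre/post means are taken from the paper; the spread of the individual changes is an assumption I'm making purely for illustration:

```python
# A minimal back-of-the-envelope check (not the study's actual data): with
# N = 1,104, even a ~1.7-point average shift on a 0-100 scale comes out
# "statistically significant." The 15-point standard deviation of the
# individual changes is an assumed value, used only for illustration.
import math
from scipy import stats

n = 1104
mean_change = 76.88 - 75.19          # reported pre/post "belief certainty" means
sd_change = 15.0                     # assumed spread of individual changes

se = sd_change / math.sqrt(n)
t = mean_change / se                 # paired (one-sample) t statistic
p = 2 * stats.t.sf(abs(t), df=n - 1)
d = mean_change / sd_change          # standardized effect size for paired data

print(f"t = {t:.2f}, p = {p:.2g}")   # "significant" by a wide margin
print(f"Cohen's d = {d:.2f}")        # yet a very small effect in practical terms
```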

The question is, What can we infer from the results, practically speaking? 

A collection of regression coefficients in a path diagram can't help anyone figure that out.

Maybe there's more to say about the practical magnitude of the effects, but unfortunately the researchers don't say it.

For sure they don't say anything that would enable a reader to assess whether the "97% message" had a meaningful impact on political polarization.

They say this: 

While the model “controls” for the effect of political party, we also explicitly tested an alternative model specification that included an interaction-effect between the consensus-treatments and political party identification. Because the interaction term did not significantly improve model fit (nor change the significance of the coefficients), it was not represented in the final model (to preserve parsimony). Yet, it is important to note that the interaction itself was positive and significant (β = 3.25, SE = 0.88, t = 3.68, p < 0.001); suggesting that compared to Democrats, Republican subjects responded particularly well to the scientific consensus message.

This is perplexing....

If adding an interaction term didn't "significantly improve model fit," that implies the incremental explanatory power of treating the "97% msg" as different for Rs and Ds was not significantly different from zero. So one should view the effect as the same.

Yet the authors then say that the "interaction itself was positive and significant" and that therefore Rs should be seen as "respond[ing] particularly well" relative to Ds. By the time they get to the conclusion of the paper, the authors state that "the consensus message had a larger influence on Republican respondents," although on what --their support for policy action? belief in climate change? their perception of % of scientists who believe in climate change? -- is not specified....
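
Just to make concrete what kind of comparison the quoted passage is describing, here is a sketch of fitting the model with & without a treatment-by-party interaction and testing whether the interaction improves fit. The variable names and simulated numbers are my own illustrative assumptions (loosely pegged to the reported coefficient), not the authors' data:

```python
# A hypothetical sketch of the comparison described in the quoted passage:
# fit the model with and without a treatment-by-party interaction and test
# whether the interaction improves fit. Variable names (support, treated,
# republican) and the simulated data are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 1104
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),      # saw the 97%-consensus message or not
    "republican": rng.integers(0, 2, n),   # party identification (0 = D, 1 = R)
})
df["support"] = (70 + 2 * df["treated"] - 10 * df["republican"]
                 + 3.25 * df["treated"] * df["republican"]   # assumed interaction
                 + rng.normal(0, 7.5, n))                    # assumed noise

main_effects = smf.ols("support ~ treated + republican", data=df).fit()
with_interaction = smf.ols("support ~ treated * republican", data=df).fit()

# Nested-model F test: does adding treated:republican improve fit?
print(anova_lm(main_effects, with_interaction))
# The interaction coefficient and its t statistic (cf. the beta, SE & t quoted above)
print("interaction coef:", with_interaction.params["treated:republican"],
      "t:", with_interaction.tvalues["treated:republican"])
```

For a single added term in an ordinary regression like this one, the nested-model F test and the t test on the interaction coefficient are equivalent (F = t²), which is part of what makes the quoted passage hard to parse; whether the same relationship holds in the authors' path model depends on how "model fit" was assessed there.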

Again, though, the question isn't whether the authors found a correlation the size of which was "significantly different" from zero.

It's whether the results of the experiment generated a practically meaningful result.

Once more the answer is, "Impossible to say but almost surely not."

I'll assume the Rs and Ds in the study were highly polarized "before" they got the "97% consensus msg" (if not, then the sample was definitely not a valid one for trying to model science communication dynamics in the general population). 

But because the authors don't report what the before-and-after-msg "belief certainty" means were for Rs and Ds, there's simply no way to know whether the "97% consensus msg's" "larger" impact on Rs meaningfully reduced polarization.

All we can say is that whatever it was on, the "larger" impact the msg had on Rs must still have been pretty darn small, given how remarkably unimpressive the changes were in the climate-change beliefs, risk perceptions, and policy attitudes for the sample as a whole.

Sigh....

The authors state that their "findings provide the strongest evidence to date that public understanding of the scientific consensus is consequential." 

If this is the strongest case that can be made for "97% consensus messaging," there should no longer be any doubt in the minds of practical people--ones making decisions about how to actually do constructive things in the real world-- that it's time to try something else.

To be against "97% consensus messaging" is not to be against promoting public engagement with scientific consensus on climate change.

It's to be against wasting time & money & hope on failed social marketing campaigns that are wholly disconnected from the best evidence we have on the sources of public conflict on this issue.

 

Wednesday
Feb 25, 2015

Doh!

Had "comments disabled" for yesterday's post on  Session 7 of virtual "Science of Science Communication 2.0"

Was wondering why there weren't the usual 7,000+ "student" comments!

The sole purpose of this post is to announce that the comments feature for that one is now enabled.  Because that's the logical place for discussion, I'm disabling comments here.

Tuesday
Feb 24, 2015

Science of Science Communication 2.0, Session 7.1: communicating climate science part 2

It's that time again! Another session -- # 7 -- of virtual "Science of Science Communication 2.0"

Reading list here

Imagine you were

  1. President Obama about to make a speech to the Nation in support of your proposal for a carbon tax;
  2. a zoning board member in Ft. Lauderdale, Florida, preparing to give a presentation at an open meeting (at which members of the public would be briefed and then allowed to give comments) defending a proposed set of guidelines on climate-impact “vulnerability reduction measures for all new construction, redevelopment and infrastructure such as additional hardening, higher floor elevations or incorporation of natural infrastructure for increased resilience”;
  3. a climate scientist invited to give a lecture on climate change to the local chapter of the Kiwanis in Springfield, Tennessee; or
  4. a “communications consultant” hired by a billionaire, to create a television advertisement, to be run during the Superbowl, that will promote constructive public engagement with the science on and issues posed by climate change.

Would the CRED manual be useful to you? Would the studies conducted by Feygina, et al., Meyers et al., or Kahan et al. be?  How would you advise any one of these actors to proceed?

Monday
Feb 23, 2015

Some other places to find discussion

Couple of posts elsewhere worth checking out today.

1st is Tamar Wilner's great "response paper" for Science of Science Communication course session 6. She asks whether the evidence for "97% consensus messaging" bears critical scrutiny.

2d is a post by me on Washington Post Monkey Cage discussing our recently published paper on geoengineering and "two-channel science communication."

One thing that bums me a bit about the post is that they edited out material at the end explaining that the experiment was a model, not a proposed "communication framing strategy," and that the value of such a study is in guiding field studies.

Also not thrilled with headline--I don't study science communication to teach people how to "change skeptics' minds"; I do studies to show how to communicate science in a manner that enables people to decide for themselves what to make of it.

Oh well...

 

 

 

Sunday
Feb 22, 2015

Weekend update: Hard questions, incomplete answers, on the "disentanglement principle"

I have written a few times now about the “disentanglement principle”—that science communicators & educators must refrain from “making free, reasoning people choose between knowing what’s known by science and being who they are.”  The Measurement Problem paper uses empirical evidence to show how science educators and communicators have “disentangled” identity and knowledge on issues like evolution & climate change, and proposes a research program aimed at perfecting such techniques.

In a comment on a recent post, Asheley Landrum posed a set of penetrating and difficult questions about “disentanglement.”  I thought they warranted a separate blog, one that I hoped might, by highlighting the importance of the questions and the incompleteness of my own answers, motivate others to lend their efforts to expanding our understanding of, and ability to manage, the problem of identity-knowledge entanglement.

Asheley's comment:

I'm really interested in the idea of disentangling identity from knowledge. However, I wonder to what extent that really can be done. Take, for instance, the conflation of belief in evolution versus knowledge of evolution that you've described. Does it matter if multiple cultural identities recognize that the theory of evolution states humans evolved from earlier species of mammal if they do not accept (or believe) it to be extremely likely to be true? Is our goal as scientists (and science communicators) to make sure that people simply know what a theory is comprised of but not worry about whether the public buys it?

Also, once a topic becomes politicized, is it possible to truly disentangle that topic from people's cultural identities? I feel like new work is showing how we can potentially stop topics from becoming politicized in the first place, but once a topic becomes entangled with cultural identity, the mere mention of it may trigger motivated cognition. Is it something that will pass with time? For instance, we've seen public perception shift on a myriad of social issues (e.g. interracial marriage, now gay marriage). Is this a result of time or a change in the narrative surrounding the topics? Does changing the narrative surrounding certain science topics eventually change how entangled that topic is with regard to cultural identity?

My response:

@Asheley:

Good questions. I certainly don't have complete answers.

But I'd start by sorting out 3 things.

1st, is "non-entanglement" possible?

2d, can entanglement be undone?

3d, is the goal of the science communicator/educator “belief” or “knowledge”?

1. Is non-entanglement possible?

I take this to mean, is it possible to create conditions where people don't have to choose between knowing what's known and being who they are?

Answer is, of course.

For one thing, the problem never arises for most issues -- ones for which it very well could have.

E.g., identity and knowledge were "entangled" on HPV vaccine but not the HBV vaccine.

The former—as a result of factors that were perfectly foreseeable and perfectly avoidable—traveled a path into public awareness that generated conditions, persisting to this day, that distort and disable the normally reliable faculties parents use to make informed decisions about their children’s health.

Likewise, there's no entanglement between identity and knowledge on GMOs in the US -- even though there is in Europe.

There's no entanglement on vaccine safety in US -- although there might well be if we don't stifle the evidence-unencumbered misrepresentations about the extent of vaccine risk perceptions and the relationship between them and cultural styles.

So the first thing is, there is nothing necessary about entanglement; it happens for reasons we can identify; and if we organize ourselves appropriately to use the science of science communication, we can certainly avoid this reason-eviscerating state of affairs.

Another point: even when positions on risks and other facts become entangled in antagonistic cultural meanings--turning them into symbols of cultural identity--it still is possible to create conditions of science communication that free people from having to choose between knowing and being who they are!

You advert to this in raising evolution. We know from empirical evidence that it is possible to teach evolution in a manner that doesn't make religious students choose between knowing and being who they are, and that when the right mode of teaching is used (one focusing simply on valid inference from observation), they can learn the modern synthesis just as readily as students who say they "do believe" in evolution (and who invariably don't know anything about natural selection, random mutation, and genetic variance).

We need similar pedagogical strategies for teaching evolutionary science—and able teachers and researchers are busy at work on this problem.

I think, too, we are seeing successful disentanglement strategies being used in local government to promote evidence-based climate policymaking aimed at adaptation.

We need to study these examples and learn the mechanisms they feature and how to harness and deploy them to promote knowledge.

2. Can entanglement be undone?

As I mentioned, in most cases, the entanglement problem never arises-- as in case of HBV vaccine or GMO foods.

But if entanglement occurs -- if antagonistic meanings become attached to issues, turning positions on them into symbols of identity -- can that condition itself be neutralized, vanquished?

This is different, I think, from asking whether, in a polluted science communication environment, it is possible to "disentangle" in communicating or teaching climate science or evolution, etc.

The communication practices that make that possible are in the nature of "adaptation" strategies for getting by in a polluted science communication environment.

The question here is whether it is possible to decontaminate a polluted science communication environment.

I think this is possible, certainly. I suppose, too, I could give you examples where this seems to have happened (e.g., on cigarette smoking in US).

But the truth is, we know a lot more about how risks and like facts become entangled in antagonistic meanings, and about how to “adapt” when that happens, than we do about how to clear the science communication environment of that sort of pollution once it becomes contaminated by it.

We need more information, more evidence.

But the practical lesson should be obvious: we must use all the knowledge at our disposal, and summon all the common will and attention we can, to prevent pollution of the science communication environment in the first place (a critical issue right now for childhood vaccines).

3.  Is the goal of the science communicator/educator “belief” or “knowledge”? 

Finally, you raise the issue of what the “goal” of science communication and education is—“knowledge” vs. “belief”?

My own sense is that the “knowledge”-“belief” dichotomy here reflects at least two forms of confusion.

One is semantic. It’s the incoherent idea that there is some meaningful distinction between the objects of “belief” and the objects of “knowledge” and that “science” deals with the latter.

I’ve discussed this before. It’s not worth going into again, except to remark that those who think they are making an important point when they assert “it’s not a belief—it’s a fact!” need to seek guidance from the two patron saints of science’s theory of causal inference—the Rev. Thomas Bayes and Sir Karl Popper—to get some remedial instruction on how empirical proof expands our knowledge (by furnishing valid observational evidence on the basis of which we update our current beliefs about how the world works).

The other confusion is more complicated. It's certainly not a cause for embarrassment, but not grasping it is certainly a cause for concern.

The nature of the mistake (I'm still struggling, but am pretty sure at this point that this is the nub of the problem) is to believe that, as a psychological matter, it makes sense to individuate people's "beliefs" (or items of "knowledge") independently of what those people are doing.

Consider Hameed’s Pakistani Dr.  He says he doesn’t “believe in” evolution: “Allah created man! We did not descend from monkeys!”

Yet he tells you “of course” he depends on evolutionary science in his practice as an oncologist, where he uses insights from this field to screen patients for potential cancer risks. 

“Of course” medical research relies on evolutionary science, too, he says—“consider stem cell research!”

If we say, “but isn’t that inconsistent—to say you ‘disbelieve’ in evolution but then make use of it in those ways as a Dr?,” he thinks we are being obtuse.

And he is right.

As Everhart & Hameed help us see, there are two different “evolutions”: the one the Dr rejects in order to be a member of a religious community; and the one he accepts in order to be a doctor and a member of a scientific-knowledge profession.

The idea that there is a contradiction rests on a silly model that thinks individuals’ “beliefs” (or what is “known” by them) can be defined solely with reference to states of affairs or bodies of evidence in the world.

In the mind, “beliefs” are intentional states--often compound ones, consisting of assent to various factual propositions but also pro- or con- affective stances, and related propensities to action--that are yoked to role-specific actions.

As a psychological matter, then, what people “believe” or “know” cannot be divorced from what use they make of the same.

Being a member of a religious community and being a member of the medical profession are integrated elements of the Pakistani Dr's identity.

As a result, there’s no contradiction between the Pakistani Dr. saying he “disbelieves” in evolution when he is “at home” (or at the Mosque), where the set of intentional states that signifies allows him to be a member of his religious community, and saying that he “believes in” it when he is “at work,” where the set of intentional states that signifies allows him to be a member of a science-informed profession.

He knows that the evolution he accepts and the one he rejects both refer to the same account of the natural history of human beings that originates in the work of Darwin; but the one he accepts and the one he rejects are "completely different things" b/c they are connected to completely different things that he does.

It's confusing, I agree, but he's not the one who is confused-- we are, if we can't grasp the point that knowing that can't be disconnected, psychologically, from knowing how!

Still, the Pakistani Dr is lucky to live a life in which the two identities that harbor these competing beliefs have no reason to quarrel.

For members of certain religious communities in the US--and, as Hameed notes, for more and more Islamic scientists and scientists in training in Europe--that's not so.

There is a social conflict, one foisted on them by illiberal cultural status competition, that makes it impossible to use their reason to be both members of cultural communities and science-trained professionals at the same time.

So what should the goal of the science communicator and teacher be?

It should be to make it possible for people to recognize and give effect to scientific knowledge in order to do the things the doing of which requires such knowledge: like being scientists; like being successful members of other professions, including, say, agriculture, that depend on knowing what science knows; like being a good parent; like being a member of a self-governing community, the well-being of which turns on its making science-informed policy choices; and like being curious, reflective people who simply enjoy the awe and pleasure of being able to participate in comprehending the astonishing insights into the mysteries of nature that our species has gained by using the signature methods of scientific inquiry.

If a science teacher or communicator thinks that it is his or her job to get people to say they “believe in” evolution or climate change independently of enabling them to do things that depend on making use of the best available evidence, then he or she is making a mistake.

If a science educator thinks that he or she is doing a good job because people say they “believe in” those things even though they couldn’t pass a high-school biology test on natural selection, and think that CO2 is poisonous to plants, then he or she is incompetent.

And if he or she is trying to get people who use scientific knowledge related to climate change or evolution or anything else to say they “believe in” it when the only purpose that would serve is to force them to denigrate who they are, then he or she has become a source, witting or unwitting, of the science communication pollution that entangles identity and knowledge and that disables free and reasoning democratic citizens from making use of all the scientific knowledge at their disposal.

Friday
Feb202015

"Measurement Problem" published but still unsolved

Published version of this paper is now out....

 

But I don't expect the mystery of the Pakistani Dr. and the Kentucky Farmer to be solved anytime soon....

 

 

Wednesday
Feb182015

On becoming part of a polluted science communication environment while studying it....

From a correspondent, cc'ing me

Dear National Geographic Forum,
 
 
I am an engineer who, in addition to engineering-related courses, also studied geology, geophysics and even a little astronomy at the undergraduate and graduate level. In short, a big fan of science and rational thought, especially applied science like engineering.
 
I firmly believe in climate change but find that the claim that it is "human-made" is total rubbish.  
Rather I think the human-made claim is driven by "tribalism", just like Professor Dan Kahan of Yale Law School ascribes to the "barber in a rural town in South Carolina". (And seriously, could he be any more elitist and patronizing? North vs. South, City Mouse vs. Country Mouse, Perfesser of Law vs. Barber, etc.)
 
In fact, the tribal forces at work on researchers and politicians are much more pronounced. 
It's not just about losing customers, it's about fame, glory, popularity and - most important - money.  
Think Oscars, think Nobel Prizes, think tapping into the hundreds of millions of dollars (billions?) out there for the taking.
 
All you have to do is run the same flawed computer programs, fiddle with the data when necessary, and confirm, affirm, re-affirm The Consensus.
 
And, if that isn't enough enticement, you also get to adopt a holier-than-thou attitude when talking about "Skeptics".  I mean, it's no accident that your article states "How to convert the skeptics?"
 
Maybe you should take a look in the mirror.
 
When you liken the more than half of Americans who don't believe the Earth is warming because humans are burning fossil fuels to "loopy... flat-Earthers" you are reinforcing tribalism.
 
Sincerely,
 
My response

Thanks, ***. 

I'm going to put aside "who is right" on climate change and also how the sorts of influences I study bear on the production of climate science or any other form of science. One can't reliably draw any inferences from studies of how dynamics like identity-protective cognition affect public opinion, on the one hand, to how the same dynamics affect the expert judgments of scientists or any other group of professionals (like judges, say!), on the other. If one wants to figure out whether the conclusions of expert decisionmakers are being biased by identity-protective cognition or comparable dynamics, then one has to perform valid studies on samples of the experts in question as they apply their professional judgment to the types of problems for which it is suited.

But I agree with the science-communication points you are raising here.

I don't think it is useful at all to characterize as "anti-science" the 50% of US general population who, using exactly the same forms of reasoning as those who conclude that best evidence supports belief in AGW, conclude that the best evidence doesn't support it. 

On the contrary, I think the dynamics that generate these sorts of characterizations are exactly what prevent culturally diverse citizens from converging, as they usually do, on what science knows. 

Some papers that address these points; happy to receive the benefit of any comments (including critical ones) you have on them:

Kahan, D.M. What is the "Science of Science communication"? J. Sci. Comm. (in press). 

Kahan D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology (in press). 

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

I often find my and my collaborators' research cited -- by both "left" & "right" (those are crude ways to characterize the relevant cultural groups, but they are good enough here) -- to "explain" why the "other side" is dogmatic, anti-science, stupid, etc.   

That sort of misinterpretation of our findings is part of exactly the phenomenon we are studying: the forces that drive people to misconstrue empirical evidence in patterns congenial to their cultural outlooks.

Indeed, for a study of how people misconstrue evidence relating to the open-mindedness & critical reasoning capacities of those who disagree w/ them on contested science issues, take a look at Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection, Judgment and Decision Making 8, 407-424 (2013).  The most demoralizing thing is that this tendency is most pronounced in individuals with highest proficiency in critical reasoning.... 

Makes me wonder sometimes whether there's any point in trying to use empirical evidence to fix this problem.

But it takes only about 15 seconds to conclude that of course that would be the wrong conclusion to draw.

Because for one thing, any scholar who gets the benefit of being supported by a liberal democratic society to do scientific research would have to be a moral cretin not to recognize that he or she owes that society's members whatever he or she can contribute to protecting their science communication environment from the sort of toxins that deprive liberal democratic citizens of the benefits of all the scientific knowledge their way of life makes possible.

There is one more thing I want to be sure I express my agreement with:  I don't doubt for a second that I myself, in the course of trying to address these matters, will blunder, either as a result of being subject to the same dynamics I'm studying or to simple failings in judgment or powers of expression.  And as a result, I'll end up conveying, contrary to my own intentions and ambitions, the very sort of partisan meanings that I believe must be purged from the science communication environment.

I don't resent being told when that happens; I am chastened, but grateful. 

Take care. 

--Dan

p.s.

It's in people's self-interest to form beliefs that connect rather than estrange them from those whose good opinion they depend on (economically, emotionally, and otherwise).  As a result, we should expect individuals' cultural outlooks to have a very substantial impact on their climate change risk perceptions.

At the same time, the beliefs that the typical member of the public forms about climate change will likely have an impact on how she gets along with people she interacts with in her daily life. A Hierarchical Individualist in Oklahoma City who proclaims that he thinks that climate change is a serious and real risk might well be shunned by his coworkers at a local oil refinery; the same might be true for an Egalitarian Communitarian English professor in New York City who reveals to colleagues that she thinks that “scientific consensus” on climate change is a “hoax.” They can both misrepresent their positions, of course, but only at the cost of having to endure the anxiety of living a lie, not to mention the risk that they’ll slip up and reveal their true convictions. Given how much they depend on others for support—material and emotional—and how little impact their beliefs have on what society does to protect the physical environment, they are better off when they form perceptions of climate change risk that minimize this danger of community estrangement.

In contrast, what an ordinary individual believes and says about climate change can have a huge impact on her interactions with her peers. If a professor on the faculty of a liberal university in Cambridge, Massachusetts starts saying "climate change is ridiculous," he or she can count on being ostracized and vilified by others in the academic community. If the barber in some town in South Carolina's 4th congressional district insists to his friends & neighbors that they really should believe the NAS on climate change, he will probably find himself twiddling his thumbs rather than cutting hair.
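To make the incentive structure behind this explicit, here is a toy back-of-the-envelope sketch; every number, name, and parameter in it is invented for the illustration and is taken from no study.

# Toy illustration of the incentive argument above: the personal cost of holding
# a belief that estranges one's peers swamps the expected personal benefit of
# forming the "accurate" belief, because any single individual's belief has a
# negligible effect on what society actually does about the climate. All values
# below are invented for this sketch.

def expected_payoff(matches_peers: bool, accurate: bool,
                    estrangement_cost: float = 100.0,
                    societal_value_of_accuracy: float = 1_000_000.0,
                    individual_influence_on_policy: float = 1e-8) -> float:
    """Expected personal payoff of holding a belief, in arbitrary utility units."""
    social_term = 0.0 if matches_peers else -estrangement_cost
    policy_term = (societal_value_of_accuracy * individual_influence_on_policy
                   if accurate else 0.0)
    return social_term + policy_term

# Identity-congruent but (say) inaccurate vs. accurate but peer-estranging:
print(expected_payoff(matches_peers=True, accurate=False))   # 0.0
print(expected_payoff(matches_peers=False, accurate=True))   # about -99.99

Under any remotely similar numbers, the identity-congruent belief wins for the individual even though everyone would be better off if everyone formed beliefs on the evidence; that is the sense in which this style of information processing is individually rational but collectively costly.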

 


Tuesday
Feb172015

Science of Science Communication 2.0, Session 6.1: Communicating climate science, part 1

It's time for session 6 of virtual "Science of Science Communication 2.0" (time for real-space version in about 4 hrs)! Reading list here

Reactions?


Friday
Feb132015

"Let's shame them!": part and parcel of the dangerous seat-of-the-pants, evidence-free style of risk communication we are using to protect universal vaccination in US

A thoughtful correspondent asked me what I thought of proposals to "shame" parents who don't vaccinate their children.  I'm against doing that. Actually, I'm not opposed to "shaming" when it makes sense; but I am opposed to doing anything in public policy that disregards the best evidence we have on the challenges we face and the best strategies for combatting them. Here is what I had to say about why shaming parents who don't vaccinate should be viewed as falling into that category:

I myself don't see any value in shaming here.

The conflict-entrepreneur, anti-vax organizers deserve ridicule and are awful people etc. But denouncing or shaming them actually only gives them exactly what they want -- more attention, which in turn does make more members of the public agitated and confused. 

This is known as the “rope-a-dope” strategy. Many vaccine advocates are falling for it big time and are doing more harm than good by falsely overstating the impact of the small fringe of society that is anti-vaccine.

In addition, shaming individual parents risks chilling anxious individuals who aren't militant or political but just confused from trying to get answers to their questions from their drs or neighbors etc. 

No one could think that’s good for public health. Thoughtful and reflective people can actually help those parents see that vaccination makes sense for their kids and for society as a whole--but only if those parents seek out their counsel.

Finally, shaming risks undermining the social norms most worthy of protection here.

One is the general confidence that parents indisputably have (as manifested in maintenance of the public-health goal of 90%+ vaccination rates for over a decade) in vaccine safety. The main source of information that people use to assess risk is the attitude of other ordinary people as evinced by their behavior.

What we definitely don't want, then, is to give people a false impression that fear of and resistance to vaccines are widespread or growing. The incessantly repeated, demonstrably false assertion that vaccination rates are falling & exemptions rising in the US does exactly that: it causes people to misperceive how much confidence US parents have in vaccination -- to misperceive how high our vaccination rates are & have been for well over a decade.

The other is reciprocal cooperation.  People contribute to public goods when they perceive others are -- but don't when they perceive others aren't, in which case contributing makes one feel like a sucker.  Herd immunity is a public good. In fact, studies show that giving people the impression that others are refusing to vaccinate diminishes their own intention to vaccinate.

I'd worry that the spectacle of orchestrated shaming -- b/c the premise for it is falling vaccine rates, etc. -- could reinforce these norm-eroding effects. 

Instead, we should want parents and the public generally to know, as Moms Who Vax wisely emphasize, that “the vast, vast majority of” parents do recognize that vaccines are critical for their kids’ welfare.

Those parents have better things to do than march around asserting the obvious, so it is easy to lose the benefit that their behavior and confidence can contribute to the norms that promote universal vaccination.  

Let's follow the lead of MWV and raise the profile of the behavior and attitude of those ordinary parents.

We should also take the money and attention that would otherwise be devoted to pointless and likely self-defeating shaming campaigns (ones coordinated by commercial marketing firms poised to carry out this questionable strategy) and direct it to research into developing screening instruments that can help identify vaccine-hesitant parents and targeted risk counseling for them.

The scientists doing this research aren't nearly as loud, nearly as self-promoting, as the advocates who are overstating anxiety about vaccines and the need for unresponsive policies like "shaming campaigns."

So like MWV, we should be trying to remedy this inattention by heralding what these researchers are up to, and helping those dedicated to perfecting our universal vaccination regime to make sure that these researchers are adequately supported in their efforts.

Thursday
Feb122015

Turbulence and shifting gusts of hot air: the forecast for perceptions of "scientific consensus" in response to NAS geoengineering reports

They aren’t the first National Academy of Sciences Reports to call for stepping up research on geoengineering, but the ones the Academy issued Tuesday are definitely raising both the volume and intensity of this recommendation.

In response, I predict an interesting counter-reaction by many of the advocacy groups involved in promoting greater public engagement with climate science.

A prominent if not dominant stance among such groups, I’m guessing, will be to dismiss geoengineering as impractical, dangerous, futile, etc.

And impolitic as well: by triggering an outcome referred to as "moral hazard" (an inapt label, given the established meaning of this term in economics), talk of geoengineering, it is asserted, will lull people into believing climate change can be met w/o significant changes in their lifestyle, thereby dissipating the groundswell of popular support (?!) for restrictions on carbon emissions.

I’m guessing that we'll see this reaction, first, because that’s already how many climate-policy advocates have reacted whenever anyone mentions geoengineering.

And I’m guessing this is what we'll see, second, b/c such a reaction would be in keeping with dynamics of cultural cognition generally and with studies of geoengineering and perceptions of “scientific consensus” in particular.

Just the day before yesterday (we take seriously our commitment to our 14 billion regular blog subscribers to be as topical as possible!), CCP’s study "Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication" was published in the Annals of the American Academy of Political and Social Science.

It reports the results of a two-nation—US and England—experiment that found that subjects who learned of scientists’ call for research into geoengineering were in fact less culturally polarized in their subsequent assessment of the strength of the evidence of human-caused climate change than were subjects who first learned of scientists’ call for more carbon emission limits.

The latter were in fact more polarized than subjects in a control group, who considered the evidence of human-caused climate change without any information on carbon-emission limit or geoengineering-research proposals.

This finding was consistent with the hypothesis that learning that geoengineering, a pro-technology response, was being given serious consideration would reduce the defensive biases of citizens culturally predisposed to discount evidence of human-caused climate change insofar as accepting it implies limits on markets, commerce, and industry—activities that individuals with this cultural orientation value, symbolically as well as materially.
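For readers who want a concrete sense of what "more" or "less polarized" means operationally in a design like this, here is a minimal sketch on simulated data (not the study's actual code, data, variable names, or effect sizes, all of which are assumptions here): polarization shows up as the slope of evidence ratings on a cultural-worldview score, and the experimental question is whether that slope is flatter in the geoengineering condition than in the emission-limits condition, i.e., whether the condition × worldview interaction is reliable.

# A minimal sketch, on simulated data, of how polarization is typically quantified
# in a design like this: as the slope of evidence ratings on a cultural-worldview
# score, compared across experimental conditions via an interaction term. All
# variable names, sample sizes, and effect sizes are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000

# Hypothetical worldview score: -1 = egalitarian communitarian,
# +1 = hierarchical individualist; three conditions as in the design described.
df = pd.DataFrame({
    "worldview": rng.uniform(-1, 1, n),
    "condition": rng.choice(["control", "emissions", "geoengineering"], n),
})

# Simulated rating of the strength of the climate-change evidence (higher = stronger).
# The assumed worldview slopes make polarization steepest after the emissions
# message and flattest after the geoengineering message.
slope = df["condition"].map({"control": -1.0, "emissions": -1.4, "geoengineering": -0.6})
df["evidence_rating"] = 5 + slope * df["worldview"] + rng.normal(0, 1, n)

# The condition x worldview interaction terms test whether the worldview slope
# (i.e., the degree of polarization) differs across conditions.
model = smf.ols(
    "evidence_rating ~ C(condition, Treatment(reference='control')) * worldview",
    data=df,
).fit()
print(model.summary().tables[1])

On data simulated this way, the interaction coefficient for the geoengineering condition is positive (a flatter, less negative worldview slope than in the control) and the one for the emissions condition negative; that is the pattern of reduced versus amplified polarization described above.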

Contrary to the often-asserted "moral hazard" claim, telling people about geoengineering research did not reduce concern about climate change risks. 

On the contrary, the subjects who learned about the proposal for such research were more concerned, presumably because the ones most prone to skepticism reacted much more open-mindedly to evidence of global warming.

The second study that supports my prediction that many climate-change policy advocates will react dismissively to the NAS (and Royal Society) recommendations to research geoengineering is the CCP study on cultural cognition of scientific consensus.

That one found that individuals tend to credit or discredit representations of scientific consensus on risk and related facts in a selective pattern that reflects their cultural worldviews. 

Egalitarian, communitarian individuals credit the expertise of scientists who assert that climate change and nuclear power pose huge environmental risks and dismiss the expertise of scientists—ones with exactly the same credentials—who assert otherwise.

The pattern is reversed in hierarchic, individualistic subjects.

If that's how people process information about "scientific consensus" in the real world, then people with these opposing outlooks will end up forming opposingly skewed understandings of what scientific consensus is on these and other issues. And in fact, that’s what surveys show to be the case.

So here we can expect egalitarian communitarians—who readily perceive scientific consensus in favor of human-caused climate change—to dispute that there is "really" scientific consensus in favor of investigating the contribution geoengineering can make to counteracting climate-change risks.

They’ll either dismiss the NAS and Royal Society reports or construe them as saying the opposite of what they say (that more research is indeed warranted) because the suggestion that one response to climate change is more innovation, more technology—not less of all of those things—disappoints their cultural worldview, which is exactly what motivates a good many of them to exuberantly embrace evidence of climate change.

Geoengineering is “liposuction,” when what capitalism really needs to do is go on a “diet,” as one commentator poetically put it.

Identity-protective reasoning doesn't discriminate on the basis of worldview.

Its reason-eviscerating effects are symmetric across the ideological spectrum.  We are all vulnerable.  

And the reactions to geoengineering might help us to see that; or it might not, precisely because it's in the nature of the disease to discern its effects only in those who belong to an opposing cultural group and never in the members of one's own....

Anyway, that’s my prediction about how people will react to the new NAS reports.

Guess we'll find out.    

Wednesday
Feb112015

"What is the 'science of science communication'?" (new paper)

A short essay that tries to tie some bigger themes together...