Friday
Jun 27, 2014

What SE Florida can teach us about the *political* science of climate change

This is another section from the new paper "Climate Science Communication and the Measurement Problem."

 

That paper, which I posted yesterday, presents data showing that "conservative Republicans" know just as much as "liberal Democrats" about climate science (a very modest amount) and more importantly are just as likely to be motivated to see scientific evidence of climate change as supporting the conclusion that we face huge risks.

The reason they evince "climate skepticism" in responding to survey items on "belief in" human-caused global warming is that those items aren't measuring what people know; they are measuring "who they are, whose side they are on," in a mean, illiberal, collective-knowledge-vitiating, individual-reason-effacing cultural status competition.  Sadly, that is exactly what the "climate change" question is measuring in our political life too.

This section of the paper shows how local politicians in SE Florida are disentangling what citizens know from who they are -- and the breakthrough they are achieving politically on climate as a result.

7.  Disentanglement

a.  We could all use a good high school teacher.  * * *

b.  Don’t ignore the denominator. * * *

c.  The “normality” of climate science in Southeast Florida. Southeast Florida is not Berkeley, California, or Cambridge, Massachusetts.  Southeast Florida’s political climate, for one thing, differs at least as much from the one that Berkeley and Cambridge share as the region’s natural climate does from each of theirs. Unlike these homogenously left-leaning communities, Miami-Dade, Broward, Palm Beach, and Monroe counties are politically conventional and diverse, with federal congressional delegations, county commissions, and city governments occupied by comparable proportions of Republicans and Democrats. 

Indeed, by one measure of “who they are,” the residents of these four counties look a lot like the United States as a whole. There is the same tight connection between how people identify themselves politically and their “beliefs” about global warming—and hence the same deep polarization on that issue.  Just as in the rest of the U.S., moreover, the degree of polarization is highest among the residents who display the highest level of science comprehension (Figure 19).

But like Berkeley and Cambridge—and unlike most other places in the U.S.—these four counties have formally adopted climate action plans. Or more precisely, they have each ratified a joint plan as members of the Southeast Florida Regional Climate Change Compact.  Unlike the largely hortatory declarations enacted by one or another university town, the Compact’s Regional Climate Action Plan sets out 110 substantive “action items” to be implemented over a multi-year period.[1] 

Many of these, understandably, are geared to protecting the region from anticipated threats. The Plan’s goals include construction of protective barriers for hospitals, power-generating facilities, and other key elements of infrastructure threatened by rising sea levels and storm surges; the enactment of building codes to assure that existing and new structures are fortified against severe weather; and measures to protect water sources essential both for residential use and for agriculture and other local businesses.

But included too are a variety of measures designed to mitigate the contribution  the four counties make to climate change.  The Plan thus calls for increased availability of public transportation, the  implementation of energy-efficiency standards, and the adoption of a “green rating” system to constrain carbon emissions associated with construction and other public works.

The effects will be admittedly modest—indeed, wholly immaterial in relation to the dynamics at work in global climate change. 

But they mean something; they are part of the package of collective initiatives identified as worthy of being pursued by the city planners, business groups, and resident associations—by the conservation groups, civic organizations, and religious groups—who all participated in the public and highly participatory process that generated the Plan.

That process has been (will no doubt continue to be) lively and filled with debate but at no point has it featured the polarizing cultural status competition that has marked (marred) national political engagement with climate science.  Members of the groups divided on the ugly question that struggle poses—which group’s members are competent, enlightened, and virtuous, and which foolish, benighted, and corrupt—have from the start taken for granted that the well-being of all of them demands making appropriate use of the best available scientific evidence on climate. 

The Compact Plan carries out a 2011 legislative mandate—enacted by the state’s Republican-controlled legislature and signed by its Tea Party Republican Governor—that all municipal subdivisions update their Comprehensive Plans to protect public health and resources from “impacts of rising sea levels,” including “coastal flooding due to extreme high tides and storm surge.”  The individual county commissioners who took the lead in forming the compact included Republicans and Democrats. Nor was there partisan division in the approval process for the Compact Action Plan.

What makes Southeast Florida so different from the rest of the country? 

Indeed, what makes Southeast Florida, when it addresses climate change inside the Compact's decisionmaking process, so different from the Southeast Florida that, like the rest of the country, is polarized on climate change?

The explanation is that the Compact process puts a different question from the one put in the national climate change debate.  The latter forces Southeast Floridians, like everyone else, to express “who they are, whose side they are on.” In contrast, the decisionmaking of the Compact is effectively, and insistently, testing what they know about how to live in a region that faces a serious climate problem. 

The region has always had a climate problem.  The models and plans that local government planners use today  to protect the region’s freshwater aquifers from saltwater intrusion are updated versions of ones their predecessors used in the 1960s. The state has made tremendous investments in its universities to acquire a level of scientific expertise on sea-level and related climate dynamics unsurpassed in any other part of the Nation.

People in Florida know that the region’s well-being depends on using the information that its scientists know.  The same ones who are politically divided on the question of whether they “believe in” human-caused global warming overwhelmingly agree that “local and state officials should be involved in identifying steps that local communities can take to reduce the risk posed by rising sea levels”; that “local communities should take steps to combat the threat that storm surge poses to drinking water supplies”; and that their “land use planners should identify, assess, and revise existing laws to assure that they reflect the risks posed by rising sea level and extreme weather” (Figure 20).

That’s normal.  It’s what government is supposed to do in Southeast Florida. And it better be sure to pick up the garbage every Wednesday, too, their citizens (Republican and Democrat) would add.

The Compact effectively informed its citizens of the appropriateness of using the best available science for these ends but not through a “messaging” campaign focused on “scientific consensus” or anything else. 

The Compact’s “communication strategy” was its process.  The dozens of open meetings and forums, convened not just by the Compact governments but by business, residential, and other groups in civil society filled the region’s science communication environment with exactly the information that ordinary people rationally rely on to discern what’s known to science: the conspicuous example of people they trust and recognize as socially competent supporting the use of science in decisionmaking directly bearing on their lives.

No polluting the science communication environment with partisan meanings! Indeed, far from evoking the toxic aura of tribal contempt that pervades “messaging” campaigns (“what? are you stupid? What part of ‘97% AGREE!’ don’t you understand?!”), Compact officials aggressively, instinctively repel it whenever it threatens to contaminate the region’s deliberations.  One of those occasions occurred during a heavily attended “town meeting,” conducted in connection with the Compact’s 2013 “Regional Climate Leadership Summit,” a two-day series of presentations and workshops involving both government officials and representatives of key public stakeholder groups. 

The moderator for the town meeting (a public radio personality who had just moved to Southeast Florida from Chicago) persistently tried to inject the stock themes of the national climate change debate into the discussion as the public officials on stage took turns answering questions from the audience.  What do Republicans in Washington have against science? And what “about the level of evidence that’s being accepted by private industry”—how come it’s doing so little to address climate change?

After an awkward pause, Broward County’s Democratic Mayor Kristin Jacobs replied.  “I think it’s important to note,” she said, gesturing to a banner adorned by a variety of corporate logos, “that one of the sponsors of this Summit today is the Broward Workshop. The Broward Workshop represents 100 of the largest businesses in Broward County.” The owners of these businesses, she continued, were “not only sponsoring this Summit,” but actively participating in it, and had organized their own working groups “addressing the impacts of water and climate change.”  “They know what’s happening here,” she said to the moderator, who at this point was avoiding her gaze and fumbling with his notes.

“I would also point out,” Jacobs persisted, “when you look across this region at the Summit partners, the Summit Counties, there are three Mayors that are Republican and one that’s Democrat, and we’re working on these issues across party lines.” Pause, silence.  “So I don’t think it is about party,” she concluded. “I think it is about understanding what the problems are and fixing them and addressing them.”

Five of the lead chapter authors of the National Climate Assessment were affiliated with Florida universities or government institutions. As more regions of the country start to confront climate threats comparable to ones Florida has long dealt with, Florida will share the knowledge it has invested to acquire about how to do so and thrive while doing it.

But there is more Florida can teach.  If we study how the Compact Counties created a political process that enables its diverse citizens to respond to the question “so what should we do about climate change?” with an answer that reflects what they all know, we are likely to learn important lessons about how to protect enlightened self-government from the threat posed by the science of science communication’s measurement problem.

 

 


[1] I am a member of the research team associated with the Southeast Florida Evidence-based Science Communication Initiative, which supplies evidence-based science-communication support for the Compact.

Thursday
Jun 26, 2014

New paper: "Climate Science Communication and the Measurement Problem"

As all 14 billion readers of this blog know, when I say "tomorrow" or "this week," I really mean "tomorrow" or "next week."  Nevertheless, as I said I'd do earlier this week, I really am posting this week--as in today--a new paper.

It's the one from which the " 'external validity' ruminations" (parts one, two, & three) were drawn.

The coolest part about it (from my perspective at least!) is the data it presents from the administration of two "science comprehension" tests, one general and the other specifically on climate change, to a large nationally representative sample.  

The results surprised me in many important respects. I've spent a good amount of time trying to figure out how to revise my understanding of science communication & cultural conflict in light of them (and talked with a good number of people who were kind enough to listen to me explain how excited & disoriented the results made me feel). I think I get these results now. But I still have the unsettled, unsettling feeling that I might now be standing stationary in the middle of a wide, smooth sheet of ice!

I'll post more information, more reflections on the data. But to get the benefit of hearing what those with the motivation & time to read the paper think about the argument it presents, I'll just post it for now.

8. Solving the science of science communication’s measurement problem—by annihilating it

My goal in this lecture has been to identify the science of science communication’s “measurement problem.”  I’ve tried to demonstrate the value of understanding this problem by showing how it contributes to the failure of the communication of climate science in the U.S.

At the most prosaic level, the “measurement problem” in that setting is that many data collectors do not fully grasp what they are measuring when they investigate the sources of public polarization on climate change. As a result, many of their conclusions are wrong. Those who rely on those conclusions in formulating real-world communication strategies fail to make progress—and sometimes end up acting in a self-defeating manner.

But more fundamentally, the science of science communication’s measurement problem describes a set of social and psychological dynamics. Like the “measurement problem” of quantum mechanics, it describes a vexing feature of the phenomena that are being observed and not merely a limitation in the precision of the methods available for studying them.

There is, in the science of science communication, an analog to the dual “wave-like” and “particle-like” nature of light (or of elementary particles generally). It is the dual nature of human reasoners as collective-knowledge acquirers and cultural-identity protectors.  Just as individual photons in the double-slit experiment pass through “both slits at once” when unobserved, so each individual person uses her reason simultaneously to apprehend what is collectively known and to be a member of a particular cultural community defined by a highly distinctive set of commitments.  

Moreover, in the science of science communication as in quantum physics, assessment perturbs this dualism.  The antagonistic cultural meanings that pervade the social interactions in which we engage individuals on contested science issues force them to be only one of their reasoning selves.  We can through these interactions measure what they know, or measure who they are, but we cannot do both at once.

This is the difficulty that has persistently defeated effective communication of climate science.  By reinforcing the association of opposing positions with membership in competing cultural groups, the antagonistic meanings relentlessly conveyed by high-profile “communicators” on both sides effectively force individuals to use their reason to selectively construe all manner of evidence—from what “most scientists believe” (Corner, Whitmarsh & Dimitrios 2012; Kahan, Jenkins-Smith & Braman 2011) to what the weather has been like in their community in recent years (Goebbert, Jenkins-Smith, Klockow, Nowlin & Silva 2012)—in patterns that reflect the positions that prevail in their communities.  We thus observe citizens only as identity-protective reasoners.  We consistently fail to engage their formidable capacity as collective-knowledge acquirers to recognize and give effect to the best available scientific evidence on climate change.

There is nothing inevitable or necessary about this outcome.  In other domains, most noticeably the teaching of evolutionary theory, the use of valid empirical methods has identified means of disentangling the question of what do you know? from the question who are you; whose side are you on?, thereby making it possible for individuals of diverse cultural identities to use their reason to participate in the insights of science.  Climate-science communicators need to learn how to do this too, not only in the classroom but in the public spaces in which we engage climate science as citizens.

Indeed, the results of the “climate science comprehension” study I’ve described support the conclusion that ordinary citizens of all political outlooks already know the core insights of climate science.  If they can be freed of the ugly, illiberal dynamics that force them to choose between exploiting what they know and expressing who they are, there is every reason to believe that they will demand that democratically accountable representatives use the best available evidence to promote their collective well-being.  Indeed, this is happening, although on a regrettably tiny scale, in regions like Southeast Florida.

Though I’ve used the “measurement problem” framework to extract insight from empirical evidence—of both real-world and laboratory varieties—nothing in fact depends on accepting the framework.  Like “collapsing wave functions,” “superposition,” and similar devices in one particular rendering of quantum physics, the various elements of the science of science communication measurement problem (“dualistic reasoners,” “communicative interference,” “disentanglement,” etc.) are not being held forth as “real things” that are “happening” somewhere. 

They are a set of pictures intended to help us visualize processes that cannot be observed and likely do not even admit of being truly seen. The value of the pictures lies in whether they are useful to us, at least for a time, in forming a reliable mental apprehension of how those dynamics affect our world, in predicting what is likely to happen to us as we interact with them, and in empowering us to do things that make our world better.

I think the “science of science communication measurement problem” can serve that function, and do so much better than myriad other theories (“bounded rationality,” “terror management,” “system justification,” etc.) that also can be appraised only for their explanatory, predictive, and prescriptive utility.  But what matters is the imperative to make sense of—and stop ignoring—observable, consequential features of our experience.  If there are better frameworks, or simply equivalent but different ones, that help to achieve this goal, then they should be embraced.

But there is one final important element of the theoretical framework I have proposed that would need to be represented by an appropriate counterpart in any alternative.  It is a part of the framework that emphasizes not a parallel in the “measurement problems” of the science of science communication and quantum physics but a critical difference between them.

The insolubility of quantum mechanics’ “measurement problem” is fundamental to the work that this construct and all the ones related to it (“the uncertainty principle,” “quantum entanglement,” and the like) do in that theory.  To dispel quantum mechanics’ measurement problem (by, say, identifying the “hidden variables” that determine which of the two slits the photon passes through, whether we are watching or not) would demonstrate the inadequacy (or “incompleteness”) of quantum mechanics.

But the measurement problem that confronts the science of science communication, while connected to real-world dynamics of consequence and not merely the imperfect methods used to study them, can be overcome.  The dynamics that this measurement problem comprises are ones generated by the behavior of conscious, reasoning, acting human beings.  They can choose to act differently, if they can figure out how.

The utility of recognizing the “science of science communication measurement problem”  thus depends on the contribution that using that theory can ultimately make to its own destruction. 

Tuesday
Jun 24, 2014

What’s that hiding behind the poll? Perceiving public perceptions of biotechnology

Hey look! Here's something you won't find on any of those other blogs addressing how cultural cognition shapes perceptions of risk: guest posts from experts who actually know something! This one, by Jason Delborne, addresses two of my favorite topics: first, the meaning (or lack thereof) of surveys purporting to characterize public attitudes on matters ordinary people haven't thought about; & second, GM foods (including delectable mosquitoes!  MMMMM MMM!) 

Jason Delborne:

Whether motivated by a democratic impulse or a desire to tune corporate marketing, a fair amount of research has focused on measuring public perceptions of emerging technologies. In many cases, results are reported in an opinion-poll format: X percentage of the surveyed population support the development of Y technology (see examples on GM food, food safety,  and stem cells/cloning).

But what is behind such numbers, and what are they meant to communicate?

From a democratic perspective, perhaps we are supposed to take comfort in a majority support of a technology that seems to be coming our way. 51% or more support would seem to suggest that our government “by and for the people” is somehow functioning, whereas we are supposed to feel concerned if our regulators were permitting a technology to move forward that had 49% support or less. A more nuanced view might interpret all but the most lopsided results as indicative of a need for greater deliberation, public dialogue, and perhaps political compromise.

From a marketing perspective, the polling number offers both an indicator of commercial potential and a barometer of success or failure in the shaping of public perceptions. An opponent of a given technology will interpret high approval numbers as a call to arms – “Clearly we have to get the word out about this technology to warn people of how bad it is!” And they will know they are succeeding if the next polling study shows diminished support.

Below the headline, however, lie two aspects of complexity that may disrupt the interpretations described above. First, survey methodologies vary in their validity, reliability, and their strategic choices that construct “the public” in particular ways. Much has been written on this point (e.g., The Illusion of Public Opinion and The Opinion Makers), and it’s worth a critical look. A second concern, however, is whether such measures of support provide any meaningful insight into the “public perception” of a technology.

Several of my colleagues recently conducted a survey in Key West, FL, where the Mosquito Control Board has proposed the use of Oxitech’s genetically-modified mosquitoes as a strategy to reduce the spread of dengue fever (see “Are Mutant Mosquitos the Answer in Key West?”). My colleagues have not yet published their research, but they kindly shared some of their results with me and gave me permission to discuss it in limited fashion at the 2014 Biotechnology Symposium and in this blog post. They were thoughtful in their development of a survey instrument and in their strategic choices for defining a relevant public. They also brought a reflexive stance to their research design that nicely illustrates the potential disconnect between measures of public perception and the complexity of public perception.

Reporting from a door-to-door survey, Elizabeth Pitts and Michael Cobb (unpublished manuscript) asked whether residents supported the public release of GM mosquitos. The results would seem to comfort those who support the technology:

With a clear majority of support, and opposition under 25% of survey respondents, we might assume that little needs to be done – either by the company developing the mosquito or the state agency that wishes to try it. Only the anti-GM campaigners have a lot of work to do – or maybe such numbers suggest that they should just give up and focus on something else.

But the story does not and should not end there. The survey protocol also asked respondents to describe the benefits and risks of GM mosquitos – enabling the coding of their open-ended responses as follows in the next two tables.

These tables do not exactly offer rock-solid pillars to support the apparently straightforward “polling numbers”. First, despite having just been told a short version of how GM mosquitos would work to control the spread of dengue fever, very few respondents seemed to have internalized or understood this key point. In fact, we should not even take solace in the fact that 40% of respondents mentioned “mosquito control” as a benefit – the GM mosquito is designed to reduce the population only of the species of mosquito that transmits Dengue fever, which may have little impact on residents’ experience of mosquitos (of all species) as pesky blood-sucking pests. Second, nearly one-third of respondents had no response at all to either the benefits or hazards questions – suggesting a lack of engagement and/or knowledge with the topic. Third, nearly 40% of respondents expressed one or more concerns, many of which are at least superficially reasonable (e.g., questions about ecological consequences or unintended impacts on human health). While the survey data do not tell us how concerned residents were, such concerns have the potential to torpedo the 60% support figure, depending on subsequent dissemination of information and opinions.

To me, these data reveal the superficiality of the “approval rating” as a measure of public perception; yet, those are the data that are easiest to measure and most tempting for our media to report and pundits to interpret. It is a lovely sound bite to sum up a technology assessment in a poll measuring support or approval.

As someone who has practiced and studied public engagement (for example, see 2013a, 2013b, 2012, 2011a, 2011b, 2011c, 2011d), I would argue that if we truly care about how non-experts perceive an emerging technology – whether for democratic or commercial purposes – we need to focus on more messy forms of measurement and engagement. These might be more expensive, less clear-cut, and perhaps somewhat internally inconsistent, but they will give us more insight. We also must at least entertain the idea that opinion formation may reflect an evaluative process that does not rely only upon “the facts.” My hope would be that such practices would promote further engagement rather than quick numbers to either reassure or provoke existing fields of partisans. 

Jason Delborne is an Associate Professor of Science, Policy, and Society in the Department of Forestry and Environmental Resources, and an affiliated faculty member in the Center on Genetic Engineering and Society, at North Carolina State University.

 

Monday
Jun 23, 2014

They've already gotten the memo! What the public (Rs & Ds) think "climate scientists believe"

I’ve explained in a couple of posts why I think experimental evidence in support of “messaging” scientific consensus is externally invalid and why real-world instances of this “messaging” strategy can be expected to reinforce polarization.

But here is some new evidence (from a new paper, which I'll post this week) that critically examines the premise of the “message 97%” strategy: namely, that political polarization over climate change is caused by a misapprehension of the weight of opinion among climate scientists.

It isn't.

Consider:

That’s what members of the U.S. general public, defined in terms of their political outlooks (based on their score in relation to the mean on a continuous scale running "left" to "right"), “believe” about human-caused global warming.

Old news.

But here are a set of items that indicate what they think “climate scientists believe” (each statement except the first was preceded with that clause):

 

Got it?

Overwhelming majorities of both Republicans and Democrats are convinced that “climate scientists believe” that CO2 emissions cause the temperature of the atmosphere to go up—probably the most basic scientific proposition about climate change.

In addition, overwhelming majorities of both Republicans and Democrats think that “climate scientists believe” that human-caused climate change poses all manner of danger to people and the environment.

Thus, they correctly think that “climate scientists believe” that “human-caused global warming will result in flooding of many coastal regions.” 

But they also incorrectly think that "climate scientists believe" that the melting of the North Pole ice cap will cause flooding. 

Healthy majorities of both Republicans and Democrats correctly think that “climate scientists believe” that global warming increased in the first decade of this century—but mistakenly think that “climate scientists believe” that human-caused climate change “will increase the risk of skin cancer” as well.

Again, these are the responses of the same nationally representative sample of respondents who were highly polarized on the question whether human-caused climate change is happening.

Here’s what’s going on:

1.  Items measuring “belief in human caused global warming” & the equivalent do not measure perceptions of “what people know,” including what they think “climate scientists believe.”

“Belief in human-caused global  warming” items measure “who one is, what side one is on” in an ugly and highly illiberal form of cultural status competition, one being fueled by the idioms of contempt that the most conspicuous spokespeople on both sides use.

As I’ve explained, the responses that individuals give to such items in surveys are as strong an indicator of their political identity as items that solicit self-reported liberal-conservative ideology and political-party self-identification.

What individuals know—or think they know—about climate science is a different matter.  To measure it, one has to figure out how to ask a question that is not understood by survey respondents as “who are you, whose side are you on.”  

Consider, in this regard, the parallel with “belief” in evolution.  When asked whether they believe in evolution, members of the US general population split 50-50, based not on understanding of evolution or science comprehension generally but on the centrality of religion to their cultural identities.

But when one frames the question as what scientists understand the evidence to be on evolution, then the division disappears.  A question worded that way enables relatively religious individuals to indicate what they know about science without having to express a position that denigrates their identities.

Same here: ask the parties who polarize on the identity-expressive question “do you believe in global warming? do you? do you?” what “climate scientists believe,” and you can see that there is in fact bipartisan agreement about what climate scientists think!

2.  Different impressions of what “climate scientists believe” clearly aren’t the cause of polarization on global warming.

The difference between Republicans and Democrats on “what climate scientists believe” is trivial.  It doesn’t come close to explaining the magnitude and depth of the division on “human-caused global warming.”

Otherwise, the debate between Democrats and Republicans would be only over how much to spend to develop new nanotechnology sun screens to protect Americans from the epidemic of skin cancer that all recognize is looming.

Why did anyone ever think otherwise -- that the problem was simply not enough people had been told yet that there is scientific consensus on human-caused climate change?

Because it was plausible to believe that, for a while, given the correlation between responses to items asking survey respondents “do you believe in human-caused climate change” and ones asking them whether they believed “scientific consensus” was consistent or inconsistent with the position they held.

There was always a competing explanation: that survey items on “scientific consensus”—because they are not constructed to disentangle knowledge and identity—were in fact measuring the same thing as the “what do you believe about global warming” questions: namely, who are you, whose side are you on.

A decade’s worth of real-world evidence on the impact of “messaging” consensus has now rendered the former position wholly untenable.

And now here’s some new survey evidence—items constructed to separate the “who are you, whose side are you on” question from the “what do you know” question—that is much more consistent with the alternative hypothesis, and with the real-world and experimental data that support that explanation.

Climate scientists update their models when ten years of evidence suggest that one or another parameter of those models was not right.

Climate science communicators must be willing to do the same—or else they are not genuinely being guided by science in their craft.

3.  Members of the public already get that climate scientists think that we face a huge problem.

The data I’ve presented obviously don't suggest that members of the public know very much about what scientists believe.  They are in fact as likely to be wrong about that as right.

However encouraging it is to see that they understand  CO2 is a “greenhouse gas,” it is painful to realize that they think  CO2 will kill the plants inside a greenhouse.

But the mistakes are all in the same direction: in favor of the answer that “climate scientists believe” global warming poses a huge risk for the environment and human beings in particular.

Basically, items like these are indicators of a latent (unobserved) disposition to attribute to climate scientists the position “we are screwed if we don’t do something.”

That might not be a nuanced and discerning enough view to get you an “A” on a high school “climate science” exam.

But if civic knowledge consists in recognizing the policy significance of what science knows (melting polar ice causes sea level rise) as opposed to various technical details (e.g., that the North Pole ice cap is a big ice cube floating in the Arctic sea & thus won’t displace ocean water when it melts), then there is already more than enough civic understanding to motivate political responsiveness.

The problem—what’s blocking this civic knowledge from being translated into action—is something else.  That’s what science communicators and others need to work on.

4. Consensus “messaging” campaigns don’t address the problem—except to the extent that predictably partisan forms of them make things worse.

If there is already a strong, bi-partisan disposition to view climate science as saying “we are in deep shit trouble, folks,” then “messaging” consensus doesn’t tell people anything they don’t already know.

The reason that ordinary citizens are polarized on doing something about climate change is that such policies have become infused with cultural meanings that signify each group’s contempt for the other. 

Climate change, as Al Gore says, is a “struggle for the soul of America”—and as long as it remains so, people will resist an outcome that says they and people they look up to are “stupid and evil.”

Disentangling climate science from cultural status conflict must be the key objective.

“Messaging” scientific consensus doesn’t do that. On the contrary, it just adds another assaultive idiom – “97% AGREEEEEEE, MORON!!!” –to the already abundant stock of tropes one side uses to express how much contempt it has for its opponent in an ugly, senseless cultural status competition.

5.  Is there any alternative interpretation of these data?

Sure!

Someone could say, reasonably, that asking people what they think “climate scientists believe” is different from measuring whether those people themselves believe what climate scientists have concluded.

I don’t think that's a convincing explanation for the discrepancy between the bipartisan consensus on the “what do climate scientists believe?” items and the “do you believe in human-caused global warming?" items.

As I’ve explained, I think the two are measuring different things, and—sadly—the question that is posed by the “climate change debate” is measuring what the latter items do: who you are, what side are you on?

We need to change the way politics frames the question -- so that it measures what we know, including what we collectively are fully capable of recognizing as science's best understanding of the evidence.

But the point is that even if someone thinks the best explanation for the data is that "Republicans distrust scientists"--another issue that depends on making valid measurements of public opinion--then obviously “messaging” consensus is not a responsive strategy.

Of course, the even bigger point is this: climate-science communicators will get nowhere if they accept interpretations of bits and pieces of evidence that are manifestly inconsistent with the evidence as a whole.           

Saturday
Jun 21, 2014

Authors: to assure no one can read your articles, publish in a Taylor & Francis journal!

They obviously have some exceptionally horrendous licensing policy, since even major university libraries do not have on-line access to T&F periodicals for 1 yr after article publication.

For sure $226 for the whole issue of Human & Ecological Risk Assessment is a great deal, too!

 

 

Friday
Jun 20, 2014

Response: An “externally-valid” approach to consensus messaging

John Cook, science communication scholar and co-author of Quantifying the consensus on anthropogenic global warming in the scientific literature, Environmental Research Letters 8, 024024 (2013), has supplied this thoughtful response to the first of my posts on "messaging consensus." --dmk38

Over the last decade, public opinion about human-caused global warming has shown little change. Why? Dan Kahan suggests cultural cognition is the answer: 

When people are shown evidence relating to what scientists believe about a culturally disputed policy-relevant fact ... they selectively credit or dismiss that evidence depending on whether it is consistent with or inconsistent with their cultural group’s position. 

It’s certainly the case that cultural values influence attitudes towards climate. In fact, not only do cultural values play a large part in our existing beliefs, they also influence how we process new evidence about climate change. But this view is based on lab experiments. Does Kahan’s view that cultural cognition is the whole story work out in the real world? Is that view “externally valid”?

The evidence says no. A 2012 Pew survey of the general public found that even among liberals, there is low perception of the scientific consensus on human-caused global warming. When Democrats were asked “Do scientists agree earth is getting warmer because of human activity?”, only 58% said yes. There’s a significant “consensus gap” even for those whose cultural values predispose them towards accepting the scientific consensus. A “liberal consensus gap”.

My own data, measuring climate perceptions amongst US representative samples, confirms the liberal consensus gap. The figure below shows what people said in 2013 when asked how many climate scientists agree that humans are causing global warming. The x-axis is a measure of political ideology (specifically, support for free markets). For people on the political right (e.g., more politically conservative), perception of scientific consensus decreases, just as cultural cognition predicts. However, the most relevant feature for this discussion is the perceived consensus on the left.

At the left of the political spectrum, perceived consensus is below 70%. Even those at the far left are not close to correctly perceiving the 97% consensus. Obviously cultural cognition cannot explain the liberal consensus gap. So what can? There are two prime suspects. Information deficit and/or misinformation surplus. 

Kahan suggests that misinformation casting doubt on the consensus is ineffective on liberals. I tend to agree. Data I’ve collected in randomized experiments supports this view. If this is the case, then it would seem information deficit is the driving force behind the liberal consensus gap. It further follows that providing information about the consensus is necessary to close this gap. 

So cultural values and information deficit both contribute to the consensus gap. Kahan himself suggests that science communicators should consider two channels: information content and cultural meaning. Arguing that one must choose between the information deficit model or cultural cognition is a false dichotomy. Both are factors. Ignoring one or the other neglects the full picture. 

But how can there be an information deficit about the consensus? We’ve been communicating the consensus message for years! Experimental research by Stephan Lewandowsky, a recent study by George Mason University and my own research have found that presenting consensus information has a strong effect on perceived consensus. If you bring a participant into the lab, show them the 97% consensus then have them fill out a survey asking what the scientific consensus is, then lo and behold, perception of consensus shoots up dramatically. 

How does this “internally valid” lab research gel with the real-world observation that perceived consensus hasn’t shifted much over the last decade? A clue to the answer lies with a seasoned communicator whose focus is solely on “externally valid” approaches to messaging. To put past efforts at consensus messaging into perspective, reflect on these words of wisdom from Republican strategist and messaging expert Frank Luntz on how to successfully communicate a message: 

“You say it again, and you say it again, and you say it again, and you say it again, and you say it again, and then again and again and again and again, and about the time that you're absolutely sick of saying it is about the time that your target audience has heard it for the first time. And it is so hard, but you've just got to keep repeating, because we hear so many different things -- the noises from outside, the sounds, all the things that are coming into our head, the 200 cable channels and the satellite versus cable, and what we hear from our friends.” 

When it comes to disciplined, persistent messaging, scientists aren’t in the same league as strategists like Frank Luntz. And when it comes to consensus, this is a problem. Frank Luntz is also the guy who said: 

“Voters believe that there is no consensus about global warming in the scientific community.  Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly.  Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate, and defer to scientists and other experts in the field.” 

Luntz advocated casting doubt on the consensus for one simple reason. When people understand that scientists agree that humans are causing global warming, then they’re more likely to support policies to mitigate climate change. Confuse people about consensus, and you delay climate action. 

This finding has subsequently been confirmed by studies in 2011 and 2013. But a decade before social scientists figured it out, Luntz was already putting into place strategies to drum home the “no consensus” myth, with the purpose of reducing public support for climate action. 

Reflecting on the disinformation campaign and the social science research into consensus messaging, Ed Maibach at George Mason University incorporates both the “internally valid” social science research and the “externally valid” approach of Frank Luntz:

We urge scientific organizations to patiently, yet assertively inform the public that, based on the evidence, more than 97% of climate experts are convinced that human-caused climate change is happening. Some scientific organizations may argue that they have already done this through official statements. We applaud them for their efforts to date, yet survey data clearly demonstrate that the message has not yet reached or engaged most Americans. Occasional statements and press releases about the reality of human-caused climate change are unfortunately not enough to cut through the fog—it will take a concerted, ongoing effort to inform Americans about the scientific consensus regarding the realities of climate change.

How do we achieve this? Maibach suggests climate scientists should team up with social scientists and communication professionals. What should scientists be telling the public? Maibach advises:

In media interviews, public presentations, and even neighborhood and family gatherings, climate scientists should remember that many people do not currently understand that there is an overwhelming scientific consensus about human-caused climate change. Tell them, and give them the numbers.

The book Made To Stick looks at “sticky” messages that have captured the public’s attention. It runs through many real-world case studies (i.e., externally valid examples) to demonstrate that sticky ideas are simple, concrete, unexpected and tell a story. For a general public who think there is a 50:50 debate among climate scientists, learning that 97% of climate scientists agree that humans are causing global warming ticks many of the sticky boxes.

 

Wednesday
Jun 18, 2014

WSMD? JA! How confident should we be that what one "believes" about global warming, on 1 hand, and political outlooks, on other, measure the same *one* thing?

This is the 983rd--I think; it could also be 613th--episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

@DaneGWendell, snickering at a bar graph (I pretty much agree: bar graphs almost always are a yucky way to graphically report interesting data!)

A couple days ago I posted something on “what belief in global warming measures.” The answer, I said, was one’s group-based sense of self-identity.

To support that basic point I stated that (1) the Industrial Strength Measure of global warming risk perceptions, (2) a standard “belief in” human-caused global warming item, (3) the standard 5-point “liberal-conservative” ideology measure, and (4) the standard 7-point partisan self-identification measure display the psychometric properties of being observable indicators for a single latent variable.

A “latent variable” is something that can’t be observed directly. “Indicators” are things one can observe that correlate with the latent variable, typically because they are caused by it (that’s not strictly necessary; one can model a latent variable as being caused by indicators, or both indicators and latent variables as being caused by some other exogenous variable, etc.).

We can thus use the indicators as a substitute for the latent variable in modeling how the latent variable relates to other quantities of interest. When the indicators are aggregated appropriately, their “noise”—the parts of them that vary independently of their causal connection to the latent variable—cancel out, making the resulting scale or index an even more discerning measure of the latent variable (DeVellis 2012).
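To make the “noise cancellation” point concrete, here is a minimal simulation sketch (mine, for illustration only, not data from any CCP study): four noisy indicators of a single latent disposition, with invented loadings, aggregated into a simple scale.

```python
# Illustrative simulation: aggregating noisy indicators sharpens
# measurement of a latent variable. Loadings & noise levels are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
latent = rng.normal(size=n)                      # the unobserved disposition

# each indicator = latent signal * loading + independent "noise"
loadings = np.array([0.8, 0.7, 0.75, 0.65])
indicators = latent[:, None] * loadings + 0.6 * rng.normal(size=(n, 4))

scale = indicators.mean(axis=1)                  # simple aggregate scale

for j in range(indicators.shape[1]):
    r = np.corrcoef(indicators[:, j], latent)[0, 1]
    print(f"item {j + 1} vs. latent disposition: r = {r:.2f}")
print(f"aggregated scale vs. latent: r = {np.corrcoef(scale, latent)[0, 1]:.2f}")
```

Each item correlates with the latent disposition only as strongly as its loading allows; the aggregate scale, because the items' independent noise partially cancels, correlates more strongly still.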

But before one can do that, one has to be confident the putative indicators really do have the properties one would expect of variables that are measuring the same thing.

I noted that the scale formed by combining the global-warming risk ISM, the “belief” in climate change item, and the two right-left political outlook ones displays a high “Cronbach’s α,” an inter-item correlation statistic that is conventionally understood to measure how reliably the aggregated items (the indicators) can be taken to be measuring any latent variable.
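For anyone who wants to see the mechanics, here is a minimal sketch of the textbook Cronbach’s α computation. The item-response matrix is whatever you feed it; this is not the CCP data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of summed score),
    where `items` is an (n_respondents x k_items) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# usage: cronbach_alpha(responses), where `responses` holds the item scores
```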

But a curious & reflective guy named @DanegGWendell correctly noted—on twitter—that a high α doesn’t by itself guarantee that the aggregated items are measuring a single latent variable. 

Particularly where one has a large number of items, a scale formed by summing item responses can display a reasonably high α when in fact they are measuring two or maybe even more correlated latent variables.

Linear factor analysis is one of the conventional ways to assess the “dimensionality” of a scale. Conceptually, factor analysis estimates how much variance in the responses to the items can be accounted for by positing a single factor or latent variable, how much of the remaining variance can then be accounted for by positing a second, and so forth.

@DaneGWendell was interested in what a factor analysis of the global warming ISM, global warming belief, and political outlook measures would reveal.

Good question & worthy of a WSMD, JA!

To start, here’s the item “correlation matrix.”  The coefficients express polychoric correlation, which is more appropriate than Pearson correlation where, as here, one wants to do a factor analysis of "mixed" data (the ISM is a multi-point rating scale, the political outlook measures multi-point Likert items, and the “belief in” measure a dichotomous item). 

 

Here is the factor analysis of that correlation matrix: 

 

There are a variety of conventional “rules of thumb” used to assess factor structure, all of which suggest that the four items here are appropriately treated as forming a “unidimensional” (i.e., one latent variable) scale.

E.g., the ratio of the “eigenvalues” of the first factor (which explains 90% of the variance in the items) and of the second (which explains almost all the rest) is “greater than 3.”

In addition, the eigenvalue for the second factor is “less than 1.”

Or if we look at a “scree plot,” which plots the eigenvalue of successive factors, there is an “elbow” at 2.
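Here is a hedged sketch of one common way to run these rule-of-thumb checks, using the eigenvalues of the correlation matrix itself. The 4x4 matrix below is made up for illustration; the actual polychoric matrix from this post is not reproduced here.

```python
import numpy as np

# made-up correlations standing in for the actual polychoric matrix
R = np.array([[1.00, 0.75, 0.62, 0.60],
              [0.75, 1.00, 0.65, 0.63],
              [0.62, 0.65, 1.00, 0.80],
              [0.60, 0.63, 0.80, 1.00]])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # largest first
print("eigenvalues:", np.round(eigvals, 2))
print("first/second eigenvalue ratio:", round(eigvals[0] / eigvals[1], 2))
print("second eigenvalue < 1?", eigvals[1] < 1)
# a scree plot is just these eigenvalues plotted against factor number;
# the "elbow" is the point at which the curve flattens out
```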

Maybe you can tell, but I find this way of proceeding, which is exactly what you'll see in most articles or textbooks, pretty mechanical and unmotivated. 

Call me silly, but I think it makes more sense to use judgment in assessing the covariance structure to determine whether the items can plausibly be understood to be measuring only one latent variable. 

Actually, it's been shown by people who are actually thinking about what they are doing and why that treating a two-dimensional scale as one-dimensional often has no adverse effect on the accuracy of that scale as a measure of a single latent variable if the two factors are very closely correlated (e.g., Bolt 1999). 

Also, the various statistical techniques and rules of thumb (pragmatic fit indexes etc.) that researchers typically use to investigate scale "dimensionality" have been described as essentially "completely worthless" (Embretson & Reise 2000, p. 228).

But in fact, that's an unfair appraisal.  They are useful-- but not if used mechanically, as if (to quote Chris Hedges), "the answer to the question" whether a group of items can be treated as observable indicators of a single latent variable were the same as asking, “I mean, what exact buttons do I have to hit?”

"There is utility" (to paraphrase Chris Hedges), in these techniques "in that they may provide supporting evidence that a data set is reasonably dominated by a single common factor" (Embretson & Reise 2000, p. 228).

Or in other words, factor analysis, cronbach's α, and various related statistical measures are tools one can use to equip judgment to do a more reliable job in helping to form valid inferences. 

But treated as substitutes for judgment, they are "completely worthless" (Hedges, of course, 1999, 2000, 2006, 2012, 2014, 2014).

So applying some judgment, what am I trying to say here, and how confident should I be about that given this particular set of observations?

Basically, I’m saying that the 4 items are all measuring the “same thing”—a latent disposition to form coherent stances on matters political. The responses to the “climate change” items are expressions of that disposition—are caused by it—in the same way as responses to the liberal-conservative ideology and party self-identification measures.

The factor analysis is consistent with that. 

But wouldn’t it be more satisfying if I showed this interpretation was more convincing than some alternative plausible hypothesis?

One might think—very reasonably!—that expressions of risk toward environmental hazards reflect a latent disposition, one correlated with but in fact distinct from the sense of identity that one might think political outlooks measure. 

A good alternative hypothesis, then, would be that “climate change” risk perceptions and related factual beliefs are better understood as indicators of some “environmental concern” disposition that is connected to but actually not the "same thing" as the "self-identity" disposition indicated by liberal-conservative ideology and party self-identification.

That alternative hypothesis would have been supported, for sure, if variance in these items had turned out to be more convincingly explained by two discrete factors, one comprising the political outlook items and the other the climate-change items.

But an even more convincing test would be to add some additional “environmental risk concern” items to the “mix,” and then see what happens.

Here is  a covariance matrix that adds to the four items in question ISMs for “artificial food colorings,” “use of artificial sweeteners in diet soft drinks,” and “genetically modified food.”

The signs of the items are consistent with what one might expect if one believed both that environmental risk perceptions will cohere with each other and that political outlooks will correlate with environmental risk perceptions.

But the correlations between the artificial food coloring, artificial sweetener, and GM food ISMs, on the one hand, and the climate-change items, on the other, are much smaller than the correlations between the climate-change items and the political outlook ones!

That makes me think it's less likely that global warming items are measuring the "same thing" as those other risk items than it is that the global warming items are indeed measuring the "same thing" as the political outlook items.

Now consider the factor analysis of these 7 items:

The relative proportions of variance explained by the first two factors—0.6 and 0.3—is much closer than was the case for the two factors in the first analysis (0.9 and 0.1).

By the same token, the rule-of-thumb criteria—ratio of eigenvalues (about 2), the absolute size of the second factor’s eigenvalue (> 1), and the scree plot (“elbow” at 3 rather than 2) all support treating the items as measuring two discrete factors.

More importantly in my judgmental opinion, if we look at the “factor loadings”—essentially the correlations between the factor and the indicated items—we can see that the covariance structure looks as you might expect if there were 2 latent variables being measured here rather than 1.

The first is one consisting of the global warming ISM, the  “belief in” climate change item, the liberal-conservative ideology item, and the partisan self-identification item.

That's a discrete factor corresponding to the hypothesized latent disposition for which those four variables are all indicators.

The second factor loads much less heavily on those four items and much more so on the food coloring, artificial sweetener, and GM food risk ISMs.
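Here is a simulation sketch (again, invented data, not the CCP sample) of the kind of two-factor structure just described: seven indicators generated from two modestly correlated latent traits, an “identity” trait behind the four climate/political items and a separate “food risk” trait behind the other three. A rotated factor analysis then recovers the block pattern in the loadings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 5000
identity = rng.normal(size=n)                                        # "who are you" disposition
food = 0.15 * identity + np.sqrt(1 - 0.15**2) * rng.normal(size=n)   # r ~ .15 with identity

def item(trait, loading):
    """one noisy indicator of the given latent trait"""
    return loading * trait + np.sqrt(1 - loading**2) * rng.normal(size=n)

X = np.column_stack([
    item(identity, 0.8),   # global-warming risk ISM
    item(identity, 0.8),   # "belief in" climate change
    item(identity, 0.8),   # liberal-conservative ideology
    item(identity, 0.8),   # partisan self-identification
    item(food, 0.7),       # artificial food-coloring ISM
    item(food, 0.7),       # artificial-sweetener ISM
    item(food, 0.7),       # GM-food ISM
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))   # rows = items, columns = factor loadings
```

With data generated this way, the first rotated factor loads heavily on the four identity-related items and the second on the three food-risk items, with only small cross-loadings, which is the same qualitative pattern described above.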

We might, then, want to treat the latter three variables as a scale that measures a concern with environmental risks, or maybe with “food risks” in particular.

The Cronbach’s α for a scale that aggregates those three items would be 0.76.  Usually 0.70 is considered “good.”

The Cronbach’s α for a scale formed by aggregating the climate-change and political outlook items that form the first factor would be 0.85. 

I'm happy about that, though, less b/c I cleared some arbitrary statistical threshold than b/c it just is the case that w/ a "low" Cronbach’s α, one won't be able to connect variance in the scale to variance in other quantities of interest.

There is a very modest positive correlation between the scales of 0.15 (p < 0.01).  In other words, the identity disposition explains some of the variance in this “food risk” disposition, but not much (that's kind of interesting, don't you think? but the 14 billion readers of this blog are among the select few who already know that it's not true that GM foods divide the US general public along political lines).

Well there you go!

I’m even more confident than I would have been had I not done these analyses, or had I just done a recipe-book factor analysis of the four items I hypothesized form a single latent “identity” variable and stopped there.

But that’s all I am: more confident than I’d be otherwise.

Also, not as confident as I could be if I were to do even more things that admit of meaningful assessment than the still too recipe-bookish application of factor analysis I just performed.

And for sure not so confident that I wouldn't change my mind if I were shown meaningful evidence that seemed to support a different conclusion, the factor analyses notwithstanding.

The idea that one can perform some set of tests in a mechanical, judgment-free fashion and get “the answer” on questions about how elements of cognition work is commonplace, but wrong.

References

Bolt, D.M. Evaluating the Effects of Multidimensionality on IRT True-Score Equating. Applied Measurement in Education 12, 383-407 (1999).

DeVellis, R.F. Scale development : theory and applications (SAGE, Thousand Oaks, Calif., 2012).

Embretson, S.E. & Reise, S.P. Item response theory for psychologists (L. Erlbaum Associates, Mahwah, N.J., 2000).

 

Wednesday
Jun182014

What is the *message* of real-world "scientific consensus" messaging? Ruminations on the external validity of climate-science-communication studies, part 3

This is part 3 of a series on external validity problems with climate-science-communication studies. The problem, in sum, is that far too many researchers are modeling dynamics different from the ones that occur in the real world, and far too many communicators are being induced to rely on these bad models.

In my first post, I described the confusion that occurs when pollsters assert that responses to survey items that don't reliably or validly measure anything show there's "overwhelming bipartisan support" for something having to do with climate change.

In the second, I described the mistake of treating a laboratory "messaging" experiment as better evidence than 10 yrs of real-world evidence on what happens when communicators expend huge amounts of resources on a "scientific consensus" messaging campaign.

This post extends the last by showing how different real-world scientific-consensus "messaging" campaigns are from anything that is being tested in lab experiments.

All of these are excerpts from a paper I'll post soon -- one that has original empirical data relating to what measures what in the study of climate-change science communication.

* * *

5. “Messaging” scientific consensus


a.  The “external validity” question. * * *

b.  What is the “message” of “97%”?  “External invalidity” is not an incorrect explanation of why “scientific consensus” lab experiments produce results divorced from the observable impact of real-world scientific-consensus “messaging” campaigns. But it is incomplete. 

We can learn more by treating the lab experiments and the real-world campaigns as studies of how people react to entirely different types of messages.  If we do, there is no conflict in their results.  They both show individuals rationally extracting from “messages” the information that is being communicated.

Consider what the “97% scientific consensus” message looks like outside the lab.  There people are likely to "receive" it in the form it takes in videos produced by the advocacy group Organizing for Action.  Entitled “X is a climate change denier,” the videos consist of a common template with a variable montage of images and quotes from “X,” one of two dozen Republican members of Congress (“Speaker Boehner,” “Senator Marco Rubio,” “Senator Ted Cruz”). Communicators are expected to select “X” based on the location in which they plan to disseminate the video. 

The video begins with an angry, perspiring, shirt-sleeved President Obama delivering a speech: “Ninety-seven percent of scientists,” he intones, shaking his fist.  After he completes his sentence, a narrator continues, “There’s not a lot of debate left in this debate: NASA and 97% of the nation’s scientists agree . . .,” a message reinforced by a  cartoon image of a laboratory beaker and the printed message “97% OF SCIENTISTS AGREE.” 

After additional cartoon footage (e.g., a snowman climbing into a refrigerator) and a bar graph (“Events with Damages Totaling $1 billion or More,” the tallest column of which is labeled “Tornadoes . . .”), the video reveals that X is a “CLIMATE CHANGE DENIER.”  X is then labeled “RADICAL & DANGEROUS” because he or she disputes what “NASA” and the “NATIONAL ACADEMY OF SCIENCES” and “97% of SCIENTISTS” (block letters against a background of cartoon beakers) all “AGREE” is true.

What’s the lesson?  Unless the viewer is a genuine idiot, the one thing she already knows is what “belief” or “disbelief in” global warming means. The position someone adopts on that question conveys who he is—whose side he’s on, in a hate-filled, anxiety-stoked competition for status between opposing cultural groups.  

If the viewer of “X is a climate denier” had not yet been informed that the message “97% of scientists agree” is one of the stock phrases used to signal one cultural group’s contempt for the other, she has now been put on notice. It is really pretty intuitive: who wouldn’t be insulted by someone screaming in her face that she and everyone she identifies with “rejects science”?

 The viewer can now incorporate the “97% consensus” trope into her own “arguments” if she finds it useful or enjoyable to demonstrate convincingly that she belongs to the tribe that “believes in” global warming.  Or if she is part of the other one, she can now more readily discern who isn’t by their use of this tagline to heap ridicule on the people she respects.

The video’s relentless use of cartoons and out-of-proportion, all-cap messages invests it with a “do you get it yet, moron?!” motif. That theme reaches its climax near the end of the video when a multiple choice “Pop Quiz!” is superimposed on the (cartoon) background of a piece of student-notebook paper.  “CLIMATE CHANGE IS,” the item reads, “A) REAL,” “B) MANMADE,” “C) DANGEROUS,” or as indicated instantly by a red check mark, “D) ALL OF THE ABOVE.”

The viewer of “X is a climate denier" is almost certainly an expert—not in any particular form of science but in recognizing what is known by science. As parent, health-care consumer, workplace decisionmaker, and usually as citizen, too, she adroitly discerns and uses to her advantage all manner of scientific insight, the validity and significance of which she can comprehend fully without the need to understand it in the way a scientist would.  If one administers a “what do scientists believe?” test after making visible to her the signs and cues that ordinary members of the public use to recognize what science knows, she will get an “A.” 

Similarly, if one performs an experiment that models that sort of reasoning, the hypothesis that this recognition faculty is pervasive and reliably steers the members of culturally diverse groups into convergence on the best available evidence will be confirmed.

But the viewer’s response to the “97% consensus” video is measuring something else.

The video has in fact forced her to become another version of herself. After watching it, she will now deploy her formidable reason and associated powers of recognition to correctly identify the stance to adopt toward the “97% consensus” message that accurately expresses who she is in a world in which the answer to “whose side are you on?” has a much bigger impact on her life than her answer to the question “what do you know?”

 

 

Tuesday
Jun172014

"Messaging" scientific consensus: ruminations on the external validity of climate-science-communication studies, part 2

This is the second installment of a set on "external validity" problems in climate-science communication studies.

"Internal validity" refers to qualities of the design that support drawing inferences about what is happening in the study. "External vality" refers to qualities of the design that support drawing inferences from the study to the real-world dynamics it is supposed to be modeling.

The external validity problems I want to highlight don't affect only the quality of studies. They affect the quality of the practice of climate-science communication, too, because communicators are relying on externally invalid studies for guidance.

The last entry concerned the use of surveys to measure public opinion on climate change.

This one addresses experimental and other evidence used to ground "social marketing campaigns" that feature scientific consensus.  It is also only the first of two on "messaging" scientific consensus; the next, which I'll post "tomorrow," will examine real-world "messaging" that purports to implement these study findings.

This post, like the last, is from a paper that I'm working on and will post soon (one with some interesting new data, of course!)

* * *

5. “Messaging” scientific consensus

a. The “external validity” question. On May 16, 2013, the journal Environmental Research Letters published an article entitled “Quantifying the consensus on anthropogenic global warming in the scientific literature.” In it, the authors reported that they had reviewed the abstracts of 12,000 articles published in peer-reviewed science journals between 1991 and 2011 and found that “among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming” (Cook et al. 2013).

“This is significant,” the lead author was quoted as saying in a press statement issued by his university, “because when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it.” “Making the results of our paper more widely-known,” he continued, “is an important step toward closing the consensus gap”—between scientists who agree with one another about global warming and ordinary citizens who don’t—“and increasing public support for meaningful climate action” (Univ. Queensland 2013).

The proposition that disseminating the results of the ERL study would reduce public conflict over climate change was an empirical claim not itself tested by the authors of the ERL paper.  What sorts of evidence might one use (or have used) to assess it?

Opinion surveys are certainly relevant.  They show, to start, that members of the U.S. general public—Republican and Democrat, religious and nonreligious, white and black, rich and poor—express strongly pro-science attitudes and hold scientists in high regard (National Science Foundation 2014, ch. 7; Pew 2009). In addition, no recognizable cultural or political group of consequence in American political life professes to disagree with, or otherwise dismiss the significance of, what scientists have to say about policy-relevant facts. On the contrary, on myriad disputed policy issues—from the safety of nuclear power to the effectiveness of gun control—members of the public in the U.S. (and other liberal democratic nations, too) indicate that the position that predominates in their political or cultural group is the one consistent with scientific consensus (Kahan, Jenkins-Smith & Braman 2011; Lewandowsky, Gignac & Vaughan 2012).

Same thing for climate change. As the ERL authors noted, surveys show a substantial proportion of the U.S. general public rejects the proposition that there is “scientific consensus” on the existence and causes of climate change. Indeed, the proportion that believes there is no such consensus is essentially the same as the proportion that says it does not “believe in” human-caused global warming (Kahan et al. 2011).

So, the logic goes, all one has to do is correct the misimpression of that portion of the public. Members of the public very sensibly treat as the best available evidence what science understands to be the best available evidence on facts of policy significance. Thus, “when people understand that scientists agree on global warming, they’re more likely to support policies that take action on it” (Univ. Queensland 2013).

But there is still more evidence, of a type that any conscientious adviser to climate-science communicators would want them to consider carefully. That evidence bears directly on the public-opinion impact of “[m]aking the results” of studies like the ERL one “more widely-known” (Univ. Queensland 2013).

The ERL study was not the first one to “[q]uantify[] the consensus on anthropogenic global warming”; it was at least the sixth, the first of which was published in Science in 2004 (Oreskes 2004; Lichter 2008; Doran & Zimmerman 2009; Anderegg et al. 2010; Powell 2012).  Appearing on average once every 18 months thereafter, these studies, using a variety of methodologies, all reached conclusions equivalent to the one reported in the ERL paper.

Like the ERL paper, moreover, each of these earlier studies was accompanied by a high degree of media attention. 

Indeed, the “scientific consensus” message figured prominently in the $300 million social marketing campaign by Alliance for Climate Protection, the advocacy group headed by former Vice President Al Gore, whose “Inconvenient Truth” documentary film and book both prominently featured the 2004 “97% consensus” study published in Science (which was characterized by Gore as finding that "0%" of peer-reviewed climate science articles disputed the human contribution to global warming). 

An electronic search of major news sources finds over 6,000 references to “scientific consensus” and “global warming” or “climate change” in the period from 2005 to May 1, 2013.

There is thus a straightforward way to assess the prediction that “[m]aking the results” of the ERL study “more widely-known” can be expected to influence public opinion.  It is to examine how opinion varied in relation to efforts to publicize these earlier “scientific consensus” studies.

Figure 9 plots the proportion of the U.S. general public who selected “human activities” as opposed to “natural changes in the environment” as the main cause of “increases in the Earth’s temperature over the last century” over the period 2003 to 2013 (in this Gallup item, there is no option to indicate rejection of the premise that the earth’s temperature has increased, a position a majority or near majority of Republicans tend to select when it is available). The years in which “scientific consensus” studies appeared are indicated on the x-axis, as is the year in which “Inconvenient Truth” was released.


Nothing happened.

Or, in truth, a lot happened.  Many additional important scientific studies corroborating human-caused global warming were published during this time.  Many syntheses of the data were issued by high-profile institutions in the scientific community, including the U.S. National Academy of Sciences, the Royal Society, and the IPCC, all of which concluded that human activity is heating the planet. High-profile, and massively funded campaigns to dispute and discredit these sources were conducted too.  People endured devastating heat waves, wild fires, and hurricanes, punctuated by long periods of weather normality.  The Boston Red Sox won their first World Series title in over eight decades.

It would surely be impossible to disentangle all of these and myriad other potential influences on U.S. public opinion on global warming.  But one doesn’t need to do that to see that whatever the earlier scientific-consensus "messaging" campaigns added did not “clos[e] the consensus gap” (Univ. Queensland 2013). 

Why, then, would any reflective, realistic person counsel communicators to spend millions of dollars to repeat exactly that sort of “messaging” campaign? 

The answer could be laboratory studies. One (Lewandowsky et al. 2012), published in Nature Climate Change, reported that the mean level of agreement with the proposition “CO2 emissions cause climate change” was higher among subjects exposed to a “97% scientific consensus” message than among subjects in a control condition (4.4 vs. 4.0 on a 5-point Likert scale).  After being advised that “97% of scientists” accept that CO2 emissions increase global temperatures, those subjects also formed a higher estimate of the proportion of scientists who believe that (88% vs. 67%).

Is it possible to reconcile this result with the real-world data on the failure of previous “scientific consensus” messaging campaigns to influence U.S. public opinion?  The most straightforward explanation would be that the NCC experiment was not externally valid—i.e., it didn’t realistically model the real-world dynamics of opinion-formation relevant to the climate change dispute. 

The problem is not the sample (90 individuals interviewed face-to-face in Perth, Australia). If researchers were to replicate this result using a U.S. general population sample, the inference of external invalidity would be exactly the same. 

For “97% consensus” messaging experiments to justify a social marketing campaign featuring studies like the ERL one, it would have to be reasonable to believe that what investigators are observing in laboratory conditions—ones created specifically for the purpose of measuring opinion—tells us what is likely to happen when communicators emphasize the “97% consensus” message in the real world.

Such a strategy has already been tried in the real world.  It didn’t work.

There are, to be sure, many more things going on in the world, including counter-messaging, than are going on in a “97% consensus” messaging experiment.  But if those additional things account for the difference in the results, then that is exactly why that form of experiment must be regarded as externally invalid: it is omitting real-world dynamics that we have reason to believe, based on real-world evidence, actually matter in the real world.

On this account, the question to be investigated is not whether a “97% consensus” messaging campaign will influence public opinion but why it hasn’t over a 10-year trial.  The answer, presumably, is not that members of the public are divided on whether they should give weight to the conclusions scientists have reached in studying risks and other policy-relevant facts. Those on both sides of the climate change debate believe that the other side’s position is the one inconsistent with scientific consensus.

The ERL authors’ own recommendation to publicize their study results presupposes public consensus in the U.S. in support of using the best available scientific evidence in policymaking.  The advice of those who continue to champion “97% consensus” social marketing campaigns does, too. 

So why have all the previous highly funded efforts to make “people understand that scientists agree on global warming” so manifestly failed to “close the consensus gap” (Univ. Queensland 2013)?

There are studies that seek to answer exactly that question as well.  They find that culturally biased assimilation—the tendency of people to fit their perceptions of disputed facts to ones that predominate in their cultural group—applies to their assessment of evidence of scientific consensus just as it does to their assessment of all other manner of evidence relating to climate change (Corner, Whitmarsh & Xenias 2012; Kahan et al. 2011).

When people are shown evidence relating to what scientists believe about a culturally disputed policy-relevant fact (e.g., is the earth heating up? is it safe to store nuclear wastes deep underground? does allowing people to carry hand guns in public increase the risk of crime—or decrease it?), they selectively credit or dismiss that evidence depending on whether it is consistent with or inconsistent with their cultural group’s position. As a result, they form polarized perceptions of scientific consensus even when they rely on the same sources of evidence.

These studies imply misinformation is not a decisive source of public controversy over climate change.  People in these studies are misinforming themselves by opportunistically adjusting the weight they give to evidence based on what they are already committed to believing.  This form of motivated reasoning occurs, this work suggests, not just in the climate change debate but in numerous others in which these same cultural groups trade places being out of line with the National Academy of Sciences’ assessments of what “expert consensus” is.

To accept that this dynamic explains persistent public disagreement over scientific consensus on climate change, one has to be confident that these experimental studies are externally valid.  Real world communicators should definitely think carefully about that.  But because these experiments are testing alternative explanations for something we clearly observe in the real world (deep public division on climate change), they don’t suffer from the obvious defects of studies that predict we should already live in world we don’t see.

Part 3

References

Anderegg, W.R., Prall, J.W., Harold, J. & Schneider, S.H. Expert credibility in climate change. Proceedings of the National Academy of Sciences 107, 12107-12109 (2010).

Cook, J., Nuccitelli, D., Green, S.A., Richardson, M., Winkler, B., Painting, R., Way, R., Jacobs, P. & Skuce, A. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters 8, 024024 (2013).

Corner, A., Whitmarsh, L. & Xenias, D. Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Climatic Change 114, 463-478 (2012).

Doran, P.T. & Zimmerman, M.K. Examining the Scientific Consensus on Climate Change. Eos, Transactions American Geophysical Union 90, 22-23 (2009).

Farnsworth, S.J. & Lichter, S.R. Scientific assessments of climate change information in news and entertainment media. Science Communication 34, 435-459 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Lewandowsky, S., Gignac, G.E. & Vaughan, S. The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change 3, 399-404 (2012).

Lichter, S.R. Climate Scientists Agree on Warming, Disagree on Dangers, and Don't Trust the Media's Coverage of Climate Change. Statistical Assessment Service, George Mason University (2008).

National Science Foundation. Science and Engineering Indicators (Wash. D.C. 2014), available at http://www.nsf.gov/statistics/seind14/index.cfm/chapter-7/c7s3.htm.

Oreskes, N. The scientific consensus on climate change. Science 306, 1686-1686 (2004).

Pew Research Center for the People & the Press. Public praises science; scientists fault public, media (Pew Research Center, Washington D.C., 2009).

Powell, J. Why Climate Deniers Have No Scientific Credibility - In One Pie Chart. DESMOGBLOG.com (2012).

Univ. Queensland. Study shows scientists agree humans cause global-warming (2013). Available at http://www.uq.edu.au/news/article/2013/05/study-shows-scientists-agree-humans-cause-global-warming.

Monday
Jun162014

External validity of climate-science-communication studies: ruminations part 1

The following is an excerpt from a paper I'm writing.  One of the paper's central themes is external validity.

Roughly, "internal validity" refers to the quality of a design that warrants drawing inferences from the results to what is going on in the study. "External validity" refers to the quality of a design that warrants drawing infernces from the results of a study to the real-world phenomenon the study is supposed to be engaging or modeling.  

I'm convinced that the study and practice of climate-science communication both reflect insufficient attention to external validity issues, and that this disregard is significantly dissipating the effectiveness of--wasting the resources committed to--communicating climate science. 

I'll post the paper in the near future. It has some cool data in it!

But in the meantime, I'll post a few bits -- somewhere between 2 and 17 -- as blog posts.

* * *

3. What does “belief in” global warming measure?

Just as we can use empirical methods to determine that “belief in evolution” measures “who one is” rather than “what one knows,” so we can use these methods to assess what “belief in global warming” measures. An illuminating way to start is by seeing what a valid measure of “belief in global warming” looks like.

Figure 3 presents a scatter plot of the responses to a survey item that asked respondents (1800 members of a nationally representative sample) to rate “how much risk … global warming poses to human health, safety, or prosperity” in “our society.” The item, which I’ll call the “Industrial Strength Measure” (ISM), used an eight-point response scale, running from “none at all” to “extremely high risk,” with each point in between assigned a descriptive label.  The survey participants are arrayed along the y-axis in relation to their score on “Left_Right,” a reliable (α = 0.78) composite scale formed by aggregating their responses to a seven-point “party self-identification” measure (“Strong Republican” to “Strong Democrat”) and a five-point “ideology” one (“Very liberal” to “Very conservative”). The color-coding of the observations—orange to red for higher risk ratings, yellow for middling ones, and green to blue for lower ones—helps to reveal the strength of the correlation between the global-warming risk ISM and left-right political outlooks.
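For illustration, here is a minimal sketch of how such a composite might be formed and the correlations discussed below computed. It assumes the two outlook items are standardized and averaged (the text says only that they are "aggregated"), and the file and column names are hypothetical stand-ins rather than the actual variable names in the dataset:

```python
import pandas as pd

df = pd.read_csv("survey.csv").dropna()  # placeholder; columns below are hypothetical names

z = lambda x: (x - x.mean()) / x.std(ddof=1)

# Left_Right: mean of the standardized 7-point party-ID and 5-point ideology items,
# coded so that higher values indicate more conservative / more Republican responses.
df["left_right"] = df[["party_id", "ideology"]].apply(z).mean(axis=1)

# Global-warming ISM vs. political outlooks, and the benchmark correlation
# between the two outlook items themselves.
print("gw_risk x left_right:", round(df["gw_risk"].corr(df["left_right"]), 2))
print("party_id x ideology :", round(df["party_id"].corr(df["ideology"]), 2))
```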

Exactly how “strong,” though, is that correlation?  An “r” of “-0.65” might intuitively seem pretty big, but determining its practical significance requires a meaningful benchmark.  As it turns out, subjects’ responses to the party self-identification and liberal-conservative ideology items are correlated to almost exactly the same degree (r = 0.64, p < 0.01). So in this nationally representative sample, perceptions of the risk of global warming are as strongly associated with respondents’ right-left political outlooks as the indicators of their political outlooks are with one another.

We could thus combine the global-warming ISM with the party-identification and liberal-conservative ideology items to create an even more reliable political outlook scale (α = 0.81), one with which we could predict with even greater accuracy people's positions on issues like Obamacare and Roe v. Wade.  From a psychometric perspective, all three of these items are measuring the same thing—a latent (unobserved) disposition that causes different groups of people to adopt coherent sets of opposing stances on political matters (DeVellis 2012).

The global-warming ISM has another interesting property, one it shares with ISMs for other putative hazards: it coheres very strongly with professed beliefs about the facts relevant to assessing the specified risk source (Dohmen et al. 2011; Ganzach et al. 2008; Weber et al. 2002). “Cronbach’s α” is a conventional measure of scale reliability that ranges from 0.0 to 1.0; a score of 0.70 is generally regarded as signifying that a set of indicators displays the requisite degree of intercorrelation necessary to qualify as a measure of some underlying latent variable. When the global-warming ISM is combined with items measuring whether people believe that “average global temperatures are increasing,” that “[h]uman activity is causing global temperatures to rise,” and that global warming will result in various “bad consequences for human beings” if not “counteracted,” the resulting scale has a Cronbach’s α of 0.95.  These “belief” items, then, can also be viewed as measuring the same thing as the “risk seriousness” item—viz., a latent disposition to form coherent sets of beliefs about the facts and consequences of climate change.

Not surprisingly—indeed, as a matter of simple logic—there is a comparably high degree of coherence between “belief in climate change” and political outlooks. In this sample, some 75% of the individuals whose scores placed them to the “left” of the mean on the political outlook scale indicated that they believe human activity is the primary source of global warming. Only 22% of those whose scores placed them to the “right” of the mean indicated that they believed that, and 58% of them indicated that they did not believe there was “solid evidence that the average temperature on earth has been getting warmer over the past few decades.” These figures are in accord with ones consistently reported by scholars and public opinion research centers for over a decade.

Nevertheless, advocacy groups regularly report polls that paint a very different picture. “A new study,” their press releases announce, shows that “an overwhelming majority of Americans”—“Blue State and Red ones alike,” enough “to swing” an upcoming presidential election, etc.—“support taking action” immediately to combat global warming. Disturbingly, the producers of such polls do not always release information about the survey’s wording or the (mysteriously characterized) methods used to analyze them. But when they do, informed observers point out that the questions posed were likely to confuse, mislead, or herd the survey respondents toward desired answers (Kohut 2010).

Given the source of these surveys, one could infer that they reflect an advocacy strategy aimed at fostering “overwhelming majority support” for “action on climate change” by insisting that such support already exists. If so, the continued generation of these surveys itself displays determined inattention to over a decade’s worth of real-world evidence showing that advocacy polls of this sort have failed to dissipate the deep partisan conflict measured by various straightforward items relating to global warming.

Indeed, that is the key point: items that show “an overwhelming majority of Americans” believe or support one thing or another relating to climate change are necessarily not measuring the same thing as items that cohere with ISM. The question, then, is simply which items—ones that cohere with one another and with ISM and that attest to polarization over climate change, or ones that do not cohere with anything in particular and that report a deep bipartisan consensus in favor of “taking action”—are more meaningfully tracking the real-world phenomena of interest. Unless one is prepared to conclude that the latent or unobserved disposition that causes coherent responses to political outlook and various global warming “belief” and risk-perception items is irrelevant for making sense of public opinion on climate change in the United States, it follows that the survey questions that do not cohere with those items are the irrelevant ones.

Serious opinion scholars know that when public-policy survey items are administered to a general population sample, it is a mistake to treat the responses as valid and reliable measures of the particular positions or arguments those items express.  One can never be sure that an item is being understood as one intended. In addition, if, as is so for most concrete policy issues, the items relate to an issue that members of the general population have not heard of or formed opinions on, then the responses are not modeling anything that people in the general population are thinking in their everyday world; rather they are modeling only how such people would respond in the strange, artificial environment they are transported into when a pollster asks them to express positions not meaningfully connected to their lives (Bishop 2005; Shuman 1998).

Of course many public policy issues are ones on which people have reflected and adopted stances of meaning and consequence to them.  But even in that case, responses to survey items relating to those issues are not equivalent to statements or arguments being asserted by a participant in political debate.  The items were drafted by someone else and thrust in front of the survey participants; their responses consist of manifestations of a pro- or con- attitude, registered on a coarse, contrived metric.

Because the response to any particular item is at best only a noisy indicator of that attitude, the appropriate way to confirm that an item is genuinely measuring anything, and to draw inferences about what that is, is to show that responses to it cohere with other things (responses to other items, behavior, performance on objective tests, and so forth) the meaning of which is already reasonably understood. Whatever that item does measure, moreover, can be measured more precisely when that item is appropriately combined into a scale with others that measure that same thing (Bishop 2005; Zaller 1992; Berinsky & Druckman 2007; Gliem & Gliem 2003).
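One simple way to operationalize that coherence check is to compute item-rest correlations, i.e., how strongly each candidate item correlates with a scale formed from the remaining ones. A minimal sketch, again with hypothetical file and variable names standing in for the real ones:

```python
import pandas as pd

df = pd.read_csv("survey.csv").dropna()  # placeholder file name
candidate_items = df[["gw_risk", "gw_belief", "ideology", "party_id"]]  # hypothetical names

z = lambda x: (x - x.mean()) / x.std(ddof=1)
std_items = candidate_items.apply(z)

# Item-rest correlation: each item against the mean of all the *other* standardized items.
for col in std_items.columns:
    rest = std_items.drop(columns=col).mean(axis=1)
    print(f"{col}: item-rest r = {std_items[col].corr(rest):.2f}")
```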

The striking convergence of items measuring perceptions of global warming risk and like facts, on the one hand, and ones measuring political outlooks, on the other, suggests they are all indicators of a single latent variable.  The established status of political outlooks as indicators of cultural identity supports the inference that that is exactly what that latent variable is. Indeed, the inference can be made even stronger by replacing or fortifying political outlooks with even more discerning cultural identity indicators, such as cultural worldviews and their interaction with demographic characteristics such as race and gender: the resulting latent measures of identity will be even more strongly correlated with climate change risk perceptions and related attitudes (McCright & Dunlap 2012; Kahan et al. 2012).  In sum, whether people “believe in” climate change, like whether they “believe in” evolution, expresses who they are.

Part 2

Part 3

References 

Berinsky, A.J. & Druckman, J.N. The Polls—Review: Public Opinion Research and Support for the Iraq War. Public Opin Quart 71, 126-141 (2007).

Bishop, G.F. The Illusion of Public Opinion : Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

DeVellis, R.F. Scale development : theory and applications (SAGE, Thousand Oaks, Calif., 2012).

Dohmen, T., Falk, A., Huffman, D., Sunde, U., Schupp, J. & Wagner, G.G. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011).

Ganzach, Y., Ellis, S., Pazy, A. & Ricci-Siag, T. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008).

Gliem, J.A. & Gliem, R.R. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. (Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, The Ohio State University, Columbus, OH, 2003). Available at https://scholarworks.iupui.edu/bitstream/handle/1805/344/Gliem+&+Gliem.pdf?sequence=1.

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kohut, A. Views on climate change: What the polls show. N.Y. Times A22 (June 13, 2010), available at http://www.nytimes.com/2010/06/14/opinion/l14climate.html?_r=0. 

McCright, A.M. & Dunlap, R.E. Bringing ideology in: the conservative white male effect on worry about environmental problems in the USA. J Risk Res, 1-16 (2012).

Shuman, H. Interpreting the Poll Results Better. Public Perspective 1, 87-88 (1998).
 

Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002).

Zaller, J.R. The Nature and Origins of Mass Opinion (Cambridge Univ. Press, Cambridge, England, 1992).

Wednesday
Jun112014

"Resolved: Climate change is not a 'crisis'": Using cultural cognition research in high school ecology class

[from Dan Kahan: The following is a guest post on a super important topic: teaching secondary-school students climate science in a polluted science communication environment. In today's society, opposing stances on climate change have taken on the character of badges of membership in, and loyalty to, competing cultural groups. No one should have to choose between knowing what's known to science and being who they are; certainly kids can't be expected to learn effectively when put in that position. But talented, dedicated science educators have faced the challenge of dispelling this conflict and have overcome it in other settings. It won't be easy to do here, but I'm confident they'll succeed--and that all of us will learn something in the process about how to disentangle the toxic knot between cultural identity and positions on climate change.  Read this--a report from a reflective and passionate science educator on his encounter with this dilemma--& you'll see why I'm so optimistic!]

By Peter Buckland

What do you do when you get an email from a parent who’s worried you’re teaching climate alarmism?

Peter Buckland, displaying the sense of wonder that he is dedicated to enabling his students to experience

In my second year as Director of Sustainability at Kiski, I was tasked with teaching two sections of Ecology. I designed the course to merge ecological concept mastery, major human-environmental issues, a campus arboretum and organic gardens, and opportunities for reflection.  Given the world as it is, I had to do a fairly in-depth unit on the science of the climate and climate change.

When I was hired, I told my interviewers that I was likely to encounter some resistance to scientifically-based climate education. The national politics and the personal convictions of a sizable swath of conservative Americans and their vociferousness indicated we’d get a phone call, email, or some grousing. At Kiski, many of my students come from white middle- to upper-class conservative families whose political alliances virtually guarantee they will doubt anthropogenic global warming or outright deny it as liberal garbage. I knew my audience and the potential resistance and I also knew I had administrators and a science department chair who backed me up.

I focused on the scientific consensus and how it has been achieved. We did labs and activities using radiative forcing data from NOAA and historical regional land and ocean temperature maps from the Australian Bureau of Meteorology. They read the most recent IPCC AR 5 “Headline Statement” and other current materials. One section got to Skype with Dr. Michael E. Mann, Director of Penn State’s Earth Systems Science Center and author of The Hockey Stick and the Climate Wars, who spoke about current findings in climate science and not much on politics or lawsuits. All in all, the unit was shaping up well.

Then, at the end of the unit, I received an email from a concerned parent. He was concerned that I was being imbalanced in my teaching and courting some kind of climate alarmism. As a geologist, he had done some personal research and discerned that our climate was changing, that there was some anthropogenic forcing, but that climate change was not as bad as some people were making it out to be and that it certainly wasn’t a catastrophe. He offered to come to my class to balance the scales with a presentation of his own.

I admit, I was initially insulted and started a keyboard barrage. Yes! Bludgeon him with scientific data, authoritative scientific organizations, and self-righteous ire. After a few minutes though, I realized my strategy would backfire. My awareness of research about motivated reasoning and identity protection overrode my impulses. The emailing parent seemed to be in the “Doubtful” or maybe “Dismissive” camp of the Yale Six Americas study. Working from Lewandowsky’s and Cook’s The Debunking Handbook, I knew I should avoid emphasizing falsehoods, prevent an overkill backfire from a barrage of information (so hard), and do what I could to stop a worldview backfire, Dan Kahan’s focus at Cultural Cognition. Caution was in order. This was an opportunity.

When I wrote back I thanked the parent for being interested in his son’s education, his interest in the topic, and then explained my course’s logic. First, I work to represent current science accurately. Second, I am not an arbiter of my students’ values. While I am pretty alarmed about climate change’s scale and pace, it is not my place to indoctrinate my students into a political or emotional faction but to invite them to reflect on the state of the world and their own lives and values (how American of me). Third, and most importantly, I would provide them with the opportunity to develop their own views on the matter by taking positions in a mock UNFCCC deliberation where they could determine their thresholds for risk or whether or not climate change is a catastrophe. I would end up changing that format, though, because I wanted to get around my students’ motivated reasoning and develop their scientific literacy, their moral literacy, and their communication and analytical skills.

I split my classes into small groups and had them take positions on the following proposition: “Climate change is not a crisis.” We followed the format of Intelligence Squared that airs on National Public Radio. I instructed my students to incorporate well-grounded scientific information that reflects current understanding and make a clear argument of definition about what does or does not constitute a crisis. I chose this format from among three options for a few strategic reasons. First, the debate was not “Climate change is real: yes or no.” We could not deal in disproven or junk science. Second, by debating the proposition in the negative – “not a crisis” – I avoided an alarmist’s position. Third, equal numbers of students would have to be on one side or the other in groups I created deliberately.  This way I could put very concerned or alarmed students for the proposition and dismissive or doubtful students against it, thereby inviting them to reason in ways that could counter their own motivated reasoning. Fourth, at no point did I tell my students to be “objective,” “unbiased,” “rational,” or “open-minded.” While I might like them to do that, entreating them to be so could backfire as research has indicated it does. We like to believe we are open-minded and those people over there are the close-minded unrealistic ones. My set-up could avert some of that problem.

This model has at least two potential flaws. Someone could accuse me of a certain kind of censorship or choice editing. I have to edit my students’ choices in class. All teachers are, to some extent, editors. To master trophic relationships in the soil I would not tell my students that it is okay to entertain the notion that the soil food web is a hoax because the first law of thermodynamics is wrong. Similarly, any responsible understanding of the science of climate change at this point will not entertain hoax arguments.

A second problem may be more insidious. Simply inviting group deliberation could entrench people even further. At Cultural Cognition, Dan Kahan writes, “Far from counteracting this effect, deliberation among diverse groups is likely to accentuate polarization.  By revealing the correlation between one or another position and one or another cultural style, public debate intensifies identity-protective pressure on individuals to conform to the views dominant within their group.” Earlier I wrote that I had initially conceived of this project using a mock UNFCCC framework. I decided not to use that format to skirt worldview-threatening messages. As Kahan et al. show in some of their research, hierarchical individualists (who map fairly well onto American conservatives) doubt climate change science more if it’s couched in terms of carbon regulations. Because the UNFCCC deliberations focus so heavily on regulation of markets and perceived United Nations’ interference, I tried to dodge that landmine.

How did they debate? They crafted arguments around what constitutes a crisis. One group that agreed with the proposition – climate change is not a crisis – said that on the scale of crises there is only so much room for big crises; global poverty, AIDS, and wars held up that space. Climate change may be a problem but it is so slow that right now it is not urgent. But another group argued it is a crisis because of the enormous costs to disaster-prone areas, the cost to insure them, and the costs to reinsurance. Using MunichRe and SwissRe as sources they made a powerful argument against the proposition. Yet another used techno-optimism a la Ray Kurzweil to predict that solar power will eclipse fossil fuels in the next couple of decades, thereby eliminating the largest emissions sources. But those against the proposition showed that threats to the carbon cycle were so severe already that major disruptions in ecosystem services and extinction were real, present, and harming people. Sadly, one group did not follow directions and used thoroughly debunked and scientifically invalid arguments. A teacher can’t control everything.

The emailing parent’s son was a star too. He was placed on a team that argued that climate change is a crisis and he spoke for the group (each group had a designated speaker). During the question and answer session that followed each team’s statement, he answered questions clearly and asked his opponents intelligent and pointed questions. At one point, after pointing out the denier groups’ inaccuracies, their leader looked red in the face and asked, Do you think it’s a crisis?

He said something to the effect of, No. But that’s not the point. I’m arguing a position and doing it the best I can. But there are facts and we shouldn’t be afraid of facts. I don’t have to think this is a crisis to believe it’s real.

I felt pretty satisfied as a teacher at that moment. He and his group had formulated a scientifically-informed and sensible argument with which he did not agree. And in so doing, he showed that he could master scientific information he might have rejected were it presented in a way that it would have threatened his and his family’s worldview.

Just last week, he graduated and I had the chance to talk to his dad, the emailing parent. He and his son had talked about the debate. I told him I had created it in part because of his email. And he was pleased with that and the experience it gave his son. He agreed that, whatever we might think of climate change’s status as a crisis or not, we should all master sound information and concepts and learn to place them into coherent messages. It was, once again, very satisfying to have some evidence that developing the assignment using my understanding of identity protection and motivated reasoning had worked with at least one student.

It seems to me that I might have made some educational errors had I not known and thoughtfully strategized from the cultural cognition and related research. I encourage others to craft similar strategies to develop their students’ and the public’s/publics’ climate literacy. By attending to who we are, we have better chances at dealing with reality together. With the world’s climate changing as rapidly as it is, we need to use the best strategies we can so that we can be better ecological citizens.

Peter Buckland is completing his two-year term as Director of Sustainability at the Kiski School. He has worked on energy, waste, land, and educational projects ranging from gardens and an arboretum to a comprehensive energy strategy and teacher development. He sees his purpose as making possibilities for all people to become better ecological citizens, people who “recognizes the importance and interconnectivity of all living beings, human and non-human…[who] understands that she or he is responsible to all beings and actively seeks sustainable futures for them” (Kissling and Barton, 2013). He is finishing his doctorate in Educational Theory and Policy at Penn State University.

Monday
Jun092014

Got facts? The boring, ignorant, anti-liberal, science-communication-environment polluting "who is more anti-science" game

I haven’t really been paying that much attention but I gather that some attention-seeking talking head—or maybe multiple of them—have decided that their own best available current strategy for getting people to pay attention to them instead of some other attention-seeking, know-nothing talking head who also has nothing important to say is to recycle the evidence-free assertion that the “left” is anti-science b/c of “its” supposed view that vaccines cause autism.

I guess the sheer tedium of shooting down such nonsense is too self-indulgent a reason to stop shooting down such views, given how much harm this sad feature of our political discourse can inflict.

It's important to clear such bullshit from the pathways of collective opinion formation first, b/c our society really does need an accurate understanding of why perceptions of risk and similar beliefs about policy-relevant facts sometimes (but very very infrequently) become entangled in antagonistic cultural meanings, and second, b/c the false assertion that one or another cultural group is “unreasoning,” “anti-science” etc. is almost certainly one of the mechanisms by which such entanglements get perpetuated.

So…

No, it’s not the case that "liberals" are more likely to be anti-vaccine than "conservatives."

from CCP Risk Perception Studies, No. 17

And guess what? People on the “right” are  not meaningfully “anti-vaccine” either, as you can see (if you can't see that in the figure, what does that mean?). 

And --despite the boring boring boring "growing anti-science sensibility" trope to the contrary-- there’s no meaningful correlation between either climate-change skepticism or disbelief in evolution, on the one hand, and the perception that childhood vaccines endanger public health, on the other!

So next time you want to “sound smart” without knowing anything, please don’t make that claim either.  Because the truth-- in case that really matters to you & isn't something you just say you care about for effect--is that there is widespread cultural consensus in the US that universal vaccination is a very valuable thing.

Also, while we are on the topic: No, there’s no meaningful correlation between holding “liberal”—or “conservative”—views and being concerned about GM food risks!  In fact, the vast majority of ordinary people don’t have any particular opinion on GM foods whatsoever!


And in case you were even thinking of going there, don’t:

There's no meaningful correlation between believing raw milk is healthy and conventional political outlooks in the general population! 

Same for risks of high-voltage power lines and cellphones (and fluoridation of water and medical x-rays, etc.), so if you think holding forth on those w/o knowing anything will make you sound smart, just don't. Okay?

Yes, there are small groups of people who believe absurd, unscientific things about vaccines, GM foods, pasteurization of milk & all these other things.

If you are genuinely worried about people spreading misinformation about consequential matters-- & you should be on vaccines, GM foods etc.-- great! By all means call them out on it.

Just don't say that what weird unrepresentative groups are doing is evidence of a "creeping anti-science" sensibility in the public or of the hostility toward science of large communities of Americans who hold completely ordinary political or cultural outlooks.

We live in a nation with 3.2x10^8 people—so you can find large, in absolute terms, numbers of people who believe anything (e.g., that “contrails” are some sort of mind-controlling gas being sprayed by the CIA etc.).

People who believe those weird-ass things might have similar views on politics (I have no idea, frankly, whether people who are anti-vax hold similar political outlooks).

But it is a logical fallacy to infer that therefore all other people who hold those views on politics generally believe in whatever weird things these small fringe groups believe in.

But the illogic of the talking heads who play the “the other side is anti-science!” game bothers me less than two other things.

The first is their conviction that their casual impressions (I doubt they are even first-hand; they likely consist of reading what other empirically uninformed bullshitters are saying) support empirical characterizations about public opinion.

Just because things “feel” a certain way to you based on your very limited, very skewed exposure to public opinion doesn’t mean that that’s the truth.

The second is the contribution these know-it-all know-nothings are making to poisoning our science communication environment.

Ordinary members of the public, of all cultural and political outlooks in the US, are extraordinarily PRO-science.

If you don’t believe this, you are misinformed, likely as a result of the sad vulnerability of people of all cultural and political outlooks to believe that people who don’t agree with them on (admittedly important!) questions about how to live are “stupid,” “closed-minded,” etc.

And by dragging science into your illiberal status competition with people whose cultural identity is different from yours, you are making it harder for all of us to converge on the best available evidence on matters that are critical to our collective decisionmaking.

So please just shut up already. 

Tuesday
Jun032014

Critical thinking about public opinion on climate change

A Washington Post poll found that "70%" of Americans support regulation of greenhouse gases.

The first thing to do, always, is take a close look and see if one accepts that a survey item of this sort is indeed worded in the manner that supports how its results are being characterized.

After that, the question to ask is, "What is the survey item actually measuring?"  

The answer is usually "nothing," or nothing of consequence.

If the item refers to a policy that members of the public don't know or think about--something that doesn't figure in their everyday interactions with other ordinary people--then the survey is not modeling anything going on in the world we live in.

Consider: About half the respondents in a general population survey won't know-- or even have good enough luck to guess-- the answer to the multiple-choice question "how long is the term of a U.S. Senator?"

Are we really supposed to base an inference about what people with that level of political engagement are thinking from the responses of the 1,000 who get weirdly transported out of their everyday worlds and asked (by aliens, it must seem) to indicate whether they "approve" of, say, "the NSA's metadata collection policy?" ("hell, I'm for it-- college athletes are there to be educated!")

The Washington Post tells us that an "overwhelming majority of Americans" support regulating CO2 emissions.

But given that only 58% (+/- 3%) of a general population telephone sample know that "carbon dioxide" rather than "hydrogen," "helium," or "radon" is the "gas ... most scientists believe causes temperatures in the atmosphere to rise," what exactly did the "overwhelming majority" of the weird-lottery winners called by "Langer Research Associates" understand "greenhouse gases" to be?

There's zero correlation between responses to the "which gas" question and party affiliation.  That's how one knows that it isn't measuring the same thing as a survey item asking ordinary people whether they believe in human-caused global warming.

That's a question fraught w/ meaning -- not as a matter of policy or science, necessarily -- but as an element of their social world.  

It's one of those things -- along with abortion and gun control -- that separate "us" from "them." 

Indeed, the issue has split people right down the middle -- about 50% have answered "yes" & 50% "no" to the "human-caused global warming" item --for many years. And no, that has not changed recently: 48%, in a nationally representative Cultural Cognition Project survey conducted last month.

What are we supposed to think when told that an "overwhelming majority" of Americans said they "support" a policy that regulates human activities responsible for climate change even though Americans are divided 50-50 on whether human behavior is even causing global warming?

But even that takes responses to the Washington Post survey item waaaaaaaaaay too seriously.

What valid public-policy survey items measure is an affective orientation -- a feeling that is either positive or negative, strong or weak.

Such orientations can be of extreme importance.  

They can propel people to disregard serious health risks (e.g., lung cancer from smoking) & shrink in terror from non-existent ones (autism from vaccines).

They can determine whom people vote for for President (or for the House of Representatives or anything else) -- or whom they marry or try to kill.

Often, such affective sensibilities are an expression of a vital element of the respondent's self-conception, one more convincingly seen as a cause than as a consequence of how that person makes sense of all manner of evidence and information, from raw data to brute sense impressions.

But that's what the response is--a register of an affective orientation. It's not an argument or an idea -- not anything like what you would make of a statement that a person in a conversation uttered on his or her own accord.

Same for hypothetical "willingness to pay" measures: if they are genuinely connected to something meaningful for people, responses to them express an attitude-- but are not a valid predictor of the willingness to pay for anything.

Psychometrically speaking, survey responses are "indicators" or indirect measures of a "latent" or unobserved "variable" or influence.  They need to be validated -- i.e., shown by independent means (including coherence with other survey items) to be measuring what one thinks they are measuring--and even then must be regarded as only noisy or imprecise approximations.

Or practically speaking, someone who is genuinely motivated to understand public opinion treats responses to any particular public-policy survey item as one of many ambiguous pieces of evidence.  

When connected to other pieces of evidence including responses to other items that cohere with one another, the responses often support inferences -- solid ones -- on the basis of which we can predict behavior and explain states of affairs.  

But a particular item by itself supports no reliable inference.

Likewise, if an outlier survey item (or even a collection of them) invites an interpretation contrary to the ones borne by other, conventional ones known to be valid and to support reliable inferences, the "hey look" item more than likely isn't measuring the affective sensibility that genuinely motivates the real-world attitudes the pollster is purporting to model.
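To make that validation idea concrete, here is a minimal sketch -- with simulated data and made-up item names, not real survey items -- of one routine check: whether an individual item coheres with the rest of the items thought to tap the same latent disposition.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Simulate a latent affective disposition and five noisy indicators of it.
latent = rng.normal(size=n)
items = pd.DataFrame({f"item_{i}": latent + rng.normal(size=n) for i in range(1, 6)})

# "Item-rest" correlation: does each item track the sum of the others?
for col in items.columns:
    rest = items.drop(columns=col).sum(axis=1)
    print(f"{col}: item-rest r = {items[col].corr(rest):.2f}")

# An item whose item-rest correlation is near zero is probably measuring
# something other than the latent variable the rest of the battery taps --
# the sort of independent check a response needs before being treated as evidence.
```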

As I've explained before, the "industrial strength" risk-perception item --the rating of a putative risk source on a multi-point scale -- elicits a straightforward pro- or con- expression of attitude that can vary within a meaningful but relatively constrained range.  At least where the putative risk is something members of the general population have had experience with in the world, responses to ISM can be expected to correlate highly with particular perceptions or factual beliefs relating to that same object.  It can be expected to be correlated with forms of behavior that fit or express the sensibility that it elicits.

But note the emphasis on correlation.  

By themselves, responses to the ISM are meaningless; the scale is arbitrary ("OMG, the public's perception of the risk of 'private ownership of guns' is only 3.4!").  

Its interpretive utility lies in its covariance with other characteristics -- e.g., w/ "cognitive reflection" or "numeracy" in the case of some risk perception hypothesized to be connected to over-reliance on heuristic reasoning. 

Or w/ "ideology." Here is the ISM for "global warming" in relation to political outlooks:

The correlation displayed is quite strong. Indeed, the correlation between the respondents' ISM rating and their right-left political outlooks is as high as the correlation between the two items -- partisan self-identification (on a 7-point measure) and liberal-conservative ideology (5-point) --that were combined to form the left-right outlook scale itself  (r = - 0.64, p < 0.01).
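For readers who want to see the mechanics, here is a minimal sketch of how a composite right-left outlook scale like that is typically built and how the reported correlation would be computed; the variable names and data are hypothetical stand-ins, not the actual survey.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, zscore

rng = np.random.default_rng(1)
n = 500

# Simulated stand-ins for the survey variables:
#   partyid7  -- 7-point partisan self-identification (higher = more Republican)
#   ideology5 -- 5-point liberal-conservative ideology (higher = more conservative)
#   ism_gw    -- 0-7 industrial-strength risk rating for "global warming"
lean_right = rng.normal(size=n)
df = pd.DataFrame({
    "partyid7": np.clip(np.round(4 + 1.5 * lean_right + rng.normal(size=n)), 1, 7),
    "ideology5": np.clip(np.round(3 + lean_right + rng.normal(size=n)), 1, 5),
    "ism_gw": np.clip(np.round(3.5 - 1.5 * lean_right + rng.normal(size=n)), 0, 7),
})

# Standardize the two political items and average them into one right-left scale.
outlook = (zscore(df["partyid7"]) + zscore(df["ideology5"])) / 2

r, p = pearsonr(outlook, df["ism_gw"])
print(f"r = {r:.2f}, p = {p:.3f}")  # expect a strongly negative r in data built this way
```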

What else does the global-warming ISM correlate with? Well, believing in human-caused global warming, unsurprisingly (the two together support the inference that each measures what it seems to).

It also correlates with positions on very familiar, very strongly contested issues like gun control and abortion.

ISM also correlates with individual characteristics, like being white & male, and hierarchical and individualistic, that are known to be indicators, too, of a group disposition that generates strong negative reactions to the issue of global warming (or "climate change"; changing the label doesn't have any material effect -- which is to say, "believe in climate change?" and "believe in global warming?" are both valid, if noisy, indicators of the same latent affective disposition).  

Look: The affective sensibility that motivates cultural polarization on climate is real.

It can't be exorcised by magic words.

It won't abate if people rely on lab experiments to justify "messaging campaigns" that have been shown decisively not to work by over a decade of real-world evidence.

Years of advocacy polls designed to conjure "overwhelming majority" support for "action on climate change" by insisting it already exists have proven their worthlessness too.

The only way to ameliorate the destructive impact that the climate conflict is having on our capacity for enlightened self-government is to extricate the scientific issues it turns on from the ugly, illiberal form of  status competition that now engulfs them.

And we won't figure out how to do that with wishful thinking & meaningless measures.

 

 

Saturday
May242014

Weekend update: You'd have to be science illiterate to think "belief in evolution" measures science literacy

It's been soooo long -- at least 3 weeks! -- since I last did a post on the relationship between "belief in evolution" & "science literacy."

That's just not right!  Plus I have some cool new data on this issue.

But let's start with a reprise of the basics -- because one can never overstate how aggressively they are ignored by those who flip out & let loose with a toxic stream of ignorance & cultural zealotry every time a polling organization announces the "startling" news that nearly 50% of the US public continues (as it has for decades) to say "no" when asked whether they believe in evolution (and if one asks how many of the "believers" subscribe to a "naturalistic" or Darwinian view as opposed to a "theistic" variant, the proportion drops further still -- for "Democrats" as well as "Republicans" blah blah blah).

First, there is zero correlation between saying one "believes" in evolution & understanding the rudiments of modern evolutionary science.

Those who say they do "believe" are no more likely to be able to give a high-school-exam-passing account of natural selection, genetic variance, and random mutation -- the basic elements of the modern synthesis -- than those who say they "don't" believe.

In fact, neither is very likely to be able to, which means that those who "believe" in evolution are professing their assent to something they don't understand.

That's really nothing to be embarrassed about: if one wants to live a decent life -- or just live, really --one has to accept much more as known by science than one can comprehend to any meaningful degree.

What is embarrassing, though, is for those who don't understand something to claim that their "belief" in it demonstrates that they have a greater comprehension of science than someone who says he or she "doesn't" believe it.

Second, "disbelief" in evolution poses absolutely no barrier to comprehension of basic evolutionary science.

Fantastic empirical research shows that it is very very possible for a dedicated science educator to teach the modern synthesis to a secondary school student who says he or she "doesn't believe" in evolution.  

The way to do it is to do the same thing that one should do for the secondary school student who says he or she does believe in evolution & who, in all likelihood, doesn't understand it: by focusing on correcting various naive misconceptions that have little to do with belief in the supernatural, etc., & everything to do with the ingrained attraction of people to functionalist sorts of accounts of how natural beings adapt to their environments.

The thing is, though, even after acquiring knowledge of the modern synthesis -- likely the most awe-inspiring & elegant, not to mention astonishingly useful, collection of insights that human reason has ever pried loose from nature -- the bright kid who before said "no" when asked if he or she "believes" in evolution is not any more likely to say that he or she now "believes" it.

Indeed, confusing "comprehension" with profession of "belief" is a very good way to assure that those kids who are disposed to say they "don't believe" won't learn these momentous insights.

As Lawson & Worsnop observed in the conclusion of their classic study (the one that presented such amazingly cool evidence on how to teach evolution in a way that excited kids of all cultural outlooks to want to learn it), 

[E]very teacher who has addressed the issue of special creation and evolution in the classroom already knows that highly religious students are not likely to change their belief in special creation as a consequence of relative brief lessons on evolution. Our suggestion is that it is best not to try to do so, not directly at least. Rather, our experience and results suggest to us that a more prudent plan would be to utilize instruction time, much as we did, to explore the alternatives, their predicted consequences, and the evidence in a hypothetico-deductive way in an effort to provoke argumentation and the use of reflective thought. Thus, the primary aims of the lesson should not be to convince students of one belief or another, but, instead, to help students (a) gain a better understanding of how scientists compare alternative hypotheses, their predicated consequences, and the evidence to arrive at belief and (b) acquire skill in the use of this important reasoning pattern-a pattern that appears to be necessary for independent learning and critical thought.

There are actually some who say in response, "Not good enough; it is essential not merely to impart knowledge but also to extract a profession of belief too!"

When someone says that, he or she helps us to see that there are actually illiberal sectarians on both sides of the "evolution in education" controversy in this society.

Third -- and here we are getting to the point where the new data come in! -- profession of "belief" in evolution is simply not a valid measure of science comprehension.

This is very much related to what I have already recounted but is in fact a separate point.

Because imparting basic comprehension of science  in citizens is so critical to enlightened democracy, it is essential that we develop valid measures of it, so that we can assess and improve the profession of teaching science to people.

What should be measured, in my view, is a quality of ordinary science intelligence -- not some inventory of facts ("earth goes 'round the sun, not other way 'round -- check!") but rather an ability to distinguish valid from invalid claims to scientific insight and a disposition to use, in one's own decisions, science's signature style of inference from observation.

The National Science Foundation has been engaged in the project of trying to formulate and promote such a measure for quite some time. A few years ago it came to the conclusion that the item "human beings, as we know them today, developed from earlier species of animals," shouldn't be included when computing "science literacy."

The reason was simple: the answer people give to this question doesn't measure their comprehension of science. People who score at or near the top on the remaining portions of the test aren't any more likely to get this item "correct" than those who do poorly on the remaining portions.

What the NSF's evolution item does measure, researchers have concluded, is test takers' cultural identities, and in particular the centrality of religion in their lives.

Predictably, the NSF was forced to back off this position by a crescendo of objections from those who either couldn't get or didn't care about the distinction between measuring science comprehension and administering a cultural orthodoxy test. The NSF regularly notes the controversy but prudently declines to say what it thinks the significance of it is.

But those of us who don't have to worry about whether taking a stance will affect our research budgets, who genuinely care about science, and who recognize the challenge of propagating widespread comprehension and simple enjoyment of science in a culturally pluralistic society (which is, ironically, the type of political regime most conducive to the advance of scientific discovery!) shouldn't equivocate.

We should insist that science comprehension be measured scientifically and point out the mistakes -- myriads of them -- being made by those who continue to insist that professions of "belief" in evolution are any sort of indicator of that.

I've reported some evidence before in this blog that reinforces the conclusion that "belief" in evolution is a measure of who people are and not what they know.

Well, here's some more.

Following up on a super interesting tidbit from the 2014 NSF Science Indicators, I included alternate versions of the conventional NSF Indicator "evolution" item in a science comprehension battery that I administered to a large (N = 2000) nationally representative sample earlier this month.

One was the conventional "true-false" statement, "Human beings, as we know them today, developed from earlier species of animals.”

The second simply added to this sentence the introductory clause, "According to the theory of evolution, ..."

The NSF had reported on a General Social Survey (GSS) module from a few years ago that found that the latter version elicits a much higher percentage of "true" responses.

Well, sure enough.   

As the Figure at the top of the post shows, the proportion who selected "true" jumped from 55% on the NSF item to 81% on the GSS one!

Wow!  Who would have thought it would be so easy to improve the "science literacy" of benighted Americans (who, leaving aside the "evolution" and related "big bang" origin-of-the-universe items, already tend to score better on the NSF battery than members of other industrialized nations).

Seriously: as a measure of what test takers know about science, there's absolutely no less content in the GSS version than in the NSF one.  Indeed, anyone who, when asked to explain why "true" is the correct response to the NSF version, failed to connect the answer to "evidence consistent with the theory of evolution ..." would be revealed to have no idea what he or she is talking about.

The only thing the NSF item does that the GSS item doesn't is entangle the "knowledge" component of the "evolution" item (as paltry as it is) in the identity-expressive significance of "positions" on evolution.  

Want some more evidence? Here you go:


This figure shows the relationship between the probability of a "true" response to the respective versions of the question conditional on "religiosity" & "science comprehension." (The figure graphically reports the results of a regression model. If you want to see the raw data, click on the inset to the left!)

The former was measured by aggregating into a scale responses to items on self-reported frequency of church attendance, frequency of prayer, and importance of God (α = 0.87).

The latter was formed by combining the NSF's science indicator battery (excluding the "evolution" one, to avoid circularity) with a set of Numeracy and Critical Reflection Test items.  The NSF indicators, a collection of "true-false" items, can be seen as assessing knowledge of elementary facts; the additional items assess the sorts of reasoning skills -- including, in particular, the disposition and ability to make valid inferences from quantitative and other forms of information -- that a person needs in order reliably to acquire scientific knowledge. 

The items cohere nicely, forming a highly reliable unidimensional scale (α = 0.84), which I scored with an item response theory model. 
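Since the scale construction does a lot of work here, a minimal sketch of the reliability computation may help; the data are simulated, the item names invented, and the actual IRT scoring would be done with a dedicated package rather than shown here.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical reliability coefficient for a set of items scored in the same direction."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated stand-in for a battery of 0/1-scored knowledge, numeracy, and
# cognitive-reflection items (the real items aren't reproduced here).
rng = np.random.default_rng(2)
n, k = 2000, 18
ability = rng.normal(size=n)
battery = pd.DataFrame((rng.normal(size=(n, k)) < ability[:, None]).astype(int),
                       columns=[f"q{i}" for i in range(1, k + 1)])

print(f"alpha = {cronbach_alpha(battery):.2f}")
# A value in the 0.8s says the items hang together well enough to score as one
# scale; an IRT model (not shown) then weights items by difficulty and
# discrimination instead of just summing correct answers.
```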

Indeed, the main reason for collecting data on the GSS and NSF variants of the evolution item was to see what the frequency of "true" responses to them would reveal about the item's relative connection to religious identity and science comprehension.

These data answer that question.

The panel on the left confirms that the NSF item does indeed measure religious identity, not scientific knowledge.  

Or maybe one can see it as indicating science comprehension for relatively secular folks, since in them one sees what one would expect if that were the case--namely, that the probability of answering "true" goes up as people become progressively more comprehending of science.

But the probability of answering "true" doesn't go up--if anything it goes down--as individuals who are above average in religiosity become more science comprehending.  That's manifestly inconsistent with any inference that the answer to the question indicates the science comprehension of people with a more religious identity. (In case you were wondering -- and it's perfectly reasonable to -- there was a fairly minor negative correlation-- r = - 0.17, p < 0.01-- between religiosity and science comprehension.)  

Now behold the panel on the right!

Here we do see exactly what one would expect of an item that indicates (i.e., correlates, because it's presumably caused by) science comprehension--an increasing probability of answering "true" -- for both non-religious and religious individuals!

By adding the introductory clause, "According to the theory of evolution," the GSS question disentangles ("unconfounds" in psychology-speak) the "science knowledge" component and the "identity expressive" components of the item.
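For anyone curious about the sort of model behind those two panels, here is a minimal sketch with hypothetical variable names and simulated data; it shows the general form of a logit with the relevant interactions, not the actual CCP analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000

# Hypothetical stand-ins (simulated), not the real dataset:
sci_comp = rng.normal(size=n)              # science-comprehension (IRT) score
religiosity = rng.normal(size=n)           # church attendance / prayer / importance-of-God scale
gss_version = rng.integers(0, 2, size=n)   # 1 = "According to the theory of evolution..." wording

# Build in the pattern the post describes: on the GSS wording P(true) rises with
# science comprehension for everyone; on the NSF wording it rises only for the
# relatively secular and is flat-to-declining for the religious.
slope = np.where((gss_version == 1) | (religiosity < 0), 1.0, -0.2)
logit_p = 0.3 + slope * sci_comp - 0.5 * religiosity * (gss_version == 0)
df = pd.DataFrame({
    "true_resp": rng.binomial(1, 1 / (1 + np.exp(-logit_p))),
    "gss_version": gss_version,
    "religiosity": religiosity,
    "sci_comp": sci_comp,
})

# The three-way interaction is what lets the predicted-probability curves differ
# by item version and religiosity, as in the two panels.
model = smf.logit("true_resp ~ gss_version * religiosity * sci_comp", data=df).fit()
print(model.summary())
```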

Gee, Americans aren't that dumb after all!

Or maybe they are; this is too easy a question if one wants to figure out whether Americans or anyone else really knows anything about science: some 80% of the respondents answer it correctly -- a figure that rapidly approaches 100% among those of even middling science comprehension.

So ditch this question & substitute for it one more probative of genuine science comprehension -- like whether the test taker actually gets natural selection, random mutation, and genetic variance, which are of course the fundamental mechanisms of evolution and which kids with a religious identity can be taught just as readily as anyone else.

Or actually, how about this.

Instruct the test taker to reflect on the graph above and then respond to the item, 

"'Belief in evolution' is a valid measure of a person's science literacy," true or false?

Thursday
May222014

What is to be done? Let's start with why ... a fragment

From still another thing I'm working on that is distracting me from my main job--writing blog entries:

You asked me to describe what I want to do.  I think I’m more likely to convey that if I start with an account of why.

Two things concern me.  The first is the failure of professions that exist to enlarge, disseminate, and exploit the insights of valid empirical inquiry to use those methods to improve their own proficiency in enlarging, disseminating, and exploiting scientific knowledge.  Call this the “meta-empiricism spectacle” (MS).

Call the second problem “Popper’s revenge” (PR). Cultural pluralism makes liberal democratic societies uniquely congenial to the advancement of scientific inquiry; at the same time, however, it multiplies the occasions for polarizing forms of status conflict between the cultural groups within which diverse citizens necessarily come to know what’s known.  This dynamic puts at risk citizens’ enjoyment of both the promise of tolerance and the enormity of knowledge that are the hallmarks of liberal democratic societies.

MS and PR interact.  As a result of their failure to apply empirical methods to themselves, the professions that traffic in empirical knowledge—from conservation advocacy groups to government regulatory agencies, from science journalists to public health professionals, from educators to judges—fail to negotiate the forms of illiberal status competition that impede public recognition of what’s known to science.

I want to help address these problems....

But in any case, you now have a sense of why; so here is what I want to do.

I am intent on stimulating and being a party to the creation of as many projects as possible aimed at creating “evidence-based practices” within the professions most responsible for assuring reliable recognition of what science knows by the culturally diverse individuals and groups whose welfare such knowledge can enhance....

Wednesday
May212014

More on public "trust of scientists": *You* tell *me* what it means!

Okay, so I've done a good number of posts on "trust" in science/scientists. The basic gist of them is that I think  it's pretty ridiculous to think that any significant portion of the US public distrusts the authority of science -- epistemic, cultural, political, etc. -- or that partisan divisions in regard to trust in science/scientists can plausibly explain polarization over particular risks or other policy-relevant facts that admit of scientific inquiry (vice versa is a closer call but even there I'm not persuaded).

So here's some more data on the subject.

It comes from a large (N = 2000) nationally representative survey administered as part of an ongoing collaborative research project by the Annenberg Public Policy Center and CCP (it's a super cool project on reasoning & political polarization; I've been meaning to do a post on it -- & will, "tomorrow"!).

The survey asked respondents to indicate on a 6-point "agree-disagree" Likert measure whether they "think scientists who work" (or in one case, "do research for") in a particular institutional setting "can be trusted to tell the public the truth."

The institutions in question were NASA, the CDC, the National Academy of Sciences, the EPA, "Industry," the military, and "universities."

We had each subject evaluate the trustworthiness of only one such group of scientists.

Often researchers and pollsters ask respondents to assess the trustworthiness of multiple groups of scientists, or of scientists generally in relation to multiple other groups.

One problem with that method is that it introduces a "beauty pageant" element in which respondents rank the institutions.  If that's what they're doing, one might conclude that the public "trusts" a group of scientists, or scientists generally, more than it actually does simply because it trusts the others even less.

So what did we find?

I'll tell you (just hold on, be patient).  

But I won't tell you what I make of the findings. 

Do they support the widespread lament of a creeping "anti-science" sensibility in the U.S.?  

Or the claim that Republicans/conservatives in particular are anti-science, or less trusting of science than they were in the past?

Or do they show that "the left" is in fact "anti-science" -- as much so or more than "the right," etc.?

You tell me!

Actually, I'm sure everyone will come to exactly the same conclusion on these questions.  Here as elsewhere, the facts speak for themselves!







Tuesday
May202014

The "generalizability problem" -- a fragment

From something I'm working on (one of many things distracting me from this blog; I've experienced a curious inversion recently in procrastination diversions....)

One of the major challenges confronting the science of science communication is generalizability.  This problem is obvious when researchers engage in  lab experiments. By quieting the cacophony of uncontrollable real-world influences, such experiments enable the researcher to isolate and manipulate mechanisms of interest, and thus draw confident inferences about their significance, or lack thereof. But how, then, can one know whether the effects observed in these artificially tranquil conditions will hold up in the chaotic real-life environment from which the researcher sought refuge in the lab? 

It would be a mistake, though, to think that this difficulty reflects some fatal defect in laboratory methods.  And not just because such methods do indeed play an indispensable role in the formation of communication strategies that can subsequently be tested outside the lab. For any empirical testing that occurs in the field must also confront the question of generalizability: how is one to know that what worked in one distinctively messy real-world setting will work in another distinctively messy one?

The generalizability problem is central to the motivation for our proposal.  Disturbingly, a large fraction of researchers offering counsel to conservation advocates and policymakers simply ignore this issue altogether. 

But just as bad, a large fraction of the remainder try to address it in the wrong way.  They believe that the goal of empirical research is to identify a fixed set of universally effective “techniques” or “best practices” that can, with the benefit maybe of cartoon-illustrated instruction manuals, be confidently and more-or-less thoughtlessly applied by communicator "consumers." 

But in fact, the only technique of the science of science communication that generalizes—the sole valid “best practice” it has to offer—is its method. Successful lab experiments and field studies alike do enlarge understandings of how the world works. But how the insights they generate can be brought successfully to bear on any new problem will always be a question that those promoting science-informed conservation policymaking will have to answer for themselves.  The only way they can reliably do so, moreover, is by using empirical methods to adapt what the science of science communication knows to the distinctive circumstances at hand.  

Perfecting knowledge of how to use empirical methods in the everyday practice of conservation-science communication—so that the generalizability issue will always be confronted and confronted effectively—is the whole point of the proposed ....


Sunday
May182014

"Energy future 2030" talk (slides, video)

Thursday
May152014

Some "pathological" public risk perceptions & a whole bunch of "normal" ones

From slides in a talk I'm about to give at a biotech conference in Syracuse.  Political differences (or lack thereof) in the top slide & "science comprehension" magnification of the same (or lack thereof) in the bottom.

More later -- but if anyone wants to offer their own views in the meantime, feel free!

Tuesday
May132014

So much for that theory . . . (fracking freaks me out  #2)

Huh.

So having been freaked out to discover how pervasively polarized members of the public appear to be about fracking despite knowing nothing about it, I resolved to do a little experiment.

In the previous data collection, I had measured perceptions of fracking risks using the "industrial strength measure," which solicits a rating of how "serious" a societal risk some activity poses to "human health, safety, or prosperity."

My thought was that maybe what had generated such a strong degree of polarization might be the wording of the item, which asked subjects to supply such a rating for "fracking (extraction of natural gas by hydraulic fracturing)."

I figured maybe this language--the sort of "dirty" sounding word "fracking" and the references to "extraction" (sounds like a painful and invasive procedure to subject mother Nature to) &  "natural gas" ("boo" if you have an egalitarian, "game over, capitalists!" sensibility; yay, if you have an individualist, "yes we can, forever & ever & ever!" one) would be sufficient to alert  the ordinary Americans who made up the sample (most of whom likely wouldn't have been able to define fracking without this clue) that this was an "environmental" issue. That would be enough to enable most of them to locate the issue's position on the "cultural theory of risk" map, particularly if they were above-average in science comprehension and thus especially skilled at fitting information to their cultural identities.

So I thought I'd try an experiment.  Administer the same measure but vary the description of the putative risk source: in one condition, it would be called simply "fracking"; in another, it would be referred to as "shale oil gas production"; and in a third, the risk source would be identified as it was in the earlier survey-- "fracking (extraction of natural gas by hydraulic fracturing.)"

I figured that relative to the third group, those in the first (plain old "fracking") would be less polarized, and those in the second ("shale oil gas production"; sounds harmless!) would be the least agitated of all.

Actually, I was modeling this experiment loosely on Sinaceur, M., Heath, C. & Cole, S., Emotional and deliberative reactions to a public crisis: Mad Cow disease in France, Psychol Sci 16, 247-254 (2005), a great study in which the investigators showed that lab subjects formed affect- or emotion-pervaded judgments when evaluating risk information relating to "Mad Cow disease" but formed more analytical, calculative ones when the information referred to either "bovine spongiform encephalopathy (BSE)" or "a variant of Creutzfeldt-Jakob disease (CJD)" instead.

Well, here's what I found:

 


Click on the image for a closer inspection, but basically, the difference in effect associated with the variation in wording, while "in the direction" hypothesized, was way too small for anyone to think it was practically meaningful.

Same thing for the influence of the wording on the interaction between political outlooks (measured with a right-left scale) and science comprehension (measured with a cool composite of substantive knowledge & critical reasoning measures; more on that "tomorrow"): 
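For the statistically curious, here is a minimal sketch of the kind of model that figure summarizes -- an OLS regression with a wording-condition interaction -- using hypothetical variable names and simulated data rather than the actual study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1800

# Hypothetical stand-ins for the experiment's variables (simulated, not the real data):
cond = rng.choice(["fracking", "shale_oil_gas", "fracking_defined"], size=n)
right_left = rng.normal(size=n)   # standardized conservative-Republican outlook
sci_comp = rng.normal(size=n)     # standardized science-comprehension score

# Simulate roughly what the post reports: polarization grows with science
# comprehension and barely budges across wording conditions.
ism = 3.5 - (0.8 + 0.5 * sci_comp) * right_left + rng.normal(scale=1.2, size=n)
df = pd.DataFrame({"ism": ism, "cond": cond,
                   "right_left": right_left, "sci_comp": sci_comp})

# The condition x outlook x comprehension interaction tests whether the wording
# changes how sharply polarization grows with science comprehension.
model = smf.ols("ism ~ C(cond) * right_left * sci_comp", data=df).fit()
print(model.params.filter(like=":"))  # interaction terms -- expect them near zero here
```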

So much for that theory.

But I have another one!  

All this agitation about fracking, I'm convinced, is really a battle between those who do & those who don't recognize the supreme value of local democratic decisionmaking!

 

 
