
Sunday
Jan 24, 2016

Weekend update: OMG-- we are now as politically polarized over cell phone radiation as over GM food risks!!!

Some "Industrial Strength Risk Perception Measure" readings from a CCP/Annenberg Public Policy Center study administered this month: 


Interesting but not particularly surprising that polarization over the risk associated with unlawful entry of immigrants rivals that on global warming, which has abated recently about as much as the pumping of CO2 into the atmosphere.

Interesting but not surprising to learn (re-learn, actually) that it's nonsense to say Americans are "more afraid of terrorism than climate change b/c the former is more dramatic, emotionally charged" etc. That trope, associated with the "take-heuristics-and-biases-add-water-and-stir" formula of "instant decision science," reflects a false premise: those predisposed to worry about climate change do in fact see the risk it poses as bigger than that posed by domestic terrorism.

And completely boring at this point to learn for the 10^7th time that there is no political division over GM food risk in the general public, despite the constant din in the media and even some academic commentary to this effect.  

Consider this histogram:

The flatness of the distribution is the signature of the sheer noise associated with responses to GM food survey questions, the administration of which, as discussed seven billion times in this blog (once for every two regular blog subscribers!), is an instance of classic "non-opinion" polling. 
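The "flat distribution = noise" diagnostic has a simple information-theoretic rationale: a near-uniform spread of survey responses carries maximal entropy, i.e., minimal signal. A toy calculation (the counts below are hypothetical illustrations, not the study's data) makes the point:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a response-count distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical response counts on an 8-point risk-perception item.
# A "non-opinion" item yields a near-flat distribution; an item the
# public actually holds views on yields a peaked one.
flat_counts = [10] * 8                       # GM-food-style noise
peaked_counts = [70, 10, 5, 5, 4, 3, 2, 1]   # genuine-opinion item

print(shannon_entropy(flat_counts))    # maximal: log2(8) = 3.0
print(shannon_entropy(peaked_counts))  # well below 3.0
```

Uniform responding is exactly what one expects when respondents, lacking any prior opinion, in effect pick answers at random.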

Ordinary Americans--the ones who don't spend all day reading and debating politics (99% of them)-- just don't give GM food any thought.  They don't know what GM technology is, that it has been a staple of US agricultural production for decades, or that it is in 80% of the foodstuffs they buy at the market.  

They don't know that the House of Reps passed a bipartisan bill to preempt state-labelling laws, which special-interest groups keep unsuccessfully sponsoring in costly state referenda campaigns, and that the Senate will almost surely follow suit, presenting a bill that University of Chicago Democrat Barack Obama will happily sign w/o more than 1% of the U.S. population noticing (a lot of commentators don't even seem to realize how close this non-issue is to completely disappearing).

Why the professional conflict entrepreneurs have failed in their effort to generate in the U.S. the sort of public division over GM foods that has existed for a long time in Europe is really an interesting puzzle.  It's much more interesting to try to figure out hypotheses for that & test them than to engage in a make-believe debate about why the public is "so worried" about them!

But neither that interesting question nor the boring, faux "public fear of GM foods" question was the focus of the CCP/APPC study.

Some other really cool things were.

Stay tuned!

Sunday
Jan 17, 2016

Status report on temporary CCP Lab relocation

We (my chief co-analyst & I) have arrived & resumed operations.  

A short photojournal of our relocation process:

1. Travelling (in custom-designed unit to avoid annoying paparazzi)
 

2. Wrestling w/ research problem in new work space
 

3. Taking a short break ....
 

 

Friday
Jan 15, 2016

"I'm going to Jackson, I'm gonna mess around... " Well, Philly, actually

As of today, and until the end of the academic yr, I will be at the Annenberg Public Policy Center at the University of Pennsylvania, as a resident scholar in their amazing & inspiring Science of Science Communication project. 

Promise to write often!

Thursday
Jan 14, 2016

"Evidence-based Science Filmmaking Initiative," Rep. No. 1: Overview & Conclusions

In the last couple of posts (one on evolution believers' & nonbelievers' engagement with an evolution-science documentary, and another on measuring "science curiosity") I've summarized some of the findings from Study No. 1 of the Annenberg/CCP ESFI--"Evidence-based Science Filmmaking Initiative."

Those findings are described in more detail in a study Report, which also spells out the motivation for the study and its relation to ESFI overall. 

Indeed, the Report is an unusual document--or at least an unusual sort of document to share. 

It isn't styled as announcing to the world the "corroboration" or "refutation" of some specified set of hypotheses.  It is an internal report prepared for the consumption of the investigators in an ongoing research project, one that is in fact at a very preliminary stage!

Why release something like that?  Well, in part because even at this point in the investigation, we do think there are things to report that will be of interest to other scholars and reflective people generally, many of whom can be counted on to supply us w/ feedback that will itself make what we do next even more useful.

But in addition, one of the aims of the project, in addition to generating evidence relevant to questions of interest to professional science filmmakers, is to model the process of using evidence-based methods to answer those very questions.

As explained in the ESFI "main page," the project is itself meant to supply evidence relevant to the hypothesis that the methods distinctive of the science of science communication can make a positive contribution to the craft of science filmmaking by furnishing those engaged in it with the information relevant to the exercise of their professional judgment. 

Of course, those engaged in ESFI, including its professional science communication members, believe (with varying levels of confidence!) that the science of science communication can in fact make such a contribution; but of course, too, others, including other professional science filmmakers, are likely to disagree with this conjecture.

I wouldn't say "no point arguing about it" just b/c reasonable, and informed, people can disagree.

But I would say that these are exactly the conditions in which the argument will proceed in a more satisfactory way with additional information of the sort that can be generated by science's signature methods of disciplined observation, reliable measurement, and valid inference.

Hence ESFI: Let's do it -- and see what a collaboration between professional science filmmakers and allied communicators, on the one hand, and "scientists of science communication," on the other, produces.  Then, on the basis of that evidence, those who are involved in science filmmaking can use their own reason to judge for themselves what that evidence signifies, and update accordingly their assessments of the utility of integrating the science of science communication into the craft of science filmmaking (not to mention related forms of science communication, like science journalism).

Precisely b/c the Report is an internal research document that takes stock of early findings in a multi-stage project, it furnishes a glimpse of the project in action.  It thus gives those who might consider using such methods a chance to form a more concrete picture of what these practices look like, and a chance to use their own experience-informed imaginations to assess what they might do if they could add evidence-based methods to their professional tool kits.

But of course this is only the start-- only the first Report, both of results and of the experience of doing evidence-based filmmaking.

A. Overview and summary conclusions

This report summarizes the preliminary conclusions of Study No. 1 in the Annenberg/CCP “Evidence-based Science Filmmaking Initiative.” The goal of the initiative is to promote the integration of the emerging science of science communication into the craft of science filmmaking. Study No. 1 involved an exploratory investigation of viewer engagement with an excerpt from Your Inner Fish, a documentary on human evolution.

The study had two objectives.

One was to gather evidence relevant to an issue of debate among science filmmakers: what explains the perceived demographic homogeneity of the audience for high-quality documentaries featured on NOVA, Nature, and similar PBS shows? Is the answer the distribution of tastes for learning about scientific discovery in the general population, or instead some feature of those shows collateral to their science content that makes them uncongenial to individuals who subscribe to certain cultural styles?

The other study objective was to model how evidence-based methods could be used by science filmmakers. Hard questions—ones for which the number of plausible answers exceeds the number of correct ones—are endemic to the activity of producing science films. By testing competing conjectures on an issue of consequence to their craft, Study No. 1 illustrates how documentary producers might use empirical methods to enlarge the stock of information pertinent to the exercise of their professional judgment in answering such questions.

Principal conclusions of Study No. 1 include:

1. By combining appropriately subtle self-report items with behavioral and performance-based ones, it is possible to construct a valid scale for measuring individuals’ general motivation to consume information about scientific discovery for personal satisfaction. Desirable properties of the “Science Curiosity Scale” (SCS) include its high degree of measurement precision, its appropriate relationship with science comprehension and other pertinent covariates, and (most importantly) its power to predict meaningful differences in objective manifestations of science curiosity.

2. By similar means, one can construct a satisfactory scale for measuring viewer engagement with material such as that featured in the YIF clip. Such a scale was again formed by combining self-report and objective measures, including duration of viewing time and requested access to the remainder of the documentary. Designated the “Engagement Index” (EI), the scale had the expected relationships with education and general science comprehension. The strongest predictor of EI was the study subjects’ SCS scores.

3. Engagement with the clip did not vary to a meaningful degree among subjects who had comparable SCS scores but opposing “beliefs” about human evolution. Evolution “believers” and “nonbelievers” with high SCS scores formed comparably positive reactions to the YIF clip. The show didn’t “convert” the latter. But like “believers” with high SCS scores, high-scoring “nonbelievers” were very likely to accept the validity of the science featured in the clip. This finding is consistent with research suggesting that professions of “disbelief” in evolution are an indicator of cultural identity that poses no barrier to engagement with scientific information on evolution, so long as that information itself avoids mistaking the extraction of professions of “belief” for the communication of knowledge.

4. Engagement with the show did vary across culturally identifiable groups. The members of one cultural group, distinguished in part by their pro-technology attitudes, appeared to display less engagement with the clip than was predicted by their SCS scores. This finding furnishes at least some support for the conjecture that some fraction of the potential audience for science documentary programming is discouraged from viewing it by uncongenial cultural meanings collateral to the science content of such programming.

5. But additional, more fine-grained analysis of the data is necessary. In particular, the science-communication-professional members of the research team must formulate concrete, alternative hypotheses about the identity of culturally identifiable groups who might well be responding negatively to collateral cultural meanings in the clip. Those hypotheses can in turn be used by the science-of-science-communication team members to develop more fine-tuned cultural profiles that can be used to probe such conjectures.

6. Depending on the results of these additional analyses, next steps would include experimental testing that seeks to modify collateral meanings or cues in a manner that eliminates any disparity in engagement among individuals of diverse cultural identities who share a high level of curiosity about science.
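Conclusion no. 2 describes an "Engagement Index" combining self-report and behavioral measures. The Report identifies the ingredients but the aggregation shown here is only a plausible sketch, not the study's published formula: a standard approach is to z-score each measure and average them per subject (all data below are hypothetical):

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize a list of measurements to mean 0, SD 1."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def composite_index(*measures):
    """Average the z-scored measures subject-by-subject."""
    standardized = [zscores(m) for m in measures]
    return [mean(vals) for vals in zip(*standardized)]

# Hypothetical data for five subjects: a self-report interest rating
# (1-7), viewing duration in seconds, and a 0/1 indicator for
# requesting access to the remainder of the documentary.
self_report = [7, 5, 2, 6, 3]
view_time   = [600, 420, 90, 550, 200]
requested   = [1, 1, 0, 1, 0]

ei = composite_index(self_report, view_time, requested)
```

Standardizing first keeps any one measure (say, viewing time, with its large raw scale) from dominating the composite.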

 

 

Wednesday
Jan 13, 2016

"SCS_1.0": Measuring science curiosity

Yesterday, I discussed how evolution "believers" and "nonbelievers" reacted to a cool evolution-science documentary. The data I described came from Study No. 1 of the Annenberg Public Policy Center/CCP "Evidence-based Science Filmmaking Initiative" (ESFI).

That data suggested that "belief" in evolution wasn't nearly as important to engagement with the documentary (Your Inner Fish, an award-winning film produced by ESFI collaborator Tangled Bank Studios) as was science curiosity.

Today I'll say a bit more about how we measured science curiosity.

Developing a valid and reliable science curiosity scale was one of the principal aims of Study No. 1.  As conceptualized here, science curiosity is not a simple transient state (Loewenstein 1994) but instead a general disposition, variable in intensity across persons, that reflects the motivation to seek out and consume scientific information for personal pleasure.

Obviously, a measure of this disposition would furnish science journalists, science filmmakers, and related science-communication professionals with a useful tool for perfecting the appeal of their work to those individuals who value it the most. But it could also make myriad other contributions to the advancement of knowledge. 

A valid science curiosity measure could be used to improve science education, for example, by facilitating investigation of the forms of pedagogy most likely to promote its development and harness it to promote learning (Blalock, Lichtenstein, Owen & Pruski 2008). Those who study the science of science communication (Fischhoff & Scheufele 2013; Kahan 2015) could also use a science curiosity measure to deepen their understanding of how public interest in science shapes the responsiveness of democratically accountable institutions to policy-relevant evidence.

Indeed, the benefits of measuring science curiosity are so numerous and so substantial that it would be natural to assume researchers must have created such a measure long ago.  But the simple truth is that they have not. 

“Science interest” measures abound. But every serious attempt to assess their performance has concluded that they are psychometrically weak and, more important, not genuinely predictive of what they are supposed to be assessing—namely, the disposition to seek out and consume scientific information for personal satisfaction (Blalock et al 2008; Osborne, Simon & Collins 2003).

ESFI’s “Science Curiosity Scale 1.0” (SCS_1.0) is an initial step toward filling this gap in the study of science communication.  The items it comprises, and the process used to select (and combine) them, self-consciously address the defects in existing scales.

One of these is the excessive reliance on self-report measures. Existing scales relentlessly interrogate the respondents on the single topic of their own attraction to or aversion toward information on scientific discovery: “I am curious about the world in which we live,” “I find it boring to hear about new ideas,” “I get bored when watching science programs on TV,” etc.  Items like these are well-known to elicit responses that exaggerate respondents’ possession of desirable traits or attributes.

To counteract this dynamic, SCS_1.0 disguises its objectives by presenting itself as a general “marketing” survey.

Individual self-report items relating specifically to science were thus embedded in discrete blocks or modules, each consisting of ten or more items relating to an array of “topics” that “some people are interested in, and some people are not.” Items were presented in random order, each on a separate screen. 

There was thus no reason for subjects to suspect that their motivation to learn about science was of particular interest, nor any opportunity for them to adjust the responses across items in a manner that overstated their interest in it.  A similar strategy was used to gather information on behavior reflecting such an interest, including visits to science museums, attendance at public science lectures, and the reading of books on scientific discovery.

SCS_1.0 also featured an objective performance measure. 

Well into the survey, subjects were advised that we were interested in their reactions to a news story “of interest” to them.  In order to assure that the story was one that in fact matched their interests, they were furnished with discrete news story sets, the shared subject matter of which was identified by a header and reinforced by the individual story headlines and graphics. One set consisted of science stories; the others consisted of stories on popular entertainment, on sports, and on financial news. 

Subjects, we anticipated, were likely to find the prospect of reading a story and answering questions about it burdensome.  Accordingly, the selection of the science set rather than one of the others would be a valid indicator of genuine science interest. Responses to this task were then used to validate the self-reported interest items, helping to furnish assurance of the genuineness of the latter.

When combined, the items displayed the requisite psychometric properties of a valid and reliable scale.  Their unidimensional covariance structure warranted the inference that they were measuring the same latent disposition.  Formed with item response theory, the composite scale weighted particular items in relation to the level of the disposition that responses to them evinced. The result was an index—SCS_1.0—that reflected a high degree of measurement precision along the entire population distribution of that trait (Embretson & Reise 2000).
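A full item-response-theory fit is beyond a short sketch, but the most basic internal-consistency check behind a claim of "unidimensional covariance structure" can be illustrated with Cronbach's alpha. The data and the choice of alpha here are my illustrative stand-ins, not the study's actual estimator:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a respondents-by-items response matrix."""
    k = len(responses[0])                      # number of items
    items = list(zip(*responses))              # column-wise view
    item_var_sum = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses from four subjects to three curiosity items.
# Perfectly consistent responding yields alpha = 1.0; real scales
# typically aim for alpha above roughly 0.7.
responses = [
    [1, 1, 1],
    [2, 2, 2],
    [3, 3, 3],
    [4, 4, 4],
]
print(cronbach_alpha(responses))  # → 1.0
```

IRT goes further than alpha by weighting items according to how sharply they discriminate at different levels of the latent trait, which is what yields precision across the whole population distribution.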

Finally, and most importantly, SCS_1.0 was behaviorally validated.

As detailed in ESFI Study Report No. 1, subjects were instructed to watch a 10-minute clip from the science documentary Your Inner Fish.  SCS_1.0 strongly predicted engagement with the clip as reflected not only in self-reported interest but also in objective measures such as duration of viewing time and subjects’ election (or not) to be furnished free access to the documentary as a whole.

SCS_1.0 is by no means understood to be an ideal science curiosity measure.  Additional testing is necessary, both to assure the robustness of the scale and to refine its powers to discern the motivation to seek out and consume science information for pleasure.

Moreover, SCS_1.0 was self-consciously designed to assess this disposition in adult members of the public; variants would be appropriate for specialized populations including elementary or secondary school students.

But what SCS_1.0 does do, we believe, is initiate a process that there's every reason to believe will generate measures of genuine value to researchers interested in assessing science curiosity in the general public and in specialized subpopulations.  The researchers associated with CCP’s ESFI and other evidence-based science communication initiatives are eager to participate in that process.  But they are also eager to stimulate others to participate in it either by building on and extending SCS_1.0 or by developing alternatives that genuinely predict behavior that manifests the motivation to seek out and consume scientific information.

Existing “science interest” measures just don’t do that.  SCS_1.0 shows that it is possible to do much better.

References

Besley, J.C. The state of public opinion research on attitudes and understanding of science and technology. Bulletin of Science, Technology & Society, 0270467613496723 (2013).

Blalock, C.L., Lichtenstein, M.J., Owen, S., Pruski, L., Marshall, C. & Toepperwein, M. In Pursuit of Validity: A comprehensive review of science attitude instruments 1935–2005. International Journal of Science Education 30, 961-977 (2008).

Embretson, S.E. & Reise, S.P. Item response theory for psychologists (L. Erlbaum Associates, Mahwah, N.J., 2000).

Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).

Kahan, D.M. What is the "science of science communication"? J. Sci. Comm, 14, 1-12 (2015).

Loewenstein, G. The psychology of curiosity: A review and reinterpretation. Psychological Bulletin 116, 75 (1994).

National Science Foundation. Science and Engineering Indicators, 2010 (National Science Foundation, Arlington, Va., 2014).

Osborne, J., Simon, S. & Collins, S. Attitudes towards science: A review of the literature and its implications. International Journal of Science Education 25, 1049-1079 (2003).

Reio, T.G., Petrosko, J.M., Wiswell, A.K. & Thongsukmag, J. The measurement and conceptualization of curiosity. The Journal of Genetic Psychology 167, 117-135 (2006).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).

 

 

 

Monday
Jan 11, 2016

The (non)relationship between "believing in" evolution and being engaged by evolutionary science

Are Americans who “disbelieve in” human evolution as likely as those who “believe in” it to be interested in a science documentary on our species’ natural history? Would they accept the evidence in such a documentary as valid and convincing?

“No” and “no” would seem to be the obvious answers.  It’s not as if those who reject human evolution just haven’t been shown the proof yet. However skillfully presented, then, another exposition of evolutionary science, one might think, would be more likely to antagonize them than to pique their interest.

But Study 1 in CCP’s Evidence-based Science Filmmaking Initiative suggests that things aren’t that simple.

The study involved a nationally representative sample of 2500 U.S. adults.  In line with national survey findings that haven't changed for decades (Newport 2014), about 40% of the subjects selected “false” in response to the survey item “Human beings evolved from an earlier species of animal.”

Study subjects were instructed to view as much or as little as they chose of a 10-minute science documentary segment.  The segment was excerpted from Your Inner Fish, an award-winning documentary on evolution that was produced by ESFI collaborator Tangled Bank Studios and that was broadcast on PBS in 2014. The excerpt in question examined the origins of color vision in humans.

The study also measured subjects’ science curiosity and science comprehension. Both of these dispositions were positively correlated with subjects’ acceptance of evolution. But the strength of the relationships was quite modest.  Among those who “believed” in evolution and among those who did not, there were ample numbers of study subjects high in science comprehension and science curiosity, and ample numbers of people who were high in neither.

Unsurprisingly, those subjects who ranked highest in science curiosity were substantially more engaged by the segment.  The more curious subjects were, the more likely they were to watch all or a substantial portion of it; to report finding it interesting; and to supply the information necessary to receive free access to the remainder of the documentary (responses aggregated to form an "Engagement Index").

The intensity of the relationship between curiosity and engagement was no less pronounced, moreover, in subjects who said they did not “believe in” evolution than it was among those who said they did.  Low-curiosity  evolution “disbelievers” were in fact slightly less engaged than low-curiosity “believers.”  But neither of those low-curiosity subgroups was nearly as engaged by the clip as were evolution “nonbelievers” who scored high on the science curiosity scale. 

This is evidence, then, that yes, an evolution “nonbeliever” can enjoy an evolution-science documentary—one that uses experiments on monkeys no less to support inferences about the impact of random mutation, natural selection, and genetic variance on modern humans’ perception of color. 

How much an evolution “nonbeliever” will enjoy this documentary depends, the study suggests, on exactly the same thing that an evolution “believer’s” level of enjoyment does: how motivated he or she is to seek out and consume information on science for personal satisfaction--or in a word, how curious that person is about science.

Can an evolution “nonbeliever” find the evidence presented in such a documentary both valid and convincing?

The answer to this question is also "yes"—particularly if he or she is generally curious about science.

A low-curiosity evolution “nonbeliever” was about as likely to disagree as he was to agree that the clip was “convincing,” and that it “supplied strong evidence of how humans acquired color vision.”  But the probability a high-curiosity “nonbeliever” would agree with these characterizations of the validity of the information in the segment was well over 75%.

Note, though, that the curious “nonbelievers” who indicated that they found the evidence “strong” and “convincing” did not “change their minds” on human evolution. 

Is that surprising? It won’t be to anyone familiar with empirical study of the relationship between professions of “belief” in evolution and comprehension of science.

That research consistently finds no correlation between how people respond to “true-false” human-evolution survey items and their ability to give a cogent account of natural selection, genetic variance, and random mutation (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990). 

Researchers also find that students who say they don’t believe in evolution can learn these important insights just as readily as those who say they do believe in it—as long as the teacher doesn’t make the mistake of conveying that the point of the instruction is to extract a profession of “belief” from the former, a style of pedagogy that needlessly pits students’ interest in learning against their interest in being faithful to their cultural identities (Lawson & Worsnop 1999).

What people say they “believe” about human evolution doesn’t indicate what people know; it expresses who they are, culturally speaking (Long 2011). 

Professing rejection of evolution coheres with a cultural style that features religiosity (Roos 2012). It is precisely because the answer “false” signifies their defining commitments that individuals with this identity balk when educators make the mistake (itself a sign of inattention to empirical research) of conflating transmission of knowledge with extracting professions of “belief” in it. 

When put in the position of having to choose between being who they are and expressing what they know, free, reasoning people understandably opt for the former (Hameed 2015). Indeed, they can be expected to dedicate all of their reasoning proficiency to doing so: the higher the science literacy score of someone who subscribes to a religious cultural identity, the more likely he or she is to respond negatively to the “true-false” survey item “human beings evolved from an earlier species of animal” (Kahan 2015).

Our study captured this form of identity-protective cognition, too.

Again, science curiosity was positively correlated with levels of engagement and with levels of perceived validity for both evolution believers and evolution nonbelievers.  But this was not the case for science comprehension: as subjects’ scores on the Ordinary Science Intelligence assessment test (Kahan in press; Kahan, Peters et al. 2012) increased, evolution believers became more engaged and more convinced by the clip, while evolution disbelievers became less so.

This result was driven by the negative reactions of evolution nonbelievers who were simultaneously high in science comprehension and low in science curiosity. These study subjects were by far the least engaged by the clip and the least likely to view the evidence it presented as valid.

Nonbelievers who scored high on both the science curiosity and science comprehension scales, in contrast, were highly engaged by the documentary segment and highly likely to deem it a strong and convincing account of the origins of human color vision.

People use their reason for multiple ends. One of these is to form the dispositions and attitudes that enable them to reliably experience and express their commitment to a shared way of life.  Another of these is to attain goals—from personal health to professional success—that can be effectively achieved only with what science knows (Kahan 2015).

People who are curious about science have a goal that those who aren’t curious don’t: to satisfy their appetite to understand the insights generated by use of science’s signature methods of observation, measurement, and inference. EFSI Study 1 shows that such a person can satisfy that goal by enjoying a skillfully made science documentary about evolution even if she has an identity that is itself enabled by professing “disbelief” in it. 

In this respect, the results of the study are in line with those that show that individuals who hold a religious identity associated with disbelief in evolution can still learn what science knows about the natural history of human beings and, if they choose, even use that knowledge to engage in activities, such as the practice of medicine or scientific research, that are uniquely enabled by such knowledge (Lawson & Worsnop 1999; Everhart & Hameed 2013).

People who are low in science curiosity can be expected to engage information on it for one purpose only: to be the sorts of persons, culturally speaking, enabled by their respective states of “belief” or “disbelief.”  Making use of information for that end is another one of the things people can do even better if they possess the sort of reasoning proficiency associated with high science comprehension.  Accordingly, individuals who scored high in science comprehension but low in science curiosity (the two dispositions are only weakly correlated) predictably formed attitudes—of “engagement” and “acceptance”—that accurately manifested their cultural identities.

What to make of all this?

Well, for one thing, it is very much worth acknowledging that this interpretation of the data from ESFI Study No. 1, like all interpretations of any data, is provisional.  Additional studies, additional evidence might well furnish grounds for revising this understanding.

But it’s also very worth pointing out that the engagement enjoyed by science-curious evolution “nonbelievers,” as well as the experience of edification reflected in their response to the study's “accuracy” items, belies the simple—indeed simplistic—picture of how those who profess any particular “position” on evolution feel about science. 

In particular, it is wrong to infer that those who profess nonacceptance necessarily lack either the desire to know or the capacity to experience awe and wonder at the knowledge human beings have acquired through science, including the astonishing insights into their own natural history.

Because science curiosity does not discriminate on the basis of cultural identity, it would be a mistake for anyone who is genuinely committed to communicating science in a culturally pluralistic society to adopt a style of discourse that forces curious, reflective people to choose between satisfying their appetite to know what’s known to science and being the sort of person that they are.

References

Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Cultural Cognition Project, Evidence-based Science Filmmaking Initiative Study No. 1 (2015).

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evolution: Educ. & Outreach 6, 1-8 (2013).

Hameed, S. Making sense of Islamic creationism in Europe. Public Understanding of Science 24, 388-399 (2015).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

 Kahan, D.M. "Ordinary Science Intelligence": A Science Comprehension Measure for the Study of Risk and Science Communication. J. Risk Res. (in press).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).

Long, D.E. Evolution and Religion in American Education: An Ethnography (Springer, Dordrecht, 2011).

Newport, F. In U.S., 42% Believe in Creationist View of Human Origins. Gallup (June 14, 2014), http://www.gallup.com/poll/170822/believe-creationist-view-human-origins.aspx.

Roos, J.M. Measuring science or religion? A measurement analysis of the National Science Foundation sponsored science literacy scale 2006–2010. Public Understanding of Science  (2012).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006). 

 

Sunday
Jan102016

It's here: Annenberg/CCP "Evidence-based Science Filmmaking Initiative"

As the 14 billion readers of this blog can attest, when I say I'm going to do something "tomorrow" or "Monday" or "soon" or "June 31"-- I'm not kidding around: I mean "tomorrow" or "Monday" or "soon" or "June 31" or whatever the heck I said.

So just as promised "yesterday" [not counting the weekend] & foreshadowed "not so long ago" [the conjugate of "soon"] ...

Here is CCP's new Evidence-based Science Filmmaking Initiative!  (aka "Science of science filmmaking" initiative--title soon to be put to a vote on this site)!

I'm not going to say a lot at this point.  For one thing, there's plenty of material emanating from the "project page," so you can just poke around yourself all day on your own.

Also there's the Initiative's first "Report."  It describes the results of a big preliminary study aimed at investigating the "Missing Audience Hypothesis" (a conjecture that was in fact featured in an earlier blog post and that provoked a pretty interesting discussion).

The study had all kinds of cool things in it, including a "Science Curiosity Scale" (SCS), which was self-consciously designed to remedy (or at least start to remedy) the defects in existing measures. As discussed previously in this blog (as I've mentioned innumerable times, I am loath to repeat myself in my posts, but I'll make an exception here), existing "science curiosity" measures are dominated by ill-formulated self-report items that exhibit lousy psychometric performance and that have never been shown to predict behavior evincing an interest in science.

Our "SCS" index includes some self-report measures (discretely bundled in with numerous other types of items of the sort that one might expect to see if one were participating in consumer-marketing survey), but it combines them them with performance and behavioral ones.

To validate SCS, we--the ESFI science filmmaking professionals and "science of science communication" researchers who collaborated on this study-- assessed its power to predict the level of subject engagement (also behaviorally measured) with a segment of a cool science documentary, Your Inner Fish, produced by ESFI collaborator Tangled Bank Studios.

The study also found some other really really cool things, including how engagement interacted with "belief in" evolution and science comprehension.  

But I'll spare you the details.

Why? Because they are summarized in the "project pages," and spelled out in even greater detail in the Report, which of course, you can download!

I'll also say more on various of these matters in subsequent posts, which will supplement the analyses and interpretations in the project pages and Report.  

In case you haven't noticed, I'm loath ever to repeat myself in this blog.  So I will hold back for now.

And say more "tomorrow."

But by all means, feel free to offer your own views on any of the materials that appear in the Report or the sections of the site dedicated to ESFI, whose members consist of both accomplished science communication professionals and empirical researchers, all eager to explore the integration of the science of science communication into the craft of science filmmaking.

Saturday
Jan092016

Weekend update: the anti- "fact inventory conception of science literacy" movement is gaining ground on Tea Party & Trascism [Trump+Fascism]; to eclipse them, only thing it needs is a catchier name!

A friend pointed me toward this really interesting article:

The nerve of the piece is a critique of the "fact inventory" conception of science comprehension that informs the NSF's Science Indicators' battery:

The bigger issue, however, is whether we ought to call someone who gets those questions right “scientifically literate.” Scientific literacy has little to do with memorizing information and a lot to do with a rational approach to problems....

[T]he interpretation of data requires critical thinking.... Our schools don’t train people to be vigilant about avoiding errors such as confounding correlation and causation, however, nor do they do a good job of rooting out confirmation bias or teaching the basics of statistics and probabilities. All of this leads to the propagation of a lot of nonsense in the press and internet, and it leaves people vulnerable to the flood of “facts.”

It’s not possible for everyone—or anyone—to be sufficiently well trained in science to analyze data from multiple fields and come up with sound, independent interpretations. I spent decades in medical research, but I will never understand particle physics, and I’ve forgotten almost everything I ever learned about inorganic chemistry. It is possible, however, to learn enough about the powers and limitations of the scientific method to intelligently determine which claims made by scientists are likely to be true and which deserve skepticism. . . . Most importantly, if we want future generations to be truly scientifically literate, we should teach our children that science is not a collection of immutable facts but a method for temporarily setting aside some of our ubiquitous human frailties, our biases and irrationality, our longing to confirm our most comforting beliefs, our mental laziness. Facts can be used in the way a drunk uses a lamppost, for support. Science illuminates the universe.

Wow.

For sure I couldn't have said this better.  Anyone can confirm this for him- or herself by reviewing the various posts I've written criticizing the "fact inventory" conception of science literacy and defending an "ordinary science intelligence" alternative that features the types of critical reasoning proficiencies essential to recognizing and making use of valid scientific evidence.

Maybe I'm jumping the gun, but I hope this thoughtful and reflective article is a harbinger of more of the same, and the beginning of a wider discussion of this problem.

If I have any quibble with Teller's argument, though, it is over what the nature of the problem actually is.

Teller starts with the premise that the U.S. public has a poor comprehension of science and attributes this to the "fact inventory" conception of science literacy.

She might be right-- but I'm not sure.

I'm not sure, that is, that the American public's science comprehension is as poor as she assumes it is. The reason I'm not sure is that I don't think we've been assessing the general public's science comprehension with a valid measure of that capacity -- one that features critical reasoning proficiencies rather than a "fact inventory"!

Developing a public science comprehension measure focused on the reasoning proficiencies that Teller convincingly emphasizes has been one focus of CCP research over the last few years.  The progress made so far in that effort is reflected in the current version, "2.0," of the "Ordinary Science Intelligence" assessment test (Kahan in press).

As discussed in previous posts, OSI_2.0 doesn't try to certify respondents' acquisition of any set of canonical "factual" beliefs. 

Instead, it uses quantitative and critical reasoning items that are intended to assess a latent or unobserved disposition suited for recognizing and making appropriate use of valid empirical evidence in one's "ordinary," everyday life as a consumer, a participant in today's economy, and as a democratic citizen.
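
To make "latent or unobserved disposition" a bit more concrete: one routine check when validating any such scale is internal consistency, conventionally summarized with Cronbach's alpha. The sketch below is a toy illustration with made-up item responses; none of it comes from the actual OSI studies.

```python
# Toy sketch: Cronbach's alpha, a standard internal-consistency check
# used in latent-trait scale validation. Illustration data only.

def cronbach_alpha(responses):
    """responses: list of per-respondent item-score lists."""
    k = len(responses[0])  # number of items

    def variance(xs):      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 responses: six respondents, four items.
data = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(data), 3))
```

Higher alpha indicates the items hang together as if driven by one underlying disposition, which is necessary (though not sufficient) for treating the scale as measuring a single latent capacity.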

Since at least 1910 (my memory is hazy for events earlier than that), when Dewey published his famous "Science as Subject-Matter and as Method," the idea that science pedagogy should be focused on cultivating the distinctive reasoning proficiencies associated with making valid inferences from reliable observations has exerted a powerful force on the imaginations and motivations of a good number of educators and scholars (today I think of Jon Baron (1993, 2008) as the foremost champion of this view).

One thing they've learned is that imparting this sort of capacity is easier said than done!

But in any event, they are right -- as is Teller -- that this kind of thinking disposition is the proper object of science education.

The much more pedestrian point I find myself making now & again is that we really don't have a good general public measure of this capacity -- and so aren't even in a good position to figure out how well or poorly we are doing in equipping citizens with it.

Necessarily, too, without such a good measure, we won't be as smart as we ought to be about what contribution defects in science comprehension are making, if any, to public controversies over climate change, nuclear power, the HPV vaccine, and other issues that turn on decision-relevant science.

Teller cites the 2012 CCP study that found that higher science literacy is associated with greater polarization, not less, on climate change risks (nuclear power ones too).

I think that study helps to show that this sort of conflict is not plausibly attributed to defects in science comprehension. Precisely b/c my collaborators and I agree with Teller that a "fact inventory" conception of "science literacy" is defective, we used a science comprehension measure-- OSI_1.0-- that combined certain NSF Indicator "basic fact" items with a Numeracy battery, which has been shown to be highly effective in measuring the capacity of ordinary members of the public & others to reason well with quantitative information. 

People who scored high on that critical reasoning measure still polarized on climate change.
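
The logic of that check can be stated in a few lines of code. This is a toy sketch with invented numbers (not CCP data): split respondents by science comprehension, then compare the left-right gap in risk perception within each group.

```python
# Toy sketch: does the left-right gap in risk perception widen as
# science comprehension rises? All values are invented illustrations.

# (comprehension, ideology, risk_perception); ideology "L"/"R",
# risk perception on a 0-10 industrial-strength-style scale.
respondents = [
    (1, "L", 6), (1, "R", 5), (2, "L", 6), (2, "R", 4),
    (8, "L", 9), (8, "R", 2), (9, "L", 10), (9, "R", 1),
]

def gap(rows):
    """Mean left-minus-right difference in risk perception."""
    left = [r for (_, i, r) in rows if i == "L"]
    right = [r for (_, i, r) in rows if i == "R"]
    return sum(left) / len(left) - sum(right) / len(right)

low = [t for t in respondents if t[0] <= 5]
high = [t for t in respondents if t[0] > 5]
print(gap(low), gap(high))  # in this toy data, the gap widens with comprehension
```

A "knowledge deficit" account would predict the second number to be smaller than the first; the studies discussed here found the opposite pattern.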

And the same is true of people who score highest even on the reasoning-proficiency-centered OSI_2.0.


Most people, sadly, don't know very much about the science of climate change.

But the few who actually can reliably identify its causes and consequences (as measured by version 1.0 of the "Ordinary Climate Science Intelligence" test, an assessment based on "climate science literacy" items drawn from NASA, NOAA, and the IPCC) are also the most politically polarized on the question of whether human activity is the principal cause of climate change -- or indeed on whether climate change is happening at all (Kahan 2015a).

That evidence has led me to conclude that the conflict over climate change (not to mention numerous other disputed issues of science) isn't about what people know.  It is about who they are: the "beliefs" people form on these issues are ones suited to helping them form affective orientations toward these issues that effectively signal their membership in & loyalty to groups embroiled in a nasty form of cultural status competition....

That problem isn't being caused by any deficiency in science education in this country.

On the contrary, that problem is preventing our democracy from getting the benefit of whatever scientific knowledge & reasoning capacity we have managed to impart in our citizens.

If we want enlightened democracy, we better figure out how to extricate science from these sorts of ugly, illiberal, reason-eviscerating forms of cultural conflict (Kahan 2015b).

Of course, these are provisional conclusions, informed by what I regard as the best available evidence.

But the best evidence available definitely isn't as good as it should be for exactly the reason that Teller describes so articulately: we don't possess as good a measure of public science comprehension as we ought to have.

This is how I put it at the end of “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change:

The scale development exercise that generated OSI_2.0 is offered as an admittedly modest contribution to an objective of grand dimensions. How ordinary citizens come to know what is collectively known by science is simultaneously a mystery that excites deep scholarly curiosity and a practical problem that motivates urgent attention by those charged with assuring democratic societies make effective use of the collective knowledge at their disposal. An appropriately discerning and focused instrument for measuring individual differences in the cognitive capacities essential to recognizing what is known to science is essential to progress in these convergent inquiries.

The claim made on behalf of OSI_2.0 is not that it fully satisfies this need. It is presented instead to show the large degree of progress that can be made toward creating such an instrument, and the likely advances in insight that can be realized in the interim, if scholars studying risk perception and science communication make adapting and refining admittedly imperfect existing measures, rather than passively employing them as they are, a routine component of their ongoing explorations.

Not as articulate as Teller-- but the best I can do! 

And hey-- if my best motivates others who can do a better job still, then I figure I'm doing my part.

References 

Baron, J. Why Teach Thinking?‐An Essay. Applied Psychology 42, 191-214 (1993).

Baron, J. Thinking and Deciding (Cambridge University Press, New York, 2008).

Dewey, J. Science as Subject-matter and as Method. Science 31, 121-127 (1910).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Kahan, D.M. What is the “science of science communication”? J. Sci. Comm. 14(3), 1-12 (2015b).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res. (in press).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

 

Friday
Jan082016

Prepare yourself ... CCP's Evidence-based Science Filmmaking Initiative

I told you "more soon"; well soon is going to be like Monday ...

 

Thursday
Jan072016

Still time to get your "entry" in for MAPKIA #939! But hurry!

[Photo caption: 5-time MAPKIA winner, @Mw. Sure she uses performance-enhancing drugs -- & so can you!]

In recognition of the impact that the Macao internet outage has had on the posting of entries in the ongoing "MAPKIA!" contest, we are extending the time for posting entries.  

Besides, literally 10^3s (figuratively speaking) of entries have been delivered offline by emails, fedex deliveries, telegraphs, mental telepathy & other alternative channels during the outage (thank goodness they found the squirrel who was gnawing on the internet tubes and relocated him to one of the nation's 10^3s of "wildlife" preserves (figuratively speaking)). It's going to take me a while to process all of them!

So just go to the "comments" section for the post & make your own predictions (supported by a "cogent" theory)--right now, while there is still time to compete for the fame & notoriety--not to mention cool prizes!--that winning a MAPKIA confers!

Wednesday
Jan062016

Join the SBST Team: Neither nudge nor shove will stop us from improving your life (whether you are aware of it or not) in 2016 & beyond!

[Photo caption: click me -- I need your attention!]

Wow, I got a cool email announcement about a "one year fellowship" position in the White House Social and Behavioral Sciences Team (SBST). 

The "Team's" mission is to use behavioral economics--primarily of the "nudge" variety--to steer people into making decisons that mesh better with one or another government program aimed at improving a variety of social and economic outcomes, from the proportion of peple obtaining higher education to the proportion of small businesses that keep afloat; from living a more healthy life to availing oneself of myriad govt benefits etc.

Interesting stuff.

But what struck me is the casual assumption that SBST is going to happily outlive the Obama Administration.

Obama is a classic "University of Chicago Democrat"--someone who substitutes for the old-style passion of New Deal liberalism a cool confidence in technocratic management strategies, many of which tweak but don't fundamentally question "private orderings" as a means of promoting collective wellbeing (distributional justice, an aim of old-style New Deal Democrat liberalism just as fundamental as collective well-being, has shrunk in importance to near invisibility in the U of C Democratic program).

This is Cass Sunstein's liberalism, not John Kenneth Galbraith's, much less Ted Kennedy's!

But the vision of U of C Democrats is if anything even more obnoxious to the "Chicago School"  neo-liberals and the dyed-in-the-wool social conservatives that cohabit, albeit often uneasily, in the Republican Party.

U of C Democrats say, "hey, we are not only going to take back some share of the profits you've made by exploiting public goods ('you didn't build that!') but we're going to do so with 'strategies' that bypass your reason, so you don't really notice & fail, as a result of 'bounded rationality', to contribute your fair share."

It's hard to think of a program more likely to make the descendants of Hayek & Ayn Rand (what a weird marriage! & what a weird brood of offspring!) see red(s)!

That's one of things that makes the "Fellowship" so damn interesting!

"One year, beginnign in October 2016," you say...

The basis for the "SBST" is an Obama Executive Order that directs all executive agencies to "identify policies, programs, and operations where applying behavioral science insights may yield substantial improvements in public welfare, program outcomes, and program cost effectiveness" and " develop strategies for applying behavioral science insights to programs and, where possible, rigorously test and evaluate the impact of these insights."

To implement this directive, the Nudge Order directs the SBST (also created by the Obama White House) to issue "agencies ... advice and policy guidance to help them execute policy objectives."

This "Nudge Order" (let's call it that; snappier than "Executive Order--Using Behavioral Science Insights to Better Serve the American People") seems to be patterned on the Reagan Executive Order that mandated all executive agencies (only a fraction, actually of the agencies that have been authorized by Congress to engage in significant regulatory activity) submit their proposed regulations to the Office of Management & Um... are those "scrubbing bubbles"?...Budge for "cost benefit analysis."

Decried at the time by traditional New Deal liberal Democrats, that Reagan order is one that U of C Democrats have actually grown to like a lot & have even proposed extending!

But I have a feeling that the next President, if he or she is a Republican, isn't going to reciprocate the love when it comes to Obama's "Nudge Order."

Pretty clear, I think, that neither a President Trump nor a President Cruz--both of whom seem to look to a very different source for their "strategies" for "managing" public opinion-- would have much use for the Nudge Order or the apparatus that carries it out.

But I doubt that a President Fiorina, a President Rubio, a President Bush, a President Christie, a President Carson, or a President Paul would either. (I'm sure I'm forgetting somebody-- but who has the memory capacity to keep track of all of them?)

I don't know what a President H.R. Clinton would think--but I would note that President W.J. Clinton was the first & remains the model U of C Democrat President.

I know for sure what President Sanders would do w/ the Nudge Order and SBST--and well before Oct. 2017.

So, this is a cool position -- not only b/c the normal job description is interesting but b/c it's certain to be interesting to be "on hand" to witness the Nudge Order "in transition."

Oh, but I've decided not to apply.  I like what I'm doing just fine!

Tuesday
Jan052016

MAPKIA episode #939: What does the Pew "Malthusian Worldview" item predict?!

[Photo caption: Winner's prize: Vintage Cultural Cognition Project Lab Jersey! (Subject to availability)]

HEY EVERYONE--guess what!

It's time for the first "MAPKIA!"! ["Make a prediction, know it all!"] episode of 2016!

Yup--this wildly popular feature of the CCP Blog—the #1 most popular game show in Macao for two years running—has been renewed for another season!

Score!

It’s of course inconceivable that anyone doesn’t know the rules, and I don’t mean to insult anyone’s intelligence, but legal niceties do require me to post them before every contest. So here they are:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or this or some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)

Actually, though, the rules are being significantly modified for this particular episode!  The question I’m going to pose has to be answered with data from Pew’s big hit “Public vs. the ‘Scientists’ ” Report from last yr.

[Photo caption: Ooooooo ... Pew on science literacy & polarization data! Yummy!]

As you likely all realize, I’ve been going on & on since last yr about the fun that can be had poking around in the “public” portion of Pew’s report.

In previous posts, I showed that the data in Pew’s study (for the public respondents; the data for the AAAS members who formed the “scientist” sample hasn’t been released, at least not yet...) corroborates the usual story about politically disputed risks: namely, that as science literacy goes up, cultural polarization (measured by one or another proxy for cultural identity) intensifies.

Well, the study also has some interesting “science attitude” items, one of which is this:

I’m going to call this the Pew “Malthusian worldview” item.

“What do you think,” the question effectively asks,

are we in fact just like all the other stupid animals who keep multiplying in number and engorging themselves on all their foodstuffs and other necessary resources until they crash, calamitously, over the top of the Malthusian curve in some massive die off? Or are human beings special precisely because their reason allows them to keep shifting the curve through technological innovation?

Consider climate change to be history’s “biggest ‘I told you so’ ” confirmation of what “Marx wrote about capitalism’s ‘irreparable rift’ with ‘the natural laws of life itself’ ” and what “indigenous peoples" have been "warning[] about the dangers of disrespecting ‘Mother Earth’ [since] long before that”? 

Then answer “2” is for (or just is) you!

Alternatively, when you hear someone talking like that, do you want to let out a primal WME “hell noooo!”?  Are you thinking,

Right! These are the same fools who told us that we couldn’t have a city more populous than 200,000 people or we’d be choking to death on our own excrement! Well, thanks to the advent of modern sanitation systems, reinforced with related advances in public health, we can safely inhabit cities orders of magnitude larger and more dense than the ones whose residents regularly succumbed to devastating outbreaks of cholera in the 19th century.

Sure, we'll face some new challenges but we’ll just blast our shit into outer space & everything will be fine-- just you watch & see!

 Hey—did you hear about those cool mirror-coated nanotechnology flying saucer drone things that automatically levitate up to just the right altitude to reflect the sunlight necessary to neutralize climate change & keep temperatures here on earth a comfortable 72 degrees everywhere yr ‘round?

This changes ... nothing!

That's answer number "1" talking!

So the question is, should we expect the Pew item to tap into those two opposing mindsets?

Specifically,

How powerfully (if at all) will responses to the Pew Malthusian Worldview item predict beliefs and attitudes toward technological and environmental risks like climate change, fracking, nuclear power, and GM foods?  Will it be a stronger predictor than political partisanship? Will responses interact with—or essentially amplify—the explanatory power of political ideology and party identification? 

What will the relationship be between the Malthusian Worldview item and science literacy? Will responses be correlated with it—and if so in which direction? Will higher science literacy magnify the correlation between responses to the Malthusian Worldview item and opposing perceptions of environmental and technological risks--just as higher science comprehension magnifies cultural polarization on climate change, nuclear power, fracking, and the like?
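
If it helps any contestants, here is one bare-bones way an entrant might operationalize the "stronger predictor than political partisanship" question: compare zero-order correlations. All numbers below are invented placeholders; a real entry would of course use the Pew dataset items themselves.

```python
# Toy sketch: compare how strongly two candidate predictors correlate
# with a risk-perception measure. Invented illustration values only.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses for eight survey takers.
malthus = [1, 1, 2, 2, 1, 2, 2, 1]  # 1 = "we'll innovate", 2 = "we'll strain resources"
party   = [1, 2, 5, 6, 2, 7, 6, 3]  # e.g., a 1-7 partisanship scale
risk    = [2, 3, 8, 9, 2, 9, 7, 4]  # e.g., a 0-10 climate risk perception item

print(round(pearson(malthus, risk), 2), round(pearson(party, risk), 2))
```

The interaction questions would call for more than zero-order correlations (e.g., estimating the item's effect separately within science-literacy strata), but this is the natural first pass.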

Perhaps my framing of the question implies an answer.  But if you think I have one, then obviously mine could be wrong!

“Make a prediction know it all”—and explain cogently the reasoning for it and how one might test your conjecture with Pew dataset items, which have been featured in previous posts and are set forth in their entirety at the Pew site.

Here’s your chance not only to win a great prize but also to demonstrate to all the schoolchildren in Macao and to billions of other curious and reflective people everywhere that you, unlike everybody else, really know what the hell you are talking about when it comes to making sense of public perceptions of risk.

Just post your prediction, & take a stab at specifying a testing strategy, in a comment below.  I'll do the analyses & we'll see what you got!

It's that friggin' simple!

Ready ... set ... MAPKIA!

Sunday
Jan032016

"Enough already" vs. "can I have some more, pls": science comprehension & cultural polarization

The motivation for this post is to respond to commentators--@Joshua & @HalMorris--who wonder, reasonably, whether there’s really much point in continuing to examine the relationship between cultural cognition & like mechanisms, on the one hand, and one or another element of science comprehension (cognitive reflection, numeracy, “knowledge of basic facts,” etc.), on the other.

They acknowledge that evidence that cultural polarization grows in step with proficiency in critical reasoning is useful for, say, discrediting positions like the “knowledge deficit” theory (the view that public conflicts over policy-relevant science are a consequence of public unfamiliarity with the relevant evidence) and the “asymmetry thesis” (the position that attributes such conflicts to forms of dogmatic thinking distinctive of “right wing” ideology).

But haven’t all those who are amenable to being persuaded by evidence on these points gotten the message by now, they ask?

I agree that the persistence of the “knowledge deficit” view and to a lesser extent the “asymmetry thesis” (which I do think is weakly supported but not nearly so unworthy of being entertained as “knowledge deficit” arguments) likely don’t justify sustained efforts at this point to probe the relationship between cultural cognition and critical reasoning.

But I disagree that those are the only reasons for continuing with—indeed, intensifying—such research.

On the contrary, I think focusing on science comprehension is critical to understanding cultural cognition; to forming an accurate moral assessment of it; and to identifying appropriate responses for managing its potential to interfere with free and reasoning citizens’ attainment of their ends, both individual and collective (Kahan 2015a, 2015b).

I should work out more systematically how to convey the basis of this conviction.

But for now, consider these “two conceptions” of cultural cognition and rationality. Maybe doing so will foreshadow the more complete account—or better still, provoke you into helping me to work this issue out in a way that satisfies us both.

1. Cultural cognition as bounded rationality. Persistent public conflict over societal risks (e.g., climate change, nuclear waste disposal, private gun possession, HPV immunization of schoolgirls, etc.) is frequently attributed to overreliance on heuristic, “System 1” as opposed to conscious, effortful “System 2” information processing (e.g., Weber 2006; Sunstein 2005). But in fact, the dynamics that make up the standard “bounded rationality” menagerie—from the “availability effect” to “base rate neglect,” from the “affect heuristic” to the “conjunction fallacy”—apply to people of all manner of political predispositions, and thus don’t on their own cogently explain the most salient feature of public conflicts over societal risks: that people are not simply “confused” about the facts on these issues but systematically divided on them on political grounds.

One account of cultural cognition views it as the dynamic that transforms the mechanisms of “bounded rationality” into fonts of political polarization (Kahan, Slovic, Braman & Gastil 2006; Kahan 2012). Cultural predispositions thus determine the valence of the sensibilities that govern information processing in the manner contemplated by the “affect heuristic” (Peters, Burraston & Mertz 2004; Slovic & Peters 1996). The same goes for the “availability effect”: the stake individuals have in forming “beliefs” that express and reinforce their connection to cultural groups determines what sorts of risk-relevant facts they notice, what significance they assign to them, and how readily they recall them (Kahan, Jenkins-Smith & Braman 2011). The motivation to form identity-congruent beliefs likewise drives biased search and biased assimilation of information (Kahan, Braman, Cohen, Gastil & Slovic 2010)—not only on existing contested issues but on novel ones (Kahan, Braman, Slovic, Gastil & Cohen 2009).  

2. Cultural cognition as expressive rationality. Recent scholarship on cultural cognition, however, seems to complicate if not in fact contradict this account!

By treating politically motivated reasoning--of which “cultural cognition” is one operationalization (Kahan in press_b)--as in effect a “moderator” of other more familiar cognitive biases, the “bounded rationality” conception implies that cultural cognition is a consequence of over-reliance on heuristic information processing (e.g., Lodge & Taber 2013; Sunstein 2006). If this understanding is correct, then we should expect cultural cognition to be mitigated by proficiency in the sorts of reasoning dispositions essential to conscious, effortful “System 2” information processing.

But in fact, a growing body of evidence suggests that System 2 reasoning dispositions magnify rather than reduce cultural cognition! Experiments show that individuals high in cognitive reflection and numeracy use their distinctive proficiencies to discern what the significance of crediting complex information is for positions associated with their cultural or political identities (Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).

As a result, they more consistently credit information that is identity-affirming and discount information that is identity-threatening. If this is how individuals reason outside of lab conditions, then we should expect individuals highest in the capacities and dispositions necessary to make sense of quantitative information to be the most politically polarized on facts that have become invested with identity-defining significance. And we do see exactly that--on climate change, nuclear power, gun control, and other issues (Kahan 2015a; Kahan, Peters, et al. 2012).

This work supports an alternative “expressive” conception of cultural cognition. On this account, cultural cognition is not a consequence of “bounded rationality.” It is a way of engaging information that is rationally suited to forming affective dispositions that reliably express individuals’ group allegiances (cf. Lessig 1995; Akerlof & Kranton 2000).

“Expressing group allegiances” is not just one thing ordinary people do with information on societally contested risks. It is pretty much the only thing they do. The personal “beliefs” ordinary people form on issues like climate change or gun control or nuclear power don’t otherwise have any impact on them. Ordinary individuals just don’t matter enough, as individuals, for anything they do based on their view of the facts on these issues to affect the level of risk they are exposed to or the policies that get adopted to abate it (Kahan 2013, in press_a). In contrast, it is critical to ordinary people’s well-being--psychic, emotional, and material--to evince attitudes that convey their commitment to their identity-defining groups in the myriad everyday settings in which they can be confident those around them will be assessing their character in this way (Kahan in press_b).

* * * * *

At one point I thought the first conception of cultural cognition was right. Indeed, it didn’t even occur to me, early on, that the second conception existed!

But now I believe the second view is almost certainly right. And I believe that no account that fails to recognize that cultural cognition is integral to individual rationality can make sense of it, or successfully manage the influences that create the conflict between expressive rationality and collective rationality--the conflict that gives rise to cultural polarization over policy-relevant facts.

If that’s right, then in fact the continued focus on the interaction of cultural cognition and critical reasoning proficiencies will remain essential.

So is it right? Maybe not; but the only way to figure that out also is to keep probing this interaction.

References

Akerlof, G. A., & Kranton, R. E. (2000). Economics and Identity. Quarterly Journal of Economics, 115(3), 715-753.

Kahan, D. M. (2012). Cultural Cognition as a Conception of the Cultural Theory of Risk. In R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson (Eds.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (pp. 725-760). Springer.

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.

Kahan, D. M. (2015a). Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 36, 1-43.

Kahan, D. M. (2015b). What Is the “Science of Science Communication”? J. Sci. Comm., 14(3), 1-12.

Kahan, D. M. (in press_a). The Expressive Rationality of Inaccurate Perceptions of Fact. Behavioral & Brain Sciences.

Kahan, D. M. (in press_b). The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences.

Kahan, D. M., Braman, D., Cohen, G., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law & Human Behavior, 34, 501-516.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4, 87-91.

Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural Cognition of Scientific Consensus. Journal of Risk Research, 14, 147-174.

Kahan, D. M., Peters, E., Dawson, E., & Slovic, P. (2013). Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change, 2, 732-735.

Kahan, D. M., Slovic, P., Braman, D., & Gastil, J. (2006). Fear of Democracy: A Cultural Evaluation of Sunstein on Risk. Harvard Law Review, 119, 1071-1109.

Lessig, L. (1995). The Regulation of Social Meaning. University of Chicago Law Review, 62, 943-1045.

Lodge, M., & Taber, C. S. (2013). The Rationalizing Voter. Cambridge University Press.

Peters, E. M., Burraston, B., & Mertz, C. K. (2004). An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis, 24, 1349-1367.

Slovic, P., & Peters, E. (1998). The Importance of Worldviews in Risk Perception. Risk Decision and Policy, 3, 165-170.

Sunstein, C. R. (2005). Laws of Fear: Beyond the Precautionary Principle. Cambridge University Press.

Sunstein, C. R. (2006). Misfearing: A Reply. Harvard Law Review, 119(4), 1110-1125.

Weber, E. (2006). Experience-Based and Description-Based Perceptions of Long-Term Risk: Why Global Warming Does Not Scare Us (Yet). Climatic Change, 77, 103-120.

Saturday
Jan022016

"Don't jump"--weekend reading: Do judges, loan officers, and baseball umpires suffer from the "gambler's fallacy"?

I know how desperately bored the 14 billion regular subscribers to this blog can get on weekends, and the resulting toll this can exact on the mental health of many times that number of people due to the contagious nature of affective funks. So one of my New Year's resolutions is to try to supply subscribers with things to read that can distract them from the frustration of being momentarily shielded from the relentless onslaught of real-world obligation they happily confront during the workweek.

So how about this:

We were all so entertained last year by Miller & Sanjurjo’s “Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers,” which taught us something profound about the peculiar vulnerabilities to error that super smart people can acquire as a result of teaching themselves to avoid common mistakes in interpreting random events.

So I thought, hey, maybe it would be fun for us to take a look at other efforts that try to "expose" non-randomness of events that smart people might be inclined to think are random.

Here's one:

Actually, I'm not sure this is really a paper about the randomness-detection blindspots of people who are really good at detecting probability blindspots in ordinary folks.

It's more in the nature of a paper about how expert judgment can be subverted by a run-of-the-mill (or "-mine"?) cognitive bias involving randomness--here the "gambler's fallacy": the expectation that independent random events will behave interdependently, in a manner consistent with their relative frequency; or more plainly, that an outcome like "heads" in the flipping of a coin can become "due" as a string of alternative outcomes in independent events--"tails" in previous tosses--grows in length.

CMS (Chen, Moskowitz & Shue) present data suggesting that the behavior of immigration judges, loan officers, and baseball umpires all displays this pattern.  That is, all of these professional decisionmakers become more likely than one would expect by chance to make a particular determination--grant an asylum petition, disapprove a loan application, call a "strike"--after a series of previous opposing determinations ("deny," "approve," "ball," etc.).

If you liked puzzling over the M&S paper, I predict you'll like puzzling through this one.

In figuring out the null, CMS get that it is a mistake, actually, to model the outcomes in question as reflecting a binomial distribution if one is sampling from a finite sequence of past events.  Binary outcomes that occur independently across an indefinite series of trials (i.e., outcomes generated by a Bernoulli process) are not independent when one samples from a finite sequence of past trials.

In other words, CMS avoid the error that M&S showed the authors of the "hot hand fallacy" studies made.

But figuring out how to do the analysis in a way that avoids this mistake is damn tricky.

If one samples from a finite sequence of events generated by a Bernoulli process, what should the null be for determining whether the probability of a particular outcome following a string of opposing outcomes was "higher" than what could have been expected to occur by chance?

One could figure that out mathematically....  But it's a hell of a lot easier to do it by simulation.
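Here's the flavor of what I mean (a toy sketch of my own, not CMS's actual procedure; the grant probability, record length, and streak length are all made up). Simulate lots of finite decision records generated by independent Bernoulli trials, and ask how often a "grant" follows a run of k straight "denials":

```python
import random
import statistics

random.seed(0)

# Toy null simulation: decisions are independent Bernoulli(p) "grants,"
# each decisionmaker's record is a finite sequence of n cases, and we
# ask how often a grant immediately follows a run of k straight denials.
# Averaging the per-sequence proportions -- effectively what pooling
# finite case records does -- does NOT give back p.
def null_prop_after_denial_streak(p=0.5, n=20, k=3, sims=50_000):
    props = []
    for _ in range(sims):
        seq = [random.random() < p for _ in range(n)]
        grants = total = 0
        run = 0                     # current run of consecutive denials
        for granted in seq:
            if run >= k:            # previous k decisions were all denials
                total += 1
                grants += granted
            run = 0 if granted else run + 1
        if total:                   # skip sequences with no qualifying spot
            props.append(grants / total)
    return statistics.mean(props)

est = null_prop_after_denial_streak()
print(round(est, 3))  # above p = 0.5, even though every trial is independent
```

So the right null for "more grants after denial streaks" isn't p itself; it's whatever a simulation matched to the actual sampling scheme says it is.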

Another tricky thing here is whether the types of events decisionmakers are evaluating--the merit of immigration petitions, the creditworthiness of loan applicants, and the location of baseball pitches--really are i.i.d. ("independent and identically distributed").

Actually, no one could plausibly think "balls" and "strikes" in baseball are.

A pitcher's decision to throw a "strike" (or attempt to throw one) will be influenced by myriad factors, including the pitch count--i.e., the running tally of "balls" and "strikes" for the current batter, a figure that determines how likely the batter is to "walk" (be allowed to advance to "first base"; shit, do I really need to try to define this stuff? Who the hell doesn't understand baseball?!) or "strike out" on the next pitch.

CMS diligently try to "take account" of the "non-independence" of "balls" and "strikes" in baseball, and like potential influences in the context of judicial decisionmaking and loan applications, in their statistical models. 

But whether they have done so correctly--or done so with the degree of precision necessary to disentangle the impact of those influences from the hypothesized tendency of these decisionmakers to impose on outcomes the sort of constrained variance that would be the signature of the "gambler's fallacy"--is definitely open to reasonable debate.

Maybe in trying to sort all this out, CMS are also making some errors about randomness that we could expect to see only in super smart people who have trained themselves not to make simple errors?

I dunno!

But b/c I love all 14 billion of you regular CCP subscribers so much, and am so concerned about your mental wellbeing, I'm calling your attention to this paper & asking you-- what do you think?

 

Friday
Jan012016

Critical, must-do 2016 CCP NY resolutions!

Thursday
Dec312015

Coolest article of the yr-- hot hands down!

Boy, it's not even close.

I’m going to resist summarizing Miller & Sanjurjo’s “Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers,” not only because I’ve already tried to do that multiple times –

but also because any attempt to do so results in a mental misadventure of staggering proportions.

Actually, that is what’s so cool about the article.  At least in my view. 

Like lots of other people—including, to his credit, the scholar most prominently identified with the classic “hot hand fallacy” study—I think it is really neat that M&S have re-opened the question whether the performance of athletes really does vary in patterns that defy the fluctuations one would expect to see by chance (i.e., whether NBA basketball players and others really do go on “hot streaks,” etc.).

I also am filled with admiration for their mathematical dexterity in exposing the error in the original “hot hand fallacy” research (viz., the assumption that the shooting consistency of basketball players over a finite set of observations should be measured in relation to the variance associated with a binomial distribution).
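To get a feel for that error (a toy illustration of my own, not M&S's analysis), exhaustively enumerate short fair-coin sequences and average, across sequences, the proportion of heads that immediately follow a head. Under the naive binomial assumption the answer "should" be 0.5:

```python
from itertools import product

# Average, across all equally likely fair-coin sequences of length n,
# the within-sequence proportion of flips that come up heads (1)
# immediately after a head.
def expected_prop_heads_after_head(n):
    props = []
    for seq in product([0, 1], repeat=n):
        follows = [seq[i + 1] for i in range(n - 1) if seq[i] == 1]
        if follows:                  # drop sequences where no head is followed
            props.append(sum(follows) / len(follows))
    return sum(props) / len(props)

print(expected_prop_heads_after_head(3))  # 0.41666..., not 0.5
```

That 5/12 figure--not 1/2--is the right finite-sample benchmark, which is exactly the benchmark the original hot-hand studies got wrong.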

But what really intrigues me is what M&S's accomplishment tells us about cognition.  Or really what it tells us about what we don’t know but should about how intuition and conscious reflection operate in expert judgment.

How could researchers so familiar with probability theory, and so accomplished in exposing the errors people routinely make when attempting to detect patterns in random events, fail to detect the mistaken assumption that they themselves were making about how to detect such a pattern in this particular setting?

How could the error have evaded the notice of those who reviewed their work—and much more fundamentally the notice of thousands of scholars who for decades have held up the original “hot hand fallacy” study (along with its many progeny) as the paradigmatic demonstration of a particular cognitive bias (one that no one disputes really exists) and of a method for detecting defects in human rationality generally?

Why, when they are shown incontrovertible (really!) proof of the error that the “hot hand” researchers made (and re-made over the course of numerous successor studies), do so many highly intelligent, reflective people—ones who unquestionably possess the knowledge and reasoning proficiency it takes to understand the logic of the M&S argument—so strongly and stubbornly resist accepting it before (in the vast majority of cases, at least) finally acknowledging (often with a gratifying display of appreciative surprise) that M&S are right?

What is the cognitive process, in short, that makes individuals who have cultivated the habits of mind necessary to resist commonplace but mistaken intuitions about randomness vulnerable to being misled by mistaken intuitions about randomness that only those highly proficient in reasoning about randomness could have developed in the first place?

The project to answer this question started before 2015.

But the vividness imparted to this puzzle by the astonishing M&S paper, and the resulting amplification and dissemination of the motivation to solve it, will, I predict, energize researchers for years to come.

Tuesday
Dec292015

Mining even more insight from the Pew "public science knowledge/attitudes" data--but hoping for even better extraction equipment (fracking technology, maybe?) in 2016...

Futzing around "yesterday" with the "public" portion of the "public vs. scientists" study (Pew 2015), I presented some data consistent with previous findings (Kahan 2014, 2015) that "beliefs" in human evolution and human-caused climate change measure cultural identity, not any aspect of science comprehension.

Well, there's actually still more fun things one can do with the Pew data (a way to pass the time, actually, as I wait for some new data on climate-science literacy... stay tuned!).

"Today" I'll share with you some interesting correlations between the Pew "science literacy" battery (also discussed yesterday; but actually, a bit more about it at the end of the post) & various "science-informed" policy issues.  I'll also show how those relationships interact with (vary in relation to) right-left political outlooks.

Ready? ...

Okay -- consider this one!

See? It's scary to eat GM foods, but people of all political outlooks & levels of science literacy agree that it makes sense to put blue GM tomatoes (or even a single "potatoe") in the gas tank of their SUVs.

But you know my view here: "what do you think of GM ..." in a survey administered to the general public measures non-opinion.  Fun for laughs, and for creating fodder for professional "anti-science" commentators, but not particularly helpful in trying to make genuine sense of public risk perceptions.

Just my opinion....

Here's another:

Okay, now this is meaningful stuff.  Not news, of course, but still nice to be able to get corroboration with additional high-quality data.

When polarization on a "societal risk" doesn't abate but increases conditional on science comprehension, that's a super strong indicator of a polluted science communication environment.  It is a sign that positions on an issue have become entangled in antagonistic social meanings that transform them into badges of identity in and loyalty to groups (Kahan 2012). When that happens, people will predictably use their reasoning proficiencies to fit their understanding of evidence to the view that predominates in their group.

Here one can reasonably question the inference I'm drawing, since Pew's items aren't about "risk perceptions" but rather "policy preferences." 

But if one is familiar with the "affect heuristic"--which refers to the tendency of people to conform their understanding of all aspects of a putative risk source to a generic pro- or con- attitude (Slovic, Peters, Finucane & MacGregor 2005; Loewenstein, Weber, Hsee & Welch 2001)--then one would be inclined to treat the Pew question as just another indicator of that risk-perception-generating sensibility.

The "affect heuristic" is what makes the "Industrial Strength Risk Perception Measure" so powerful.  Using ISRPM, CCP data has shown that the perceived risks of both fracking and nuclear power (not to mention climate change, of course) display the signature "polluted science communication environment" characteristic of increased cultural polarization conditional on greater reasoning proficiency.

I, anyway, am inclined to view the Pew data as more corroboration of this relationship, just as in "yesterday's" post I explained how the Pew data corroborated the findings that greater science comprehension generally and greater comprehension of climate science in particular magnify polarization.

But before signing off here, let me observe one thing about the Pew science literacy battery.

You likely noticed that the values on the y-axes of the figures start to get more bunched together at the high end.

That's because the six-item, basic-facts science literacy battery used in the Pew 2015 report is highly skewed in the direction of a high score.

Some 30% of the nationally representative sample got all six questions correct! 

The distribution is a bit less skewed when one scores the responses to the battery using Item Response Theory, which takes account of the relative difficulty and measurement precision (or discrimination) of the individual items. But only a bit less. (You can't tell from the # of bins in the histogram, but there are actually over 5-dozen "science literacy" levels under the IRT model, as opposed to the 7 that result when one simply adds the number of correct responses; pretty cool illustration of how much more "information," as it were, one can get using IRT rather than "classic test theory" scoring.)

To put it plainly, the Pew battery is just too darn easy. 

The practical consequence-- a serious one-- is that the test won't do a very good job in helping us to determine whether differences in science comprehension affect perceptions of risk or other science-related attitudes among individuals whose scores are above the population average.

Actually, the best way to see that is to look at the Item Response Theory test information and reliability characteristics for the Pew battery:

If you need a refresher on the significance of these measures, then check out this post & this one. 
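For what it's worth, the quantities in that figure are the standard IRT ones (generic two-parameter-logistic formulas, nothing specific to Pew's scoring): the test information function sums the item information functions; the standard error of the ability estimate is its inverse square root; and, when the latent trait is scaled to unit variance, conditional reliability follows directly:

```latex
I(\theta) = \sum_i a_i^2 \, P_i(\theta)\bigl(1 - P_i(\theta)\bigr), \qquad
SE(\theta) = \frac{1}{\sqrt{I(\theta)}}, \qquad
r(\theta) = \frac{I(\theta)}{I(\theta) + 1}
```

where \(a_i\) is the discrimination of item \(i\) and \(P_i(\theta)\) is the probability of a correct response at proficiency level \(\theta\).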

But what they are telling us is that the power of the Pew battery to discern differences in science comprehension is concentrated at about -1 SD below the estimated population mean. Even there, the measurement precision is modest -- a reliability coefficient of under 0.6 (0.7 is better). 

More importantly, it quickly tails off to zero by +0.5 SD. 

In other words, above the 60th percentile in the population the test can furnish us with no guidance on differences in science literacy levels.  And even what it can tell us at the population mean ("0" on the x-axis) is pretty noisy (reliability = 0.40).

As I've explained in previous posts, the NSF Indicators have exactly the same problem. The Pew battery is an admirable effort to try to improve on the familiar NSF science literacy test, but with these items, at least, it hasn't made a lot of progress.

As the last two posts have shown, you can in fact still learn a fair amount from a science literacy scale whose measurement precision is this skewed toward the lower end of the distribution of this sort of proficiency.

But if we really want to learn more, we desperately need a better public science comprehension instrument.

That conviction has informed the research that generated the "Ordinary Science Intelligence" assessment.  An 18-item test, OSI combines a modest number of "basic fact" items (ones derived from the Indicators and from a previous Pew battery) with critical reasoning measures that examine cognitive reflection and numeracy, dispositions essential to being able to recognize and give proper effect to valid science.

OSI was deliberately constructed to possess a high degree of measurement precision across the entire range of the underlying latent (or unobserved) disposition that it's measuring. 

That's a necessary quality, I'd argue, for an instrument suited to advance scholarly investigation of how variance in public science comprehension affects perceptions of risk and related facts relevant to individual and collective decisionmaking.

Is OSI (actually "OSI_2.0") perfect?

Hell no

Indeed, while better for now than the NSF Indicators battery (on which it in fact builds) for the study of risk perception and science communication, OSI_2.0 is primarily intended to stimulate other scholars to try to do even better, either by building on and refining OSI or by coming up with instruments that they can show (by conducting appropriate assessments of the instruments' psychometric characteristics and their external validity) are even better.

I hope that there are a bunch of smart researchers out there who have made contributing to the creation of a better public science comprehension instrument one of their New Year's resolutions.

If the researchers at Pew Research Center are among them, then I bet we'll all be a lot smarter by 2017!

References

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as feelings. Psychological Bulletin 127, 267-287 (2001).

Pew Research Center (2015). Public and Scientists' Views on Science and Society.

Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).

Monday
Dec282015

Replicate "Climate-Science Communication Measurement Problem"? No sweat (despite hottest yr on record), thanks to Pew Research Center!

One of the great things about Pew Research Center is that it posts all (or nearly all!) the data from its public opinion studies.  That makes it possible for curious & reflective people to do their own analyses and augment the insight contained in Pew's own research reports. 

I've been playing around with the "public" portion of the "public vs. scientists" study, which was issued last January (Pew 2015). Actually Pew hasn't released the "scientist" (or more accurately, AAAS membership) portion of the data. I hope they do!

But one thing I thought it would be interesting to do for now was to see if I could replicate the essential finding from "The Climate-Science Communication Measurement Problem" (2015).

In that paper, I presented data suggesting, first, that neither "belief" in evolution nor "belief" in human-caused climate change is a measure of general science literacy.  Rather, both are better understood as measures of forms of "cultural identity," indicated, respectively, by items relating to religiosity and items relating to left-right political outlooks.

Second, and more importantly, I presented data suggesting that there is no relationship between "belief" in human-caused climate change & climate science comprehension in particular. On the contrary, the higher individuals scored on a valid climate science comprehension measure (one specifically designed to avoid the entanglement of identity and knowledge that confounds most "climate science literacy" measures), the more polarized the respondents were on "belief" in AGW--which, again, is best understood as simply an indicator of "who one is," culturally speaking.

Well, it turns out one can see the same patterns, very clearly, in the Pew data.

Patterned on the NSF Indicators "basic facts" science literacy test (indeed, "lasers" is an NSF item), the Pew battery consists of six items:

As I've explained before, I'm not a huge fan of the "basic facts" approach to measuring public science comprehension. In my view, items like these aren't well-suited for measuring what a public science comprehension assessment ought to be measuring: a basic capacity to recognize and give proper effect to valid scientific evidence relevant to the things that ordinary people do in their ordinary lives as consumers, workforce members, and citizens.

One would expect a person with that capacity to have become familiar with certain basic scientific insights (earth goes round sun, etc.) certainly.  But certifying that she has stocked her "basic fact" inventory with any particular set of such propositions doesn't give us much reason to believe that she possesses the reasoning proficiencies & dispositions needed to augment her store of knowledge and to appropriately use what she learns in her everyday life.

For that, I believe, a public science comprehension battery needs at least a modest complement of scientific-thinking measures, ones that attest to a respondent's ability to tell the difference between valid and invalid forms of evidence and to draw sound inferences from the former.  The "Ordinary Science Intelligence" battery, used in the Measurement Problem paper, includes "cognitive reflection" and "numeracy" modules for this purpose.

Indeed, Pew has presented a research report on a fuller science comprehension battery that might be better in this regard, but it hasn't released the underlying data for that one.

But anyway, the new items that Pew included in its battery are more current & subtle than the familiar Indicator items, & the six-member Pew group forms a reasonably reliable (α = 0.67), one-dimensional scale--suggesting the items are indeed measuring some sort of science-related aptitude.

But the fun stuff starts when one examines how the resulting Pew science literacy scale relates to items on evolution, climate change, political outlooks, and religiosity.

For evolution, Pew used its two-part question, which first asks whether the respondent believes (1) "Humans and other living things have evolved over time" or (2) "Humans and other living things have existed in their present form since the beginning of time." 

Subjects who pick (1) then are asked whether (3) "Humans and other living things have evolved due to natural processes such as natural selection" or (4) "A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today."

Basically, subjects who select (2) are "young earth creationists." Subjects who select (4) are generally regarded as believing in "theistic evolution."  Intelligent design isn't the only variant of "theistic evolution," but it is certainly one account that fits this description.

Subjects who select (3)--"humans and other living things have evolved due to natural processes such as natural selection"--are the only ones furnishing the response that reflects science's account of the natural history of humans. 

So I created a variable, "evolution_c," that reflects this answer, which was in fact selected by only 35% of the subjects in Pew's U.S. general public sample.

On climate change, Pew assessed (using two items that tested for item order/structure effects that turned out not to matter) whether subjects believed (1) "the earth is getting warmer mostly because of natural patterns in the earth’s environment," (2) "the earth is getting warmer mostly because of human activity such as burning fossil fuels," or (3) "there is no solid evidence that the earth is getting warmer."

About 50% of the respondents selected (2).  I created a variable, gw_c, to reflect whether respondents selected that response or one of the other two.

For political orientations, I combined subjects' responses to a 5-point liberal-conservative ideology item and their responses to a 5-point partisan self-identification item (1 "Democrat"; 2 "Independent leans Democrat"; 3 "Independent"; 4 "Independent leans Republican"; 5 "Republican").  The composite scale had modest reliability (α = 0.61).

For religiosity, I combined two items.  One was a standard Pew item on church attendance. The other was a dummy variable, "nonrelig," scored "1" for subjects who said they were either "atheists," "agnostics" or "nothing in particular" in response to a religious-denomination item (α = 0.66).
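For the record, the reliability coefficients reported here are Cronbach's α. A minimal sketch of the computation (assuming items already scored as numbers; the function name is mine):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of scored items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of composite scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Two perfectly redundant items give α = 1; uncorrelated items push α toward 0.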

But the very first thing I did was toss all of these items -- the 6 "science literacy" ones, belief in evolution (evolution_c), belief in human-caused climate change (gw_c), ideology, partisan self-identification, church attendance, and nonreligiosity--into a factor analysis (one based on a polychoric covariance matrix, which is appropriate for mixed dichotomous and multi-response likert items).


Not surprisingly, the covariance structure was best accounted for by three latent factors: one for science literacy, one for political orientations, and one for religiosity.

But the most important result was that neither belief in evolution nor belief in human-caused climate change loaded on the "science literacy" factor.  Instead they loaded on the religiosity and right-left political orientation factors, respectively.

This analysis, which replicated results from a paper dedicated solely to examining the properties of the Ordinary Science Intelligence test, supports the inference that belief in evolution and belief in climate change are not indicators of "science comprehension" but rather indicators of cultural identity, as manifested respectively by political outlooks and religiosity.

To test this inference further, I used "differential item function" or "DIF" analysis (Osterlind & Everson, 2009).

Based on item response theory, DIF examines whether a test item is "culturally biased"--not in an animus sense but a measurement one: the question is whether responses to the item measure the "same" latent proficiency (here, science literacy) in diverse groups.  If they don't--if members of two groups with equivalent science literacy scores differ in the probability of answering the item "correctly"--then administering that question to members of both will result in a biased measurement of their respective levels of that proficiency.
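Here's a hedged sketch of the DIF idea in Python. Rather than fitting the full IRT model from the paper, it simulates two groups with identical ability distributions and compares an unbiased item with a biased one among ability-matched subjects (the "secular"/"religious" labels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40000
group = rng.integers(0, 2, size=n)  # 0 = secular, 1 = religious (hypothetical labels)
theta = rng.normal(size=n)          # latent proficiency, same distribution in both groups

def p(x):
    return 1.0 / (1.0 + np.exp(-x))  # logistic item response function

# An unbiased item depends on theta alone; a biased one also penalizes one group.
unbiased = rng.random(n) < p(theta)
biased = rng.random(n) < p(theta - 2.0 * group)

matched = np.abs(theta) < 0.25  # subjects matched on proficiency
gap_unbiased = (unbiased[matched & (group == 0)].mean()
                - unbiased[matched & (group == 1)].mean())
gap_biased = (biased[matched & (group == 0)].mean()
              - biased[matched & (group == 1)].mean())
```

For the unbiased item the within-bin gap is essentially zero; for the biased item, equally proficient members of the two groups answer "correctly" at very different rates--which is exactly the DIF signature.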

In Measurement Problem, I used DIF analysis to show that belief in evolution is "biased" against individuals who are high in religiosity. 

Using the Pew data (regression models here), one can see the same bias:

The latter but not the former are likely to indicate acceptance of science's account of the natural history of humans as their science literacy scores increase. This isn't so for other items in the Pew science literacy battery (which here is scored using an item response theory model; the mean is 0, and units are standard deviations). 

The obvious conclusion is that the evolution item isn't measuring the same thing in subjects who are relatively religious and nonreligious as are the other items in the Pew science literacy battery. 

In Measurement Problem, I also used DIF to show that belief in climate change is a biased (and hence invalid) measure of climate science literacy.  That analysis, though, assessed responses to a "belief in climate change" item (one identical to Pew's) in relation to scores on a general climate-science literacy assessment, the "Ordinary Climate Science Intelligence" (OCSI) assessment.  Pew's scientist-AAAS study didn't have a climate-science literacy battery.

Its general science literacy battery, however, did have one climate-science item, a question of theirs that in fact I had included in OCSI: "What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it Carbon dioxide, Hydrogen, Helium, or Radon?" (CO2).

Below are the DIF item profiles for CO2 and gw_c (regression models here). Regardless of their political outlooks, subjects become more likely to answer CO2 correctly as their science literacy score increases--that makes perfect sense!

But as their science literacy score increases, individuals of diverse political outlooks don't converge on "belief in human caused climate change"; they become more polarized.  That question is measuring who the subjects are, not what they know about climate science.
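That contrast--convergence on a knowledge item, divergence on an identity item--can be sketched in the same simulation style (again hypothetical data, not the Pew sample; the coefficients are made up to reproduce the qualitative pattern):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40000
right = rng.integers(0, 2, size=n)  # 0 = left-leaning, 1 = right-leaning (hypothetical)
lit = rng.normal(size=n)            # science literacy score (standardized scale)

def p(x):
    return 1.0 / (1.0 + np.exp(-x))

# Knowledge item (CO2): accuracy rises with literacy for everyone.
co2 = rng.random(n) < p(lit)
# Identity item (gw_c): the left-right gap widens as literacy rises.
gw_c = rng.random(n) < p(0.3 * lit - right * 0.8 * np.clip(lit + 2, 0, None))

hi, lo = lit > 1, lit < -1
gap_hi = gw_c[hi & (right == 0)].mean() - gw_c[hi & (right == 1)].mean()
gap_lo = gw_c[lo & (right == 0)].mean() - gw_c[lo & (right == 1)].mean()
```

Both groups answer CO2 correctly more often as literacy goes up, while the left-right gap on gw_c is small at low literacy and large at high literacy--the "more science comprehension, more polarization" signature.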

So there you go!

I probably will tinker a bit more with these data and will tell you if I find anything else of note.

But in the meantime, I recommend you do the same! The data are out there & free, thanks to Pew.  So reciprocate Pew's contribution to knowledge by analyzing them & reporting what you find out!

References

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).

Osterlind, S. J., & Everson, H. T. (2009). Differential item functioning. Thousand Oaks, CA: Sage.

Pew Research Center (2015). Public and Scientists' Views on Science and Society.

Tuesday
Dec222015

Two theories of "values," "identity" & "politically motivated reasoning"

This is a bit of correspondence with a thoughtful scholar & friend who was commenting on The Politically Motivated Reasoning Paradigm.

He stated,

Biggest question [for me] is what is the relationship between values and identities. You make clear that people can be acting to protect any type of prior, but those two seem distinct in some ways and may benefit from more discussion. . . . .

[I am interested in the] larger question about whether you would call cultural cognition orientations an identity. The question arose because [I have a colleague] who is writing . . . on cases of identity-value conflict such as when a minority holds distinct values from the modal member of his/her identity group.

My response:

I’m eager to offer a response or acknowledge I don’t have a very good one to the sort of “value-identity” conflict you are envisioning. 

But I think we need to "iterate" a bit more in order to converge on a common conception of the issue here.

So I'm not going to try to address the "identity-value" conflict right off. Instead, I am going to discuss different understandings of how "values" & "identity" relate to one another in a research program that looks at the sort of "fact polarization" of interest to cultural cognition & other conceptions of PMR.

I'll start w/ two theories of why one might measure "values" to operationalize the source of "motivation" in PMR: dissonance avoidance & status protection.  

As a preliminary point, neither theory understands the sorts of "values" being measured as what  motivates information processing.  For both, the theoretically posited "motivator" is some unobserved (latent) disposition that causes the observable expression of "values," which are then treated simply as "indicators" or imperfect measures of that latent disposition.  

For that reason, both theories are agnostic on whether the relevant values are "truly, really" "political," "cultural" or something else.  All "value" frameworks are just alternative measures of the same unobserved latent dispositions.  The only issue is what measurement strategy works best for explanation, prediction, & prescription -- a criterion that will itself be specific to the goal of the research. E.g., I myself use much more fine-grained indicators, corresponding to much narrower specifications of the underlying dispositions, when I'm doing "field based" science communication in a region like S.E. Florida than I do when I'm participating in a scholarly conversation about mass opinion formation in "American society." The constructs & measurement instruments suited to the former context wouldn't have the same traction in the latter, and the ones with the most traction in the latter furnish less in the former, where the consumers of the information are trying to do something that is advanced by a framework fitted more closely to their conditions.

Okay, the 2 theories:

1. Dissonance avoidance (DA). We might imagine that as "political beings" individuals are like diners at a restaurant that serves a "prix fixe" menu of "ideologies" or "worldviews" or whathaveyou. After making their selections, it would be psychologically painful for these individuals to have to acknowledge that the world is configured in a way that forecloses achieving states of affairs associated with their preferred "worldview" or "ideology" or whatever: e.g., that unconstrained private orderings of the sort prized by individualists will burden the natural environment with toxic byproducts that make such a way of life unsustainable. They are therefore motivated to construe information in a manner that "fits" the evidence on risk and like facts to positions ("beliefs") supportive of policies congenial to their worldviews & unsupportive of policies uncongenial to the same.

2. Status protection (SP).  DA is a relatively individualistic conception of PMR; SP is more "social."  On this account, individual well-being is understood to be decisively linked to membership in important "affinity groups," whose members are bound together by their shared adherence to ways of life. Cultivating affective styles that evince commitment to the positions conventionally associated with these groups will be essential to signaling membership in and loyalty to one or another of them.  "Policy" positions will routinely bear such associations. But sometimes risks and like policy-relevant facts will come to bear social meanings (necessarily antagonistic ones in relation to the opposing groups) that express group membership &  loyalty too.  In those cases, PMR will be a mode of information processing rationally suited to forming the affective styles that reliably & convincingly express an individual's "group identity."

Avoiding the psychic disappointment of assenting to facts uncongenial to an individual's personal "policy preferences" is not the truth-extrinsic goal that "motivates" cognition on this view.  Status protection--i.e., the maintenance of the sort of standing in one's group essential to enjoying access to the benefits, material and emotional, that membership imparts--is.

Okay, those are the two theories.

But let me be clear: neither of these theories is "true"! 

Not because some other one is -- but because no theories are.  All theories are simplified, imperfect "models"-- or pictures or metaphors, even! -- that warrant our acceptance to the extent that they enable us to do what we want to do w/ an empirical research program: enlarge our capacity to explain, predict & prescribe.

On this basis, I view SP as "true" & DA "false."

For now at least.

But in any case, my question is whether your & your colleague's question --whether "cultural cognition orientations" are "an identity" -- can be connected to this particular account of how "values," "identities," & PMR are connected?  If so, then, I might have something more helpful to say!  If not, then maybe what you have to say about why not will help me engage this issue more concretely.


 

Monday
Dec212015

The "asymmetry thesis": another PMRP issue that won't go away

I feel like I've done 10^8 posts on this .... That's wrong: I counted, and in fact I've done 10.3^14.

But that's because it's a difficult question. Or at least it is if one treats it as one of "measurement" & "weight of the evidence."  I remain convinced that it is not of great practical significance--that is, even if "motivated reasoning" and like dynamics are "asymmetric" across the ideological spectrum (or cultural spectra) that define the groups polarized on policy-consequential facts, the evidence is overwhelming and undeniable that members of all such groups are subject to this dynamic, & to an extent that makes addressing its general impact -- rather than singling out one or another group as "anti-science" etc. -- the proper normative aim for those dedicated to advancing enlightened self-govt.

But issues of "measurement" & "weight of the evidence" etc. are still, in my view, perfectly legitimate matters of scholarly inquiry. Indeed, pursuit of them in this case will, I'm sure, enlarge knowledge, theoretical and practical.

"Asymmetry" is an open question--& not just in the sense that nothing in science is ever resolved but in the sense that those on both "sides" (i.e., those who believe politically motivated reasoning is symmetric and those who believe it is asymmetric) ought to wonder enough about the correctness of their own position to wish that they had more evidence.

Here's an excerpt from my The Politically Motivated Reasoning Paradigm survey/synthesis essay addressing the state of the "debate":

4. Asymmetry thesis

The “factual polarization” associated with politically motivated reasoning is pervasive in U.S. political life. But whether politically motivated reasoning is uniform across opposing cultural groups is a matter of considerable debate (Mooney 2012).

In the spirit of the classic “authoritarian personality” thesis (Adorno 1950), one group of scholars has forcefully advanced the claim that it is not. Known as the “asymmetry thesis,” their position links biased processing of political information with characteristics associated with right-wing political orientations. Their studies emphasize correlations in observational studies between conventional ideological measures and scores on self-report reasoning-style scales such as “need for closure” and “need for cognition” and on personality-trait scales such as “openness to experience” (Jost, Glaser, Kruglanski & Sulloway 2003; Jost, Hennes & Lavine 2013).

But the research that the “neo-authoritarian personality” school features supplies weak evidence for the asymmetry thesis. First, the reasoning-style measures that they feature are of questionable validity. It is a staple of cognitive psychology that defects in information processing are not open to introspective observation or control (Pronin 2007)–a conclusion that applies to individuals high in cognitive proficiency as well as to those of more modest ability (West, Meserve & Stanovich 2012). There is thus little reason to believe a person’s own perception of the quality of his reasoning is a valid measure of the same.

Indeed, tests that seek to validate such self-report reasoning-style scales consistently find them inferior to performance-based measures such as the Cognitive Reflection Test and Numeracy in predicting the disposition to resort to conscious, effortful information processing (Toplak, West & Stanovich 2011; Liberali, Reyna, Furlan, Stein & Pardo 2012). Those measures, when applied to valid general population samples, show no meaningful correlation with party affiliation or liberal-conservative ideology (Kahan 2013; Baron 2015).

More importantly, there is no evidence that individual differences in reasoning style predict vulnerability to politically motivated reasoning. On the contrary, as will be discussed in the next part, evidence suggests that proficiency in dispositions such as cognitive reflection, numeracy, and science comprehension magnifies politically motivated reasoning (Fig. 6).

Ultimately, the only way to determine if politically motivated reasoning is asymmetric with respect to ideology or other diverse systems of identity-defining commitments is through valid experiments. There are a collection of intriguing experiments that variously purport to show that one or another form of judgment—e.g., moral evolution, willingness to espouse counter-attitudinal positions, the political valence of positions formed while intoxicated, individual differences in activation of “brain regions” etc.—is ideologically asymmetric or symmetric (Thórisdóttir & Jost 2011; Nam, Jost & Van Bavel 2013; Eidelman et al. 2012; Crawford & Brandt 2013; Schreiber, Fonzo et al. 2013). These studies vary dramatically in validity and insight. But even the very best and genuinely informative ones (e.g., Conway, Gornick et al. 2015; Liu & Ditto 2013; Crawford 2012) are in fact examining a form of information processing distinct from PMRP and with methods other than the PMRP design or its equivalent.

One study that did use the PMRP design found no support for the “asymmetry thesis” (Kahan 2013). In it, individuals of left- and right-wing political outlooks displayed perfectly symmetric forms of politically motivated reasoning in evaluating evidence that people who reject their group’s position on climate change have been found to engage in open-minded evaluation of evidence (Figure 5).

But that’s a single study, one that like any other is open to reasonable alternative explanations that themselves can inform future studies. In sum, it is certainly reasonable to view the “asymmetry thesis” issue as unresolved. The only important point is that progress in resolving it is unlikely to occur unless studied with designs that reflect PMRP design or ones equivalently suited to support inferences consistent with the PMRP model.

Refs

Adorno, T.W. The Authoritarian personality (Harper, New York, 1950).

Baron, J. Supplement to Deppe et al. (2015). Judgment and Decision Making 10, 2 (2015).

Conway, L.G., Gornick, L.J., Houck, S.C., Anderson, C., Stockert, J., Sessoms, D. & McCue, K. Are Conservatives Really More Simple‐Minded than Liberals? The Domain Specificity of Complex Thinking. Political Psychology (2015), advance on-line, DOI: 10.1111/pops.12304.

Crawford, J.T. The ideologically objectionable premise model: Predicting biased political judgments on the left and right. Journal of Experimental Social Psychology 48, 138-151 (2012).

Eidelman, S., Crandall, C.S., Goodman, J.A. & Blanchar, J.C. Low-Effort Thought Promotes Political Conservatism. Pers. Soc. Psychol. B. (2012).

Jost, J.T., Glaser, J., Kruglanski, A.W. & Sulloway, F.J. Political Conservatism as Motivated Social Cognition. Psychological Bulletin 129, 339-375 (2003).

Jost, J.T., Hennes, E.P. & Lavine, H. “Hot” political cognition: Its self-, group-, and system-serving purposes. in Oxford handbook of social cognition (ed. D.E. Carlson) 851-875 (Oxford University Press, New York, 2013).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M. & Pardo, S.T. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making 25, 361-381 (2012).

Nam, H.H., Jost, J.T. & Van Bavel, J.J. “Not for All the Tea in China!” Political Ideology and the Avoidance of Dissonance. PLoS ONE 8(4), e59837, doi:10.1371/journal.pone.0059837 (2013).

Pronin, E. Perception and misperception of bias in human judgment. Trends in cognitive sciences 11, 37-43 (2007).

Thórisdóttir, H. & Jost, J.T. Motivated Closed-Mindedness Mediates the Effect of Threat on Political Conservatism. Political Psychology 32, 785-811 (2011).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).

West, R.F., Meserve, R.J. & Stanovich, K.E. Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology 103, 506 (2012).

 
