Tuesday, December 29, 2015

Mining even more insight from the Pew "public science knowledge/attitudes" data--but hoping for even better extraction equipment (fracking technology, maybe?) in 2016...

Futzing around "yesterday" with the "public" portion of the "public vs. scientists" study (Pew 2015), I presented some data consistent with previous findings (Kahan 2014, 2015) that "beliefs" in human evolution and human-caused climate change measure cultural identity, not any aspect of science comprehension.

Well, there are actually still more fun things one can do with the Pew data (a way to pass the time, actually, as I wait for some new data on climate-science literacy... stay tuned!).

"Today" I'll share with you some interesting correlations between the Pew "science literacy" battery (also discussed yesterday; but actually, a bit more about it at the end of the post) & various "science-informed" policy issues.  I'll also show how those relationships intereact with (vary in relation to) right-left political outlooks.

Ready? ...

Okay -- consider this one!

See? It's scary to eat GM foods, but people of all political outlooks & levels of science literacy agree that it makes sense to put blue GM tomatoes (or even a single "potatoe") in the gas tank of their SUVs.

But you know my view here: "what do you think of GM ..." in a survey administered to the general public measures non-opinion.  Fun for laughs, and for creating fodder for professional "anti-science" commentators, but not particularly helpful in trying to make genuine sense of public risk perceptions.

Just my opinion....

Here's another:

Okay, now this is meaningful stuff.  Not news, of course, but still nice to be able to get corroboration with additional high-quality data.

When polarization on a "societal risk" doesn't abate but increases conditional on science comprehension, that's a super strong indicator of a polluted science communication environment.  It is a sign that positions on an issue have become entangled in antagonistic social meanings that transform them into badges of membership in, and loyalty to, groups (Kahan 2012). When that happens, people will predictably use their reasoning proficiencies to fit their understanding of evidence to the view that predominates in their group.

Here one can reasonably question the inference I'm drawing, since Pew's items aren't about "risk perceptions" but rather "policy preferences." 

But if one is familiar with the "affect heuristic"--which refers to the tendency of people to conform their understanding of all aspects of a putative risk source to a generic pro- or con- attitude (Slovic, Finucane & MacGregor 2005; Loewenstein, Weber, Hsee & Welch 2001)--then one would be inclined to treat the Pew question as just another indicator of that risk-perception-generating sensibility. 

The "affect heuristic" is what makes the "Industrial Strength Risk Perception Measure" so powerful.  Using ISRPM, CCP data has found that both the perceived risk of both fracking and of nuclear power (not to mention climate change, of course) display the signature "polluted science communication environment" characteristic  of increased cultural polarization conditional on greater reasoning proficiency.

I, anyway, am inclined to view the Pew data as more corroboration of this relationship, just as in "yesterday's" post I explained how the Pew data corroborated the findings that greater science comprehension generally and greater comprehension of climate science in particular magnify polarization.
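(For concreteness, here's a minimal sketch, in simulated data, of the kind of test this claim implies: if polarization grows with reasoning proficiency, the ideology x comprehension interaction term in a simple linear model should be signed accordingly. All variable names and effect sizes below are hypothetical -- this is not the Pew or CCP data, and not the actual CCP modeling strategy.)

```python
# Does the left-right gap on a risk item widen with science comprehension?
# All data simulated; names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
conserv = rng.uniform(-1, 1, n)   # right-left outlook, centered at 0
sci = rng.normal(0, 1, n)         # science-comprehension score (z-scored)
# "Polarization conditional on comprehension" = an ideology x comprehension interaction
risk = 0.2 - 0.4 * conserv - 0.3 * conserv * sci + rng.normal(0, 1, n)

df = pd.DataFrame(dict(risk=risk, conserv=conserv, sci=sci))
m = smf.ols("risk ~ conserv * sci", data=df).fit()
print(m.params)  # a nonzero conserv:sci term means the partisan gap grows with sci
```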

But before signing off here, let me observe one thing about the Pew science literacy battery.

You likely noticed that the values on the y-axes of the figures start to get more bunched together at the high end.

That's because the six-item, basic-facts science literacy battery used in the Pew 2015 report is highly skewed in the direction of a high score.

Some 30% of the nationally representative sample got all six questions correct!

The distribution is a bit less skewed when one scores the responses to the battery using Item Response Theory, which takes account of the relative difficulty and measurement precision (or discrimination) of the individual items. But only a bit less. (You can't tell from the # of bins in the histogram, but there are actually over 5-dozen "science literacy" levels under the IRT model, as opposed to the 7 that result when one simply adds up the number of correct responses; a pretty cool illustration of how much more "information," as it were, one can extract using IRT rather than "classical test theory" scoring.)
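(If you want to see concretely where the extra levels come from, here is a minimal sketch of expected-a-posteriori scoring under a two-parameter logistic IRT model. The six item parameters are made up for illustration -- an "easy" battery like Pew's -- not the actual Pew item estimates.)

```python
# Why IRT scoring yields ~64 levels from 6 binary items, vs. 7 sum-score levels.
# Item parameters below are hypothetical, chosen to mimic an "easy" battery.
import itertools
import numpy as np

a = np.array([1.5, 1.3, 1.2, 1.4, 1.1, 1.6])        # discrimination
b = np.array([-1.8, -1.5, -1.2, -1.0, -0.7, -0.3])  # difficulty (all easy)

theta = np.linspace(-4, 4, 161)        # quadrature grid for the latent trait
prior = np.exp(-theta**2 / 2)          # standard-normal prior (unnormalized)

def eap(resp):
    """Posterior-mean theta for one 0/1 response pattern under the 2PL."""
    resp = np.array(resp)[:, None]
    p = 1 / (1 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    like = np.prod(np.where(resp == 1, p, 1 - p), axis=0)
    post = like * prior
    return float(np.sum(theta * post) / np.sum(post))

patterns = list(itertools.product([0, 1], repeat=6))
eaps = [eap(pat) for pat in patterns]
print(len(set(eaps)))                       # 64 distinct IRT "levels"
print(len({sum(pat) for pat in patterns}))  # vs. 7 sum-score levels
```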

To put it plainly, the Pew battery is just too darn easy. 

The practical consequence-- a serious one-- is that the test won't do a very good job in helping us to determine whether differences in science comprehension affect perceptions of risk or other science-related attitudes among individuals whose scores are above the population average.

Actually, the best way to see that is to look at the Item Response Theory test information and reliability characteristics for the Pew battery:

If you need a refresher on the significance of these measures, then check out this post & this one.

But what they are telling us is that the power of the Pew battery to discern differences in science comprehension is concentrated about 1 SD below the estimated population mean. Even there, the measurement precision is modest -- a reliability coefficient of under 0.6 (0.7 is the conventional benchmark).

More importantly, it quickly tails off to zero by +0.5 SD.

In other words, above the 60th percentile in the population the test can furnish us with no guidance on differences in science literacy levels.  And even what it can tell us at the population mean (a score of "0") is pretty noisy (reliability = 0.40).
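(Here's a minimal sketch of how such conditional reliability figures fall out of the 2PL test information function, I(theta) = sum of a^2 * P * (1 - P) across items, using the same hypothetical "easy battery" parameters as above and the convention rel(theta) = 1 - 1/I(theta), which assumes a unit-variance latent trait. The exact numbers are an illustration, not the Pew battery's actual estimates.)

```python
# Test information and implied conditional reliability for a hypothetical
# "easy" 6-item battery: information peaks near theta = -1 and dies out
# above the population mean.
import numpy as np

a = np.array([1.5, 1.3, 1.2, 1.4, 1.1, 1.6])
b = np.array([-1.8, -1.5, -1.2, -1.0, -0.7, -0.3])

for th in (-2.0, -1.0, 0.0, 0.5, 1.0):
    p = 1 / (1 + np.exp(-a * (th - b)))
    info = np.sum(a**2 * p * (1 - p))   # 2PL Fisher information at theta
    rel = max(0.0, 1 - 1 / info)        # clipped to 0 when info < 1
    print(f"theta={th:+.1f}  info={info:.2f}  reliability={rel:.2f}")
```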

As I've explained in previous posts, the NSF Indicators have exactly the same problem. The Pew battery is an admirable effort to try to improve on the familiar NSF science literacy test, but with these items, at least, it hasn't made a lot of progress.

As the last two posts have shown, one can in fact still learn a fair amount from a science literacy scale whose measurement precision is skewed this far toward the lower end of the distribution of this sort of proficiency.

But if we really want to learn more, we desperately need a better public science comprehension instrument.

That conviction has informed the research that generated the "Ordinary Science Intelligence" assessment.  An 18-item test, OSI combines a modest number of "basic fact" items (ones derived from the NSF Indicators and from a previous Pew battery) with critical-reasoning measures that examine cognitive reflection and numeracy, dispositions essential to being able to recognize and give proper effect to valid science.

OSI was deliberately constructed to possess a high degree of measurement precision across the entire range of the underlying latent (or unobserved) disposition that it's measuring.

That's a necessary quality, I'd argue, for an instrument suited to advance scholarly investigation of how variance in public science comprehension affects perceptions of risk and related facts relevant to individual and collective decisionmaking.
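(A minimal sketch of the design principle involved: spreading item difficulties across the range flattens the test information curve. Both item sets below are hypothetical; the "mixed" set is not the actual OSI item parameters.)

```python
# Easy-only vs. mixed-difficulty batteries: same discriminations, but the
# mixed set keeps information (hence precision) up across the theta range.
import numpy as np

def total_info(th, a, b):
    p = 1 / (1 + np.exp(-a * (th - b)))
    return np.sum(a**2 * p * (1 - p))

a = np.full(6, 1.4)
b_easy = np.array([-1.8, -1.5, -1.2, -1.0, -0.7, -0.3])   # all easy items
b_mixed = np.array([-1.5, -0.9, -0.3, 0.3, 0.9, 1.5])     # difficulties spread out

for th in (-1.5, -0.5, 0.5, 1.5):
    print(f"theta={th:+.1f}  easy={total_info(th, a, b_easy):.2f}  "
          f"mixed={total_info(th, a, b_mixed):.2f}")
```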

Is OSI (actually "OSI_2.0") perfect?

Hell no

Indeed, while better for now than the NSF Indicators battery (on which it in fact builds) for the study of risk perception and science communication, OSI_2.0 is primarily intended to stimulate other scholars to try to do even better, either by building on and refining OSI or by coming up with instruments that they can show (by conducting appropriate assessments of the instruments' psychometric characteristics and their external validity) are even better.

I hope that there are a bunch of smart researchers out there who have made contributing to the creation of a better public science comprehension instrument one of their New Year's resolutions.

If the researchers at Pew Research Center are among them, then I bet we'll all be a lot smarter by 2017!

References

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. "Ordinary Science Intelligence": A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).

Kahan, D.M. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as Feelings. Psychological Bulletin 127, 267-287 (2001).

Pew Research Center. Public and Scientists' Views on Science and Society (2015).

Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).


Reader Comments (7)

==> "But if one is familiar with the "affect heuristic"--which refers to the tendency of people to conform their understanding of all aspects of a putative risk source to a generic pro- or con- attitude (Slovic, Finucane & MacGregor 2005; Loewenstein, Weber, Hsee & Welch 2001)--then one would be inclined to treat the Pew question as just another indicator of that risk-perception-generating sensibility. "

How do you tease out the influence of risk-related reasoning from a more basic influence of ideology-related reasoning? Although risk perception is obviously susceptible to identity-related biases (and easily leveraged by bogus scare-mongering such as we see with Ted Cruz), aren't there issues that show a similar kind of identity influence that are not directly linked to risk perception? And I would think that for many issues, the direction of causality looks like:

(A) Identity orientation ---> drives perception of risk ---> drives positioning on an issue, rather than:
(B) Perception of risk ---> drives identity orientation ---> drives positioning on an issue

Do you have an online link to any of the sources you cite?

-----------------


Anyway, personally, I find the implications behind the basic dynamics of motivated reasoning more important than the questions of whether (and how much) there is a trend towards greater polarization in association with greater "scientific literacy."

One reason that I think that the association with scientific literacy is less important (relatively) is that I am inherently dubious of trying to measure "scientific literacy" - whether it be through tests with lists of factual items or through tests designed more to measure cognitive processing. There are just way too many variables to try to control to make such measurements reliable and valid for real-world application, IMO - and even if such an association can clearly be established, there's still a basic problem with evaluating causality; perhaps people who are inclined to be more strongly identified ideologically are also those who are more likely to develop their scientific literacy (or develop stronger skills in areas that would result in higher scores on tests of cognitive functioning). If that's the case, then I think that you're really just going back to measuring an artifact of "culture." (For example, I would guess that any association between greater polarization on various issues and "scientific literacy" would be weaker among Japanese people than among Americans. I don't think that the existence of such an association tells us much about the underlying nature of how identity biases reasoning.)

But beyond that, while it may be useful to know that the "deficit model" is likely not very explanatory, in the end what's more impactful, IMO, is to understand the near universal influence of identity-orientation on reasoning processes. Focusing on the relationship to scientific literacy seems a bit of a distraction from what's more important, IMO.

December 30, 2015 | Unregistered CommenterJoshua

@Joshua--

good q's (as usual).

1. On Id -> risk ... vs. risk -> Id, there is of course the difficulty that one can't readily manipulate identity; if one could, then one could see whether doing that changes risk perception. (Besides moving Ky farmer from kitchen table to tractor...)

One can do various things, though, that one would expect to change risk perception *conditional* on perception being a consequence of identity (like this & this). That they work is evidence that supports the surmise that identity determines risk rather than other way around.

Frankly, though, I think correlational data furnish plenty of convincing evidence. If the 2 variables are correlated, one might not have the sort of evidence of which is cause & which is effect that one gets w/ an experiment, but one can still have plenty of information on the basis of which to judge the plausibility of the competing causal theories that might account for the correlation (Pearl 2009, pp. 83-85).

I get how someone who believes that his sons shouldn't wash dishes & sew might believe that the earth is not heating up & that allowing a man to carry a concealed weapon in public reduces crime. I *don't* get how someone who is made to believe that the earth is heating up & carrying a gun reduces crime would then be more likely to think his son shouldn't wash dishes & sew ....

I think your Cruz example is perfectly consistent with Id -> risk. Indeed, it is evidence in the nature of the experiments I described, in which one can infer Id -> risk from one's success in being able to shape people's perceptions of risk by manipulating the social meanings of them. The heightened risk perception can then be used to impel action of one sort or another.

2. I agree w/ you that evidence of how mechanisms of cultural cognition interact w/ differences in critical reasoning proficiency can be useful for addressing the "knowledge deficit" view-- although if there's someone who can't see the problem with that view w/o the additional bit of evidence furnished by such data, then it isn't a "knowledge deficit" that explains that person's reluctance to give up on the view that "if only people knew what I do ..." as the explanation for resistance to his or her position.

But I do think there is a lot more one can learn than that from adding differences in reasoning styles to studies of these dynamics: e.g., the more discrete mechanisms through which motivated reasoning operates, and the most likely means of counteracting them.

December 30, 2015 | Registered CommenterDan Kahan

Dan, I'm very impressed with all you're up to in this blog and elsewhere and intend to spread the word (only to smart people who will make thoughtful comments).

I'm with Joshua on the relative value of ID, though I suspect your emphasis on smart, informed people being divergent is a useful counter to the divisive 'asymmetrical' meme.

I share the concern with how to get useful empirical data on a shoestring.

January 1, 2016 | Unregistered CommenterHal Morris

I think that the interesting part of understanding how cultural cognitive processes affect communication is in learning how knowledge of existing identities can be utilized to shape reactions to new information. The drivers are not necessarily internal. How are social meanings shaped? What can be done by science communicators to make sure that what they are trying to convey is conveyed?

I drove through the lower San Joaquin Valley in California just a couple of weeks ago, watching people in towns line up to fill water bottles and farmers rip up almond orchards, while a dairy with better water rights or a deeper well just down the road is busy irrigating alfalfa. Thus, I found the following article fascinating: http://www.nytimes.com/2015/12/31/us/farmers-try-political-force-to-twist-open-californias-taps.html?_r=0. This is an apparently successful, albeit IMHO duplicitous, effort by those with power to turn workers' desire for a job into support for the sorts of policies that are depriving those same workers of tapwater to their homes. And (as with the example of the NRA's morph from hunters' safety to right-wing lobby), more is apparently planned, with the now "golden" brand of El Agua.

A substantial part of successful test taking involves analyzing the questions, not only for what you think they actually say, but for what you think the writer had in mind, and how your answer serves your own best interest. In that regard, not only one's own identity, but also how questions are phrased with respect to that identity has impact. That also means that how sentences are written, as for polls, needs to be well analyzed. A political "push poll" is an example of going in the opposite direction, writing to elicit a response.

This morning I am fascinated by the following evaluation of Trump-talk. https://politicalwire.com/2016/01/02/how-donald-trump-answers-a-question/. A successful sales job can involve simple wording, lots of repetition and putting the key word to remember at the end of the sentence.

In science communication, I think that a "just the facts" presentation can derail if it is not recognized that certain key words or concepts have unrelated linkages. Certain words like "radiation" can leap right out of complete sentences and take over people's thought processes.

January 2, 2016 | Unregistered CommenterGaythia Weis

@Gaythia--

The "social meaning" point is critical ... Maybe the most critical one for science communication in the Liberal Republic of Science....

I also think you are right that responses to standardized tests, particularly of science comprehension, can sometimes measure what test-takers anticipate the test designer wants to hear or believes the "right" answer is, independently of what the test-taker "believes."

This is an external validity issue: that is, in such a case, the test is validly measuring *something* (internal validity) -- knowledge of what the test-taker believes is the right answer -- but not what the test is supposed to measure: a disposition or aptitude to comprehend science.

I wouldn't call that "push polling," though. "Push polling" is the disingenuous use of survey-collection methods to disseminate some information that is presented as the premise of the survey question: "do you think candidate x's beating of his wife reflects on his fitness for office?" etc. There the goal is just to shape views; that's different from eliciting responses on the basis of some mechanism other than the one that the researcher intends to model (indeed, it is a creature of political campaigns, not scholarly research).

There is also the *internal validity* problem of the "demand effect." This occurs when the survey item elicits the response that respondents know the researcher wants to hear, just to show the researcher that the subject knows what the researcher wants to hear. It's like the "external validity" problem described earlier for standardized tests, except that what's being measured is nothing except a desire to please the researcher; in the former case, one is at least measuring what the respondent knows about what others regard as "knowledge."

January 3, 2016 | Registered CommenterDan Kahan

@HalMorris--

I should say more, then, about why I think there is value in examining the connection between motivated reasoning & one or another form of reasoning proficiency. I don't think the benefits are limited either to discrediting the "knowledge deficit" view or the "asymmetry thesis"; I think including the reasoning-proficiency measures helps us to test plausible competing understandings of motivated reasoning & like mechanisms independently of those ends.

January 3, 2016 | Registered CommenterDan Kahan

@Joshua & @Halmorris

here you go...

January 3, 2016 | Registered CommenterDan Kahan