Mining even more insight from the Pew "public science knowledge/attitudes" data--but hoping for even better extraction equipment (fracking technology, maybe?) in 2016...
Futzing around "yesterday" with the "public" portion of the "public vs. scientists" study (Pew 2015), I presented some data consistent with previous findings (Kahan 2014, 2015) that "beliefs" in human evolution and human-caused climate change measure cultural identity, not any aspect of science comprehension.
Well, there's actually still more fun things one can do with the Pew data (a way to pass the time, actually, as I wait for some new data on climate-science literacy... stay tuned!).
"Today" I'll share with you some interesting correlations between the Pew "science literacy" battery (also discussed yesterday; but actually, a bit more about it at the end of the post) & various "science-informed" policy issues. I'll also show how those relationships intereact with (vary in relation to) right-left political outlooks.
Okay -- consider this one!
See? It's scary to eat GM foods, but people of all political outlooks & levels of science literacy agree that it makes sense to put blue GM tomatoes (or even a single "potatoe") in the gas tank of their SUVs.
But you know my view here: "what do you think of GM ..." in a survey administered to the general public measures non-opinion. Fun for laughs, and for creating fodder for professional "anti-science" commentators, but not particularly helpful in trying to make genuine sense of public risk perceptions.
Just my opinion....
When polarization on a "societal risk" doesn't abate but increases conditional on science comprehension, that's a super strong indicator of a polluted science communication environment. It is a sign that positions on an issue have become entangled in antagonistic social meanings that transform them into badges of membership in and loyalty to groups (Kahan 2012). When that happens, people will predictably use their reasoning proficiencies to fit their understanding of evidence to the view that predominates in their group.
Here one can reasonably question the inference I'm drawing, since Pew's items aren't about "risk perceptions" but rather "policy preferences."
But if one is familiar with the "affect heuristic"--which refers to the tendency of people to conform their understanding of all aspects of a putative risk source to a generic pro- or con- attitude (Slovic, Finucane & MacGregor 2005; Loewenstein, Weber, Hsee & Welch 2001)--then one would be inclined to treat the Pew question as just another indicator of that risk-perception-generating sensibility.
The "affect heuristic" is what makes the "Industrial Strength Risk Perception Measure" so powerful. Using ISRPM, CCP data has found that both the perceived risk of both fracking and of nuclear power (not to mention climate change, of course) display the signature "polluted science communication environment" characteristic of increased cultural polarization conditional on greater reasoning proficiency.
I, anyway, am inclined to view the Pew data as more corroboration of this relationship, just as in "yesterday's" post I explained how the Pew data corroborated the findings that greater science comprehension generally and greater comprehension of climate science in particular magnify polarization.
But before signing off here, let me observe one thing about the Pew science literacy battery.
You likely noticed that the values on the y-axes of the figures start to get more bunched together at the high end.
That's because the six-item, basic-facts science literacy battery used in the Pew 2015 report is highly skewed in the direction of a high score.
The distribution is a bit less skewed when one scores the responses to the battery using Item Response Theory, which takes account of the relative difficulty and measurement precision (or discrimination) of the individual items. But only a bit less. (You can't tell from the # of bins in the histogram, but there are actually over 5-dozen "science literacy" levels under the IRT model, as opposed to the 7 that result when one simply adds the number of correct responses; pretty cool illustration of how much more "information," as it were, one can get using IRT rather than "classic test theory" scoring.)
To put it plainly, the Pew battery is just too darn easy.
The practical consequence-- a serious one-- is that the test won't do a very good job in helping us to determine whether differences in science comprehension affect perceptions of risk or other science-related attitudes among individuals whose scores are above the population average.
Actually, the best way to see that is to look at the Item Response Theory test information and reliability characteristics for the Pew battery:
But what they are telling us is that the power of the Pew battery to discern differences in science comprehension is concentrated at about -1 SD below the estimated population mean. Even there, the measurement precision is modest -- a reliability coefficient of under 0.6 (0.7 is better).
More importantly, it quickly tails off to zero by +0.5 SD.
In other words, above the 60th percentile in the population the test can furnish us with no guidance on differences in science literacy levels. And what it can tell us even at the population mean ("0" on the x-axis) is pretty noisy (reliability = 0.40).
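For readers who want the mechanics: under a 2PL model, test information is I(theta) = sum over items of a_i^2 * P_i * (1 - P_i), and reliability at a given ability level is conventionally I / (I + 1) (assuming unit prior variance). A sketch with invented easy-item parameters (again, not Pew's actual estimates) reproduces the qualitative shape, with information peaking below the mean and tailing off above it:

```python
import numpy as np

# Invented 2PL item parameters for an easy six-item battery (not Pew's)
a = np.array([1.2, 0.9, 1.5, 1.0, 1.3, 0.8])
b = np.array([-1.5, -1.2, -0.8, -1.0, -0.5, -1.8])

def info(theta):
    # 2PL Fisher information: I(theta) = sum a_i^2 * P_i * (1 - P_i)
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(a**2 * p * (1 - p))

def reliability(theta):
    # conventional IRT conversion, assuming unit prior variance
    i = info(theta)
    return i / (i + 1)

for t in (-2.0, -1.0, 0.0, 1.0):
    print(f"theta={t:+.1f}  info={info(t):.2f}  rel={reliability(t):.2f}")
```

With easy items the P_i * (1 - P_i) terms are largest at low ability, which is exactly why a too-easy battery concentrates its measurement precision below the population mean.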
As I've explained in previous posts, the NSF Indicators have exactly the same problem. The Pew battery is an admirable effort to try to improve on the familiar NSF science literacy test, but with these items, at least, it hasn't made a lot of progress.
As the last two posts have shown, you can in fact still learn a fair amount from a science literacy scale whose measurement precision is this skewed toward the lower end of the distribution of this sort of proficiency.
But if we really want to learn more, we desperately need a better public science comprehension instrument.
That conviction has informed the research that generated the "Ordinary Science Intelligence" assessment. An 18-item test, OSI combines a modest number of "basic fact" items (ones derived from the Indicators and from a previous Pew battery) with critical reasoning measures that examine cognitive reflection and numeracy, dispositions essential to being able to recognize and give proper effect to valid science.
OSI was deliberately constructed to possess a high degree of measurement precision across the entire range of the underlying latent (or unobserved) disposition that it's measuring.
That's a necessary quality, I'd argue, for an instrument suited to advance scholarly investigation of how variance in public science comprehension affects perceptions of risk and related facts relevant to individual and collective decisionmaking.
Is OSI (actually "OSI_2.0") perfect?
Of course not. While better for now than the NSF Indicators battery (on which it in fact builds) for the study of risk perception and science communication, OSI_2.0 is primarily intended to stimulate other scholars to try to do even better, either by building on and refining OSI or by coming up with instruments that they can show (by conducting appropriate assessments of the instruments' psychometric characteristics and their external validity) are even better.
I hope that there are a bunch of smart researchers out there who have made contributing to the creation of a better public science comprehension instrument one of their New Year's resolutions.
If the researchers at Pew Research Center are among them, then I bet we'll all be a lot smarter by 2017!
Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).
Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as feelings. Psychological Bulletin 127, 267-287 (2001).
Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).
Having posted the IRT test information & reliability data, figured might as well share the item response profiles too for the 6 items in the Pew "general knowledge" battery.