2. What and why?
The validity of any science-comprehension instrument must be evaluated in relation to its purpose. The quality of the decisions ordinary individuals make in myriad ordinary roles—from consumer to business owner or employee, from parent to citizen—will depend on their ability to recognize and give proper effect to all manner of valid scientific information (Dewey 1910; Baron 1993). It is variance in this form of ordinary science intelligence—and not variance in the forms or levels of comprehension distinctive of trained scientists, or the aptitudes of prospective science students—that OSI_2.0 is intended to measure.
This capacity will certainly entail knowledge of certain basic scientific facts or principles. But it will demand as well various forms of mental acuity essential to the acquisition and effective use of additional scientific information. A public science-comprehension instrument cannot be expected to discern proficiency in any one of these reasoning skills with the precision of an instrument dedicated specifically to measuring that particular form of cognition. It must be capable, however, of assessing the facility with which these skills and dispositions are used in combination to enable individuals to successfully incorporate valid scientific knowledge into their everyday decisions.
A valid and reliable measure of such a disposition could be expected to contribute to the advancement of knowledge in numerous ways. For one thing, it would facilitate evaluation of science education across societies and within particular ones over time (National Science Board 2014). It would also enable scholars of public risk perception and science communication to more confidently test competing conjectures about the relevance of public science comprehension to variance in—indeed, persistent conflict over—contested risks, such as climate change (Hamilton 2011; Hamilton, Cutler & Schaefer 2012), and controversial science issues such as human evolution (Miller, Scott & Okamoto 2006). Such a measure would also promote ongoing examination of how science comprehension influences public attitudes toward science more generally, including confidence in scientific institutions and support for governmental funding of basic science research (e.g., Gauchat 2011; Allum, Sturgis, Tabourazi & Brunton-Smith 2008). These results, in turn, would enable more critical assessments of the sorts of science competencies that are genuinely essential to successful everyday decisionmaking in various domains—personal, professional, and civic (Toumey 2011).
In fact, it has long been recognized that a valid and reliable public science-comprehension instrument would secure all of these benefits. The motivation for the research reported in this paper is widespread doubt among scholars that prevailing measures of public “science literacy” possess the properties of reliability and validity necessary to attain these ends (e.g., Stocklmayer & Bryant 2012; Roos 2012; Guterbock et al. 2011; Pardo & Calvo 2004). OSI_2.0 was developed to remedy these defects.
The goal of this paper is not only to apprise researchers of OSI_2.0’s desirable characteristics relative to the measures typically featured in studies of risk and science communication. It is also to stimulate those researchers and others to adapt and refine OSI_2.0, or simply to devise a superior alternative from scratch, so that scholars studying how risk perception and science communication interact with science comprehension can ultimately obtain a scale more distinctively suited to their substantive interests than existing ones are.
Allum, N., Sturgis, P., Tabourazi, D. & Brunton-Smith, I. Science knowledge and attitudes across cultures: a meta-analysis. Public Understanding of Science 17, 35-54 (2008).
Baron, J. Why Teach Thinking? An Essay. Applied Psychology 42, 191-214 (1993).
Dewey, J. Science as Subject-matter and as Method. Science 31, 121-127 (1910).
Gauchat, G. The cultural authority of science: Public trust and acceptance of organized science. Public Understanding of Science 20, 751-770 (2011).
Hamilton, L.C. Education, politics and opinions about climate change: evidence for interaction effects. Climatic Change 104, 231-242 (2011).
Hamilton, L.C., Cutler, M.J. & Schaefer, A. Public knowledge and concern about polar-region warming. Polar Geography 35, 155-168 (2012).
Miller, J.D., Scott, E.C. & Okamoto, S. Public acceptance of evolution. Science 313, 765 (2006).
National Science Board. Science and Engineering Indicators, 2014 (National Science Foundation, Arlington, Va., 2014).
Pardo, R. & Calvo, F. The Cognitive Dimension of Public Perceptions of Science: Methodological Issues. Public Understanding of Science 13, 203-227 (2004).
Roos, J.M. Measuring science or religion? A measurement analysis of the National Science Foundation sponsored science literacy scale 2006–2010. Public Understanding of Science (2012).
Stocklmayer, S.M. & Bryant, C. Science and the Public—What should people know? International Journal of Science Education, Part B 2, 81-101 (2012).
The published version of the OSI_2.0 working paper will appear in Journal of Risk Research. Keep your eyes peeled for it at the newsstand--no doubt that issue will sell out right quick!