"Conservatives lose faith in science over last 40 years"--where do you see *that* in the data? 
Tuesday, November 25, 2014 at 10:52AM
Dan Kahan

Note: Special bonus! Gordon Gauchat, the author of PSPS, wrote a reflective response that I've posted in a "followup" below.  I can't think or write as fast as he does (in fact, I'm sort of freaked out by his speed & coherence), but after I think for a bit, I'll likely add something, too, since it is the case, as he says, that we "largely agree" & I think it might be useful for me to be even clearer about that, & also to engage some of the other really good, interesting points he makes.

 This is a longish post, & I apologize for that to this blog’s 14 billion regular readers.  Honestly, I know you are all very busy.

To make it a little easier, I’m willing to start with a really compact summary.

But I’ll do that only if you promise to read the whole thing. Deal?

Okay, then.

This post examines Gordon Gauchat’s Politicization of Science in the Public Sphere, Am. Sociological Rev., 77, 167-187 (2012).

PSPS is widely cited to support the proposition that controversy over climate change reflects the “increasingly skeptical and distrustful” attitude of “conservative” members of the general public (Lewandowsky et al. 2013).

This contention merits empirical investigation, certainly.

But the data analyzed in PSPS, an admittedly interesting study!, don’t even remotely support it.

PSPS’s analysis rests entirely on variance in one response level for a single part of a multiple-part survey item.  The reported changes in the proportion of survey takers who selected that particular response level  for that particular part of the single item in question cannot be understood to measure “trust” in science generally or in any group of “scientists.”

Undeniably, indisputably cannot.

Actually—what am I saying? 

Sure, go ahead and treat nonselection of that particular response level to that one part of the single survey item analyzed in PSPS as evincing a “decline” in “trust of scientists” for “several decades among U.S. conservatives” (Hmielowski et al. 2013).

But if you do, then you will be obliged to conclude that a majority of those who identify themselves as “liberals” are deeply "skeptical" and “distrustful” of scientists too.  The whole nation, on this reading of the data featured in PSPS, would have to be regarded as having “lost faith” in science—indeed, as never having had any to begin with.

That would be absurd. 

It would be absurd because the very GSS survey item in question has consistently found—for decades—that members of the US general public are more “confident” in those who “run” the “scientific community” than they are in those who “run” “major companies,” the “education” system, “banks and financial institutions,” “organized religion,” the “Supreme Court,” and the “press.”

For the entire period under investigation, conservatives rated the “scientific community” second among the 13 major U.S. institutions respondents were instructed to evaluate.

If one accepts that it is valid to measure public "trust" in institutions by focusing so selectively on this portion of the data from the GSS "confidence in institutions" item, then we’d also have to conclude that conservatives were twice as likely to "distrust" those who "run . . . major companies" in the US as they were to "distrust" scientists.

That’s an absurd conclusion, too. 

PSPS’s analysis for sure adds to the stock of knowledge that scholars who study public attitudes toward science can usefully reflect on.

But the trend the study shows cannot plausibly be viewed as supporting inferences about the level of trust that anyone, much less conservatives, have in science.

That’s the summary.  Now keep your promise and continue reading.

A. Let’s get some things out of the way

Okay, first some introductory provisos

1. I think PSPS is a decent study.  The study notes a real trend & it’s interesting to try to figure out what is driving it.  In addition, PSPS is by no means the only study by Gordon Gauchat that has taught me things and profitably guided the path of my own research.  Maybe he'll want to say something about how I'm addressing the data he presented (I'd be delighted if he posted a response here!).  But I suspect he cringes when he hears some of the extravagant claims that people make--the playground-like prattle people engage in--based on the interesting but very limited and tightly focused data he reported in PSPS.

2. There’s no question (in my mind at least) that various “conservative” politicians and conflict entrepreneurs have behaved despicably in misinforming the public about climate change. No question that they have adopted a stance that is contrary to the best available evidence, & have done so for well over a decade.

3. There are plenty of legitimate and interesting issues to examine relating to cognitive reasoning dispositions and characteristics such as political ideology, cultural outlooks, and religiosity. Lots of intriguing and important issues, too, about the connection between these indicators of identity and attitudes toward science.  Many scholars (including Gauchat) and reflective commentators are reporting interesting data and making important arguments relating to these matters.  Nevertheless, I don’t think “who is more anti-science—liberals or conservatives” is an intrinsically interesting question—or even a coherent one.  There are many, many more things I’d rather spend my time addressing.

But sadly, it is the case that many scholars and commentators and ordinary citizens insist there is a growing “anti-science” sensibility among a meaningful segment of the US population.  The “anti-science” chorus doesn’t confine itself to one score but “conservatives” and “religious” citizens are typically the population segments they characterize in this manner.

Advocates and commentators incessantly invoke this “anti-science” sentiment as the source of political conflict over climate change, among other issues.

Those who make this point also constantly invoke one or another “peer reviewed empirical study” as “proving” their position.

And one of the studies they point to is PSPS.

Because I think the anti-science trope is wrong; because I think it actually aggravates the real dynamics of cultural status competition that drive conflict over climate science and various other science-informed issues; because I think many reasonable people are nevertheless drawn to this account as a kind of a palliative for the frustration they feel over the persistence of cultural conflict over climate change; because I think empirical evidence shouldn’t be mischaracterized or treated as a kind of strategic adornment for arguments being advanced on other grounds; because I have absolutely no worries that another scholar would resent my engaging his or her work in the critical manner characteristic of the process of conjecture and refutation that advances scientific understanding; and because only a zealot or a moron would make the mistake of thinking that questioning what conclusions can appropriately be drawn from another scholar’s empirical research, criticizing counterproductive advocacy, or correcting widespread misimpressions is equivalent to “taking the side of” political actors who are misinforming the public on climate change, I’m going to explain why PSPS does not support claims like these:

[Image: examples of these claims. Original caption: "Have they actually read the study? click to see what they say ..."]

B. Have you actually read PSPS?

It only takes about 5 seconds of conversation to make it clear that 99% of the people who cite PSPS have never read it.

They don’t know it consists of an analysis of one response level to a single multi-part public opinion item contained in the General Social Survey, a public opinion survey that has been conducted repeatedly for over four decades (28 times between 1974 and 2012).

Despite how it is characterized by those citing PSPS, the item does not purport to measure “trust” in science. 

It is an awkwardly worded question, formulated by commercial pollsters in the 1960s, that is supposed to gauge “public confidence” in a diverse variety of (ill-defined, overlapping) institutions (Smith 2012):

I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?

a. Banks and Financial Institutions [added in 1975]

b. Major Companies

c. Organized Religion

d. Education

e. Executive Branch of the Federal Government

f. Organized Labor

g. Press

h. Medicine

i. TV

j. U.S. Supreme Court

k. Scientific Community

l. Congress

m. Military

For the period from 1974 to 2010, PSPS examines what proportion of respondents selected the response “a great deal of confidence” in those “running” the “Scientific Community.”

 

As should be clear, the PSPS figure above plots changes only in the “great deal of confidence” response. 

I’m sure everyone knows how easy it is to make invalid inferences when one examines only a portion rather than all of the response data associated with a survey item.

Thus, I’ve constructed Figures that make it possible to observe changes in all three levels of response for both liberals and conservatives over the relevant time period: 

As can be seen in these Figures, the proportion selecting “great deal” has held pretty constant at just under 50% for individuals who identified themselves as “liberals” of some degree (“slight,” “extreme,” or in between) on a seven-point ideology measure (one that was added to the GSS in 1974).

Among persons who described themselves as “conservatives” of some degree, the proportion declined from about 50% to just under 40%.  (In the 2012 GSS—the most recent edition—the figures for liberals and conservatives were 48% and 40%, respectively. I also plotted pcts for "great deal" in relation to the relevant GSS surveys "yesterday" in this post.)
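The figures described above can be sketched in a few lines of pandas. The column names and the tiny toy dataset below are assumptions for illustration, not the actual GSS variable names or values; the point is simply that all three response levels, not just "a great deal," get tabulated:

```python
import pandas as pd

# Hypothetical GSS-style extract: one row per respondent. These names and
# values are invented for illustration (the real GSS variables differ).
df = pd.DataFrame({
    "year":       [1974, 1974, 1974, 1974, 2010, 2010, 2010, 2010],
    "ideology":   ["liberal", "liberal", "conservative", "conservative"] * 2,
    "confidence": ["great deal", "only some", "great deal", "only some",
                   "great deal", "only some", "only some", "hardly any"],
})

# Share of each response level within each year x ideology cell --
# i.e., all three levels, not just "a great deal."
shares = (df.groupby(["year", "ideology"])["confidence"]
            .value_counts(normalize=True)
            .unstack(fill_value=0.0))
print(shares)
```

Plotting each column of `shares` over `year`, separately for liberals and conservatives, reproduces the kind of figure discussed here.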

The decline in the proportion of conservatives selecting “great deal” looks pretty continuous to the naked eye, but using a multi-level multivariate analysis (more on that below), PSPS reported finding that the decline was steeper after the elections of Ronald Reagan in 1980 and George W. Bush in 2000.

That’s it.

Do you think that these data justify conclusions like "conservatives' trust in science has declined sharply," "conservatives have turned on science," "Republicans really don't like science," "conservatives have lost their faith in science," "fewer conservatives than ever believe in science," etc?  

If so, let me explain why you are wrong.

C.  Critically engaging the data

1. Is everyone anti-science?

To begin, why should we regard the “great deal of confidence” response level as the only one that evinces “trust”?

“Hardly any” confidence would seem distrustful, I agree.

But note that the proportion of survey respondents selecting “hardly any at all” held constant at under 10% over the entire period for both conservatives and liberals.

Imagine I said that I regarded that as inconsistent with the inference that either conservatives or liberals “distrust” scientists.

Could you argue against that?

Sure.

But if you did, you’d necessarily have to be saying that selecting “some confidence” evinces  “distrust” in scientists.

If you accept that, then you’ll have to conclude that a majority of “liberals” distrust scientists today,  too, and have for over 40 years.

For sure, that would be a conclusion worthy of headlines, blog posts, and repeated statements of deep concern among the supporters of enlightened self-government.

But such a reading of this item would also make the decision to characterize only conservatives as racked with “distrust” pathetically selective.
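The arithmetic behind that conclusion is simple enough to spell out (the round numbers below are illustrative stand-ins for the figures reported above, with 8% assumed for "under 10%"):

```python
# Illustrative round numbers from the post: among liberals, roughly 48%
# pick "a great deal" and under 10% pick "hardly any" (8% assumed here).
great_deal = 48
hardly_any = 8
only_some = 100 - great_deal - hardly_any  # remainder picks "only some"

# Under the reading that anything short of "a great deal" evinces
# distrust, the "distrustful" share of liberals is:
distrustful = only_some + hardly_any
print(distrustful)  # 52 -- a majority
```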

2.  Wow--conservative Republicans sure “distrust” business!

You’d also still be basing your conclusion on only a small portion of the data associated with the survey item.

Take a look, for example, at the responses for “Major companies”: 

It’s not a surprise, to me at least, that conservatives have had more confidence than liberals in “major companies” over the entire period.

I’m also not that surprised that even conservatives have less confidence in major companies today than they did before the financial meltdown.

But if you are of the view that any response level other than “a great deal of confidence” evinces “distrust,” then you’d have to conclude that 80% of conservatives today “distrust” our nation’s business leaders.

You’d also have to conclude that conservatives are twice as likely to trust those “running . . . the scientific community” as they are to trust those “running . . . major companies.”

I’d find those conclusions surprising, wouldn’t you?

But of course we should be willing to update our priors when shown valid evidence that contradicts them. 

The prior under examination here is that PSPS supports the claim that conservatives “don’t believe in science,” "have turned on science," “reject it," have "lost their faith in it," have been becoming "increasingly skeptical" of it "for decades,"  etc.

The absurdity of the conclusions that would follow from this reading of PSPS—that liberals and conservatives alike "really don't like science," that conservatives have so little trust in major companies that they'd no doubt vote to nationalize the healthcare industry, etc.—is super strong evidence that it's unjustifiable to treat the single response level of the GSS "confidence" item featured in PSPS as a litmus test of anyone's "trust" in science.

3.  Everyone is pro-science according to the data presented in PSPS

What exactly do responses to the GSS “confidence” item signify about how conservatives and liberals feel about those “running” the “Scientific community”?

Again, it’s always a mistake to draw inferences from a portion of the response to a multi-part survey item.  So let’s look at all of the data for the GSS confidence item.

The mean scores are plotted separately for “liberals” and “conservatives.” The 13 institutions are listed in descending order as rated by conservatives—i.e., from the institution in which conservatives expressed the greatest level of confidence in each period to the one in which they expressed the least. 

The variance in selection of the "great deal" response level analyzed in PSPS is reflected in the growing difference between liberals' and conservatives' respective overall "confidence" scores for "the Scientific Community."

Various other things change, too.

But as can be seen, during every time period—including the ones in which Ronald Reagan and G.W. Bush were presidents—conservatives awarded the “Scientific community” the second highest confidence score among the 13 rated institutions.  Before 1990, conservatives ranked the “science community” just a smidgen below “medicine”; since then, only the “military” has outranked it.

Conservatives rated the “science community” ahead of “major companies,” “organized religion,” “banks and financial institutions,” and “education,” not to mention “organized labor,” the “Executive Branch of the Federal Government” (during the Reagan and G.W. Bush administrations!), Congress, and “TV” throughout the entire period!

Basically the same story with liberals.  They rated the “science community” second behind “medicine” before 1990, and first in the periods thereafter.
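The ranking exercise itself can be made concrete. The mean scores below (coded hardly any=1, only some=2, a great deal=3) are invented for illustration; only the ordering mirrors the pattern described above for conservatives in the post-1990 periods:

```python
# Hypothetical mean confidence scores for conservatives in one period.
# The numeric values are made up; only the ordering tracks the post.
means = {
    "Military": 2.55,
    "Scientific Community": 2.42,
    "Medicine": 2.40,
    "U.S. Supreme Court": 2.20,
    "Banks and Financial Institutions": 2.05,
    "Major Companies": 2.02,
    "Education": 2.00,
    "Organized Religion": 1.98,
    "Executive Branch of the Federal Government": 1.80,
    "Press": 1.60,
    "Organized Labor": 1.58,
    "TV": 1.55,
    "Congress": 1.50,
}

# Rank all 13 institutions from most to least confidence.
ranking = sorted(means, key=means.get, reverse=True)
print(ranking.index("Scientific Community") + 1)  # 2: second of thirteen
```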

So what inference can be drawn?

Certainly not that conservatives distrust science or any group of scientists.

Much more plausible is that conservatives, along with everyone else, hold science in extremely high regard.

That’s obvious, actually, given that the “Confidence” item sets up a beauty-contest by having respondents evaluate all 13 institutions.

But this reading—that conservatives, liberals, and everyone else has a high regard for science—also fits the results plainly indicated by a variety of other science-attitude items that appear in the GSS and in other studies.

It’s really really really not a good idea to draw a contentious/tendentious conclusion from one survey item (much less one response level to one part of a multi-part one) when that conclusion is contrary to the import of numerous other pertinent measures of public opinion.

4. Multivariate analysis

The analyses I’ve offered are very simple summary ones based on “raw data” and group means.

There really is nothing to model statistically here, if we are trying to figure out whether these data could support claims like "conservatives have lost their faith in science" or  have become “increasingly skeptical and distrustful” toward it. If that were so, the raw data wouldn't look the way it does.

Nevertheless, PSPS contains a multivariate regression model that puts liberal-conservative ideology on the right-hand side with numerous other individual characteristics.  How does that cut?

As much as I admire the article, I'm not a fan of the style of model PSPS uses here.

E.g., what exactly are we supposed to learn from a parameter that reflects how much being a "conservative" rather than a "liberal" affects the probability of selecting the "great deal" response "controlling for" respondents' political party affiliation?

Overspecified regressions like these treat characteristics like being “Republican,” “conservative,” a regular church goer, white, male, etc. as if they were all independently operating modules that could be screwed together to create whatever sort of person one likes.

In fact, real people have identities associated with particular, recognizable collections of these characteristics.  Because we want to know how real people vary, the statistical model should be specified in a way that reflects differences in the combinations of characteristics that indicate these identities--something that can’t be validly done when the covariance of these characteristics is partialed out in a multivariate regression (Lieberson 1985; Berry & Feldman 1985).

But none of this changes anything.  The raw data tell the story. The misspecified model doesn’t tell a different one—it just generates a questionable estimate of the difference in likelihood that a liberal as opposed to a conservative will select “great deal” as the response on "Confidence" when assessing those who "run ... the Scientific Community” (although in fact PSPS reports a regression-model estimate of 10%—which is perfectly reasonable, given that that's exactly what one observes in the raw data).
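One way to see why partialing out highly correlated identity markers yields questionable estimates is to look at variance inflation. A minimal simulation sketch, assuming (purely for illustration, not estimated from the GSS) a 0.8 correlation between ideology and party identification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate two highly correlated identity indicators standing in for
# left-right ideology and party ID. The 0.8 correlation is an assumption
# chosen for illustration.
ideology = rng.standard_normal(n)
party = 0.8 * ideology + np.sqrt(1 - 0.8 ** 2) * rng.standard_normal(n)

# With two predictors, the variance inflation factor for each is
# 1 / (1 - r^2), where r is their correlation: putting both on the
# right-hand side inflates the variance of each coefficient accordingly.
r = np.corrcoef(ideology, party)[0, 1]
vif = 1.0 / (1.0 - r ** 2)
print(round(vif, 2))  # roughly 2.8 with these settings
```

The estimate doesn't become biased, just noisier and harder to interpret—which is the sense in which the "controlling for party" coefficient on ideology answers a question about a person who doesn't exist.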

5. Someone should do a study on this!

There’s one last question worth considering, of course.

If I’m right that PSPS doesn’t support the conclusion that conservatives have “lost faith” in science, why do so many commentators keep insisting that that’s what the study says?  Don’t we need an explanation for that?

Yes. It is the same explanation we need for how a liberal democracy whose citizens are as dedicated to pluralism and science as ours are could be so plagued by unreasoning sectarian discourse about the enormous stock of knowledge at its disposal.

Refs

Berry, W.D. & Feldman, S. Multiple Regression in Practice (Sage Publications, Beverly Hills, 1985).

Gauchat, G. Politicization of Science in the Public Sphere, Am. Sociological Rev., 77, 167-187 (2012)

Hmielowski, J.D., Feldman, L., Myers, T.A., Leiserowitz, A. & Maibach, E. An attack on science? Media use, trust in scientists, and perceptions of global warming. Public Understanding of Science  (2013).

Lewandowsky, S., Gignac, G.E. & Oberauer, K. The role of conspiracist ideation and worldviews in predicting rejection of science. PloS one 8, e75637 (2013).

Lieberson, S. Making it count : the improvement of social research and theory (University of California Press, Berkeley, 1985).

Smith, T.W. Trends in Confidence in Institutions, 1973-2006. in Social Trends in American Life: Findings from the General Social Survey Since 1972 (ed. P.V. Marsden) (Princeton University Press, 2012).

Update on Tuesday, November 25, 2014 at 5:15PM by Dan Kahan

Response from Gordon Gauchat

Thanks for the heads up and I read your thoughtful comments. I think I largely agree with the substance of what you are saying here, with some minor esoteric points of disagreement. For example, your argument about over-specification seems overblown given that the point estimate the model produces seems very accurate.

So, I will start with the particulars about the PSPS paper, and then move on to broader themes. I do agree about some measurement/over-specification issues you bring up, especially about how to measure left-right orientation after controlling for Democrat-Republican. Unfortunately, one has to accommodate reviewers in the process of publishing articles. I no longer use this technique (and wasn't a big fan in the first place), and in my current research I combine left-right ideology and Democrat-Republican party identity into a single continuous scale. I also agree that the dependent variable is limiting, with only three response categories and the need "for simplicity" to split up the data into "a great deal of confidence," "only some," and "hardly any." I think the most important point here is not about the point estimates of particular categories, but change over time.
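(A minimal sketch of the combined-scale idea just described, with assumed GSS-style codings—POLVIEWS running 1 to 7, PARTYID running 0 to 6—and a standardize-and-average construction; the exact coding used in the current research may differ:)

```python
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Assumed GSS-style codings: POLVIEWS 1 (extremely liberal) to 7
# (extremely conservative); PARTYID 0 (strong Democrat) to 6 (strong
# Republican). Toy values for eight respondents.
polviews = [1, 2, 4, 6, 7, 4, 3, 5]
partyid  = [0, 1, 3, 5, 6, 2, 2, 6]

# One simple construction: standardize each item and average, yielding a
# single continuous left-right scale. A sketch of the idea only.
left_right = (zscore(polviews) + zscore(partyid)) / 2
print(np.round(left_right, 2))
```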

I think there has been too much emphasis on the "limits" of the question and not enough attention paid to identifying the mechanism behind the change in public perceptions among conservatives. That is, what is the mechanism(s) that produces the observed change among conservatives? I also think historical and social context is important, but often forgotten here.

As you point out, the devil is in the details, but I would elaborate on other complications, some of which you only touch on. In an earlier post on the PSPS paper, you suggested that the mechanism behind the change is likely climate change: conservatives' perceptions of science are "biased" or, I would argue, related to the highly politicized discourse about climate change (which they oppose). The reasonable question is: does a change in conservative perceptions, driven by their perceptions toward climate science and environmental activism, really say anything about the cultural authority of science? I think the answer is YES. Here are a couple of very important points that need clarifying.

1) Science enjoys a tremendous amount of cultural authority across all segments of society. All of the public opinion data support this story. But, this DOES NOT mean that science's cultural authority is limitless. And, it may be useful to understand where these limitations generally lie and to understand group-specific trends. That is, social science researchers (you and I) want to know how society is culturally divided in its acceptance/skepticism of the various aspects of science-in-society, including its relationship to the state or regulatory apparatus, or some specific issue like climate change.

2) In fact, observing that most people say science is trustworthy or "beneficial" has a limited meaning, because we are really concerned about the connection between science and public policy. And, even if a small but well-funded and mobilized group opposes some key policy (climate change; OSHA regulation; vaccines), this can have an impact on how or whether a policy is enacted. Simply, in democratic societies like our own, some minorities have significant power. Also, because there are no "anti-science activists" who oppose all scientific activity and believe it must be stopped at all costs, it might be difficult to identify the limitations of science's authority. I would argue that the limits of scientific authority relate not to its "epistemic authority" (its ability to make claims about the world), but to its ability to influence policy or regulate human behavior. However, the use of science and scientific knowledge toward these ends seems to be exactly the direction advanced democracies are moving (or at least some parties in these societies). So, I would argue that public perceptions about the intersection of science and the state are really at issue here. Also at issue are attitudes about specific policies where scientific evidence is manifestly important to identifying and managing a social problem. In this sense, I wonder if climate science is a particular example of something larger about "regulatory science" in late modern society. How can we use the scientific knowledge that the public is paying for effectively? What are the limits here? Is the large scale funding and use of scientific knowledge for public policy feasible and politically supported?

3) The cultural authority of science is multidimensional and public perceptions are relational. This is an important point, because public perceptions involve a field of relationships; people don't view science NET OF other objects in society (groups and institutions). I think we need to be reminded of this continuously, because conservatives could become more "skeptical of science" if science and scientists are seen as more liberal and if Democrats are seen as strong supporters of science (these perceptions about other groups need not be true either). Multiple regression models do not encourage this sort of relational thinking, but are not entirely useless in this regard either. However, the researcher has to look at a variety of data and, in many cases, speculate about the mechanisms driving them. Thus, what started as a concern about government regulation and environmental issues could turn into a general disposition toward science among conservatives: if people perceive science in relation to their perceptions of other groups and those groups' perceptions, then this would produce a "more general" skepticism of scientific authority across the issues. In fact, recent public opinion data from CBS and Pew Research point in this direction: climate change, vaccines and views on evolution are becoming "more polarized." Thus, attitudes toward science must account for public perceptions about science IN RELATION TO government policy, technological advances, public funding, economic innovation, their perceptions of other social groups' perceptions, and not search endlessly for the single dimension of science's cultural authority to measure. Given that researchers have yet to fully embrace this multidimensional approach, I think the jury is still out on whether or how science is politicized in the U.S., but I think the data encourage further exploration. Certainly, no single study of public opinion data can be definitive.

4) Finally, I think that conservative challenges to science (if real) are significant, because these dispositions COULD represent a rethinking of the way science relates to the regulatory state. If there is a politicized skepticism about how science is funded, and how scientific knowledge is incorporated into state-policy, then the way science is organized may well change, possibly in the form of severe retrenchment of federal funding in the U.S. Whether this is good or bad, I can't say for certain. But, I think there is currently much stronger support for science austerity on the political right in the U.S., not just attempts to "defund" the EPA. This proposition might be tested in a few years.    

Update on Thursday, November 27, 2014 at 2:21PM by Dan Kahan

My response to Gordon's response

Thank you, Gordon.

I've been thinking all this time since you posted your reflections & still haven't come up with a set of my own that fully reciprocates the contribution your response makes to thinking about the issues we are discussing. If we add to the value of your response the insights reflected in the comments of @L.Hamilton & others, then I'm really coming up short-handed.

But I am moved to make these observations:

1.  What's the point? 

I just want to underscore your observation that we "largely agree."

Indeed, your response doesn't take issue with my analysis of why the results in PSPS don't support claims (asserted incessantly by tribal contempt mongers and even by some scholars) that members of the public who identify as "conservative" "distrust" science or scientists, and that this "anti-science" sensibility explains conflict over climate change & other issues that turn on policy-relevant science. 

I'm pointing this out not in the spirit of a high-school debater who tries to convince the judges that she "wins" because her opponent has implicitly "conceded" her point by "ignoring" it, etc. (you didn't ignore anything, certainly).

Rather, I feel impelled to note that you aren't taking issue with my analysis so that I can remedy what might well have been a failure on my part to emphasize even more strongly than I did that the sort of critique I was offering -- one aimed at the inferences being drawn from your study -- is not a criticism of the soundness or value of the research reflected in PSPS! 

As I've mentioned in the post and have stated before, I think it's a really cool paper & calls our attention to a real trend in one measure of public opinion that we ought to try to understand as we puzzle through the larger question of why issues that admit of scientific inquiry so often (but less often than people think) generate political polarization.

It doesn't surprise me at all, then, that you aren't joining issue directly with my analysis.  It heartens me by corroborating my surmise that you yourself see nothing worth defending in the cartoonish characterizations of your study featured in sectarian and pseudo-scholarly discourse.

2.  What's the question?

Your comments are directed at the much more interesting, substantive issue of what we are to make of divisive, cultural conflict over policy-relevant science within a society that itself features very broad, very deep commitment to science, both as an institution and as a way of knowing.  How do such conflicts happen? What impact might they have on how science, as an institution and as a way of knowing, figures in political life & our society generally?

These are the issues that fascinate you & me and others who are in our scholarly conversation.  Unlike the pathetic and universally demeaning "who's really anti-science--you or us?" question, they are worth serious attention.  They really matter.

In relation to them, I think you & I & others agree on many points but likely do disagree on some important ones too.

In a nutshell (a nanotube even), I think political polarization over risks and other policy-relevant facts is a consequence of the latent distrust citizens with opposing cultural identities have of one another, & their suspicion that "science" is being invoked opportunistically, disingenuously to disguise as claims about "how the world works" what are in fact contested understandings of "how we should live."

I don't think this dynamic reflects any sort of lurking, imminent ambivalence toward science on anyone's part.  

I share your worries that the consequence of this sort of conflict could damage the credibility of science, but to be honest I don't worry that much about that.  

I am more worried about the barrier this sort of "cultural conflict of fact" poses to our getting the value of our common commitment to science-informed policies, and about the degrading effect it has on liberal public reason in our political culture generally.

But much more important to me than "proving" I'm "right" and you or anyone else "wrong" about these things is that we address our competing conjectures in the manner that we agree is the only one that can warrant a basis for crediting any of them: by using the disciplined method of collecting observations, making reliable measurements, and drawing valid inferences that is the signature of science.

My engagement with PSPS here has to do with whether the evidence that it validly assembles and measures supports particular inferences relating to claims that matter to you and to me, and also relating to some that I think likely don't matter to either of us except to the extent that they create an ugly and unfortunate distraction.

You might be right in the broader claims you advance in your response-- ones that also inform the interesting comments of @LHamilton & others below.

But as I know you know (I make this point b/c so many others don't seem to get this), the question is whether and how much the PSPS evidence supports your position--not whether the position is right, much less how, if one already accepts that conservative members of the public "distrust" science, that position can be used to support an interpretation of the evidence in PSPS consistent with that very position.

3.  What's the upshot?

Implicit in your response, and something I now want to make explicit in mine, is the most important agreement we share: that exchanges like this -- earnest, open ones, focused on challenging one another's best understandings of what we are able to observe using the tools of valid empirical inquiry at our disposal -- are essential to the process by which shared knowledge grows.

I was moved to offer this set of reflections on your paper at this particular time in part to answer people who clearly don't understand it, and who mistakenly view an empirical study as "settling" or "proving" a proposition rather than as simply adding another increment of evidence to be balanced along with all the rest as reasoning people weigh the considerations on both sides of a difficult question; who mistakenly view "peer review" as ending with publication, when in fact, that is precisely when the most important form of peer review starts!

But the truth is, I've been reflecting on and engaging with PSPS since the day it was published.  It has unquestionably advanced my understanding of matters at the center of my research.

This exchange, including your response & the comments of @LHamilton & others I have received off-line, has helped crystallize for me, too, that what we most need to advance our shared interest in the questions we are addressing is a set of much better measures of trust in science.

Actually, I know you already agree with this.  It is one of the conclusions of your excellent study, The cultural authority of science: Public trust and acceptance of organized science, Public Understanding of Science 20, 751-770 (2011), which is filled with interesting analyses of new "science attitude" measures from the 2006 GSS -- measures that, as you yourself help to show, really don't sort anything out, because it isn't clear what they are measuring (and they certainly weren't all measuring the same thing; they didn't form a reliable scale).

When people trot out the tiresome, empirically uninformed "growing public distrust in science" claim, I like to point out how the NSF Science Indicators items (themselves part of the GSS) and other items included in the 2009 Pew public science attitude study uniformly indicate public reverence for science. I like to point out, too, how universal that sentiment is across diverse cultural, political, and religious groups, etc.

I'd say, in fact, that the GSS "Confidence" in the "science community" item tells the same story, once you consider how responses to it relate to the public's confidence in the full set of 13 public institutions featured in the GSS "Confidence" item.

You in your response and others at various times in their own comments suggest that these "I love science!" measures (let's call them) might be "missing" some more basic kind of ambivalence toward at least some forms of science, a disaffection that might be growing and that might be consequential for certain sorts of policy disputes.

I agree that it would be a mistake to assume the "I love science!" items are supplying a valid measure of what you have in mind.

I'm not sure what they are measuring, frankly, because no one, despite all the scholarly writings about what one or another measure of public opinion suggests about "trust" in science, has ever even defined the construct of "trust in science" they have in mind, much less validated the measures they are using to test their various hypotheses about it!

But then it's clear what's holding us up -- and what we need, not necessarily to assure convergence of our views, but to continue the conversation that we know will make us both smarter whether or not we end up agreeing!

We need to articulate clearly what the relevant constructs are as we continue to assess competing conjectures that feature opposing claims about "trust in science," and we need reliable validated measures of those constructs.

(I would have thought it would be obvious that if we are trying to figure out whether "distrust" in science among the public generally, or among some particular subpopulation, accounts for disputes over issues like climate change or even evolution, then the positions of members of the public on those very issues can't be used as measures of "trust in science."  Those who correlate some group characteristic with climate skepticism or disbelief in evolution, and then cite that correlation to "prove" that the group's "science skepticism" reflects "diminishing trust in science," are plainly arguing in a circle.  But they keep chasing their tails in this way, as do those who ignore evidence that the "trust" measures they use to examine particular risk controversies can in fact be shown to be measuring the very attitudes they are trying to explain. Bizarre!)

I'm betting that we'll eventually see valid "trust in science" measures that can be used to examine the interesting, important, and certainly very plausible claims you discuss in your response.  And I'm betting that your future work will make a major contribution to their development.

 

Article originally appeared on cultural cognition project (http://www.culturalcognition.net/).
See website for complete article licensing information.