Thursday, October 2, 2014

What happens to Pat's perceptions of climate change risks as his/her science literacy score goes up?

A curious and thoughtful correspondent asks:

A while ago, I had read your chart with two lines in red and blue, showing the association between scientific literacy and opinion on climate change separately for liberals and conservatives. [A colleague] gave it favorable mention again in her excellent presentation at the * * * seminar today. 

The subsequent conversation reminded me that I had always wanted to see in addition the simple line chart showing the association between scientific literacy and opinion on climate change for all respondents (without breakdown for liberals and conservatives). Have you ever published or shared that? Please share chart, or, if you haven't ever run that one, please share the data?
Much thanks!

Sure!  

The line that plots the relationship for the sample as a whole will be exactly in between the other 2 lines.  The "right/left" measure is a composite Likert scale formed by summing the (standardized) responses to a 5-point left-right ideology measure and a 7-point party-identification measure. In the figures you are referring to, the relationship between science literacy and climate change risk perception is plotted separately for subjects based on whether their score on that scale is above or below the mean.

I've added a line plotting the "sample mean" relationship between global warming risk perceptions (measured on the "Industrial Strength Risk Perception Measure") and science comprehension to figures for two data sets, one in which subjects' science comprehension was measured with "Ordinary Science Intelligence 1.0" (used in the CCP Nature Climate Change study) & the other in which it was measured with OSI_2.0.

click me for more detail! & for the time of your life, I promise!
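For anyone who wants to build a comparable figure from their own survey data, here is a minimal R sketch. It is only an illustration: the simulated data and the variable names (ideology, party_id, osi, risk, group) are invented stand-ins, not the actual CCP dataset, and the split at the scale mean plus the dashed whole-sample line simply mirror what the figures described above do.

set.seed(1)
n <- 1000
ideology <- sample(1:5, n, replace = TRUE)     # 5-point left-right ideology item
party_id <- sample(1:7, n, replace = TRUE)     # 7-point party-identification item
left_right <- as.numeric(scale(ideology)) + as.numeric(scale(party_id))  # standardized & summed
group <- ifelse(left_right > mean(left_right), "Conserv_Repub", "Liberal_Dem")
osi <- rnorm(n)                                # science comprehension score
risk <- 3.5 - 0.8 * osi * ifelse(group == "Conserv_Repub", 1, -1) + rnorm(n)  # invented interaction
d <- data.frame(osi, risk, group)

plot(d$osi, d$risk, pch = 16, cex = 0.4,
     xlab = "science comprehension", ylab = "climate change risk perception")
abline(lm(risk ~ osi, data = d[d$group == "Liberal_Dem", ]), col = "blue", lwd = 2)
abline(lm(risk ~ osi, data = d[d$group == "Conserv_Repub", ]), col = "red", lwd = 2)
abline(lm(risk ~ osi, data = d), col = "black", lwd = 2, lty = 2)  # "sample mean" line for everyone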

I'm sure you can see the significance (practical, as well as "statistical") of this display for the question you posed, viz., "What's the impact of science literacy in general, for the population as a whole, controlling for partisanship, etc.?"

It's that the question has no meaningful answer.

The main effect is just a simple average of the opposing effects that science comprehension has on climate change risk perceptions (beliefs, etc.) conditional on one's cultural identity (of which right-left political outlooks are only one measure among many).

If the effect is "positive" or "negative," that just tells you something about the distribution of cultural affinities in that particular sample, the relative impact of such affinities on risk perceptions, &/or differences in the correlation between science comprehension and cultural outlooks (which turn out to be trivially small, too).
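A toy simulation (invented numbers, not CCP data) makes this concrete: when the conditional effects are equal and opposite, the estimated "main effect" is just a composition-weighted average of the two, and its sign flips with the mix of cultural types in the sample.

set.seed(2)
sim_main_effect <- function(prop_left, n = 5000) {
  identity <- ifelse(runif(n) < prop_left, 1, -1)   # +1 = left-leaning, -1 = right-leaning
  osi <- rnorm(n)                                   # science comprehension
  risk <- 3.5 + 0.5 * osi * identity + rnorm(n)     # equal & opposite conditional effects
  unname(coef(lm(risk ~ osi))["osi"])               # the estimated "main effect"
}
sim_main_effect(0.5)   # ~0: opposing effects cancel
sim_main_effect(0.6)   # positive: sample leans left
sim_main_effect(0.4)   # negative: sample leans right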

Maybe this scatterplot can get this point across visually:

 

In sum, because science comprehension interacts with cultural identity and b/c everyone identifies more or less with one or another cultural group, talking about the "main" effect is not a meaningful thing to do.  All one can say is, "the effect of science comprehension on perceptions of climate change risk depends on who one is." 

Or put it this way: the question, "What's the effect of science comprehension in general, for the population as a whole?" amounts to asking what happens to Pat as he/she becomes more science comprehending.  Pssssst . . . Pat doesn’t exist!

 

Again, I'm sure you get this now that you've seen the data, but it's quite remarkable how many people don't -- how many want to seize on the (trivially small) "main effect" &, if it happens to be sloped toward their group's position, say "See! Smart people agree with our group! Ha ha! Nah, nah, boo, boo!"

They end up looking stupid. 

Not just because anyone who thinks about this can figure out what I've explained about the meaninglessness of "main effect" when the data display this relationship. 

But also because when we see this relationship and the "main effect" is this small, that effect is likely to shift direction the next time someone collects data, something that could happen for any of myriad inconsequential reasons (the proportion of cultural types in the sample, random variation in the size of the interaction effect, slight modifications in the measure of science literacy). At that point, those who proclaimed themselves the "winners" of the last round of the "whose is bigger" game look like fools (they are, aren't they?).

But like I said, it happens over & over & over & over ....

But how about some more information about Pat? And about his/her cultural worldview & ideology & their effect on his/her beliefs about climate change?  Why not-- we all love Pat!

 

 


Reader Comments (14)

Dan -

"What happens to Pat's perceptions of climate change risks as his/her science literacy score goes up?"

This still bugs me. Only a longitudinal study could show what happens when Pat's science literacy score goes up.

October 2, 2014 | Unregistered CommenterJoshua

@Joshua:

Pat is a stand-in for "person, all else equal."

If one gets that, the question shouldn't be any more bothersome than any other "what happens to y as x increases?" query.

In any case, one can certainly learn about the plausibility of causal claims by figuring out what one would *expect* already to observe if they were true. These data are not what people who think the source of controversy over climate is "lack of science comprehension" would expect to see. They are more reason to think that the problem is something that is denying us the benefit of the science comprehension we have on hand.

October 3, 2014 | Registered CommenterDan Kahan

==> "If one gets that, the question shouldn't be any more bothersome than any other "what happens to y as x increases?" query."

Well, that would bug me too - but maybe I just don't understand?

Seems to me that you have no measure of what happens to people (w.r.t views on climate change) as they become more scientifically literate. To measure that, you'd need to do a longitudinal study.

You only have cross-sectional data that compares people in Group X (with a certain science literacy score) with people from Group Y (with a certain science literacy score). In point of fact, you don't know that people become more polarized as they become more scientifically literate - you only know that people who are more scientifically literate are more polarized.

For example, is it possible that people who are more inclined towards polarization (i.e., people who are more inclined to attach to an identity related to positions on climate change) are more incentivized to become more scientifically literate? That is different than someone who starts out relatively weakly identified and then becomes more polarized as a result of becoming more scientifically literate.


==> " These data are not what people who think the source of controversy over climate is "lack of science comprehension" would expect to see. "

That doesn't bug me - and the point I'm making isn't directly related to that issue.

October 3, 2014 | Unregistered CommenterJoshua

On a trivial level, you could do the same study with the same people - the second time around they would know all the answers to the questions and their 'science literacy' would go up. I'm guessing their opinions on climate change would stay the same.

That is, of course, partly because the 'science literacy' test doesn't so much measure science literacy as the ability to remember a list of pop-science trivia answers.

But if you liked, you could do an experiment where you administer the climate question first thing in the morning (asking also their reasons for thinking so), then for the rest of the morning you subject them to a series of lectures on scientific method, Enlightenment philosophers, Feynman videos, etc. Then give them an afternoon to research the climate question on their own, with text books, IPCC reports, the internet, mainstream and sceptical climate scientists on hand to ask questions of, then ask the climate question again. Don't forget to ask why.

If you want to make an experiment of it - one set you could give the lectures on scientific method, another lot you could give lectures on scientific authority. i.e. presentations showing scientists as intellectual giants who are always right, who work by occult methods and arcane symbols that mere mortals cannot hope to understand. (The way it is generally portrayed in the media.)

I don't know if it would be any more enlightening, but it sounds like fun.

--

I was looking at your graphs, and wondering about a question I've pondered before. Why do the people on the left hand end of the graph hold the same opinions, irrespective of political alignment? Is the question not still asking "who they are" rather than "what they know"?

Or is it that these people are so disengaged that they don't know who they are?

October 4, 2014 | Unregistered CommenterNiV

@NiV:

Have you actually looked at the instruments? Or at what sort of information there is on the sorts of dispositions they are measuring & the tests of their validity?

E.g., why is this a "pop up" quiz question?


Suppose you have a close friend who has a lump in her breast and must have a mammogram. Of 100 women like her, 10 of them actually have a malignant tumor and 90 of them do not. Of the 10 women who actually have a tumor, the mammogram indicates correctly that 9 of them have a tumor and indicates incorrectly that 1 of them does not have a tumor. Of the 90 women who do not have a tumor, the mammogram indicates correctly that 81 of them do not have a tumor and indicates incorrectly that 9 of them do have a tumor. Imagine that your friend tests positive (as if she had a tumor), what is the likelihood that she actually has a tumor?
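(For anyone who wants to work the item through, the numbers it supplies give the answer directly; a one-line check in R, shown purely for illustration and not part of the question as presented to subjects:)

true_pos  <- 9   # of the 10 women with a tumor, 9 test positive
false_pos <- 9   # of the 90 women without a tumor, 9 test positive
true_pos / (true_pos + false_pos)   # P(tumor | positive test) = 0.5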

If the instrument successfully predicts vulnerability to one or another reasoning bias likely to defeat drawing valid inferences from evidence -- like the "conjunction fallacy," the "gambler's fallacy," "probability neglect," & "cell A-B" bias in covariance detection -- why would you think it has no bearing on how reliably people can make use of valid scientific information in everyday decisionmaking?

"Not taking anyone's word" is one thing. Not bothering to look oneself at readily available information before forming an opinion is another.

On the figure, I take it you are asking why the plotted regression lines converge, or come close to converging, at the far left? First, as you know, the density of observations at far left is pretty low; if you look at the scatterplot, you get a more accurate impression. Second, "being who one is" is a form of social competence; it's not that hard but if one is 5th percentile or below, one is likely one of those rare, independent thinking clueless bumblers. Finally, the very small % of subjects who get literally *0* questions right out of 18 or 21 items are probably not responding in good faith. One could toss them but since their responses will essentially just be noise -- that's why they "average" out toward the middle of the scale -- & I have a big enough sample to shake that off, I prefer just to leave them be rather than implement some set of criteria for removing "noncomplying subjects."

October 4, 2014 | Registered CommenterDan Kahan

"Have you actually looked at the instruments? Or at what sort of information there is on the sorts of dispostitions they are measuring & the tests of their validity?"

Yes. I recall looking at it previously.

The questions also include 'science trivia' questions on radioactivity, lasers, electrons, nitrogen, Copernicus, and antibiotics, yes? And five questions on whether people understand probabilities expressed as percentages, yes? And for 92% of the population, the conditional probability question doesn't contribute any discrimination, right?

"why would you think it has no bearing on how reliably people can make use of valid scientific information in everyday decisinomaking?"

Numeracy and logic help. But that's not a test of whether you understand the scientific method - its philosophy, principles, techniques, pitfalls, etc.

Let's say our everyday decisionmaker is confronted with the scientific information that "Eight out of ten owners said their cat prefers KittyChow". How do they interpret it?

Well, a numerate person would be able to say that was 80%, and that if you asked 1,000 owners you would get around 800 saying yes. A scientifically literate person would say: Compared to what? How big a sample size was that? How did you pick the sample? How did the owners test what the cats' preferences were? Could it be that cats tend to like what they're usually given and that KittyChow just happens to be the cheapest and most popular brand on the market at the moment?

Do you see the difference? It's not simply about knowing what "eight out of ten" actually means - i.e. that it's 80%. It's not even about knowing that experts have said this, and being able to remember the statistic correctly. It's about understanding what is needed for a scientific argument, and recognising if it's there. Granted, the average person is not going to be able to check the exact binomial confidence limits based on your raw survey results. But they ought to know that a sample size of 500 is better than a sample size of 10, and that a sample size of 10 is pretty rubbish. They ought to know that a random selection of customers in a supermarket is better than surveying the staff members of the KittyChow marketing department. They need to be able to recognise that without all this supporting information, the statistic is meaningless and doesn't tell them anything useful.
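To put rough numbers on the sample-size point, a quick R check (hypothetical counts, assuming the "eight out of ten" were literal survey tallies):

binom.test(8, 10)$conf.int     # 80% from n = 10:  95% CI roughly 0.44 to 0.97
binom.test(400, 500)$conf.int  # 80% from n = 500: 95% CI roughly 0.76 to 0.83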

It is of no practical use to the man-in-the-street to know that the earth goes round the sun or vice versa, but it is rather handy to be able to spot bad science.

Your OSI 2.0 has one question that I'd say definitely tests this (the 'valid' one). The OSI 1.0, as I recall, was worse.

--
Of course, you might have the problem that if you tested for real scientific literacy, virtually none of the population would have any useful form of it and would all be crowded down at the left-hand end of the scale. That would itself explain why scientific literacy wasn't working as expected.

You shouldn't take this as a criticism of the scale, though. It's measuring something useful; I'm just not sure that 'scientific literacy' is the most appropriate description for it.

"On Figure, I take it you are asking why the plotted regression lines converge or close close to it at far left?"

Yes.

"First, as you know, the density of observations at far left is pretty low; if you look at the scatterplot, you get a more accurate impression."

So if the sample density is lower should the error bars be wider?

"Second, "being who one is" is a form of social competence; it's not that hard but if one is 5th percentile or below, one is likely one of those rare, independent thinking clueless bumblers."

Interesting thought.

"Finally, the very small % of subjects who get literally *0* questions right out of 18 or 21 items are probably not responding in good faith"

Possibly. As you know, I wonder about the good faith of some of those at the right hand end, too. The possibility of manipulation is always a problem for politically-contentious surveys.

"one could toss them but since their responses will essentially just be noise"

Will it? When doing a linear regression, values close to the ends of the range of the independent variable are weighted more heavily. If these are 'noise', doesn't this contaminate your regression line? Doesn't the claim that polarisation increases with science literacy depend on this comparison?
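(That extra weight is visible directly in the leverage values of a simple regression; a tiny illustration with made-up data:)

x <- 1:20
y <- 2 * x + rnorm(20)
round(hatvalues(lm(y ~ x)), 3)   # leverage is largest at x = 1 and x = 20, smallest near the middle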

October 4, 2014 | Unregistered CommenterNiV

Here's the pre- and post-test I'd like to see.

Ask a series of questions about beliefs on climate change and then run an exercise where the participants are engaged in a guided discussion about motivated reasoning and confirmation bias - in a general sense and then contextualized to the debate about climate change. Include also, a discussion of decision-making in the face of uncertainty, both in a general sense and then applied to questions related to climate change policy.

Then use the same battery to assess views on climate change. My guess is that you'd see a substantial difference in the post test assessment of beliefs - with significantly less polarization.

If you screened the participants first, with some sort of method used to select people who would be willing to commit to consensus resolutions about policy recommendations, you'd get an even more significant signal of reduced polarization.

IMO, if people are engaged in a well-structured participatory process, they can assimilate more information about a subject w/o necessarily becoming more polarized. The greater polarization seen among people that are more knowledgeable about climate change is not because integrating more knowledge increases polarization but because people more inclined towards polarized views seek out more information so they can reinforce their starting orientation.

In participatory democratic urban planning, participants are provided with more information concurrent with a net outcome of reducing polarization and loosening grips on positions so as to advance a process of identifying common interests. One key, which relates a bit to a commonly found "skeptical" argument, is that the information be provided within a non-hierarchical framework, where all participants have equal power to shape outcomes. With such a structure, people are more free to become invested in shared benefit as opposed to defending (perceived) self-interest.

IMO, as long as people engage with a zero sum game scorched earth approach to an issue, the independence of "information" and open-mindedness will continue. Information, in a polarized context, is ammunition.

October 4, 2014 | Unregistered CommenterJoshua

@NiV:

The bars are wider at the ends.

You can see from the scatter plot what's going on, particularly if lowess is superimposed, which confirms that the "science literacy 0's" (< 10 observations; maybe they really are just that uncomprehending) are basically just noise but, more importantly -- given your question -- that polarization increases across the entire range of science literacy.

October 4, 2014 | Registered CommenterDan Kahan

Can you show the lowess plot with its error bounds?

I was thinking that the data is heteroscedastic for each subset, so you probably need to use weighted least squares, or a weighted lowess, which ought to expand the error bounds at the left hand end more than the right.

My intuition may be wrong, but that was what I was expecting to see. It doesn't change the result, though.

Another interesting one would be to plot the variation in estimated standard deviation as well as the estimated mean. The regression lines are showing the estimated mean of the distribution and the errors in the estimation of it. But the distribution spread is also varying across the chart (standard deviation of the residuals). Showing that would make it clear that the values at the left hand end are very fuzzy.

I'm afraid I often look at your charts and think things like: "why are people on the left of the chart reporting 6 and those on the right of the chart reporting 3?" But this is misleading, because while it's true that those on the right genuinely are all reporting around 3, those on the left are reporting values all over the show, and 6 is just the middle of the distribution. Because we only see the mean and not the variation, we're missing a lot of information about the distribution.

The scatter plots are a lot more useful for understanding (in my view) than the summary regressions. This is the same point as was made a while ago about the benefits of reporting raw data and code along with the processed conclusions.

October 5, 2014 | Unregistered CommenterNiV

@NiV:

Not sure what you mean by "weighted lowess." Lowess (locally weighted scatterplot smoothing) is weighted. It basically is a model-free way to identify trends in data. Useful for exploratory purposes-- but not much else, in my view.


October 5, 2014 | Registered CommenterDan Kahan

@Joshua:


Go here.

October 5, 2014 | Registered CommenterDan Kahan

"Not sure what you mean by "weighted lowess.""

Sorry, yes I suppose it is a bit ambiguous.

Ordinary least squares regression assumes that all the individual measurements have the same accuracy - the variance is the same at every point. This isn't always true - sometimes it is known that some measurements are less accurate and more widely spread than others, and if you don't account for this, the worst errors tend to dominate the result. Weighted linear regression is a variant on ordinary linear regression where the individual errors are weighted individually proportionally to their estimated measurement accuracy before being squared and summed, and the minimum found. It's a handy way to deal with a variable that changes in variance.
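(In R, for instance, that's just the weights argument to lm; a minimal sketch with simulated data whose noise grows with x:)

x <- 1:100
y <- 2 * x + rnorm(100, sd = x)          # measurement error grows with x
fit_ols <- lm(y ~ x)                     # ordinary least squares: all points treated equally
fit_wls <- lm(y ~ x, weights = 1/x^2)    # weighted: precise (low-variance) points count more
rbind(ols = coef(fit_ols), wls = coef(fit_wls))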

LOESS or LOWESS is based on a model where the output is a deterministic function of the independent variable with smooth first and second derivatives that changes only slowly, plus Gaussian independent identically distributed errors. It attempts to estimate the smooth function at a point by selecting those observations close to it and fitting a first or second order regression curve to it, picking the neighbouring points out with a weighting function with a narrow peak centred at the estimation point. The peak is then slid along the axis to estimate the smooth function at all the points. This peaked weighting function is what the W in LOWESS stands for.

But you can also incorporate individual error weights in the same way as is done with weighted least squares. So you combine two weight functions - one fixed, representing the estimated measurement error, and one sliding, representing the neighbourhood around the point where the smooth function is being estimated. This simultaneously handles variations in measurement accuracy as well as allowing a more general deterministic trend function.

I've no idea what statistics software you're using, but I would guess there's probably an option for setting weights in the LOESS/LOWESS function. There certainly is in R.

Here's a quick R example to illustrate what I mean.
# LOESS plot with confidence bounds

testloess <- function() {
  # Generate a triangular 'wedge' of data
  x = sort(rnorm(50, 50, 20))
  y = runif(50, 0, 10000) * x / 100

  # Support interval for prediction
  xx = seq(min(x), max(x), 1)

  # Calculate fit
  lo = loess(y ~ x, weights = 1/x^2)
  prd = predict(lo, xx, se = TRUE)

  # Plot the data
  plot(x, y, ylim = c(-5000, 15000))

  # Plot the loess fit
  lines(xx, prd$fit, lwd = 2)

  # Draw the confidence bounds as a
  # shaded region - 1-sigma and 2-sigma
  polygon(c(xx, rev(xx)), c(prd$fit + prd$se.fit, rev(prd$fit - prd$se.fit)),
          col = rgb(1, 0, 0, 0.3))
  polygon(c(xx, rev(xx)), c(prd$fit + 2*prd$se.fit, rev(prd$fit - 2*prd$se.fit)),
          col = rgb(1, 0, 0, 0.3))
}

testloess()

Calling the function generates some random data where the variance increases from left to right. At the left-hand end it is zero, at the right hand end it is pretty big. The LOESS fit is done using a vector of weights that are inversely proportional to the variance. The resulting fit smooths less at the left hand end where the data is known to be more accurate, and the error bounds (1-sigma and 2-sigma shown) can be seen to expand towards the right-hand end.

R is free and pretty good.

No method of fitting data is model free, but I would agree that it shouldn't be relied upon in this case because the model it uses is not particularly plausible as a mechanism for explaining people's opinions, and the errors are pretty unlikely to be Gaussian. (But then, the same goes for linear regression. Tricky, huh?)

I don't know if the amount of effort is worth it in this case. It was just an idle curiosity on my part if by chance you could do a plot very easily - I really don't care all that much, and if it's going to take you a lot of time/effort, don't bother on my account. But it's useful to know and be able to do this stuff for those occasions when it does matter, so I've given some more detail in case you're interested.

I realise you might know all this, and if so I apologise, but maybe others might be interested too?

October 5, 2014 | Unregistered CommenterNiV

Just the raw data is shocking to me. Is this to say that scientific literacy does not lead inevitably to acknowledgement that the planet is in deep trouble ecologically, that people's political persuasion is the determining factor?

October 9, 2014 | Unregistered CommenterRobert Bate

@Robert. Yes, that's what it means. Although what that means is deeply mysterious....

I wonder what pct of US population is aware of this? And what the relationship is between knowing about this & cultural outlooks & science literacy?....

October 9, 2014 | Registered CommenterDan Kahan
