Thursday
Jan 28, 2016

CCP/Annenberg PPC Science of Science Communication Lab, Session 2: Measuring relative curiosity

During my stay here at APPC, we'll be having weekly "science of science communication lab" meetings to discuss our ongoing research projects.  I've decided to post a highlight or two from each meeting.

We just had the 2nd, which means I'm one behind.  I'll post the "session 1 highlight" "tomorrow."

One of the major projects for the spring is "Study 2" in the CCP/APPC Evidence-based Science Filmmaking Initiative.  For this session, we hosted two of our key science filmmaker collaborators, Katie Carpenter & Laura Helft, who helped us reflect on the design of the study.

One thing that came up during the session was the distribution of “science curiosity” in the general population.

The development of a reliable and valid measure of science curiosity—the “Science Curiosity Scale” (SCS_1.0)—was one of the principal objectives of Study 1. As discussed previously, SCS worked great, not only displaying very healthy psychometric properties but also predicting with an admirable degree of accuracy engagement with a clip from Your Inner Fish, ESFI collaborator Tangled Bank Studios' award-winning film on evolution.

Indeed, one of the coolest findings was that individuals who were comparably high in science curiosity (as measured by SCS) were comparably engaged by the clip (as measured by view time, request for the full documentary, and other indicators) irrespective of whether they said they “believed in” evolution.

Evolution disbelievers who were high in science curiosity also reported finding the clip to be an accurate and convincing account of the origins of human color vision.

But it’s natural to wonder: how likely is someone who disbelieves in evolution to be high in science curiosity?

The Report addresses the distribution of science curiosity among various population subgroups. The information is presented in a graphic that displays the mean SCS scores for opposing subgroups (men and women, whites and nonwhites, etc.).

Scores on SCS (computed using Item Response Theory) are standardized. That is, the scale has a mean of 0, and units are measured in standard deviations.

The graphic, then, shows that no subgroup's mean SCS score was more than 1/4 of a standard deviation above or below the sample mean on the scale. The Report suggested that this was a reason to treat the differences as so small as to lack any practical importance.

Indeed, the graphic display was consciously selected to help communicate that. Had the Report merely characterized the scores of subgroups as “significantly different” from one another, it would have risked provoking the Pavlovian form of inferential illiteracy that consists in treating “statistically significant” as in itself supporting a meaningful inference about how the world works, a reaction that is very, very hard to deter no matter how hard one tries.

By representing the scores of the opposing groups in relation to the scale's standard-deviation units on the y-axis, it was hoped that reflective readers would discern that the differences among the groups were indeed far too small to get worked up over—that all the groups, including the one whose members were above average in science comprehension (as measured by the Ordinary Science Intelligence assessment), had science curiosity scores that differed only trivially from the population mean (“less than 1/4 of a standard deviation--SEE???”).

But as came up at the session, this graphic is pretty lame.

Even most reflective people don’t have good intuitions about the practical import of differences of fractions of a standard deviation. Aside from being able to see that there's not even a trace of difference between whites & nonwhites, readers can still see that there are differences in science curiosity levels & still wonder exactly what they mean in practical terms.

So what might work better?

Why—likelihood ratios, of course! Indeed, when Katy Barnhart from APPC spontaneously (and adamantly) insisted that this would be a superior way to graph this data, I was really jazzed!

I’ve written several posts in the last year or so on how useful likelihood ratios are for characterizing the practical or inferential weight of data. In those posts, I stressed that LRs, unlike “p-values,” convey information on how much more consistent the observed data are with one rather than another competing study hypothesis.

Here LRs can aid practical comprehension by telling us the relative probabilities of observing members of opposing groups at any particular level of SCS.

In the graphics below, the distribution of science curiosity within opposing groups is represented by probability density distributions derived from the means and standard deviations of the groups’ SCS scores. 

As discussed in previous posts, study hypotheses can be represented this way: because any study is subject to measurement error, a study hypothesis can be converted into a probability density distribution of "predicted study outcomes," in which the “mean” is the predicted result and the standard error reflects the measurement precision of the study instrument.

If one does this, one can determine the “weight of the evidence” that a study furnishes for one hypothesis relative to another by comparing how likely the observed study result was under each of the probability-density distributions of “predicted outcomes” associated with the competing hypotheses.

This value—which is simply the relative “heights” of the opposing curves at the point where the observed value falls—is the logical equivalent of the Bayesian likelihood ratio, or the factor in proportion to which one should update one’s existing assessment of the probability of some hypothesis or proposition.

Here, we can do the same thing.  We know the mean and standard deviations for the SCS scores of opposing groups.  Accordingly, we can determine the relative likelihoods of members of opposing groups attaining any particular SCS score. 
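To make the mechanics concrete, here's a minimal sketch in Python (emphatically not the study's actual code). The ±0.2 SD group means are values I'm assuming purely for illustration; they happen to generate an LR close to the one reported in the next paragraph.

```python
# A minimal sketch of the density-ratio LR; the +/-0.2 SD group means
# are hypothetical, assumed purely for illustration.
from scipy.stats import norm

above_avg = norm(loc=0.2, scale=1.0)   # assumed above-average OSI group
below_avg = norm(loc=-0.2, scale=1.0)  # assumed below-average OSI group

# SCS score at the 90th percentile of the overall (mean 0, SD 1) scale
score = norm.ppf(0.90)  # ~1.28

# The LR is the ratio of the curves' "heights" (densities) at that score
lr = above_avg.pdf(score) / below_avg.pdf(score)
print(f"LR at the 90th percentile: {lr:.2f}")  # ~1.7 with these assumed values
```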

An SCS score that places a person at the 90th percentile is about 1.7x more likely if someone is “above average” in science comprehension (measured by the OSI assessment) than if someone is below average. 

There is a 1.4x greater chance that a person will score at the 90th percentile if that person is male rather than female, and a 1.5x greater chance that the person will do so if he or she has political outlooks to the "left" of center rather than the "right" on a scale that aggregates responses to a 5-point liberal-conservative ideology item and a 7-point party-identification item.

There is a comparable relative probability (1.3x) that a person will score in the 90th percentile of SCS if he or she is below average rather than above average in religiosity (as measured by a composite scale that combines responses to items on frequency of prayer, frequency of church attendance, and importance of religion in one’s life).

A 90th-percentile score is about 2x as likely to be achieved by an “evolution believer” as by an “evolution nonbeliever.”

Accordingly, if we started with two large, equally sized groups of believers and nonbelievers and it just so turned out that there were 100 total from the two groups who had SCS scores in the 90th percentile for the general population, then we’d expect 66 of them to be evolution believers and 33 to be nonbelievers (1 would be a Pakistani Dr).

When I put things this way, it should be clear that knowing how much more likely any particular SCS score is for members of one group than members of another doesn’t tell us either how likely any group's members are to attain that score or how likely a person with a particular score is to belong to any particular group!

You can figure that out, though, with Bayes’s Theorem. 

If I picked out a person at random from the general population, I'd put the odds at about 11:9 that he or she "believes in" evolution, since about 45% of the population answers "false" when responding to the survey item "Human beings, as we know them, evolved from another species of animal," the evolution-belief item we used.

If you told me the person was in the 90th percentile of SCS, I'd then revise upward my estimate by a factor of 2, putting the odds that he or she believes in evolution at 22:9, or about 70%.
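In odds form, that update is a one-line computation. Here's a sketch that uses only the figures from the post itself (55% believers, an LR of 2):

```python
# Bayesian update in odds form, using the post's own figures:
# prior odds of 11:9 that a random person "believes in" evolution,
# times an LR of 2 for a 90th-percentile SCS score.
prior_odds = 55 / 45                     # 11:9
posterior_odds = prior_odds * 2          # 22:9
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"P(believes | 90th-pctile SCS) = {posterior_prob:.0%}")  # ~71%, the "about 70%" above
```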

Or if I picked someone out at random from the population, I’d expect the odds to be 9:1 against that person scoring in the 90th percentile or higher. If I learned the individual was above average in science comprehension, I’d adjust my estimate of that person's chances upwards, to odds of 9:1.7 against (about 16%); similarly, if I learned the individual was below average in science comprehension, I’d adjust it downwards, to 15.3:1 against (about 6%).

Actually, I’d do something slightly more complicated than this if I wanted to figure out whether the person was in the 90th percentile or above.  In that case, I’d in fact start by calculating not the relative probability of members of the two groups scoring in the 90th percentile but the relative probability of them scoring in the top 10% on SCS, and use that as my likelihood ratio, or the factor by which I update my prior of 9:1. But you get the idea -- give it a try!
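For anyone inclined to give it a try, here's a minimal sketch of both versions of the update: the quick density-based one, and the top-10% "tail" refinement just described. It reuses the hypothetical ±0.2 SD group means from the earlier snippet, so the outputs approximate rather than reproduce the post's figures.

```python
# A sketch of the two updates described above, reusing the same
# hypothetical +/-0.2 SD group means; not the study's actual numbers.
from scipy.stats import norm

above_avg = norm(loc=0.2, scale=1.0)   # assumed subgroup distributions
below_avg = norm(loc=-0.2, scale=1.0)
cutoff = norm.ppf(0.90)                # top-10% threshold on the overall scale

# Two candidate likelihood ratios: density-based and tail-based (sf = 1 - cdf)
lr_density = above_avg.pdf(cutoff) / below_avg.pdf(cutoff)  # ~1.7
lr_tail = above_avg.sf(cutoff) / below_avg.sf(cutoff)       # ~2.0

def posterior(prior_odds, lr):
    # multiply prior odds by the LR, then convert the result to a probability
    odds = prior_odds * lr
    return odds / (1 + odds)

prior_odds = 1 / 9  # 9:1 against a random person landing in the top 10%
for label, lr in [("density LR", lr_density), ("tail LR", lr_tail)]:
    print(f"{label} = {lr:.2f}: "
          f"P(top 10% | above avg) ~ {posterior(prior_odds, lr):.0%}, "
          f"P(top 10% | below avg) ~ {posterior(prior_odds, 1 / lr):.0%}")
```

With those assumed means, the density-based LR recovers the 16%/6% figures above, while the tail-based refinement shifts them to roughly 18%/5%.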

So, then, what to say?

I think this way of presenting the data does indeed give a reflective person more guidance for gauging the relative frequency of science-curious individuals across different groups than does simply reporting the mean SCS scores of the group members along with some measure of the precision of the estimated means—whether a “p-value” or a standard error or a 0.95 CI.

It also equips a reflective person to draw his or her own inferences as to the practical import of such information.

I myself still think the differences in the science curiosity of members of the indicated groups, including those who do and don’t believe in evolution, are not particularly large and definitely not practically meaningful.

But actually, after looking at the data, I do feel that there's a bigger disparity in science curiosity than there should be among citizens who do & don't "believe in" evolution.  A bigger one than there should be among men & women too.  Those differences, even though small, make me anxious that there's something in the environment--the science communication environment--that might well be stifling development of science curiosity across groups.

No one is obliged to experience the wonder and awe of what human beings have been able to learn through science!

But everyone in the Liberal Republic of Science deserves an equal chance to form and satisfy such a disposition in the free exercise of his or her reason.

Obliterating every obstacle that stands in the way of culturally diverse individuals achieving that good is the ultimate aim of the project of which ESFI is a part.


Reader Comments (8)

R.e. Fig 7b, so you can easily make an effect look small by only using 1/4 of the y axis. Or choosing a bland color map, or whatever. Be very wary of any visualization that makes a difference look small by just making it hard to see. Either they don't understand visualization or they have an axe to grind. I liked your plots. Yes, it is a modest effect on the mean, but if the tails are interesting, then it's another story.

January 28, 2016 | Unregistered CommenterRob Maclachlan

@Rob: I agree effects can be made to "look small" by such means; & also big by the same ones! If the y-axis were -0.5 to +0.5 SDs, e.g., the effects would "look big." I'd worry about the impression of the effect size being manipulated that way more if someone were manipulating a scale w/ arbitrary units, like those on a Likert scale. One reason to use a standardized score is that people have a reference point for the effect size that means something-- in a way that differences in units on a Likert scale, even w/ CIs, lack. But I think if one uses SDs, then one should display -1 to +1 at a minimum, b/c if the size looks "small" in relation to an SD, it probably is.

The point about the tails is a good one. The LRs are a fair way to convey this information only when the distributions are or are close to normal. If they were skewed, then one could use overlapping histograms or kernel density distributions. Latter would be better b/c easier to superimpose the plots that reflect the LR.

In fact, the SCS scores are pretty close to normal across all these groups.

January 28, 2016 | Registered CommenterDan Kahan

To clarify (and please tell me if I'm misreading/just wrong):

You say: "We know the mean and standard deviations for the SCS scores of opposing groups. Accordingly, we can determine the relative probability that anyone we observe at a particular SCS score came from one of the groups as opposed to the other."

This is only true if the "opposing groups" are of equal size, right? If the prior odds of being in one group as opposed to the other is 1:1? I think what you know directly from the graphs is the relative probability, between the two groups, of having a particular SCS score.

For the many traits that you split down the middle, there's no issue. But how did you measure evolution "believer" vs. "nonbeliever"? If there are 3x as many believers as nonbelievers (or vice versa), the statement "Someone who scores at the 90th percentile in science curiosity is 2x as likely to 'believe' in evolution as not 'believe' in it" is wrong, right? Because of the base rates?

Thanks!

January 28, 2016 | Unregistered CommenterMW

@Mw:

Correct -- we need to take account of base rates or priors to know the absolute probability of a person believing in evolution.

The method I described tells us the likelihood ratio associated with the evidence in relation to the hypothesis "So & so is an x." Or in other words, if you tell me that a person scores in the 90th percentile on SCS, I can say that the probability of being in the 90th percentile is 2x bigger for evolution believers than for nonbelievers; the relative probability is thus 2x as big that he or she believes in evolution.

That *doesn't* tell us the probability that the person believes in evolution-- only that we should now regard the odds as 2x as high as we otherwise would.

I tried to make that clear w/ discussion of Bayes's Theorem at the end. But you have convinced me that I haven't been as clear as I should have been.

So I've added a second Bayesian illustration that addresses exactly the case you describe & changed the language you referred to to read as follows:

"Here, we can do the same thing. We know the mean and standard deviations for the SCS scores of opposing groups. Accordingly, we can determine how probable it is that someone from each group will attain any particular SCS score, and hence the relative probability that he or she belongs to one group as opposed to the other."

But given that, nothing else depends on the relative sizes of the opposing groups.

Even if one is 100x as large as the other, we can figure out the relative probability of observing someone scoring in the 90th percentile conditional on being a member of one of those groups vs. the other. That's our likelihood ratio. We can now multiply our priors -- 99:1 that the person belongs to the bigger group -- by that.

January 28, 2016 | Registered CommenterDan Kahan

DK:

Thanks! And yes, your examples are totally clear (and were even before I commented the first time). It's just sentences like this that I think are confusing: "Someone who scores at the 90th percentile is about 2x as likely to be an 'evolution nonbeliever' than an 'evolution believer.'" (It's also backward, but that's easily fixed.) That sounds like you're drawing an overall conclusion. It sounds to me like: "this graph shows that if we look in the world and find someone in the 90th percentile, that person is about 2x as likely to be someone who believes in evolution as someone who does not." Do you disagree?

I also think I was a little confused by the use of the term "relative probability"...I don't think I've seen it used exactly this way before (except in your other post, I now see). The likelihood ratio seems more like a comparison of the relative probabilities. The "relative relative probability," perhaps.

(Sorry for the language pickiness! I just want to minimize misinterpretation!)

January 28, 2016 | Unregistered CommenterMW

@Mw--

I think you are right to worry about confusion. It is clearer to say that "an SCS score in the 90th percentile is 2x as likely among evolution believers as among evolution nonbelievers" instead of "someone who scores in the 90th percentile is 2x as likely to be a believer than a nonbeliever" -- my use of opposing groups that are equal in size for all the other examples makes it all the more likely that a reader will not see that I mean merely that the likelihood ratio is 2.

I've edited a bit more (in particular by changing the caption for the final graphic;)

January 28, 2016 | Registered CommenterDan Kahan

Yay.

January 29, 2016 | Unregistered CommenterMW

It seems to me a natural inclination to ((what would be scientific curiosity if it had been channeled properly)) can be and often is permuted or perverted in various ways, leading to wandering in philosophical labyrinths or lapping up Deepak Chopra style nonsense that is justified by allusions to quantum physics -- allusions that might in fact be consistent with the models presented by science popularizers so that we can feel like we understand quantum physics when we don't. Not to mention Christian or Islamic theological conceptual castles.

One problem is that using questionnaires, you can't tell those whom Howard Gardner would say are testing virtuosos with no real understanding from those who really meaningfully grapple with science.

January 31, 2016 | Unregistered CommenterHal Morris
