Monday, August 25, 2014

Lewandowsky on "knowing disbelief"

So my obsession with the WIGOITH (“What is going on in their heads”) question hasn’t abated since last week. 

The question is put, essentially, by the phenomenon of “knowing disbelief.” This, anyway, is one short-hand I can think of for describing the situation of someone who, on the one hand, displays a working comprehension of and assent to some body of evidence-based propositions about how the world works but who simultaneously, on the other, expresses-- and indeed demonstrates in consequential and meaningful social engagements-- disbelief in that same body of propositions.

One can imagine a number of recognizable but discrete orientations that meet this basic description.

I offered a provisional taxonomy in an earlier post:

  • “Fuck you & the horse you rode in on” (FYATHYRIO), in which disbelief is feigned & expressed only for the sake of evincing an attitude of hostility or antagonism (“Obama was born in Kenya!”); 
  • compartmentalization, which involves a kind of mental and behavioral cordoning off of recognized contradictory beliefs or attitudes as a dissonance-avoidance strategy (think of the passing or closeted gay person inside of an anti-gay religious community);
  • partitioning, which describes the mental indexing of a distinctive form of knowledge or mode of reasoning (typically associated with expertise) via a set of situational cues, the absence of which blocks an agent’s reliable apprehension of what she “knows” in that sense; and
  • dualism, in which the propositions that the agent simultaneously “accepts” and “rejects” comprise distinct mental objects, ones that are identified not by the single body of knowledge that is their common referent but by the distinct uses the agent makes of them in inhabiting social roles that are not themselves antagonistic but simply distinct.

The last of these is the one that intrigues me most. The paradigm is the Muslim physician described by Everhart & Hameed (2013): the “theory of evolution” he rejects  “at home” to express his religious identity is “an entirely different thing” from the “theory of evolution” he accepts and indeed makes use of “at work” in performing his medical specialty and in being a doctor.

But the motivation for trying to make sense of the broader phenomenon of “knowing disbelief” comes from the results of the “climate science literacy” test—the “Ordinary climate science intelligence” (OCSI) assessment—described in the Measurement Problem (Kahan, in press).

Administered to a representative national sample, the OCSI assessment showed, unsurprisingly, that the vast majority of global-warming “believers” and “skeptics” alike have a painfully weak grasp of the mechanisms and consequences of human-caused climate change.

But the jolting (to me) part was the finding that the respondents who scored the highest on OCSI—the ones who had the highest degree of climate-science comprehension (and of general science comprehension, too)—were still culturally polarized in their “belief in” climate change.  Indeed, they were more polarized on whether human activity is causing global warming than were the (still very divided) low-scoring OCSI respondents.

What to make of this?

I asked this question in my previous blog post. There were definitely a few interesting responses but, as in previous instances in which I’ve asked for help in trying to make sense of something that ought to be as intriguing and puzzling to “skeptics” as it is to “believers,” discussion in the comment section for the most part reflected the inability of those who think a lot about the “merits” of the evidence on climate change to think about anything else (or even to see when someone is talking about something else).

But here is something responsive. It came via email correspondence from Stephan Lewandowsky, who has done interesting work on “partitioning” (e.g., Lewandowsky & Kirsner 2000), not to mention public opinion on climate change:

1. FYATHYRIO. I think this may well apply to some people. I enclose an article [Wood, M.J., Douglas, K.M. & Sutton, R.M. Dead and Alive Beliefs in Contradictory Conspiracy Theories. Social Psychological and Personality Science 3, 767-773 (2012)] that sort of speaks to this issue, namely that people can hold mutually contradictory beliefs that are integrated only at some higher level of abstraction—in this instance, that higher level of abstraction is “fuck you” and nothing below that matters in isolation or with respect to the facts.

2. Compartmentalization. What I like about this idea is that it provides at least a tacit link to the toxic emotions that any kind of challenge will elicit from those people.

3. Partitioning. I think as a cognitive mechanism, it probably explains a lot of what’s going on, but it doesn’t provide a handle on the emotions.

4. Dualism. Neat idea, I think there may be something to that. The analogy of the Muslim physician works well, and those people clearly exist. Where it falls down is that the people engaging in dualism usually have some tacit understanding of it and can even articulate the duality. Indeed, the duality allows you to accept the scientific evidence (as your Muslim Dr hypothetically-speaking does) because it doesn’t impinge on the other belief system (religion) that one holds dear.

So what do I think? I am not sure but I can offer a few suggestions: First, I am not surprised by any sort of apparent contradiction because my work on partitioning shows that people are quite capable of some fairly deep contradictory behaviors—and that they are oblivious to it. Second, I think that different things go on inside different heads, so that some people engage in FYATHYRIO whereas others engage in duality and so on. Third, I consider people’s response to being challenged a key ingredient of trying to figure out what’s going on inside their heads. And I think that’s where the toxic emotion and frothing-at-the-mouth of people like Limbaugh and his ilk come in. I find those responses highly diagnostic and I can only explain them in two ways: Either they feel so threatened by [the mitigation of] climate change that nothing else matters to them, or they know that they are wrong and hate being called out on it—which fits right in with what we know about compartmentalization. I would love to get at this using something like an IAT.

Anyhow, just my 2c worth for now..

I do find this interesting and helpful. 

But as I responded to Steve, I don’t think “partitioning,” which describes a kind of cognitive bias or misfire related to accessing expert knowledge, is a very likely explanation for the psychology of the "knowing disbelievers" I am interested in.

The experts who display the sort of conflict between "knowing" and "disbelieving" that Steve observes in his partitioning studies would, when the result is pointed out to them, likely view themselves as having made a mistake. I don't think that's how the high-scoring OCSI "knowing disbelievers" would see their own sets of beliefs.

And for sure, Steve's picture of the “frothing-at-the-mouth” zealot is not capturing what I'm interested in either.

He or she is a real type--and has a counterpart, too, on the “believer” side: contempt-filled and reason-free expressive zealotry is as ideologically symmetric as any other aspect of motivated reasoning.

But the “knowing disbeliever” I have in mind isn’t particularly agitated by any apparent conflict or contradiction in his or her states of belief about the science on climate change, and feels no particular compulsion to get in a fight with anyone about it.

This individual just wants to be who he or she is and make use of what is collectively known to live and live well as a free and reasoning person.

Not having a satisfying understanding of how this person thinks makes me anxious that I'm missing something very important.   

References

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo. Edu. Outreach 6, 1-8 (2013).

Hameed, S. Making sense of Islamic creationism in Europe. Unpublished manuscript (2014).

Kahan, D. M. Climate Science Communication and the Measurement Problem. Advances in Pol. Psych. (in press).

Lewandowsky, S. & Kirsner, K. Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition 28, 295-305 (2000).

 

Reader Comments (52)

Dan,

Do you know of any studies that tie any of this to how strongly someone feels allegiance or bond to their social/cultural group? I don't even know how you'd measure that- are there any batteries of questions that elicit a sense for how strongly someone feels tied to their social group, how often someone acts in ways that demonstrate allegiance or how much fear or anxiety someone feels if they demonstrate behaviors or beliefs they perceive to be not accepted by their trusted social peers and authorities?

August 25, 2014 | Unregistered CommenterJen

The Wood "dead and alive" paper has been debunked
http://climateaudit.org/2013/11/07/more-false-claims-from-lewandowsky/
by Steve McIntyre and also by Brandon Shollenberger.
Comically, the data showed that zero individuals believed both statements.

August 25, 2014 | Unregistered CommenterPaul Matthews

@PaulMatthews: thx. I'll be sure to look at it (hadn't read "dead & alive"). I don't think "conspiracy theorizing" has any bearing on the sort of orientation I'm trying to make sense of. There's not an ounce of conspiracy in the Muslim Dr. Of course, maybe I am making a mistake in thinking that is the "right answer"

August 25, 2014 | Registered CommenterDan Kahan

@Jen:

In our cultural cognition studies, we use continuous measures of subjects' worldviews. So the scores on the scales can be considered measures of intensity of affiliation.

But I think you have something else in mind: self-conscious, intentional partisan commitment to group?

It is different. I think it is possible that many of the people who score high on our measures are only dimly aware -- maybe even completely unaware -- of their membership in a particular affinity group. It just comes naturally to them...

August 25, 2014 | Registered CommenterDan Kahan

In our cultural cognition studies, we use continuous measures of subjects' worldviews. So the scores on the scales can be considered measures of intensity of affiliation.

Yep- that I understand. Maybe what I'm interested in is the actual score- the "radius" from the center of that 2x2 matrix- not necessarily which quadrant someone falls in, but how far out from neutral center they are- and whether there are any correlations between that "score" (as an absolute value?) and one's tendency to be a "knowing disbeliever?"
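If the two worldview scores were in hand, that "radius" would be a one-liner to compute. A minimal sketch in Python, assuming 7-point hierarchy-egalitarianism and individualism-communitarianism scales (the scores and midpoint below are invented for illustration, not CCP's actual coding):

```python
import math

# Hypothetical respondent scores on two 1-7 worldview scales
# (values and midpoint invented for illustration):
hierarchy_egalitarianism = 4.8
individualism_communitarianism = 2.3
midpoint = 4.0   # neutral center of each 7-point scale

# "Radius": distance from the neutral center of the 2x2 worldview map
radius = math.hypot(hierarchy_egalitarianism - midpoint,
                    individualism_communitarianism - midpoint)
print(round(radius, 2))   # larger = more extreme worldview, whichever quadrant
```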

But I think you have something else in mind: self-conscious, intentional partisan commitement to group?

Almost. Not necessarily self-conscious or intentional- at least with respect to their own group or the existence of these groups- but a self-conscious acknowledgment of the idea they say they believe/disbelieve?

I'm actually interested in how strongly someone is susceptible to the sort of cultural cognition/motivated reasoning you study... though I'm not suggesting that would necessarily be represented by the "radius" on the 2x2 matrix as I mentioned above... but essentially, we know people tend to draw conclusions that are motivated by the views held by trusted authorities people perceive to share their own worldview and by implications that affirm or threaten their values.. and as you've said, people are rarely conscious of or aware of these affinity groups... but they can be more or less aware of the threat/anxiety/dissonance that results if they hold an idea that goes against that worldview. So maybe in a way, it's not a primary awareness of cultural group but a secondary awareness of the discomfort in adopting views that challenge their values?

Maybe this is just coming back to the concept of being self-aware/reflective that I always revisit and never get anywhere satisfactory with...

However, since you brought it up:

I think it is possible that many of the people who score high on our measures are only dimly aware -- maybe even completely unaware -- of their membership in a particular affinity group. It just comes naturally to them...

Do you have data on this? Just curious. Would love to learn more about the people who aren't like everyone else and do demonstrate some kind of awareness of this sort.

August 25, 2014 | Unregistered CommenterJen

@Jen:
Don't have very good data on that. Closest would be that the measures pick up affinities even among people who are not very partisan or interested in politics generally (i.e., most people)

August 25, 2014 | Unregistered CommenterDan Kahan

As an interested layperson I've speculated too about how it can be that the highest scoring respondents to the OCSI can also be climate change denialists.
So I wonder if at some level this could be a consequence of that ancient saw: 'know thy enemy'?
If you know how the opposition thinks, you're better able to anticipate their responses and to sharpen the defences for your a priori world view. Maybe for these folks, their above average understanding of climate change science is something they mainly experience as knowing the mind of the enemy.

August 25, 2014 | Unregistered CommenterDon Norris

Dan:

Regarding "Obama was born in Kenya" etc, I don't know if it's quite that "disbelief is feigned & expressed only for the sake of evincing an attitude of hostility or antagonism." My impression is that it's more of a question of what standards of evidence are being applied in different situations. So, the extreme anti-Obama people are basically demanding iron-clad proof that Obama was not born in Kenya.

This goes on a lot, right? People don't require a lot of evidence for things they want to believe, but then they demand a high level of proof for things they want to disbelieve.

August 26, 2014 | Unregistered CommenterAndrew Gelman

@Andrew:

That could be. I meant this as a stylized example -- of how someone might express a statement the propositional content of which he or she knows is false in order to *do* something (in Austinian terms), viz., express an attitude that actually depends on shared understanding that that proposition is false. For evidence that this might well have been what was being measured in typical polling questions on birthplace of Obama, consider Krosnick, J.A., Malhotra, N. & Mittal, U. Public Misunderstanding of Political Facts: How Question Wording Affected Estimates of Partisan Differences in Birtherism. Public Opin Quart 78, 147-165 (2014). But one could come up with other examples: "OJ didn't kill his wife," particularly for African American poll respondents in 199-whenever it was.

It's also possible that "different standards of proof" explain variance in perceptions of risk or other culturally contested, policy-relevant facts. But in that case, we could understand the study of disagreement on this class of facts to be about what explains variance in "standards of proof."

A related problem is how to disentangle priors from politically motivated assessments of evidence. If one just shows partisans who disagree on some issue the same information & then observes that they still disagree, that doesn't prove they processed the information in a biased way. For one, they could have already seen the evidence & thus not have had any reason to change their views. For another, they both could have given it the *same* weight but remained divided (if less strongly) b/c of differences in their priors.

The way to avoid that confound in a study of motivated reasoning is to manipulate the valence of one and the same piece of evidence in relation to the subjects' motivating predispositions. Then measure what weight the subjects give the evidence conditional on its fit for the outcome that they are predisposed to favor. If subjects report *seeing* different things in a film of a protest conditional on whether it is abortion or military protestors, or give different answers to a covariance problem, etc. conditional on whether experiment results favor or disfavor their preferred outcome, then, in Bayesian terms, they are fitting the likelihood ratio to their predispositions. They will either never converge or not converge as quickly as they would if they were assessing the evidence based on truth-seeking rather than predisposition-satisfying criteria.
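To make the Bayesian point concrete, here is a minimal sketch with made-up numbers: truth-seeking agents apply the same likelihood ratio to every item of evidence regardless of their priors, while motivated agents fit the likelihood ratio to their predispositions, so the same evidence stream drives them apart.

```python
import numpy as np

rng = np.random.default_rng(42)

# A shared stream of 20 evidence items; LR > 1 supports hypothesis H,
# LR < 1 cuts against it (values invented for illustration).
evidence = rng.choice([2.0, 0.5], size=20, p=[0.6, 0.4])

def posterior_odds(prior_odds, lrs):
    # Bayes by odds: posterior odds = prior odds * product of likelihood ratios
    return prior_odds * np.prod(lrs)

pro, con = 2.0, 0.5   # prior odds of agents predisposed for / against H

# Truth-seeking: both weigh every item at face value; they move in the
# same direction, and the gap in their log-odds never grows.
print(posterior_odds(pro, evidence), posterior_odds(con, evidence))

def motivated(lrs, favors_h):
    # Motivated agent: credits congenial items, dismisses the rest (LR -> 1)
    return np.where((lrs > 1) == favors_h, lrs, 1.0)

# Motivated: the very same evidence pushes the two agents further apart.
print(posterior_odds(pro, motivated(evidence, True)),
      posterior_odds(con, motivated(evidence, False)))
```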

That's a different phenomenon from someone having a "burden of proof" that is normatively weighted in light of aversion to type 1 & type 2 error. I can conclude that the information in a 2x2 contingency table suggests that adopting gun control *was* associated with reduced crime & still say, "but I need stronger proof -- enough to give me a 99.9% certainty that such a law would reduce crime" etc. But if I say, "hey, whaddaya know the experiment result suggested gun control increased crime" b/c I stopped analyzing the problem when the heuristically appealing but wrong result fit my ideological predisposition -- & I actually wouldn't have done that if the heuristic approach suggested crime actually went down; I'd have worked harder & got the right answer in that case-- then something is out of whack.
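A worked version of the 2x2 point, with hypothetical counts of the sort such covariance problems use (the numbers are invented for illustration): the heuristically appealing move is to compare the raw counts in the top row; the correct move is to compare proportions across rows.

```python
# Hypothetical 2x2 covariance problem (counts invented for illustration):
#                       crime decreased   crime increased
#   cities adopting ban       223               75
#   cities without ban        107               21

ban = (223, 75)
no_ban = (107, 21)

# Heuristic (wrong): "223 > 75, so the ban reduced crime."
# Correct: compare the proportion of cities with decreasing crime per row.
p_ban = ban[0] / sum(ban)            # ~0.75
p_no_ban = no_ban[0] / sum(no_ban)   # ~0.84
print(p_ban < p_no_ban)   # True: crime fell more often *without* the ban
```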

But in any case, this is not really the puzzle of "knowing disbelief." That is the situation when someone assents to or "knows" a bunch of propositions that seem to entail a conclusion he or she disbelieves. *That's* what I'm trying to figure out here

--Dan

August 26, 2014 | Unregistered Commenterdmk38

@Don:

I agree that if what's going on is that the high OCSI-scoring & high science-comprehending "liberals" both understand & accept the validity of scientific evidence on climate whereas the high OCSI-scoring "conservatives" understand but reject the validity of that same scientific evidence, the puzzle is solved.

But I don't think that is what's happening. I take up that possibility in the Measurement Problem paper & identify the reasons (including bits of empirical evidence in this study & in others) that make me discount that explanation. Take a look & tell me what you think.

The logic of what you say is so compelling & straightforward that I wouldn't even see a puzzle if I weren't starting w/ the premise that "Republicans are anti-science" is false.

But maybe there is still a way in which the "comprehend" but "reject" answer (which I think has to deal w/ many puzzles of its own!) works. It merits further study.

oh-- one more thing! I don't think the study respondents see themselves as "enemies" of anyone for the most part. If someone asks them, "Do you believe in climate change?," they answer in a way that reflects their cultural identity. But they aren't particularly worked up about the issue; very few people are. Moreover, they also happily *use* climate science evidence of the sort that their answer suggests they "reject" in common projects w/ people who have the cultural predisposition that corresponds w/ answering "yes" on the "believe in" question!

As you can tell, I don't think the study respondents include very many of the sorts of people who tend to get into long, drawn-out & sometimes (not always; some of these guys like each other, I've observed) arguments about climate in response to blog posts. I'd say that the frequency of such people in my sample is, oh, 1% at most. That's b/c members of my sample are nationally representative of the population; climate warriors, on either side, are not.

August 26, 2014 | Registered CommenterDan Kahan

Don,

"As an interested layperson I've speculated too about how it can be that the highest scoring respondents to the OCSI can also be climate change denialists. [...] Maybe for these folks, their above average understanding of climate change science is something they mainly experience as knowing the mind of the enemy."

You would be very welcome to ask us, if you want to know.

Andrew,

"This goes on a lot, right? People don't require a lot of evidence for things they want to believe, but then they demand a high level of proof for things they want to disbelieve."

Yes, that's how I think it usually works. And then you get arguments about which is more appropriate/useful in science, low standards of proof or high ones.

Dan,

"But in any case, this is not really the puzzle of "knowing disbelief." That is the situation when someone assents to or "knows" a bunch of propositions that seem to entail a conclusion he or she disbelieves."

Another one for your collection, that's just occurred to me.

Imagine for a moment that you're watching a stage magician. Do you believe what you see? Do you believe your own reasoning? If the ball is not under the first two cups, it must be under the third one, right?

This seems like a case where people can 'knowingly disbelieve'. A person might not trust their own reasoning or powers of observation. They know that the other person is capable of fooling them. So if the conclusion forced by the data conflicts with what they believe to be true strongly enough, and they have reason to suspect their own judgement, they can reject the conclusion following from the data. I assent to the propositions that the ball is not under the first two cups, which seems to entail that it must be under the third one, but I don't believe it.

And the more experience you have with science (or magicians), the better you know how easily you can misinterpret things, and how things are so often on closer examination less straightforward than they seemed.

I don't think it's 'the' explanation, but you might like to add it to the pile.

August 26, 2014 | Unregistered CommenterNiV

@Niv:

As I plainly told Don, I certainly do not see myself as studying you.

It's weird to me that people interested enough in climate change to argue about it for hrs on end in social media (there are people who seem to do it 20 hrs/day on twitter) think that their own mental life gives them any insight into how the public thinks. Not just b/c introspection is always an unreliable guide in trying to make sense of thinking on the part of others (and often on the part of oneself). But because climate-change debate aficionados simply aren't representative of ordinary members of the public (even though the latter, too, strangely, have become polarized by this issue).

Asking you, then, how you experience yourself thinking is not going to help Don, I'm convinced.

Our asking you to help draw from observable pieces of evidence inferences about things that no one can actually see directly -- that often turns out to be helpful.

But it turns out you are not representative in that respect either!

Stubbornly refusing to get that their own reactions are not even the phenomenon under investigation, much less data relating to it, is much more the norm for the vocational and avocational skeptics who tune in to my blog in response to a post like the "What is going on in their heads?" one!

And it goes w/o saying that "believers" who spend so much time thinking and arguing are unrepresentative too & generally not able to grasp that -- something that definitely is to their detriment as well.

August 26, 2014 | Registered CommenterDan Kahan

@NiV:

On the magician: You are right. It is not really possible to *see* the trick w/o experiencing some form of cognitive *assent* to it being what it appears.

But I think we right away treat that experience of "believing" what we "see" in that setting as something we should question. We are capable, certainly, of taking critical stances toward beliefs & interrogating them.

I'm sure there are a complex of psychological mechanisms that cut across all of these forms of "knowing disbelief" too... But the forms are not all of a piece, and the one that I have the hardest time making sense of is #4.

The reason that #4 seems compelling to me for my study, btw, is that I feel like I do in fact see it in various settings related to climate change ... as I said to Andrew.

But I might be fooling myself -- so I will put that "belief" in the box I have for things I "see" & therefore "know" but wonder if I should... One should definitely be worried about magicians getting into one's data

August 26, 2014 | Registered CommenterDan Kahan

==> "Asking you, then, how you experience yourself thinking is not going to help Don, I'm convinced."

Indeed, NiV's concept of "we" (or "us") when he talks about "skeptics" as a group has always struck me as wishful thinking/naive/"motivated"/unskeptical. Projecting from his own thinking to understanding the high-scoring OCSI respondents being talked about is not as big a stretch, but still seems to me to not be particularly useful. Nor would be simply asking the OCSI respondents in question to describe their own thinking. The theory of motivated reasoning would predict that their descriptions would just be more evidence along the lines of the existing evidence of how politics/world view influence reasoning.

August 26, 2014 | Unregistered CommenterJoshua

Joshua, Dan,

It sometimes seems like people regard me as of another species to the rest of the human race.

It's like people are saying: "We're studying people. Except for you. You don't count."

People like me are a part of the general public. People like me are a part of people's social networks, as they and their interests are a part of mine. And everybody else I know has characteristics that set them apart. It's not a big mass of identical clones called "ordinary people" plus a few easily-identifiable outliers. It's a big mass of different sorts of outliers, all outlying in different directions. People are all individuals.

Crowd shouts in unison: "Yes! We're all individuals!"

August 26, 2014 | Unregistered CommenterNiV

NiV -

The point is that all of us here, pretty much by definition, are outliers w/r/t the general public - not in the sense of being human beings who reason in particular ways or who have a different set of general influences, but in the sense of not being representative of the general public w/r/t thinking and views on climate change. I wouldn't suggest that my thinking w/r/t climate change is representative of the general public except in the sense that it would show the similar kinds of general patterns (influence of political/world view, or influence from human psychological or cognitive attributes, as examples).

One of the major unskeptical patterns I see in the climate wars is the way that strongly-identified combatants project their views onto the general public - without taking the time to observe that they are obviously outliers.

August 26, 2014 | Unregistered CommenterJoshua

@NiV:

@Joshua is exactly right. Perhaps I should let you in on the joke? There really aren't "14 billion" readers of this blog. Seriously, only really really really really weird people come w/in a mile of it. Or of Wattsupwiththat.com or skeptical.science.com. Or NYTIMES.com or Wallstreejournal.com.

Can you answer these three questions? Who was Al Gore's running mate in 2000? Who was the last President to be impeached -- Bill Clinton or Richard Nixon? What is the term of a US Senator -- 2 yrs, 4 yrs, 5 yrs, or 6 yrs?

I'm guessing you can.

Now -- can you tell me what % of the general population in the US can answer all 3 of those questions correctly? Explaining public opinion means explaining the % who can't. (BTW, my assumption is that the reason so few people get all 3 right is that this information is not important in their lives; they wisely don't waste space on what they can appropriately view as trivia.)

Yes, you & me & Joshua are "part of the public" & included in studies. But we are noise.

(And if you were serious about "we are all individuals," then for sure it would be bad advice to Don to tell him to ask individuals anything. But there are in fact many interesting patterns & much to be learned by asking the right people the right things.)

August 26, 2014 | Registered CommenterDan Kahan

This post deals in part with an issue I've discussed in the past. Paul Matthews alludes to this above when he says I've "debunked" Michael Wood's Dead and Alive paper. The problem goes far beyond one paper though. The problem can be seen in paper after paper, including at least two of Stephan Lewandowsky's. That problem is, put simply: Correlation is meaningless. You can see a post highlighting this here:

http://wattsupwiththat.com/2014/01/23/lewandowsky-call-your-office-correlation-is-meaningless/

But I recommend reading a rough draft of the full writeup I did:

http://hiizuru.files.wordpress.com/2014/01/rough-draft2.pdf

For people with a technical understanding of how correlation works, I can give a direct explanation of the problem. Correlation tests assume results are normally distributed. If a dataset has a different distribution, that assumption is violated and the tests have no meaning. It's performing an analysis on data that does not fit the analysis.

In the Dead and Alive paper, Wood asked people if they believed in various conspiracy theories. Practically every respondent said they didn't believe in any conspiracy theory. The data was not normally distributed; it was greatly skewed. That meant correlation tests cannot be sensibly applied to the data. Wood performed them anyway. When he did, he found not believing in one conspiracy theory correlated with not believing in another conspiracy theory. He interpreted that as believing in one conspiracy theory correlates with believing in another conspiracy theory.

That approach says if people don't believe in A, and they don't believe in B, anyone who believes in A is likely to believe in B. It's complete nonsense. You can plug practically anything into A and B and find a "statistically significant" correlation simply by using a skewed data set - something which happens all the time because population sizes are not evenly distributed.
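A minimal simulation of that mechanism (simulated data, not Wood's actual dataset): give everyone a shared disposition for how emphatically they reject conspiracy theories, floor both Likert items near "strongly disagree," and a large "statistically significant" positive correlation appears even though almost nobody endorses both items.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000

# Shared latent disposition: how emphatically one rejects conspiracy theories
latent = rng.normal(0, 1, n)

# Two 5-point Likert items, both heavily floored near "strongly disagree"
a = np.clip(np.round(1.5 + 0.8 * latent + rng.normal(0, 0.5, n)), 1, 5)
b = np.clip(np.round(1.5 + 0.8 * latent + rng.normal(0, 0.5, n)), 1, 5)

r, p = stats.pearsonr(a, b)
endorse_both = int(np.sum((a >= 4) & (b >= 4)))
print(f"r = {r:.2f}, p = {p:.1e}, respondents endorsing both: {endorse_both}")
# Large positive r, driven almost entirely by covariation among disbelievers
```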

This is why I don't pay much attention to what any of you guys in this field say if you don't also publish your data. A remarkable number of people in your field seem to have no understanding of fundamental aspects of tests they use. People seem to be able to make entire careers off creating bogus results by misusing statistics in ways that suggest they have no idea what they are doing.

Go to a Republican convention sometime. Ask a hundred people what their political views are and how much they support rape. You'll find a statistically significant correlation between being a Democrat and supporting rape. Go to global warming advocacy web sites and ask people their views on global warming and conspiracy theories. You'll find a statistically significant correlation between global warming skepticism and conspiracy theorizing.

Correlation tests aren't magical things which can be applied to any data set. They're mathematical tools which require certain things be true in order to be used. That seems to be something this entire field has failed to grasp.

August 27, 2014 | Unregistered CommenterBrandon Shollenberger

Dan

You might be guessing wrong.. the readers of the blog may not all be American.. and those obvious facts are irrelevant to them.. or less meaningful.. i.e., how many British subjects care about any of your examples?

I can of course take 30 seconds to google them, as can any other member of the public..

By the way, why did you not post my criticism of Lewandowsky's work... where I expanded on Paul Matthews' criticism and where I linked to social psychologist Jose Duarte's blog, which was very harsh (basically calling it fraud)?

perhaps it was because Lewandowsky might read it...?

It seems pointless continuing to post here.

August 27, 2014 | Unregistered CommenterBarry Woods

@Barry:

I did indeed have Americans in mind (they are whom I study for the most part). But I suspect if I came up w/ "civic knowledge" tests that were valid for other countries, they'd reveal that climate warriors in those nations tend to be more engaged in politics than most of their fellow citizens.

I don't know what happened to your comment on Lewandowsky. I don't "post" the comments; they are posted automatically, although occasionally I liberate them from the Hal 9001 series Spam Filter. Yours isn't there. Another possibility: sometimes the squarespace "captcha" behaves in an arbitrary fashion that can mislead people into thinking a comment has been submitted when it hasn't (there's an unobtrusive "unable to post" message that's easy to miss & doesn't give any clue what the problem is).

Try again? &/or send me via email?

August 27, 2014 | Registered CommenterDan Kahan

@Brandon:

In reading some of the studies, I've definitely wondered what the means are on "conspiracy theory" scales. If only a tiny fraction of rspts believe in the conspiracies, & if variance in how "strongly" they disagree is being used to explain variance, then I agree that suggests a serious problem in those studies. I'm eager to read more.

As I said to @PaulMatthews, though, I don't think the phenomenon in question has anything to do w/ belief in conspiracy theories.

August 27, 2014 | Registered CommenterDan Kahan

Dan Kahan, it's not even "variance in how 'strongly' they disagree" that "is being used to explain variance." It's variance in how many people from each group participated.

I'll use Stephan Lewandowsky's work since it provides the most obvious example. He did two surveys, both of which found skeptics are conspiracy theorists. In both surveys, there were far more responses from non-skeptics than there were from skeptics. This meant there was a skewed distribution, violating the prerequisites for calculating correlation coefficients. The effect meant Lewandowsky found a relationship for non-skeptics then simply assumed it would be inverted for skeptics.

To put it simply, with his approach, if you have 80 warmists and 20 skeptics, you'll find skeptics are conspiracy theorists. If you have 80 skeptics and 20 warmists, you'll find warmists are conspiracy theorists. Both groups can respond in the exact same way. It doesn't matter. You can get "statistically significant" correlations like Lewandowsky got simply by having a skewed sample set.
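The leverage problem this creates is easy to demonstrate with simulated data (the numbers are invented; this is not Lewandowsky's dataset): in a heavily skewed sample, a couple of extreme responders can generate a "statistically significant" correlation all by themselves.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# 1143 respondents sit at the scale minimum on one item; two (perhaps
# joking) respondents give the maximum on both items (values invented):
skepticism = np.r_[np.full(1143, 1.0), [5.0, 5.0]]
conspiracy = np.r_[rng.normal(1.2, 0.4, 1143), [5.0, 5.0]]

r, p = stats.pearsonr(skepticism, conspiracy)
print(f"r = {r:.2f}, p = {p:.1e}")  # "significant" on the strength of 2 points
```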

As for what you said to Paul Matthews, the point I made has nothing to do with belief in conspiracy theories. The point I'm making is this field routinely produces bogus results by performing correlation calculations on data sets those tests aren't appropriate for. The examples I used were regarding beliefs in conspiracy theories because Stephan Lewandowsky and Michael Wood were brought up, but the same problem can appear with any topic.

Indeed, you've presented correlation calculations when discussing the OCSI data set. I have no way of knowing if those calculations were appropriate. For all I know, the "statistically significant" correlations you've presented are every bit as bogus as those presented by Stephan Lewandowsky and Michael Wood. You never established those tests were appropriate for the data you used, and I can't look at the data to see for myself.

You're looking for answers to a dilemma. My point is I'm not convinced that dilemma even exists. I'm not saying it doesn't. I'm saying this field routinely produces and promotes bogus results. I'm saying your post includes discussion of some of those bogus results. I'm saying your post centers on quotes from a person who routinely produces and promotes some of those bogus results.

I'm saying given all that, I have a hard time seeing why I should assume the results you discuss aren't bogus. For all I know, if I looked at the same data you looked at, I might get totally different answers.

August 27, 2014 | Unregistered CommenterBrandon Shollenberger

Dan, I apologise for assuming the worst..

My comments have been removed recently from Scientific American, and the Guardian (Dana Nuccitelli).
Prof Richard Tol described my comments as factual and civil (he was interviewed for the article). Prof Richard Betts asked why my comments had been deleted (I had quoted his criticism of Cook et al), and even his comment asking Sci Am why my comments were removed was deleted as well.

At Cultural Cognition I had linked to Jose Duarte's blog, which is highly critical of Lewandowsky's ‘work’ at a more basic level, and in the same manner as the issues with the Wood et al paper (which Dr Paul Matthews pointed out and Brandon described).

Jose simply says his LOG12 paper is a fraud: 3 responses that match the paper's headline and conclusions, out of 1145, is just fraud

http://www.joseduarte.com/blog/more-fraud
http://www.joseduarte.com/blog/lewandowsky-fraud

http://www.joseduarte.com/blog/data-is-overrated

I highly recommend reading Jose Duarte's blog; a recent paper of his is also very interesting and relevant:

Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. E. (in press). Political
diversity will improve social psychological science. Behavioral and Brain Sciences.

This roughly summarizes my missing comment (I left a bit out, sent via email). Blogs can be problematic at times; I have had my own comments end up in spam at my own blog and at Watts Up (no idea why), as I'm an author at both!

All the best

Barry

Jose is highly critical of Cook’s 97% consensus paper as well
(pushing consensus messaging is, of course, Lewandowsky's advice to Cook)

http://www.joseduarte.com/blog/ignore-climate-consensus-studies-based-on-random-people-rating-journal-article-abstracts

August 27, 2014 | Unregistered CommenterBarry Woods

@Barry:

Thanks (to you & @Brandon) for the links. I'll read up.

I would say that I find it very suspicious that those who use the "conspiracy theory" scale have all agreed to score as "false" the statement "The Royal family killed Princess Diana."

August 27, 2014 | Registered CommenterDan Kahan

==> "perhaps it was because Lewandowksy might read it...?"

Speaking of What Were They Thinking?...

Barry - I would think that someone who has posted here as much as you have would be able to see how very, very unlikely it would be that Dan would reject a comment because it was critical of Lewandowsky.

But even more, I'm wondering if you could explain what you were thinking because...

You've posted here before and you've seen your comments appear more or less immediately after you get through the captcha screening. I would think that would make it obvious that Dan doesn't read posts before they appear and then accept some and reject others. It should be obvious that no one could conceivably read and filter the comments in so short a period of time.

So for you to speculate that your comment was being withheld because of being critical of Lewandowsky, you would have had to overlook the obvious. Instead, it seems, your proclivity for (1) feeling that you're a victim and, (2) labeling and categorizing others (in this case, Dan as someone who would feel a need to protect Lewandowsky from seeing your devastating criticism), led you to invent a highly improbable rationale to explain an event.

To me, anyway, it looks like a great example where the phenomenon of motivated reasoning helps to explain what you were thinking. Reminds me of when Willis went off because he was convinced Dan was using a sockpuppet to hide his identity and trick people. Why do smart and knowledgeable people, who self-describe as skeptics, fail to apply due skeptical diligence on issues with which they are emotionally invested?

.

August 27, 2014 | Unregistered CommenterJoshua

Joshua

One simple problem with your theorizing... is your lack of information.

I have previously made comments here that Dan has not published.
He wrote to me at the time about them and explained why, and we chatted about it
(on the very topic of Lewandowsky's work and conduct).

So the obvious conclusion was that this had happened before.

In my above comment, I have removed part of my email reply to Dan (for tactfulness).

I recall that Willis went off on one in irritation, and I could see why..

I had been commenting here for a while, and I had no idea that Dan Kahan and DMK38 were one and the same, mainly because they seemed to argue with each other! ;-)

Willis being new to the blog, saw the same, and must have felt it very odd when it was made clear.

August 27, 2014 | Unregistered CommenterBarry Woods

Barry -

==> "As I have previously made comments here, that Dan has not published.."

Interesting. So an apology is in order.

Could you clarify the process of what happened? If I understand you correctly, you posted a comment that got lodged in moderation (for perhaps some unknown reason) and Dan found it and didn't post it and emailed you and told you why? I ask because the immediacy in how comments appear makes it seem impossible that Dan reviews comments and I have never before seen any indication that he rejects any comments (although very occasionally - for example those with multiple links - comments do end up in a spam-type moderation)

==> "Willis being new to the blog, saw the same, and must have felt it very odd when it was made clear."

Finding it odd and insisting that Dan's intent was to deceive through the use of a "sockpuppet" (that just happened to deviously contain his own initials) are not one and the same, Barry.

August 27, 2014 | Unregistered CommenterJoshua

@Joshua:

@Barry is correct. I myself had forgotten about the matter (from April 2013). I think it occurred after I decided to disable the "moderator" feature on comments but it doesn't matter, since when I "moderated" I didn't have an attitude different from the one I do now.

My attitude is that commentators are free to critically engage any point or argument I advance & any argument or point anyone else does in the course of a discussion relating to what I've posted.

The only reason I can think to intervene is if it looks to me like the comment doesn't fit that criterion & seems to me to pose a risk of some bad consequence that is independent of any position anyone is taking on issues growing out of the blog post.

Looking back, I can see that I thought @Barry's comment in that instance -- which was expressed in a perfectly reasonable way -- was raising an objection to a matter that struck me as unrelated to the blog post & discussion, & that the topic he was raising (as legitimate as it was for him to raise it) would distract from the focus of the site. I wrote to him & explained & asked if he thought I was seeing things reasonably; I also proposed that he consider whether there might be a way to adjust the comment and re-post it in the ongoing discussion of another (more current) post to which it seemed potentially relevant. I'm not sure if @Barry did that -- although I was glad to see that he continued to post plenty of interesting comments thereafter, as he had before.

I very much appreciate your vouching for me, but I don't think that @Barry was out of bounds to voice his concern that I had censored him on this occasion. I also am confident that anyone trying to gauge his general cast of mind would get the right idea from seeing the posture he struck in his response to my explanation & invitation to re-post the errant comment.

I can think of 1 other time in which I "intervened." It was again to try to preempt a discussion that it seemed to me would make the comment section a forum for debate about conduct unrelated to the subject of this site. In that instance too I wrote the commentator to explain my decision & solicit his view & he responded in a perfectly gracious manner.

That was in fact before I elected to go "moderation free." Looking back on that one (I found it b/c I thought *that* might have been the time @Barry was referring to), I now think I would make a different call in the same circumstances. I was less experienced then in how conversations go.

Live & learn.

Oh--on Willis. I think I merely pointed out he was insane. That was pretty unnecessary since anyone who read his comments -- a vicious attack on CCP robo-dog -- would have come to that conclusion on his or her own.


August 27, 2014 | Registered CommenterDan Kahan

Ugh. I was going to comment on this subject in more general terms in the hope of contributing more productively, but I can't after reading some of the questions that were asked. Most notably:

According to astronomers, the universe began with a huge explosion. (True/False)

What!?

Seriously, what!?

Astronomers do not say the universe began with any explosion. Leaving aside the fact the Big Bang is just one view advanced by astronomers to explain the beginnings of our universe, the Big Bang did not involve an explosion. The idea of an "explosion" comes entirely from a poor grasp of the science behind the Big Bang, primarily because of the name it was assigned. This is like saying people have a poor grasp of global warming because they don't think the planet's greenhouse effect is dominated by convection (as in actual greenhouses).

I don't get how you seek to measure people's scientific intelligence while getting basic, fundamental facts wrong.

August 27, 2014 | Unregistered CommenterBrandon Shollenberger

I had italicized the word "True" in the quote in my comment above to indicate that's what was listed as the "right" answer. I hadn't realized you wouldn't be able to see it.

Also, I should point out the idea the universe began with a huge explosion is just stupid. The Big Bang is said to have happened when all of the universe was compressed into a single point of infinite density, a point smaller than a single atom. Even if there had been an explosion at that point, it couldn't possibly have been huge. Any explosion there could have been would have been so small it couldn't be seen with the naked eye (supposing somehow an eye could exist outside the universe in order to observe it).

The "right" answer to the question isn't just wrong, it's stupid.

And you have to admit, it's funny religious folk were less likely to pick the stupid answer.

August 27, 2014 | Unregistered CommenterBrandon Shollenberger

@Brandon-- I don't much like either BIGBANG or EVOLUTION NSF indicators items, w/ or w/o the "according to ..." introductory clauses -- although I do think the versions w/ the clauses help to show why the versions w/o are biased (in measurement sense) against religious respondents. Indeed, I'm not a big fan of the NSF "factual knowledge" battery in general.

Am guessing that you are looking at OSI_2.0 notes? You'll see that I didn't include either BIGBANG or EVOLUTION in that scale.

I do wonder whether your reactions might be reproducing the results Stocklmayer et al observed when they administered the NSF science literacy test to scientists... (e.g., Sir Fred Hoyle would reasonably take issue w/ "earth round the sun or sun round the earth?"). There's discussion in that post, too, about whether a "wrong" answer can be a valid measure of science comprehension: no if one conceives of the thing being measured as a retained inventory of canonical facts; but at least theoretically yes if understood as a not directly observable disposition or capacity to acquire knowledge & to reason, in which case everything turns on diagnostic or predictive properties of responses to items.

August 27, 2014 | Registered CommenterDan Kahan

Dan Kahan, I wish I had read your response sooner. I just uploaded a post discussing this:

http://hiizuru.wordpress.com/2014/08/27/scientific-literacy-and-the-big-bang/

It'd have been worth addressing the point you raise now in it. I've heard the same response multiple times, and it's ridiculous. First though, I'd like to point out something. Problems of bad phrasing are one thing. In my post, I highlighted a couple examples, but I said I forgive them.

What you asked about the Big Bang theory is not the same. For a person to say true to it, they have to agree to something which has no connection to reality. There is no caveat you can add to the question to clarify it. It is completely and utterly wrong. As for the defense you offer, you've previously said:

Indeed, while such an outcome is unlikely, an item could be valid even if the response scored as “correct” is indisputably wrong, so long as test takers with the relevant comprehension capacity are more likely to select that response.

But this shows the problem of your defense. This shows you aren't seeking to measure people's knowledge of facts or ability to get the right answer. All you're measuring is the ability for people to give a particular story. What story? You don't know. You can assume it to be scientific literacy, but that's just one of many possible assumptions.

For instance, you discarded the question about the Big Bang on the basis that responses to it indicate religious biases, not genuine differences in scientific knowledge. You offer no basis for that assumption. An equally valid assumption is religious folk better understand the Big Bang theory than non-religious folk. You have no data which favors one interpretation over the other, yet you choose one interpretation as the "truth." Why? Because it fits the story you believe.

And that's what this all comes down to. You found a common "factor," and you decided it is scientific intelligence. Why? We don't know. You did nothing to show that's what it actually indicates. You just decided it made for a good story.

.

By the way, I should point out while reading those notes, I saw multiple instances of correlation coefficients calculated on data with non-normal distributions. The use of PAF for your FA is good as it doesn't require normality, but the r scores show you making the same sort of mistake Stephan Lewandowsky and Michael Wood made.

This is basic statistics. You guys have no excuse for not understanding it.

August 27, 2014 | Unregistered CommenterBrandon Shollenberger

@Brandon

As you read in the notes, nothing in the construction of the OSI scale was based on linear correlations; the items are all dichotomous, and the relationship between responses to them and the latent or unobserved disposition was estimated with item response theory, which uses logistic regression. Logistic regression was also used to assess the item response functions etc.
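For readers who haven't met it, here is a minimal sketch of the sort of item response function involved, the two-parameter logistic (2PL) model; the parameter values are invented for illustration:

```python
import numpy as np

def icc(theta, a, b):
    """2PL item characteristic curve: probability of the response scored
    "correct" given latent disposition theta, item discrimination a
    (slope), and item difficulty b (location)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-2, 2, 5)     # latent "ordinary science intelligence"
print(icc(theta, a=1.5, b=0.0))   # a harder item would shift b upward
```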

It's true that at various points I report simple correlations between continuous scales and one or another thing that I think a reader would be interested in. Those are what you are referring to?

Or tell me which correlations you think are objectionable & what you'd do instead. I'd be indebted.

As for BIGBANG, you realize, as I said, that it isn't included in the OSI scale. But seems worth noting again -- to avoid confusion on anyone else's part.

Also worth repeating, for benefit of others, that my position on catechistic "science literacy" scales that seek to certify due assimilation of some set of canonical "facts" -- the model of the NSF Indicators -- is pretty close to yours. That is, I am against them. We might still disagree, but I think it would be about how to understand standardized test items as indirect measures of an unobservable reasoning disposition.

On stories, I'm curious. Would you say the same thing about this item: "Does the Earth go around the Sun, or does the Sun go around the Earth?"

August 28, 2014 | Registered CommenterDan Kahan

Dan Kahan, I'm not sure why you would need me to clarify what I am referring to. I specifically highlighted the fact you used an appropriate methodology when creating the OSI scale. As such, I obviously wasn't referring to that when I said you did something inappropriate. The only thing I could have possibly been referring to were the "simple correlations" you refer to now, ones you can find by searching for "r =0." in the document. As for your comment:

Or tell me which correlations you think are objectionable & what you'd do instead. I'd be indebted.

Without seeing the data, I can't tell how one should analyze it. There are a variety of transformations ones can use to address non-normal distributions, and there are other tests one could use instead of simple correlation calculations. A person needs to be able to look at the data to tell which are most appropriate. Otherwise, all they can do is provide the same laundry list of possible solutions one could find with Google.
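One rank-based alternative of the sort alluded to here is easy to show; a minimal sketch with simulated skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.exponential(1.0, 200)   # heavily right-skewed variable
y = x + rng.exponential(1.0, 200)

print(stats.pearsonr(x, y))    # p-value leans on a normality assumption
print(stats.spearmanr(x, y))   # rank-based; makes no normality assumption
```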

Also worth repeating, for benefit of others, that my position on cathecistic "science literacy" scales that seek to certify due assimilation of some set of canonical "facts"-- the model of the NSF Indicators-- is pretty close to yours. That is, I am against them. We might still disagree, but I think it would be about how to understand standardized test items as indirect measures of an unobservable reasoning disposition

You specifically labeled saying the universe began with a huge explosion as the "correct answer." When discussing the question, you labeled the "correct response" one which requires saying something incredibly stupid.

You can't publish a document in which you interpret something as the correct answer then turn around and tell everyone, "I'm not saying it is the right answer, but it measures a latent variable." That's not what you said! If that's what you intended to say, the material you published is wrong, and you should say so.

On stories, I'm curious. Would you say the same thing about this item: "Does the Earth go around the Sun, or does the Sun go around the Earth?"

No. This should be clear as my post says I can forgive poor phrasing and issues of semantics. I'm not an idiot looking for technicalities.

And when I say I'm not an idiot, I mean that to imply your question is idiotic. It's true the planet does not revolve around the sun, but rather whatever location happens to be the centerpoint for mass in the solar system at a particular moment. However, that doesn't change the fact Earth goes around the Sun. Nothing about the phrase "go around the Sun" requires a revolution with the sun as a centerpoint.

August 28, 2014 | Unregistered CommenterBrandon Shollenberger

Dan -

Don't know if you have the data - but if you excluded the other religion-associated items (e.g., questions related to evolution) from the battery, is there any association between "true" answers on the big bang question and higher "scientific literacy" scores on the remaining items?

August 29, 2014 | Unregistered CommenterJoshua

Dan -

I don't know if you have the data, but I'd be curious to know whether...

If you exclude the items on the survey that are likely to be associated with religious views, such as questions related to evolution, is there any association between "true" answers on the big bang question and higher "scientific literacy" scores on the remaining items?

August 29, 2014 | Unregistered CommenterJoshua


@Brandon:

So in other words you won't tell me which correlations you think are invalid. Okay. I'm sure you are very busy -- just thought since you'd made the effort to read, I'd see if I could get some usable insight from you.

On "earth goes round the sun/sun round the earth," you can't see why it displays the problem you have (and that I do too!) with "factual knowledge" tests.

But the eminent astrophysicist Sir Fred Hoyle did. He pointed out that there isn't a "right" answer-- heliocentric & geocentric models of planetary motion are mathematically equivalent. The preference for the former has to do with its power to suggest interesting & testable hypotheses.

It's a great example of how any "factual knowledge" question is likely to defy a simple "right/wrong" coding.

If one doesn't conceive of science comprehension tests as designed to certify "right" answers to factual questions but rather as measures of unobserved latent reasoning dispositions that *correlate* with particular responses, one avoids this problem.

Or it does, so long as one can validate the test questions by showing that particular responses do indeed correlate with the relevant disposition. That's a tricky business, for sure.
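
By way of illustration only -- simulated responses, not the OSI_2.0 data -- validation of that sort often starts with something as simple as item-rest correlations, a rough sketch of which follows. All names and numbers here are hypothetical.

# A rough illustration of item validation via item-rest correlations,
# using simulated (not OSI_2.0) responses.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_items = 1000, 10

# Simulate a latent reasoning disposition and binary item responses whose
# probability of being "correct" rises with that disposition.
theta = rng.normal(size=n_subjects)
difficulty = rng.normal(size=n_items)
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty[None, :])))
responses = (rng.random((n_subjects, n_items)) < p_correct).astype(float)

# Correlate each item with the total score on the *remaining* items.
# A weak or negative value flags an item that isn't tracking the same
# latent disposition as the rest of the scale.
totals = responses.sum(axis=1)
for j in range(n_items):
    rest = totals - responses[:, j]
    r = np.corrcoef(responses[:, j], rest)[0, 1]
    print(f"item {j}: item-rest r = {r:.2f}")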

All of this is explained in the post from which you selected the "indisputably wrong" quote.

It's pretty clear you didn't read, much less understand, the entire post. B/c the example of Hoyle is in it.

Guess you think he is an "idiot" too. At least I am in good company!

August 29, 2014 | Unregistered CommenterDan Kahan

Just to point out - I would like my question to be answered, but I'm not so impatient as to have deliberately posted it multiple times. Apparently there was some sort of glitch with the blog software...

August 29, 2014 | Unregistered CommenterJoshua

@Joshua:

Good question.

Sure, I can answer that. The information is in fact in the OSI_2.0 paper.

If you look at Fig. 7, you'll see the item response curves for both the standard & the "According to ..." variants of both EVOLUTION and BIGBANG.

Those curves represent how likely someone with the level of "ordinary science intelligence" indicated on the x-axis is to give the response coded as "correct." The level of "ordinary science intelligence" is the score on the OSI_2.0 scale.

I've plotted above-avg & below-avg religiosity subjects separately, to show that the probability of supplying the "correct" response on the standard variants differs for the two groups. That means those two items don't have the same relationship to performance on the assessment instrument for members of both groups.

The discrepancy is very pronounced for the standard variants, and less so for the "According to..." ones, which supports the inference that the discrepancy has to do with the connotation of "belief in" the evolution & big bang theories of the natural histories of humans & the universe, respectively. The paper also shows that responses to the standard items perform like indicators of religiosity, not science comprehension.

Among both below-avg and above-avg religiosity subjects, giving the answer scored correct for BIGBANG ("true") does indeed correlate with doing well on the rest of the test, in both the standard and the "According to ..." variants.

I suppose you could put at least that variant version into a science comprehension scale, but the problem at that point would be that the item is just way too easy. One has to go a full standard deviation below the mean on OSI_2.0 -- a scale that consists mainly of critical-reasoning measures from Numeracy and the CRT -- before a subject is more likely than not to answer "false" to the "According to..." variant of BIGBANG.
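
For anyone who wants to see the shape of such curves without opening the paper, here is a toy version of the Fig. 7 setup -- a two-parameter logistic item response model with purely illustrative parameters, not the values estimated in the paper:

# Toy item response curves in the style of Fig. 7 -- a 2PL logistic model.
# All parameter values below are illustrative, not fitted estimates.
import numpy as np
import matplotlib.pyplot as plt

def irc(theta, a, b):
    """Pr(correct) given latent score theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 200)  # OSI score in standard-deviation units

# Standard wording: the curves for the two religiosity groups diverge,
# i.e., the item behaves differently depending on group membership.
plt.plot(theta, irc(theta, a=1.5, b=-0.5), label="below-avg religiosity, standard")
plt.plot(theta, irc(theta, a=1.5, b=1.5), label="above-avg religiosity, standard")

# "According to..." wording: the curves nearly coincide, and the item is
# easy -- Pr(correct) passes 50% around one SD below the mean.
plt.plot(theta, irc(theta, a=1.5, b=-1.0), "--", label="below-avg, 'According to...'")
plt.plot(theta, irc(theta, a=1.5, b=-0.9), "--", label="above-avg, 'According to...'")

plt.xlabel("ordinary science intelligence (OSI)")
plt.ylabel("Pr(response coded 'correct')")
plt.legend()
plt.show()

The gap between the two solid curves is the kind of differential item functioning at issue; the dashed curves show how the "According to..." wording shrinks it.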

August 29, 2014 | Registered CommenterDan Kahan

Thanks, Dan. I wonder what Brandon makes of that association between the answer of "true" to that question and higher "scientific literacy" as reflected in the rest of the items? It seems that if you accept that the tests have some degree of validity, then picking the answer of "true" does in fact indicate higher "scientific literacy," even though in reality that answer is wrong.

Of course, anyone who has read what you've written about evaluating "scientific literacy" would be well-acquainted with your caveats about the validity of "scientific literacy" assessments to begin with - but it seems that Brandon doesn't have the advantage of having read what you have written before he launched into his criticism.

August 29, 2014 | Unregistered CommenterJoshua

Ugh. This blog's CAPTCHA system is irritating. I've had two comments not show up because I failed to realize I had failed the test.

Dan Kahan has repeatedly made comments like:

The paper also shows that responses to the standard items perform like indicators of religiosity, not science comprehension.

Yet he has done nothing to show this is actually true. To accept it as true, we have to accept the idea that his scale measures "science comprehension." The question about the Big Bang shows, to some extent, that this is not true. Whatever latent variable he may be measuring, it is not simply "science comprehension." At best, it is only something related to "science comprehension."

That is the central point, a point he has done nothing to address.

.

Now Dan, if you want people to engage with you, you need to stop making things up about what they say. You say:

So in other words you won't tell me which correlations you think are invalid. Okay. I'm sure you are very busy-- just thought since you'd made the effort to read, I'd see if I could get some useable insight from you.

But this is completely and utterly untrue. I said which correlations I took issue with. I even provided an exact search string which would allow one to find them, saying "you can find [them] by searching for 'r =0.'" Given that I devoted an entire paragraph to specifying which correlations I take issue with, your response to me is ridiculous. You're apparently copping an attitude because you simply failed to read what I wrote. This is akin to when you responded to me by saying:

the scale is used to *critique* both the Bigbang & Evolution items, neither of which was drafted by me. The critique was exactly the one you are offering: that the items (standard ‘science literacy’ ones) are *invalid* b/c they confound knowledge w/ religious identity.

When I had never said anything of the sort. The primary issue I've raised is you can't know such a view to be true. That means you falsely claimed I advanced the exact opposite position of the one I actually hold. You refused to address your misrepresentation that time. I hope you won't refuse to address your newest misrepresentation in the same way.

As for your comment:

On "earth goes round the sun/sun round the earth," you can't see why it displays the problem you have (and that I do too!) with "factual knowledge" tests.
...
It's pretty clear you didn't read, much less understand, the entire post. B/c the example of Hoyle is in it.

Guess you think he is an "idiot" too. At least I am in good company!

No. The difference is Fred Hoyle understood what he was saying well enough to actually say it. You, in trying to translate an idea, got it wrong. Hoyle would defend both "the Earth goes around the Sun" and "the Sun goes around the Earth" as valid statements. As such, he would not answer the question "Does the Earth go around the Sun?" by saying, "No." To say "No," we would have to hold that the Earth does not go around the Sun, something Hoyle would never accept, as he didn't accept that one view must be invalid. The only correct answer under that view is to say, "The question has multiple valid answers." You can't pick one answer as correct when both are equally valid.

It's true I didn't realize what argument you were making (assuming you had mixed up "revolve" and "go around"), but the discussion was about the Big Bang item in your survey, an item which could only be (correctly) answered with, "No." That is not comparable to the question you asked, which when taken literally, cannot be answered (given the options presented). The interpretation I went with would be comparable if one changed "goes around" to "revolves around."

I assumed you used poor wording to offer an example of a question whose right answer was one most people wouldn't realize (akin to the Big Bang item we were discussing). Your example was actually a question which has no right answer (not akin to the Big Bang item we were discussing). Either way, it was wrong. I just assumed it was wrong in a way that made some sort of sense.

August 30, 2014 | Unregistered CommenterBrandon Shollenberger

There's another thing I should point out about the example of the heliocentric and geocentric models. That is, the word "models." Models are just descriptions of things. You can have an infinite number of models that accurately describe the same set of observations. That does not make the models equal.

Mathematically speaking, many coordinate systems are equivalent. We can define movement relative to the Sun and have a heliocentric model. We can define movement relative to the Earth and have a geocentric model. We can define movement relative to my left ring finger's nail and have another model. All of those are mathematically equivalent.

When you drop a ball, we say the ball falls. However, we could define an infinite number of coordinate systems where that wasn't true. For instance, in some models the ball doesn't fall; the Earth rises to meet it. Such a model can be made mathematically equivalent to the usual one, so both are equally valid descriptions.

But being mathematically equivalent is not the same as being equivalent in every sense. There are many reasons to choose one model over another, even if both are mathematically equivalent. People choose the more useful model all the time. It's not because the model they choose is "right." It's because it is useful.

When asked if the Earth goes around the Sun, there are a couple of possible answers. You could be pedantic and think, "All movement is relative, so that question can't be answered." You could be casual and think, "In the sense we mean in normal conversations, yes." You could be obnoxious and think, "No, because I arbitrarily choose to use an unhelpful model nobody else could know I am using."

The fact that coordinate systems are mathematically equivalent does not mean we must treat every question about movement that doesn't specify a coordinate system as unanswerable. We can reasonably assume questions are asked with the expectation that we'll interpret them as we would in regular conversation.
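
To put the "mathematically equivalent" point in concrete terms, here is a toy calculation (made-up positions, arbitrary units) showing that switching between heliocentric and geocentric coordinates changes the description but not the frame-independent quantities:

# A toy demonstration that changing the coordinate system changes the
# description, not the physics. All positions are made up.
import numpy as np

# Heliocentric description: the Sun sits at the origin, the Earth moves.
sun_helio = np.array([0.0, 0.0])
earth_helio = np.array([np.cos(0.3), np.sin(0.3)])  # some point on its orbit

# Geocentric description: subtract the Earth's position from everything.
earth_geo = earth_helio - earth_helio  # the Earth now sits at the origin
sun_geo = sun_helio - earth_helio      # ...and the Sun "moves"

# The frame-independent quantity -- the Earth-Sun separation -- is unchanged.
print(np.linalg.norm(earth_helio - sun_helio))  # heliocentric
print(np.linalg.norm(earth_geo - sun_geo))      # geocentric: same number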

.

And that has nothing to do with the question about the Big Bang. Dan Kahan specifically said the "correct answer" regarding the Big Bang is "[astronomers say] the universe began with a huge explosion." There is no coordinate system in which that statement is true. There is no mathematical equivalency we can draw to make that statement okay.

It is wrong, plain and simple. It is not, as Kahan claims, the "correct answer."

August 30, 2014 | Unregistered CommenterBrandon Shollenberger

Brandon,

I've lost the reference to it, but in one of my earlier comments I noted that what the survey seems to be measuring is not scientific knowledge, but the ability to recollect the simplified statements commonly used to teach children science.

I went through about half the questions on the scale showing how most of them were wrong or meaningless. But I was nevertheless able to figure out what the intended 'right' answer to every question was by simply thinking about what people are taught in school or in the media. People are commonly *told* that the big bang was an explosion, the graphics in pop-science programmes about the big bang on TV *show* it as an explosion. It doesn't matter that this is scientifically completely wrong, because that's not what the scale is measuring. It's actually measuring the recollection of 'popular science trivia', which will therefore also measure all the common misunderstandings and misconceptions in pop science.

The most science literate will likely know that the expected answers are wrong, but will nevertheless lie and give them because they know what is expected of them. This is a perennial problem in surveys of this sort. People know how they're going to be interpreted and what answers are expected, and so on politically divisive issues they answer tactically to direct the experiment towards the conclusion they prefer. That's the trouble with using science as a weapon in politics.

It's not science literacy. It's just correlated with it, over part of its range.

August 31, 2014 | Unregistered CommenterNiV

NiV, sorry for the slow response. Since Dan Kahan has flat out made things up about what I've said and then ignored my complaints when I pointed it out, three times now, I've lost interest in his blog. It's hard to want to comment here when I have every reason to believe what I say has no bearing on the responses I'll get.

Anyway, I agree people can figure out the "right" answers to these questions. My problem is the more subjective the questions, the more subjective the results. You can't tell what you're actually measuring if your questions are bad. For all we know, well-worded questions about evolution and the Big Bang would have produced results with no possible religious biases.

Another problem I have is what you said, that this is:

actually measuring the recollection of 'popular science trivia', which will therefore also measure all the common misunderstandings and misconceptions in pop science.

But Kahan has repeatedly argued otherwise. For instance, his paper about this scale says the scale is:

Designed for use in the empirical study of public risk perceptions and science communication, OSI_2.0 comprises items intended to measure a latent (unobserved) capacity to recognize and make use of valid scientific evidence in everyday decisionmaking.

But you can't measure people's ability to "make use of valid scientific evidence in everyday decisionmaking" by measuring their ability to recall popsci catchphrases. These sorts of popsci ideas aren't based on "valid scientific evidence." They're based on vague, hand-waved notions people heard about somewhere. How can agreeing with claims based on bad evidence you made no effort to verify show you can "make use of valid scientific evidence"?

The worst part is I don't doubt there is a genuine signal in this data. I don't doubt there is a latent variable being extracted. I bet one could produce useful results from the responses. Those results just aren't what Kahan says they are. Or if they are what he says they are, we have no way to know it.

(By the way, while I've been focusing on the Big Bang item, I agree with you that many of the questions are bad.)

September 1, 2014 | Unregistered CommenterBrandon Shollenberger

Brandon,

I agree with everything you say. But being slow to integrate criticism is pretty standard, nowadays, and by the standards of this polarised and acrimonious debate, Dan's pretty tolerant and open-minded. You take what you can get.

Bear in mind that to Dan, you're just a random person on the internet. Probably one whose politics and other opinions he disagrees with quite strongly. You're not going to get overnight agreement. (If agreement is even what you ought to want.)

I think the useful point here is that Dan agrees that current methods are not very good, and is working to improve them. Compared to what went before, his proposal is a *big* improvement. It's not perfect, and it's not measuring quite what he thinks it's measuring, but it's a small step in the right direction. If people aren't encouraged for heading in the right direction, they'll stop doing it.

September 2, 2014 | Unregistered CommenterNiV

NiV -

I was wondering if you were going to weigh in on this thread. I was hoping that you would have done better.

This is what Dan is dealing with here. Brandon makes claims such as the following -

=> "Since Dan Kahan has flat out made things up about what I've said "

After posting something like the following on his blog:

Did you know the universe began with a huge explosion? If not, you’re an idiot. So says Dan M. Kahan.

Try a different color of implants.

September 3, 2014 | Unregistered CommenterJoshua

Joshua,

"This is what Dan is dealing with here."

What we're dealing with here is failure to communicate. Brandon is making some fairly specific complaints, and expecting direct, responsive answers which he isn't getting, because Dan apparently doesn't get, or doesn't agree with, or doesn't want to talk about what he says. That makes Brandon irritable.

I get the same impression sometimes; the difference is that I don't get irritable about it. Or at least, not so easily. This debate is pretty acrimonious - the way to calm it down is to ignore it when people trash-talk their opponents, and concentrate on the material content. Don't worry about the responses and behaviours you can't get; see the positive side in the bits that you do.

Whereas you seem intent on (one-sidedly) picking out every bit of irritability and highlighting it, to support your partisan contention that sceptics are being unreasonable. And sure, you can score a few points that way, but you don't make any friends or influence people on the other side. The debate shifts to the mire of picky details about who said what to whom, rather than the real content.

Brandon has made several interesting points. The field uses a lot of statistics, but often does so badly - tending to use multivariate regression and correlations as a sort of universal answer to everything. The field routinely presents such statistical conclusions without the raw data, so readers can neither see if there's a problem nor know for sure if the statistics have been done right. And these surveys are basing their results on questions to which their approved answers are simply wrong, or ambiguous.

We didn't exactly get off to a good start with Dan quoting Lewandowsky as a useful/credible source. Dan's response to the first point was to say he'd look into it (although I'd bet we're not going to see any criticism of Lewandowsky here as a result). His response to the second was to ask what statistics Brandon was talking about, although I think Brandon had been clear enough (and it's something I've muttered about here too, in the past). And his response to the third was to say that the errors/ambiguities didn't matter because they were measuring a latent variable, and that in any case he'd already dropped that question for an unrelated reason.

Which Brandon got irritated about because it was pretty much non-responsive.

Where I disagree with Brandon is in thinking he should have expected an instantaneous positive response. People don't change their attitudes overnight, and getting irritable about it doesn't help them to do so.

"Did you know the universe began with a huge explosion? If not, you’re [...]"
... of low 'ordinary science intelligence'.

Is that better?


"Try a different color of implants."

I agree materially with Brandon, but I was defending Dan. As I often do.

Is the difference due to the tint of my spectacles, or of yours?

September 5, 2014 | Unregistered CommenterNiV

@NiV:

I didn't drop BIGBANG. The paper uses the OSI scale to show BIGBANG is not a valid measure of science comprehension.

The criticism of Lewandowsky's stats is irrelevant b/c they aren't my stats or related in any way to anything I've said.

I can't do anything more than refer to the paper itself.

I'm sure you agree it would be a mistake for interested readers to take @Brandon's or your "word for it" on what it actually says.

September 5, 2014 | Registered CommenterDan Kahan