Wednesday, May 1, 2013

Even *more* Q & A on "cultural cognition scales" -- measuring "latent dispositions" & the Dake alternative

Given how interesting the conversations were in the last two “Q&A” posts (here & here), I thought—heck, why not another. 

Here is a set of reflections in response to an email inquiry from a thoughtful person who wanted to understand what it means to treat the cultural worldview scales as "latent" measures of cultural dispositions, and why we—my collaborators & I in the Cultural Cognition Project—thought it necessary to come up with alternatives to the scales that Karl Dake initially formulated to test hypotheses relating to Douglas & Wildavsky's "cultural theory of risk." For elaboration, see Kahan, Dan M. "Cultural Cognition as a Conception of the Cultural Theory of Risk." Chap. 28 in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, edited by R. Hillerbrand, P. Sandin, S. Roeser and M. Peterson, 725-60. Springer, 2012.

Question: What do you mean when you say the "cultural cognition worldview scales" measure a "latent variable"? And that they "work better" than Dake's scales in this regard?

My answer:

(A) Let's hypothesize that there is inside each member of a group an unobserved & unobservable thing -- which we'll call that group's cultural predisposition -- that interacts with the mental faculties and processes by which that person processes information in a way that tends to bring his or her perceptions of risk into alignment with those of every other member of the group. This would be an explanation (or part of one, at least) for "the science communication problem"-- the failure of valid, compelling, widely available scientific evidence to resolve political conflict over risks and other facts to which that evidence speaks.

(B) Although we can't observe cultural dispositions directly, we might still be able to make valid inferences about their existence & nature by identifying observable things that we would expect to correlate with them if the predispositions exist and if they have the nature that we might hypothesize they do. We had reason to believe that atoms existed long before they were "seen" under a scanning tunneling microscope because Einstein demonstrated that their existence would very precisely explain the observable (and until then very mysterious!) phenomenon of Brownian motion (in fact, we only "see" atoms with an ST microscope b/c we accept that the observable images they produce are best explained by atoms, which of course remain unobservable no matter what apparatus we use to "look" at them). Similarly, we might treat certain patterns of responses among a group's members as evidence that the predispositions exist and behave a certain way if such conclusions furnish a more likely explanation for those patterns than other potential causes and if we would not expect to see the patterns otherwise.  Within psychology, this is known as a "latent variable" measurement strategy, in which "manifest" or observable "indicators"--here the patterns of responses -- are used to measure a posited "latent" or unobserved variable --"cultural predispositions" in our case.
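To make the logic of (B) concrete, here is a minimal simulation sketch, written in Python with NumPy and using made-up numbers and hypothetical parameters (not CCP data): an unobserved disposition generates a set of noisy observable indicator items, and the pattern in those indicators is all we ever get to see.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                      # hypothetical number of respondents

    # The unobserved ("latent") cultural predisposition -- never directly observable
    latent = rng.normal(size=n)

    # Six observable "indicator" items, each a noisy reflection of the latent disposition
    loading = 0.6                 # assumed strength of the disposition's influence on each item
    noise_sd = 0.8                # assumed item-specific measurement noise
    items = np.column_stack(
        [loading * latent + noise_sd * rng.normal(size=n) for _ in range(6)]
    )

    # Any single item correlates only modestly with the disposition it is meant to measure
    for j in range(items.shape[1]):
        r = np.corrcoef(items[:, j], latent)[0, 1]
        print(f"item {j + 1}: correlation with latent disposition = {r:.2f}")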

(C) That's what the items in our cultural worldview scales are -- indicators of the latent cultural predispositions that we hypothesize explain the science communication problem. The scales reflect a theory that people would not be expected to respond to the statements that make up the items in patterns that sort individuals out along two continuous, cross-cutting dimensions unless people had "inside" of them group predispositions that correspond to "hierarchical individualism," "hierarchical communitarianism," "egalitarian individualism," and "egalitarian communitarianism."  On this view, responses are understood to be "caused" by the predispositions. The causal influence is only crudely understood and thus only imprecisely measured by each item; the whole point of having multiple items is to aggregate responses to them, a process that will make the "noise" associated with their imprecision balance or cancel out & thus magnify the "signal" associated with them.  The resulting scales can be viewed as "measuring" the intensity of the unobserved predispositions.
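Continuing the hypothetical simulation above, the noise-cancelling effect of aggregation can be seen by comparing how well a single item tracks the simulated disposition versus how well the averaged composite does:

    # Composite scale = mean of the six items.
    # Item-specific noise tends to cancel across items; the shared "signal" does not.
    scale = items.mean(axis=1)

    r_single = np.corrcoef(items[:, 0], latent)[0, 1]
    r_scale = np.corrcoef(scale, latent)[0, 1]
    print(f"one item vs. latent disposition:     r = {r_single:.2f}")  # ~0.6 under the assumed loading
    print(f"6-item scale vs. latent disposition: r = {r_scale:.2f}")   # noticeably higher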

(D) For this strategy for "observing" or "measuring" cultural predispositions to be valid, various things must be true.  The most basic one is that the items assigned to the scales must "perform" as the underlying theory posits.  The responses to them must correlate with each other in ways that generate the pattern one would expect if they are indeed "measuring" the cultural predispositions.  If the items correlate in some other pattern, the scales are not a "valid" measure of the posited dispositions.  If they correlate in the expected pattern, but the correlations are very weak, then the scales can be viewed as "unreliable," which refers to the degree of precision by which an instrument measures whatever quantity it is supposed to be measuring (imagine that your bathroom scale had some sort of defect and as a result gave readings that erratically over- or underestimated people's weight; it wouldn't be very reliable in that case).
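For a concrete (and again purely illustrative) sense of what "performing as the theory posits" looks like, here are two conventional psychometric checks run on the simulated responses from the sketches above: the inter-item correlation matrix and Cronbach's alpha, a standard reliability statistic. This is a sketch of the general idea, not the CCP's actual scale-validation procedure.

    # 1. Inter-item correlations: items measuring the same disposition should correlate positively
    inter_item = np.corrcoef(items, rowvar=False)
    print(np.round(inter_item, 2))

    # 2. Cronbach's alpha: a standard summary of scale reliability
    def cronbach_alpha(x):
        """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
        k = x.shape[1]
        sum_item_var = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_item_var / total_var)

    print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # higher = more reliable scale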

(E) The Dake scales did not perform well.  They were not reliable; the items didn't correlate with *one another* as one would expect if the ones placed in the same scale were measuring the same thing. Moreover, to the extent that they seemed to be measuring things "inside" people, those things did not fit the expectations one would form about their relationship under the theory posited by the "cultural theory of risk."

(F) Once one has valid & reliable scales, one does not yet have evidence that cultural predispositions explain the science communication problem.  Rather one has measures of what one is prepared to regard as cultural predispositions.  At that point, one must devise studies geared to generating correlations between the predispositions, as measured by the valid and reliable scales, and risk perceptions, as measured in some appropriate way.  Those correlations must be of a sort that one would expect to see if the predispositions are causing risk perceptions in the way one hypothesizes but would not expect to see otherwise. 
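A bare-bones sketch of that last step, once more with simulated data in which the predisposition does, by construction, partly drive the risk perception:

    # Hypothetical risk-perception measure, partly driven by the latent predisposition
    risk_perception = 0.5 * latent + rng.normal(scale=0.9, size=n)

    # The observable test: does the (valid, reliable) scale score predict the risk perception
    # in the direction the theory says it should?
    r = np.corrcoef(scale, risk_perception)[0, 1]
    slope, intercept = np.polyfit(scale, risk_perception, deg=1)
    print(f"scale score vs. risk perception: r = {r:.2f}, OLS slope = {slope:.2f}")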

 


Reader Comments (2)

The analogies/metaphors above are superb, constituting one of the best explications of CCT yet.

Among the observed/observable and STM aspects of them, here in the case of CCT, perhaps the Faraday cage (http://en.wikipedia.org/wiki/Faraday_cage) might be applicable: lots of energy/inputs used up in transmission, yet easily blocked with relatively crude materials.

May 1, 2013 | Unregistered CommenterWalter Borden

Dan, you stated: "At that point, one must devise studies geared to generating correlations between the predispositions, as measured by the valid and reliable scales, and risk perceptions, as measured in some appropriate way." And previously: "(A) Let's hypothesize that there is inside each member of a group an unobserved & unobservable thing -- which we'll call that group's cultural predisposition -- that interacts with the mental faculties and processes by which that person processes information in a way that tends to bring his or her perceptions of risk into alignment with those of every other member of the group."

What about measuring the processes that affect information in ways that tend to undo alignment? I ask because of the previous comments in which I, and NiV IIRC, discussed what we take to be an assumption: your statement "the failure of valid, compelling, widely available scientific evidence to resolve political conflict over risks and other facts to which that evidence speaks." Is cultural cognition more like a gate function or a continuous, reversible phenomenon? I know that in my fields this basic functional question has to be determined prior to assessing "valid and reliable."

You speak of "Once one has valid & reliable scales, one does not yet have evidence that cultural predispositions explain the science communication problem." The question is: what measurement do we make to ensure we have valid and reliable scales, and not a "thumb on the scales" bias?

I know we have discussed this in general, but I think Walter has brought up a good analogy. If nothing else, its consideration would tend to make the argument of SoSC stronger.

May 4, 2013 | Unregistered CommenterJohn F. Pittman
