Sunday, February 23, 2014

Three models of risk perception -- & their significance for self-government . . .

From Geoengineering and Climate Change Polarization: Testing a Two-channel Model of Science Communication, Ann. Am. Acad. Pol. & Soc. Sci. (in press).

Theoretical background

Three models of risk perception

The scholarly literature on risk perception and communication is dominated by two models. The first is the rational-weigher model, which posits that members of the public, in aggregate and over time, can be expected to process information about risk in a manner that promotes their expected utility (Starr 1969). The second is the irrational-weigher model, which asserts that ordinary members of the public lack the ability to reliably advance their expected utility because their assessment of risk information is constrained by cognitive biases and other manifestations of bounded rationality (Kahneman 2003; Sunstein 2005; Marx et al. 2007; Weber 2006).

Neither of these models cogently explains public conflict over climate change—or over a host of other putative societal risks, such as nuclear power, the vaccination of teenage girls for HPV, and the removal of restrictions on carrying concealed handguns in public. Such disputes conspicuously feature partisan divisions over facts that admit of scientific investigation. Nothing in the rational-weigher model predicts that people with different values or opposing political commitments will draw radically different inferences from common information. Likewise, nothing in the irrational-weigher model suggests that people who subscribe to one set of values are any more or less bounded in their rationality than those who subscribe to any other, or that cognitive biases will produce systematic divisions of opinion among such groups.

One explanation for such conflict is the cultural cognition thesis (CCT). CCT says that cultural values are cognitively prior to facts in public risk conflicts: as a result of a complex of interrelated psychological mechanisms, groups of individuals will credit and dismiss evidence of risk in patterns that reflect and reinforce their distinctive understandings of how society should be organized (Kahan, Braman, Cohen, Gastil & Slovic 2010; Jenkins-Smith & Herron 2009). Thus, persons with individualistic values can be expected to be relatively dismissive of environmental and technological risks, which if widely accepted would justify restricting commerce and industry, activities that people with such values hold in high regard. The same goes for individuals with hierarchical values, who see assertions of environmental risk as indictments of social elites. Individuals with egalitarian and communitarian values, in contrast, see commerce and industry as sources of unjust disparity and symbols of noxious self-seeking, and thus readily credit assertions that these activities are hazardous and therefore worthy of regulation (Douglas & Wildavsky 1982). Observational and experimental studies have linked these and comparable sets of outlooks to myriad risk controversies, including the one over climate change (Kahan 2012).
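To make the contrast with the two weigher models concrete, here is a minimal toy simulation—my own illustration, not anything from the paper—of the basic logic: if culturally opposed groups over-weight evidence congenial to their group's position, identical information drives them apart, whereas unbiased updating pushes them toward the same answer. The group labels, priors, and weighting scheme below are all stylized assumptions.

```python
# Toy illustration (not from the paper): two culturally opposed groups update
# their belief that a putative risk is real after seeing an identical stream
# of mixed evidence. With unbiased Bayesian weighing they end up together;
# with identity-protective weighting of congenial evidence they polarize.
# Group labels, priors, and the weighting scheme are stylized assumptions.

import random

def simulate(protective_weight, n_items=200, seed=1):
    """Return each group's final P(risk is real) after n_items of evidence.

    protective_weight = 0.0 -> unbiased weigher: identical evidence, so the
    groups converge on roughly the same estimate.
    protective_weight > 0.0 -> evidence congenial to a group's current
    position is over-weighted and uncongenial evidence discounted, so the
    same evidence drives the groups further apart.
    """
    rng = random.Random(seed)
    odds = {"egalitarian-communitarian": 2.0,   # mildly believes risk is real
            "hierarchical-individualist": 0.5}  # mildly believes it is not
    for _ in range(n_items):
        lr = rng.choice([1.3, 0.8])  # likelihood ratio of one evidence item
        for group in odds:
            congenial = (lr > 1.0) == (odds[group] > 1.0)
            w = 1.0 + protective_weight if congenial else 1.0 - protective_weight
            odds[group] *= lr ** w   # Bayesian update in odds form
    return {g: round(o / (1.0 + o), 2) for g, o in odds.items()}

print("unbiased weighers:  ", simulate(protective_weight=0.0))
print("cultural evaluators:", simulate(protective_weight=0.5))
```

Nothing here depends on the particular numbers; the point is only that the divergence in the second run comes from weighting evidence by its congeniality to group identity, not from any difference in the evidence the groups receive.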

Individuals, on the CCT account, behave not as expected-utility weighers—rational or irrational—but rather as cultural evaluators of risk information (Kahan, Slovic, Braman & Gastil 2006). The beliefs any individual forms on societal risks like climate change—whether right or wrong—do not meaningfully affect his or her personal exposure to those risks. However, precisely because positions on those issues are commonly understood to cohere with allegiance to one or another cultural style, taking a position at odds with the dominant view in his or her cultural group is likely to compromise that individual’s relationship with others on whom that individual depends for emotional and material support. As individuals, citizens are thus likely to do better in their daily lives when they adopt toward putative hazards the stances that express their commitment to values that they share with others, irrespective of the fit between those beliefs and the actuarial magnitudes and probabilities of those risks.

The cultural-evaluator model takes issue with the irrational-weigher assumption that popular conflict over risk stems from overreliance on heuristic forms of information processing (Lodge & Taber 2013; Sunstein 2006). Empirical evidence suggests that culturally diverse citizens are indeed reliably guided toward opposing stances by unconscious processing of cues, such as the emotional resonances of arguments and the apparent values of risk communicators (Kahan, Jenkins-Smith & Braman 2011; Jenkins-Smith & Herron 2009; Jenkins-Smith 2001).

But contrary to the picture painted by the irrational-weigher model, ordinary citizens who are equipped and disposed to appraise information in a reflective, analytic manner are not more likely to form beliefs consistent with the best available evidence on risk. Instead they often become even more culturally polarized because of the special capacity they have to search out and interpret evidence in patterns that sustain the convergence between their risk perceptions and their group identities (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).

Two channels of science communication

The rational- and irrational-weigher models of risk perception generate competing prescriptions for science communication. The former posits that individuals can be expected, eventually, to form empirically sound positions so long as they are furnished with sufficient and sufficiently accurate information (e.g., Viscusi 1983; Philipson & Posner 1993). The latter asserts that attempts to educate the public about risk are at best futile, since the public lacks the knowledge and capacity to comprehend such information, and at worst self-defeating, since ordinary individuals are prone to overreact on the basis of fear and other affective influences on judgment. The better strategy, on this view, is to steer risk policymaking away from democratically accountable actors to politically insulated experts and to "change the subject" when risk issues arise in public debate (Sunstein 2005, p. 125; see also Breyer 1993).

The cultural-evaluator model associated with CCT offers a more nuanced account. It recognizes that when empirical claims about societal risk become suffused with antagonistic cultural meanings, intensified efforts to disseminate sound information are unlikely to generate consensus and can even stimulate conflict.

But those instances are exceptional—indeed, pathological. There are vastly more risk issues—from the hazards of power lines to the side-effects of antibiotics to the tumor-stimulating consequences of cell phones—that avoid becoming broadly entangled with antagonistic cultural meanings. Using the same ability that they reliably employ to seek and follow expert medical treatment when they are ill or expert auto-mechanic service when their car breaks down, the vast majority of ordinary citizens can be counted on in these “normal,” non-pathological cases to discern and conform their beliefs to the best available scientific evidence (Keil 2010).

The cultural-evaluator model therefore counsels a two-channel strategy of science communication. Channel 1 is focused on information content and is informed by the best available understandings of how to convey empirically sound evidence, the basis and significance of which are readily accessible to ordinary citizens (e.g., Gigerenzer 2000; Spiegelhalter, Pearson & Short 2011). Channel 2 focuses on cultural meanings: the myriad cues—from group affinities and antipathies to positive and negative affective resonances to congenial or hostile narrative structures—that individuals unconsciously rely on to determine whether a particular stance toward a putative risk is consistent with their defining commitments. To be effective, science communication must successfully negotiate both channels. That is, in addition to furnishing individuals with valid and pertinent information about how the world works, it must avail itself of the cues necessary to assure individuals that assenting to that information will not estrange them from their communities (Kahan, Slovic, Braman & Gastil 2006; Nisbet 2009).

References 

Breyer, S.G. Breaking the Vicious Circle: Toward Effective Risk Regulation, (Harvard University Press, Cambridge, Mass., 1993).

Douglas, M. & Wildavsky, A. Risk and Culture (University of California Press, Berkeley, 1982).

Gigerenzer, G. Adaptive thinking: rationality in the real world (Oxford University Press, New York, 2000).

Jenkins-Smith, H. Modeling stigma: an empirical analysis of nuclear waste images of Nevada. in Risk, Media, and Stigma: Understanding Public Challenges to Modern Science and Technology (eds. Flynn, J., Slovic, P. & Kunreuther, H.) 107-132 (Earthscan, London; Sterling, VA, 2001).

Jenkins-Smith, H.C. & Herron, K.G. Rock and a Hard Place: Public Willingness to Trade Civil Rights and Liberties for Greater Security. Politics & Policy 37, 1095-1129 (2009).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).

Kahan, D.M. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahneman, D. Maps of bounded rationality: Psychology for behavioral economics. Am Econ Rev 93, 1449-1475 (2003).

Keil, F.C. The feasibility of folk science. Cognitive science 34, 826-862 (2010).

Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).

Marx, S.M., Weber, E.U., Orlove, B.S., Leiserowitz, A., Krantz, D.H., Roncoli, C. & Phillips, J. Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ Chang 17, 47-58 (2007).

Nisbet, M.C. Communicating Climate Change: Why Frames Matter for Public Engagement. Environment 51, 12-23 (2009).

Philipson, T.J. & Posner, R.A. Private choices and public health, (Harvard University Press, Cambridge, Mass., 1993).

Spiegelhalter, D., Pearson, M. & Short, I. Visualizing Uncertainty About the Future. Science 333, 1393-1400 (2011).

Starr, C. Social Benefit Versus Technological Risk. Science 165, 1232-1238 (1969).

Sunstein, C.R. Laws of fear: beyond the precautionary principle, (Cambridge University Press, Cambridge, UK ; New York, 2005).

Sunstein, C.R. Misfearing: A reply. Harvard Law Review 119, 1110-1125 (2006).

Viscusi, W.K. Risk by choice: regulating health and safety in the workplace (Harvard University Press, Cambridge, Mass., 1983).

Weber, E.U. Experience-based and description-based perceptions of long-term risk: Why global warming does not scare us (yet). Climatic Change 77, 103-120 (2006).

 


Reader Comments (5)

This is a very thought-provoking analysis and helpful in articulating the critiques of the current theories of risk. Very often the irrational-weigher model is juxtaposed against the rational weigher - it's one or the other and nothing else. You are drawing attention to understandings that have long been the domain of sociologists and cultural studies commentators, whose rich critiques need more light shone on them.

I think the cultural cognition model (and I'm sure you would agree) goes beyond communitarian/individualistic drivers. For example, in Australia it is very clear that cultural tags influence and reflect communities where not immunising a child can be either quite acceptable or highly unacceptable within their social networks. At present, we don't have good ways of describing them because of a lack of empirical qualitative investigation, but it appears that non-immunisation or delayed immunisation would be attractive both in alternative regional off-the-grid communities and in wealthy, well-travelled inner-urban intensive-mothering communities. I wonder if it is possible to fit these groups into the cultural cognition grid? Regardless, non-immunisation is a minority behaviour in such communities (except for one town in NSW).

Although non-immunising is a minority choice for Australian parents (approx. 2%) and not culturally divided along communitarian/individualistic lines, our conservative media seem to have attempted to draw out these distinctions in their representation, largely ignoring the majority whose children are not immunised for reasons of social exclusion or poverty and focusing on the wealthy 'yummy mummies'. It will be interesting to see how parents reflexively respond to these representations in their own behaviour and in conceptualising what non-immunisation means and represents.

Another comment is that de-biasing techniques have been proposed as an answer to the problem of the irrational weigher so it would be interesting to review the literature on whether they work and when (or not).

Thanks again for your analysis of this fascinating field.

February 23, 2014 | Unregistered CommenterJulie Leask

@Julie

thanks!

1. I agree with you that the group/grid "ways of life" are not the only forms of affinity that will figure in cultural-cognition dynamics. Indeed, I don't think they ever genuinely are! That is, I think they are only models -- in the nature of the Bohr/Rutherford atom -- that help us to envision & thus make more amenable to informed engagement group dispositions that we can't see & that likely don't even admit of being "seen." The same would be true, I imagine, for any alternative to the group-grid scheme as a tool or device for making latent motivating dispositions susceptible to measurement & subsequent empirical study. All we can say is, which model helps us most to do what we want, which is to explain, predict & prescribe? The nice feature of the group-grid scheme (as we have operationalized it; others have done so in different ways -- and as I'm sure you can guess from the nature of this response, I don't think it is useful to figure out which one of them is "right" in some abstract, theoretical sense!) is that it is very robust; it manages to get enough of a piece of what I'm sure is a multiplicity of overlapping & interlocking types of affinities to guide us in many settings. But I am sure that it can be outperformed by other latent-disposition measures, particularly more fine-grained ones, in particular settings. I'll go w/ anything that explains, predicts & prescribes better for the task at hand!

2. What *motivates* vaccine-hesitancy is, as you know & have shown in your own research, quite complicated & subtle -- & the media tends to resist recognition of subtlety & complexity (but that's not always the case & I think we are right to demand that journalists make things clear & also to recognize & admire & be grateful when they pull it off!). Many individual parents are likely to be concerned and confused for many reasons, including ones that reflect dynamics or influences unrelated, I'd think, to the sort of identity-protective motivations at issue with cultural cognition. We need decent, evidence-based tools for helping doctors to identify those individuals and give them the information that will enable them to apprehend the best available evidence and make confident decisions in light of it. I'd also be very interested myself in exploring whether there are dynamics related to cultural cognition that generate fear of or hostility to vaccines within groups of people who share defining values -- but if such groups are out there, the sorts of outlooks they share are definitely not the ones that characterize the more familiar types of cultural styles in the US (I'm guessing too Australia, which as far as I can tell is more American than America), & their shared identifying features evade not only the cultural worldview measures but pretty much anything that researchers tend to put into general population surveys. But I am eager to add that I think vaccine-risk perceptions vary across societies, & so might well be entangled with the sorts of identities that figure in conflicts over "big" societal risks (climate, nuclear, GM foods or what have you) in other societies (maybe Australia -- you'd know better than I!).

3. Great point on the debiasing. My instinct is to say that the "irrational weigher model," as I've described it, is very skeptical of the feasibility of meaningfully debiasing the public on perceptions of societal risk -- and that it therefore favors a strategy of risk regulation by politically insulated experts. In any case, that is how Sunstein, who I think is the most reflective expositor of this position, sees things (& he's by no means alone). But the "biases & heuristics" view generally does, of course, allow for debiasing in some contexts; maybe here too. But the sorts of strategies it proposes for debiasing are necessarily different from the ones the Cultural Evaluator Model would support. CEM doesn't try to come up with devices to "compensate" for deficiencies in the rationality of ordinary citizens; it tries to devise strategies for neutralizing the conditions that disable the rational faculties that ordinary citizens ordinarily & reliably use to orient their decisionmaking -- personal and collective -- with respect to the best available evidence. (The "Two channel model" featured in the geoengineering-experiment paper is in that spirit.) Its attitude about the science communication problem (and it has a bit of an attitude, certainly) is -- "it's not stupid people; it's a polluted science communication environment, stupid!"

But these are (as always!) just provisional responses/reactions. You should write a guest post where you address these or related matters!

February 23, 2014 | Registered CommenterDan Kahan

Here's a short(ish) review of the debiasing literature by Scott Lilienfeld and colleagues:

PDF is available online here

Citation: Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4, 390-398.

February 23, 2014 | Unregistered CommenterGord Pennycook

@Gord:

Thanks! That is a great paper -- & it does address debiasing w/r/t ideologically motivated reasoning.

One thing to note, though, is that Lilienfeld et al. assume that ideologically motivated reasoning reflects overreliance on heuristic or system-1 forms of information processing. I think Sunstein is skeptical about debiasing the public precisely b/c he thinks there's no reliable way to overcome heuristic reasoning as applied to societal risks & like issues (he probably is open to "debiasing" or at least "nudging" people on their personal decisionmaking).

But both Lilienfeld et al & Sunstein are wrong to assume that ideologically motivated reasoning (of which cultural cognition of risk is a species) is a product of heuristic information processing. As pre-Kahneman dual process theorists (like Shelly Chaiken) recognized, identity-protective cognition recruits effortful, conscious "system 2" reasoning as well.

Indeed, the disposition/aptitude to use system 2 forms of reasoning seems to magnify politically motivated reasoning (something you & I have discussed a bit in connection w/ your cool work on system 2 information processing & religiosity).

The prospect of "debiasing" the sorts of pathologies associated with cultural cognition & the like depends, obviously, on figuring out exactly what mechanisms are at work, what forms of information processing are affected, how & by what, etc.

I think the "science communication enviornment protection" (SCEP) program can be thought of as a debiasing one -- but it involves things very different from what one would do if one accepted the (very popular) view that politically motivated reasoning is a consequence of overreliance on heuristic, system 1 reasoning.

I think, too, that the SCEP framework generates more reason to believe that enlightened self-government in the age of risk regulation is possible than does the heuristics-and-biases framework. That, of course, isn't a reason to believe the former is true. But it might supply a reason to persevere & to engage other thoughtful people w/ even more passion & excitement.

February 23, 2014 | Registered CommenterDan Kahan

Very insightful response, Dan.

I really enjoy Chaiken's work (along with Kunda and all that other related stuff). Interestingly, in one of the first papers that postulated dual-processes in reasoning, Peter Wason & Jonathan Evans (1975) used the concept in an attempt to explain what was effectively a motivated reasoning effect in an abstract reasoning task (the classic '4-card problem'). They called it "rationalization", but the mechanism is effectively the same: Using focused analytic reasoning in an attempt to prove (or reinforce, or defend) a heuristic response (or intuitive response, or belief, or ideology).

I think you're absolutely correct that motivated reasoning relies on Type 2 ("system 2") processing. However, I also agree with Lilienfeld that debiasing largely requires Type 2 processing*. At least at the cognitive level. I hold the view that Type 1 processing brings the belief (or ideology, or intuition) to mind as an autonomous response to some environmental cue, and Type 2 processing can be used to either reinforce (e.g., motivated reasoning) or undermine (e.g., debiasing) it. Sure, people are motivated reasoners... but they do sometimes change their minds. And it's a good thing, too, or this science business would be a fool's errand!

Of course, "debiasing" can also be used more generally. We can alter the context (often cultural) to lower the probability of motivated reasoning, leading to a less biased individual than in a more emotionally charged context. I don't think that this is the type of debiasing that Lilienfeld is interested, though**. He may not even consider this truly debiasing because the belief/ideology/intuition hasn't really changed. It merely wasn't given an opportunity to enter into reasoning. If we could change the context so that the probability of motivated reasoning is undermined but the belief/ideological position is nonetheless cued in a way that it enters into Type 2 processing, debiasing may be more effective. Are you aware of any research that has attempted this? A very difficult thing to study. But also a problem of considerable importance!

The Memory & Cognition study that you mentioned sort of gets at this issue, actually. Ask a religious (or non-religious) person to justify their belief in a "hot" cultural context (such as a religious debate), and you may find evidence of motivated reasoning. However, a low-level conflict between supernatural beliefs and, for example, a folk belief about the material world may cue Type 2 reasoning that, due to the lack of "hot" cultural context, may potentially proceed unmotivated. Naturally, it's an open question as to whether this could occur for other classes of belief. Moreover, it may be the case that only ideological positions that are epistemically suspect can be "debiased" [don't ask me which ideological positions are epistemically suspect]. Some interesting possibilities nonetheless!

Anyway, those are my thoughts on the matter. At the very least, I hope you find the Wason & Evans paper entertaining! It's a bit of a funny paper, but it's an interesting look at the state of the field in 1975.

Thanks again for the very thought provoking post!


*An exception might be a belief or ideological position that merely diminishes over a long period of time. This is technically "debiasing" in my view.
**Although I suppose if the context was changed permanently, it may provide an opportunity for the position to weaken over time. I don't know of any research that has looked at this (or the related point just above).

February 24, 2014 | Unregistered CommenterGord Pennycook
