Saturday, October 15, 2016

Weekend Up(back)date: 3 theories of risk, 2 conceptions of emotion

From Kahan, D.M. Emotion in Risk Regulation: Competing Theories, in Emotions and Risky Technologies. (ed. S. Roeser) 159-175 (Springer Netherlands, 2010).

2. Three Theories of Risk Perception, Two Conceptions of Emotion

The profound impact of emotion on risk perception cannot be seriously disputed.  Distinct emotional states--from fear to dread to anger to disgust (Slovic, 2000)--and distinct emotional phenomena--from affective orientations to symbolic associations and imagery (Peters & Slovic, 2007)--have been found to explain perceptions of the dangerousness of all manner of activities and things--from pesticides (Alhakami & Slovic, 1994) to mobile phones (Siegrist, Earle, Gutscher, & Keller, 2005), from red meat consumption (Berndsen & van der Pligt, 2005) to cigarette smoking (Slovic, et al., 2005).

More amenable to dispute, however, is exactly why emotions exert this influence.  Obviously, emotions work in conjunction with more discrete mechanisms of cognition in some fashion.  But which ones and how?  To sharpen the assessment of the evidence that bears on these questions, I will now sketch out three alternative models of risk perception--the rational weigher, the irrational weigher, and the cultural evaluator theories--and their respective accounts of what (if anything) emotions contribute to the cognition of risk.

2.1. The Rational Weigher Theory: Emotion as Byproduct

Based on the premises of neoclassical economics, the rational weigher theory asserts that individuals, over time and in aggregate, process information about risky undertakings in a way that maximizes their expected utility.  The decisions whether to accept hazardous occupations in exchange for higher wages (Viscusi, 1983), to engage in unhealthy forms of recreation in exchange for hedonic pleasure (Philipson & Posner, 1993), and to accept intrusive regulation to mitigate threats to national security (Posner, 2006) or the environment (Posner, 2004) all turn on a utilitarian balancing of costs and benefits.
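To make that weighing concrete, here is a minimal sketch (in Python) of the kind of expected-utility comparison the rational weigher theory envisions--a worker deciding whether to take a hazardous job for a wage premium. The probabilities, utilities, and names below are hypothetical illustrations, not figures from the cited studies.

# Minimal sketch of the rational-weigher model: a worker deciding whether to
# accept a hazardous job in exchange for a wage premium (hypothetical numbers).

def expected_utility(p_harm, u_harm, u_no_harm):
    # Probability-weighted average of the utilities of the two outcomes.
    return p_harm * u_harm + (1 - p_harm) * u_no_harm

# The risky job pays more but carries a 1-in-1,000 annual chance of serious
# injury; the safe job pays less with no such risk (all values made up).
eu_risky_job = expected_utility(p_harm=0.001, u_harm=-500.0, u_no_harm=60.0)
eu_safe_job = expected_utility(p_harm=0.0, u_harm=0.0, u_no_harm=50.0)

# On the rational-weigher account the choice turns on this comparison alone;
# any fear or dread the worker feels is a reactive byproduct, not an input.
print("accept risky job" if eu_risky_job > eu_safe_job else "accept safe job")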

On this theory, emotions don’t make any contribution to the cognition of risk.  They enter into the process, if they do at all, only as reactive byproducts of individuals’ processing of information:  if a risk appears high relative to benefits, individuals will likely experience a negative emotion--perhaps fear, dread, or anger--whereas if the risk appears low they will likely experience a positive one--such as hope or relief (Loewenstein, et al., 2001). This relationship is depicted in Figure 2.1.

2.2. The Irrational Weigher Theory: Emotion as Bias

The irrational weigher theory asserts that individuals lack the capacity to process information in a way that maximizes their expected utility.  Because of constraints on information, time, and computational power, ordinary individuals must resort to heuristic substitutes for considered analysis; those heuristics, moreover, invariably cause individuals’ evaluations of risks to err in substantial and recurring ways (Jolls, Sunstein, & Thaler, 1998). Much of contemporary social psychology and behavioral economics has been dedicated to cataloging the myriad distortions--from “availability cascades” (Kuran & Sunstein, 1998) to “probability neglect” (Sunstein, 2002) to “overconfidence” bias (Fischhoff, Slovic, & Lichtenstein, 1977) to “status quo bias” (Kahneman, 1991)--that systematically skew risk perceptions, particularly those of the lay public.

For the irrational weigher theory, the contribution that emotion makes to risk perception is, in the first instance, a heuristic one.  Individuals rely on their visceral, affective reactions to compensate for the limits on their ability to engage in more considered assessments (Loewenstein, et al., 2001).  More specifically, irrational weigher theorists have identified emotion or affect as a central component of “System 1 reasoning,” which is “fast, automatic, effortless, associative, and often emotionally charged,” as opposed to “System 2 reasoning,” which is “slower, serial, effortful, and deliberately controlled” (Kahneman, 2003, p. 1451) and typically involves “execution of learned rules” (Frederick, 2005, p. 26).  System 1 is clearly adaptive in the main--heuristic reasoning furnishes guidance when lack of time, information, and cognitive ability make more systematic forms of reasoning infeasible--but it remains obviously “error prone” in comparison to the “more deliberative [and] calculative” System 2 (Sunstein, 2005, p. 68).
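The System 1/System 2 contrast can be rendered as a toy comparison--a fast judgment read straight off an affective reaction versus a slower expected-harm calculation. Everything in the sketch (the activities, affect scores, base rates, and harm magnitudes) is invented purely for illustration.

# Toy contrast between a System 1 "affect heuristic" judgment and a System 2
# expected-harm calculation (activities and numbers are hypothetical).

affect = {"nuclear power": -0.8, "sunbathing": 0.6}  # negative = dread

def system1_risk(activity):
    # Risk is read directly off the stored affective reaction to the label,
    # with no further analysis.
    return max(0.0, -affect[activity])

stats = {"nuclear power": (1e-7, 1000.0), "sunbathing": (1e-3, 50.0)}  # (p_harm, magnitude)

def system2_risk(activity):
    # Deliberate calculation: probability of harm times its magnitude.
    p_harm, magnitude = stats[activity]
    return p_harm * magnitude

for a in affect:
    print(a, system1_risk(a), system2_risk(a))

# The rankings invert: the dreaded activity scores higher on System 1 but
# lower on System 2 -- the kind of divergence the theory labels "bias."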

Indeed, according to the irrational weigher theory, emotion-pervaded forms of heuristic reasoning can readily transmute into bias.  The point isn’t merely that emotion-pervaded reasoning is less accurate than cooler, calculative reasoning; rather it’s that habitual submission to its emotional logic ultimately displaces reflective thinking, inducing “behavioral responses that depart from what individuals view as the best course of action”--or at least would view as best if their judgment were not impaired (Loewenstein, et al., 2001).  Proponents of this view have thus linked emotion to nearly all the cognitive biases shown to distort risk perceptions (Fischhoff, et al., 1977; Sunstein, 2005). The relationship between emotion, rational calculation of expected utility, and risk perception that results is depicted in Figure 2.2.

2.3. The Cultural Evaluator Theory: Emotion as Expressive Perception

Finally there’s the cultural evaluator theory of risk perception.  This model rests on a view of rational agency that sees individuals as concerned not merely with maximizing their welfare in some narrow consequentialist sense but also with adopting stances toward states of affairs that appropriately express the values that define their identities (Anderson, 1993).  Often when an individual is assessing what position to take on a putatively dangerous activity, she is, on this account, not weighing (rationally or irrationally) her expected utility but rather evaluating the social meaning of that activity (Lessig, 1995).  Against the background of cultural norms (particularly contested ones), would the law’s designation of that activity as inimical to society’s well-being affirm her values or denigrate them (Kahan, et al., 2006)?

Like the irrational weigher theory, the cultural evaluator theory treats emotions as entering into the cognition of risk.  But it offers a very different account of how--one firmly aligned with the position that sees emotions as constituents of reason.

Martha Nussbaum describes emotions as “judgments of value” (Nussbaum, 2001). They orient a person who values some good, endowing her with the attitude that appropriately expresses her regard for that good in the face of a contingency that either threatens or advances it.  On this account, for example, grief is the uniquely appropriate and accurate judgment for someone who values another who has died; fear is the appropriate and accurate judgment for someone who values her or another’s well-being in the face of an impending threat to it; anger is the appropriate and accurate judgment for someone who values her own honor in response to an action that conveys insufficient respect.  People who fail to experience these emotions under such circumstances--or who experience these or other emotions in circumstances that do not warrant them--lack a capacity of discernment essential to their flourishing as agents capable of holding values and pursuing them.

Rooted heavily in Aristotelian philosophy, Nussbaum’s account is, as she herself points out, amply grounded in modern empirical work in psychology and neuroscience.  Antonio Damasio’s influential “somatic marker” account, for example, identifies emotions with a particular area in the brain (Damasio, 1994).  Persons who have suffered damage to that part of the brain display impaired capacity to recognize or imagine conditions that might affect goods they care about, and thus lack motivation to respond accordingly.  They are perceived by others and often by themselves as mentally disabled in a distinctive way, as suffering from a profound kind of moral and social obtuseness that makes them incapable of engaging the world in a way that matches their own ends.  If being rational consists, at least in part, of “see[ing] which values [we] hold” and knowing how to “deploy these values in [our] judgments,” then “those who are unaware of their emotions or of their emotional lacks” will necessarily be deficient in a capacity essential to being “a rational person” (Stocker & Hegeman, 1996, p. 105).

The cultural evaluator theory views emotions as enabling individuals to perceive what stance toward risks coheres with their values.  Cultural norms obviously play a role in shaping the emotional reactions people form toward activities such as nuclear power, handgun possession, homosexuality, and the like (Elster, 1999). When people draw on their emotions to judge the risk that such an activity poses, they form an expressively rational attitude about what it would mean for their cultural worldviews for society to credit the claim that that activity is dangerous and worthy of regulation, as depicted in Figure 2.3.  Persons who subscribe to an egalitarian ethic, for example, have been shown to be particularly sensitive to environmental and technological risks, the recognition of which coheres with condemnation of commercial activities that generate distinctions in wealth and status.  Persons who hold individualist values, in contrast, tend to dismiss concerns about global warming, nuclear waste disposal, food additives, and the like--an attitude that expresses their commitment to the autonomy of markets and other private orderings (Douglas, 1966).  Individualistic persons worry instead about the risk that gun control--a policy that denigrates individualist values--will render law-abiding citizens defenseless (Kahan, Braman, Gastil, Slovic, & Mertz, 2007).  Persons who subscribe to hierarchical values worry about the dangers of drug distribution, homosexuality, and other forms of behavior that defy traditional norms (Wildavsky & Dake, 1990).
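One crude way to render the cultural evaluator claim is as a model in which perceived risk shifts with the fit between a person's worldview and the social meaning of crediting the risk claim. The worldview scores, meaning codings, and weights in the sketch below are hypothetical, chosen only to mimic the pattern just described.

# Toy sketch of the cultural-evaluator idea: perceived risk tracks whether
# crediting a risk claim affirms or denigrates one's worldview. All scores,
# codings, and weights are hypothetical.

meanings = {
    # Social meaning of crediting the risk claim, coded on two dimensions:
    # (impugns markets/private orderings, affirms traditional authority)
    "climate change": (1.0, 0.0),      # regulation condemns commerce
    "gun control": (-1.0, 0.0),        # crediting the risk affirms self-reliance
    "drug distribution": (0.0, 1.0),   # crediting the risk affirms traditional norms
}

def perceived_risk(individualism, hierarchy, hazard, baseline=0.5):
    # Baseline judgment shifted by worldview-meaning congruence, clipped to [0, 1].
    anti_market, pro_tradition = meanings[hazard]
    expressive = -individualism * anti_market + hierarchy * pro_tradition
    return min(1.0, max(0.0, baseline + 0.4 * expressive))

# An individualist dismisses climate risk; an egalitarian credits it.
print(round(perceived_risk(1.0, 0.0, "climate change"), 2),
      round(perceived_risk(-1.0, 0.0, "climate change"), 2))

# The same individualist credits the risk that gun control leaves people defenseless.
print(round(perceived_risk(1.0, 0.0, "gun control"), 2))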

This account of emotion doesn’t see its function as a heuristic one.  That is, emotions don’t just enable a person to latch onto a position in the absence of time to acquire and reflect on information.  Rather, as a distinctive faculty of cognition, emotions perform a unique role in enabling her to identify the stance that is expressively rational for someone with her commitments.  Without the contribution that emotion makes to her powers of expressive perception, she would be lacking this vital incident of rational agency, no matter how much information, no matter how much time, and no matter how much computational acumen she possessed.

 


Reader Comments (7)

==> Persons who subscribe to an egalitarian ethic, for example, have been shown to be particularly sensitive to environmental and technological risks, the recognition of which coheres with condemnation of commercial activities that generate distinctions in wealth and status. Persons who hold individualist values, in contrast, tend to dismiss concerns about global warming, nuclear waste disposal, food additives, and the like--an attitude that expresses their commitment to the autonomy of markets and other private orderings (Douglas, 1966). Individualistic persons worry instead about the risk that gun control--a policy that denigrates individualist values--will render law-abiding citizens defenseless (Kahan, Braman, Gastil, Slovic, & Mertz, 2007). Persons who subscribe to hierarchical values worry about the dangers of drug distribution, homosexuality, and other forms of behavior that defy traditional norms (Wildavsky & Dake, 1990). ==>

Dan -

Is there some taxonomy of people along your axes of "values" and "ethics" that are not, by definition, associated with ideological or political or cultural frameworks? The notion that some people could be characterized as generically "egalitarian" as opposed to "individualistic," in contrast to others, and divorced from context that makes those "values" or "ethics" meaningful, seems problematic to me.

Your description above seems to me to imbue people with an "ethic" or "value" in some fashion independently from their cultural influences, as if those traits are genetic, or consistent across issues. As much as I read these arguments, IMO, they fail to take a full account of causality.

My favorite example is the reaction of Republicans to the healthcare mandate, which originally was seen by those with "individualistic values" as an expression of "personal responsibility" but which morphed into "government overreach,"...nay..."tyranny" as soon as the context changed toward implementation in the ACA. We can think of myriad other issues where huge blocs of people who might be identified as sharing an "ethic" switched stances on issues that putatively express that "ethic," contingent on the context (states' rights is another favorite, as seen in Bush v. Gore, where entire groups of people switched their orientation on states' rights).

To burrow down a bit:

==> Persons who hold individualist values, in contrast, tend to dismiss concerns about global warming, nuclear waste disposal, food additives, and the like. ==>


This seems to suggest that there is some characteristic of these issues that distinguishes them as being similar in kind and distinct from issues of other kinds. That doesn't seem very realistic, IMO. What links these issues is more, IMO, a cultural identification than some idiosyncratic characteristic of those issues as distinguished from others. They are all risks. They are linked to the environment, but no doubt we would find those who test out as sharing "individualist values" but who are nonetheless VERY concerned about environmental issues - but a different set of environmental issues that are not, a priori, tied to an ideological framework.


These three different and distinct theories of risk perception are certainly interesting and useful for examining all the different mechanisms that could potentially be in play, but an approach that considers them to be somehow mutually exclusive or singularly explanatory seems rather unrealistic, IMO.

October 15, 2016 | Unregistered CommenterJoshua

@Joshua--

This was written when I was in middle school & much more under the spell (a useful one; I was an ugly frog before that) of Douglas & Wildavsky. I'd say that I'm much more sensitive now to how contingent social influences invest various putative risk sources with the social meanings that generate the information-processing effects that this framework features. See, e.g., our paper on Zika & antagonistic memes.

But that said, the memes & other dynamics do seem to push the putative risk sources into the template. That is, individualistic sorts do seem to become risk skeptical, once they do, b/c the source has become imbued w/ meanings that make it an effective symbol of what those with those worldviews admire. Likewise, egalitarian communitarians happily ignore all manner of technological risk, but when an activity has become imbued with meanings that make it symbolic of hierarchy, domination, selfishness & the like, they see it as lethal.

The basic predispositional scheme is fine & emotions do, I think, mediate the predispositions.

The selectivity of the dynamics, though, is remarkable & way underremarked.

October 15, 2016 | Registered CommenterDan Kahan

Thanks for this Dan,
At the level where we work, neuron by neuron, all these theories fit together nicely. Emotions are required to build the mostly maximized Bayesian decider. The decider is neither rational nor irrational in a large scale sense but is composed of lots of tiny rational rules. Like Wiles' proof of Fermat's last theorem, the decider is composed of rational chemical rules but in this case there are billions of such rules, so it is hard to understand what the rules are doing and how small changes in certain mostly unknowable rules yield an emergent change in a particular belief. We are working on understanding the connection from the behavior of a few neurons to the coupled output of 10s of millions of neurons. We understand the cellular level details. The emergent behavior is harder to understand.
One of the subtleties of the emergent rational behavior is that such behavior seems to include rules based on our lives but of which we know nothing consciously.

October 15, 2016 | Unregistered CommenterEric Fairfield

"Is there some taxonomy of people along your axes of "values" and "ethics" that are not, by definition, associated with ideological or political or cultural frameworks?"

I think they're just convenient labels for people who answer the survey questions a particular way. I've said before I don't think the descriptions are particularly accurate.

"These seems to suggest that there is some characteristic of these issues that distinguishes them as being similar in kind and distinct from issues of other kinds. That doesn't seem very realistic, IMO."

It's because the solutions proposed to them all are instances of state regulation standing in the way of technological and economic progress without justification. Just as the reason the egalitarians support them all is that the problems are examples of unfettered technological and economic progress taking risks with people's lives (and nature) without proper regulation. The egalitarian sees people as naturally greedy and dangerous, and in need of control. State regulation by a government of the enlightened elite (i.e. people like them) is needed. Individualists see industry and economics as inherently good -- people providing the solutions to all humanity's problems -- and state regulation as getting in the way of it.

The division goes back to the origins of the terms "left" and "right" in the French revolutionary parliament. The "left" was the side the "egalitarian" Robespierre and the "Committee of Public Safety" sat on. Kinda interesting that they called it that, eh? :-)

October 15, 2016 | Unregistered CommenterNiV

@NiV & @Joshua--

The axes, the labels -- just the risk-perception/motivated-reasoning equivalent of Bohr-Rutherford atoms.

The ones who don't respond this way? They are the ones high in science curiosity, of course.

October 15, 2016 | Registered CommenterDan Kahan

We are probing the taxonomy problem. The first serious discussion is later this week. Framing the mathematics as a taxonomy problem seems like a good way to do things.
In contrast to a framing of the Bohr Rutherford atom, I think (subject to serious revision as needed) that a more useful idea might be to think of a particular ethical decision as equivalent to having fur. There are lots of ways that animals produce fur. The taxonomy of these ways exists but is not expressible as a two dimensional graph. The graph has lots more dimensions all leading to fur. Many of these dimensions are not well understood. For instance, why does a single nucleotide change in a region of DNA that does not code for fur result in the loss of fur?
For 'fur' substitute any ethical or value decision that fits.

October 15, 2016 | Unregistered CommenterEric Fairfield

@Eric--

I'm all fur that.

October 16, 2016 | Registered CommenterDan Kahan
