
"Tragedy of the Science-Communication Commons" (lecture summary, slides)

Had a great time yesterday at UCLA, where I was afforded the honor of giving a lecture in the Jacob Marschak Interdisciplinary Colloquium on Mathematics and Behavioral Science. The audience asked lots of thoughtful questions. Plus I got the opportunity to learn lots of cool things (like how many atoms are in the Sun) from Susanne Lohmann, Mark Kleiman, and others.

I believe they were filming and will upload a video of the event. If that happens, I'll post the link. For now, here's a summary (to the best of my recollection) & slides.

1. The science communication problem & the cultural cognition thesis

I am going to offer a synthesis of a body of research findings generated over the course of a decade of collaborative research on public risk perceptions.

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy-relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of the HPV vaccine, and the never-ending dispute over the efficacy of gun control.

In addition to being annoying (in particular, to scientists—who feel frustratingly ignored—but also to anyone who believes self-government and enlightened policymaking are compatible), the science communication problem is quite peculiar. The factual questions involved are complex and technical, so maybe it should not surprise us that people disagree about them. But beliefs about them are not randomly distributed. Rather they come in familiar bundles (“earth not heating up . . . ‘concealed carry’ laws reduce crime”; “nuclear power dangerous . . . death penalty doesn’t deter murder”) that in turn are associated with the co-occurrence of various individual characteristics, including gender, race, region of residence, and ideology (but not so much income or education), that we identify with discrete cultural styles.

The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk.

2.  A Model

A Bayesian model of information processing can be used heuristically to make sense of the distinctive features of any proposed cognitive mechanism. In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition than with some alternative).
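For concreteness, here is a minimal sketch of the odds form of Bayes's rule just described (the numbers are invented purely for illustration):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes's rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds to a probability."""
    return odds / (1 + odds)

# Illustration: prior odds of 1:3 that some proposition is true (P = 0.25);
# new evidence is 6x more consistent with the proposition than with its negation.
posterior_odds = update_odds(1 / 3, 6.0)
print(odds_to_prob(posterior_odds))  # posterior odds 2:1 -> probability ~0.667
```

Note that for an ideal Bayesian the likelihood ratio depends only on the evidence, never on her priors; the mechanisms discussed next are departures from exactly that feature.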

A person experiences confirmation bias when she selectively searches out and credits new information conditional on its agreement with her existing beliefs. In effect, she is not updating her prior beliefs based on the weight of the new evidence; she is using her prior beliefs to determine what weight the new evidence should be assigned. Because of this endogeneity between priors and likelihood ratio, she will fail to correct a mistaken belief or fail to correct as quickly as she should despite the availability of evidence that conflicts with that belief.
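That endogeneity can be made concrete with a toy simulation (the discount parameterization here is my own invention, not part of any CCP model): an agent who shrinks the likelihood ratio of belief-contradicting evidence barely moves off a mistaken prior, while an unbiased updater crosses over quickly.

```python
def biased_update(prior_odds, true_lr, discount=0.9):
    """Confirmation-biased update: evidence that cuts against the prior
    has its likelihood ratio shrunk toward 1 (i.e., toward no weight)."""
    contradicts_prior = (true_lr > 1) != (prior_odds > 1)
    effective_lr = true_lr ** (1 - discount) if contradicts_prior else true_lr
    return prior_odds * effective_lr

# Mistaken prior (odds 1:9 against a true claim), then five exposures to
# evidence favoring the claim, each with a true likelihood ratio of 3.
odds_unbiased = odds_biased = 1 / 9
for _ in range(5):
    odds_unbiased *= 3.0                          # weighs evidence at face value
    odds_biased = biased_update(odds_biased, 3.0)
print(odds_unbiased, odds_biased)  # ~27 vs ~0.19: the biased agent barely budges
```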

The cultural cognition model posits that individuals have “cultural predispositions”—that is, some tendency, shared with others who hold like group commitments, to find some risk claims more congenial than others. In relation to the Bayesian model, we can see cultural predispositions as the source of individuals’ priors. But cultural predispositions also shape information processing: people more readily search out (or are more likely to be exposed to) evidence congenial to their cultural predispositions than evidence uncongenial to them; they also selectively credit or discredit evidence conditional on its congeniality to their cultural predispositions.

Under this model, we will often see what looks like confirmation bias, because the same thing that is causing individuals’ priors—cultural predispositions—is shaping their search for and evaluation of new evidence. But in fact, the correlation between priors and likelihood ratio in this model is spurious.

The more consequential distinction between cultural cognition and confirmation bias is that with the former people will be not only stubborn but disagreeable. People’s cultural predispositions are heterogeneous. As a result, people with different values will start with different priors, thereafter engage in opposing forms of biased search for confirming evidence, and selectively credit and discredit evidence in opposing patterns reflective of their respective cultural commitments.
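To see the polarization this implies, here is a toy sketch (all parameters invented for illustration): two agents apply Bayesian updating to the same evenly mixed stream of evidence, but each discounts the likelihood ratio of culturally uncongenial items.

```python
import math

def cultural_update(log_odds, log_lr, disposition, damp=0.25):
    """Update in log-odds. Congenial evidence (same sign as the agent's
    cultural disposition) is credited fully; uncongenial evidence is damped."""
    congenial = (log_lr * disposition) > 0
    weight = 1.0 if congenial else damp
    return log_odds + weight * log_lr

# Evenly mixed evidence: alternating pieces for and against a risk claim,
# each with likelihood ratio 3 (or 1/3) for an unbiased observer.
evidence = [math.log(3), -math.log(3)] * 10

log_odds_a = log_odds_b = 0.0  # both agents start undecided (odds 1:1)
for log_lr in evidence:
    log_odds_a = cultural_update(log_odds_a, log_lr, disposition=+1)
    log_odds_b = cultural_update(log_odds_b, log_lr, disposition=-1)
print(log_odds_a, log_odds_b)  # one drifts strongly positive, the other negative
```

Balanced evidence thus produces divergence rather than convergence, which is the signature pattern of the science communication problem.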

If this is how people behave, we will see the peculiar pattern of group conflict associated with the “science communication problem.”

3. Nanotechnology: culturally biased search & assimilation

CCP tested this model by studying the formation of nanotechnology risk perceptions. In the study, we found that individuals exposed to information on nanotechnology polarized relative to uninformed subjects along lines that reflected the environmental- and technological-risk predispositions associated with their cultural groups. We also found that the observed association between “familiarity” with nanotechnology and the perception that its benefits outweigh its risks was spurious: both the disposition to learn about nanotechnology before the study and the disposition to react favorably to information were caused by the (pro-technology) individualistic worldview.

This result fits the cultural cognition model. Cultural predispositions toward environmental and technological risks predicted how likely subjects of different outlooks were to search out information on a novel technology and the differential weight  (the "likelihood ratio," in Bayesian terms) they'd give to information conditional on being exposed to it.

4. Climate change

a. In one study, CCP found that cultural cognition shapes perceptions of scientific consensus. Experiment subjects were more likely to recognize a university-trained scientist as an “expert” whose views were entitled to weight—on climate change, nuclear power, and gun control—if the scientist was depicted as holding the position that was predominant in the subjects’ cultural group. In effect, subjects were selectively crediting or discrediting (or modifying the likelihood ratio assigned to) evidence of what “expert scientists” believe on these topics in a manner congenial to their cultural outlooks. If this is how they react in the real world to evidence of what scientists believe, we should expect them to be culturally polarized on what scientific consensus is. And they are, we found in an observational component of the study. These results also cast doubt on the claim that the science communication problem reflects the unwillingness of one group to abide by scientific consensus, as well as any suggestion that one group is better than another at perceiving what scientific consensus is on polarized issues.

b. In another study, CCP found that science comprehension magnifies cultural polarization. This is contrary to the common view that conflict over climate change is a consequence of bounded rationality. The dynamics of cultural cognition operate across both heuristic-driven “System 1” processing and reflective “System 2” processing. (The result has also been corroborated experimentally.)

5.  The “tragedy of the science communication commons”

The science communication problem can be understood to involve a conflict between two levels of rationality. Because their personal behavior as consumers or voters is of no material consequence, individuals don’t increase their own exposure to harm or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level for individuals to attend to information in a manner that reinforces their connection to their group. This is collectively irrational, however, for if everyone forms his or her perception of risk in this way, democratic policymaking is less likely to converge on policies that reflect the best available evidence.
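The asymmetry of stakes can be put in back-of-the-envelope terms (every number here is invented purely to illustrate the structure of the argument; only the relative magnitudes matter):

```python
# Invented illustrative magnitudes -- only their relative sizes matter.
P_PIVOTAL = 1e-7            # chance one citizen's belief changes policy
HARM_IF_POLICY_WRONG = 1e6  # societal cost of policy built on a mistake
REPUTATION_COST = 100.0     # personal cost of dissenting from one's group

# Expected personal cost of holding the group-congenial (possibly wrong) view:
cost_of_conforming = P_PIVOTAL * HARM_IF_POLICY_WRONG   # tiny
# Personal cost of holding the accurate but group-discordant view:
cost_of_accuracy = REPUTATION_COST                      # large

print(cost_of_conforming < cost_of_accuracy)  # True: conforming is individually
# rational -- yet if everyone conforms, the collective harm is realized anyway.
```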

The solution to this “tragedy of the science communication commons” is to neutralize the conflict between the formation of accurate beliefs and group-congenial ones. Information must be conveyed in ways—or conditions otherwise created—that avoid putting people to a choice between recognizing what’s known and being who they are.

You will want me to show you how to do that, particularly on climate change. But I won’t. Not because I can’t (see these 50 slides flashed in 15 seconds). Rather, the reason is that there’s no risk you’ll fail to ask me what I have to say about “fixing the climate change debate” even if I don’t address that topic now, whereas if I do address it, the risk is high that you’ll neglect to ask another question I think is very important: how does this sort of conflict between recognizing what’s known and being who one is happen in the first place?

Such a conflict is pathological. It’s bad. And it’s not the norm: the number of issues on which the entanglement of positions with group-congenial meanings could happen, relative to the number on which it does, is huge. If we could identify the influences that cause this pathological state, we likely could figure out how to avoid it, at least some of the time.

The HPV vaccine is a good illustration. It generated tremendous controversy because it became entangled in divisive meanings relating to gender roles and to parental sovereignty versus collective mandates of medical treatment for children. But there was nothing necessary about this entanglement; the HBV vaccine is likewise aimed at a sexually transmitted disease, was placed on the universal childhood-vaccination schedule by the CDC, and now has coverage rates of 90-plus percent year in & year out. Why did the HPV vaccine not travel this route?

The answer lies in the marketing strategy followed by Merck, the manufacturer of the HPV vaccine Gardasil. Merck did two things that made it highly likely the vaccine would become entangled in conflicting cultural meanings: first, it decided to seek fast-track approval of the vaccine for girls only (only females face an established “serious disease” risk—cervical cancer—from HPV); and second, it orchestrated a nationwide campaign to press for adoption of mandatory vaccination policies at the state level. This predictably provoked conservative religious opposition, which in turn provoked partisan denunciation.

Neither decision was necessary. If the company hadn’t pressed for fast-track consideration, the vaccine would have been approved for males and females within 3 years (it took longer to get approval for males because of the controversy that followed approval of the female-only version). In addition, even without state mandates, universal coverage could have been obtained through commercial and government-subsidized insurance. That outcome wouldn’t have been good for Merck, which wanted to lock up the US market before GlaxoSmithKline obtained approval for its HPV vaccine. But it would have been better for our society, because then, instead of learning about the vaccine from squabbling partisans, parents would have learned about it from their pediatricians, in the same way that they learn about the HBV vaccine.

The risk that Merck’s campaign would generate a political controversy that jeopardized acceptability of the vaccine was forecast in empirical studies. It was also foreseen by commentators as well as by many medical groups, which argued that mandatory vaccination policies were unnecessary.

The FDA and CDC ignored these concerns, not because they were “in Merck’s pocket” but because they were simply out of touch. They had no mechanism for assessing the impact that Merck’s strategy might have or for taking the risks this strategy was creating into account in determining whether, when, and under what circumstances to approve the vaccine.

This is a tragedy too. We have tremendous scientific intelligence at our disposal for promotion of the common welfare. But we put the value of it at risk because we have no national science-communication intelligence geared to warning us of, and steering us clear of, the influences that generate the disorienting fog of conflict that results when policy-relevant facts become entangled in antagonistic cultural meanings.

6. A “new political science”

Cultural cognition is not a bias; it is integral to rationality.  Because individuals must inevitably accept as known by science many more things than they can comprehend, their well-being depends on their becoming reliably informed of what science knows. Cultural certification of what’s collectively known is what makes this possible.

In a pluralistic society, however, the sources of cultural certification are numerous and diverse.  Normally they will converge; ways of life that fail to align their members with the best available evidence on how to live well will not persist. Nevertheless, accident and misadventure, compounded by strategic behavior, create the persistent risk of antagonistic meanings that impede such convergence—and thus the permanent risk that members of a pluralistic democratic society will fail to recognize the validity of scientific evidence essential to their common welfare.

This tension is built into the constitution of the Liberal Republic of Science. The logic of scientific discovery, Popper teaches us, depends on the open society. Yet the same conditions of liberal pluralism that energize scientific inquiry inevitably multiply the number of independent cultural certifiers that free people depend on to certify what is collectively known.

At the birth of modern democracy, Tocqueville famously called for a “new political science for a world itself quite new.”

The culturally diverse citizens of fully matured democracies face an unprecedented challenge, too, in the form of the science communication problem. To overcome it, they likewise are in need of a new political science—a science of science communication aimed at generating the knowledge they need to avoid the tragic conflict between converging on what is known by science and being who they are.


Reader Comments (15)

Good summary!

"In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition than with some alternative)."

There is another element to this that the usual Bayesian formulation doesn't address, and that is the statistical model by which one assigns probabilities to outcomes under each hypothesis. The way the usual description goes, the odds are known, the likelihood ratio is clearly defined, and deviation from an update in line with that is considered an error (if not pathological).

But people differ too in their models for assessing likelihood. You could say that it was a part of their priors. So if one person believes that the two outcomes have very different probabilities under each hypothesis, but a second person believes they would have the same probability, then both can see the same evidence, perform a perfect Bayesian update, and yet one finds it totally convincing and the other is unmoved.

Consider, for example, trust in experts. One person has a mental model of experts under which it is extremely unlikely that they'll say something incorrect. So what an expert says is considered strong evidence. A second person has a mental model in which experts are no less likely than an average well-educated Joe Public to make an error. Experts are mostly relying on other experts too, just like Joe. Perhaps they have a more Kuhnian view of science, or maybe they're aware of numerous past cases where experts made errors on this subject. So in Bayesian terms they should put a lot less weight on the evidence.

Is either of them incorrect to do so? There is, presumably, an actual objective probability for experts to make true statements when speaking on this subject. Do we say that the person closest to the true probability is the more justified? Or the one with the most evidence to back their view up?
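To make the two mental models concrete (reliability numbers invented for illustration): two agents each perform a flawless Bayesian update on the same expert testimony, but they assign it different likelihood ratios because they hold different models of expert reliability.

```python
def posterior_odds(prior_odds, p_assert_if_true, p_assert_if_false):
    """Bayesian update on testimony: the likelihood ratio is the probability
    the expert asserts the claim if it's true vs. if it's false."""
    return prior_odds * (p_assert_if_true / p_assert_if_false)

prior = 1.0  # both agents start at even odds on the claim

# Agent 1's model: experts very rarely err -> testimony is strong evidence.
trusting = posterior_odds(prior, p_assert_if_true=0.95, p_assert_if_false=0.05)

# Agent 2's model: experts barely beat an educated layman -> weak evidence.
skeptical = posterior_odds(prior, p_assert_if_true=0.55, p_assert_if_false=0.45)

print(trusting, skeptical)  # 19.0 vs ~1.22: same evidence, same rule, very
# different posteriors, with no updating error by either agent.
```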

"Individuals don’t increase their risk or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level for individuals to attend to information in a manner that reinforces their connection to their group."

This is personal anecdote, but in a lot of the social groups I move in there is a significant social cost to being a sceptic. My response is not to conform my views to the group, but instead to keep quiet about it when I'm with those groups. (Not so much to avoid the social penalties myself as to avoid them being inflicted on my associates.)

I'm not convinced that social cost is the primary motivation, since there's such an easy way to avoid it. I feel that people form voluntary social groups because they are like-minded, not that they become like-minded in order to fit into the social group. And when group membership is involuntary for certain sorts of groups, people are far more likely to clam up than to change their minds. If a Democrat went to a Republican convention (perhaps with friends, perhaps on business), would they become more Republican in their views to fit in?

Other possibilities include trust, access to information, requirements of consistency, thresholds for evidence, and conceptual availability. An expert is judged on their record, and one who has come to obviously wrong conclusions in the past (i.e. holds an ideology they consider incorrect) is presumably less trusted to make good judgements on other issues. As part of a certain social network you will rapidly hear of arguments and evidence supporting one's views, and debunking evidence contradicting them. Frequently, ideologies are logically interdependent, so believing in one basic principle may imply a whole load of other statements in many other areas. People will accept a conforming belief at a lower threshold than one that disconfirms their beliefs (or that is controversial and forms the subject of common arguments). The latter will get examined much more closely. And certain general principles will be mentally more accessible for application if they are core to a group's beliefs. For example, rabid Bayesians will see applications of Bayesian ideas everywhere, while someone who believed in mystical intuition would likewise find their own mental methods and approaches more obvious. They have a different set of mental tools, and so go about the construction task in a different way.

I think I mentioned this before, but it seems to me that 'social cost' is a specifically communitarian idea, and one that individualists would probably give short shrift to. Perhaps different groups have different reasons?

March 9, 2013 | Unregistered CommenterNiV

1. Yes, agree 100% (or more) about importance of recognizing that Bayesianism doesn't tell you how to figure out the likelihood ratio; it presupposes that. That people will vary in assessing the LR based on cultural predispositions is, to me, the lynchpin of CC of risk. I don't think there are "right" or "wrong" ways to determine the likelihood ratio independent of what one's ends are. But I'd say that most people would consider it a misadventure to discover they were unconsciously adjusting their LR in order to reach results congenial to their cultural values, particularly if something important were at stake (say, health of their daughter in the HPV context).

2. The "social cost" point is in the nature of ... a model! For sure, the account of why it is "rational" for individuals to form group-congruent beliefs does not describe a conscious process; that would be incoherent. I think it is plausible to believe that people in general will form beliefs in ways that enable them to get by better in life. But I don't think it is possible to observe directly the mechanisms I'm positing (at that level); if positing them makes what can be observed more predictable and manageable -- I'm satisfied.

3. Don't individualists care about externalities? In most of these settings, the claim is that the costs to health or welfare associated with some risk are not being internalized. The issue is only whether one perceives that claim to be true or not -- not whether it would justify action of some sort. Disagree?

March 9, 2013 | Registered CommenterDan Kahan

"Don't individualists care about externalities?"

Yes, but they often count different things as externalities.

"In most of these settings, the claim is that the costs to health or welfare associated with some risk are not being internalized."

Sometimes the claim is that costs to health or welfare are (or are not) acknowledged as risks, sometimes that the risks are (or are not) realised. Sometimes that counter-costs are (or are not) being ignored.

A cost/risk is just a motivation to care. Sometimes people disagree, but don't consider it to be worth arguing about. But when there are costs/risks involved, then strong feelings are invoked. It's not that people necessarily think complaining about it will get them anywhere, but they feel the urge to do so anyway.

To take the climate change case as an example - one side sees it as corporate greed versus the environment, they see no downside to hurting corporations, but enormous external costs associated with environmental damage. The other side see it as the global economy versus one of the recurring Malthusian scares, and see enormous external costs particularly for developing nations from the economic damage of removing access to cheap energy, but at most small, speculative and uncertain costs set against them.

As you can see, one's views on economics are apt to influence one's views on risk, and what counts as an externality. An individualist will see different externalities to a communitarian. For an individualist, loss of personal liberty and opportunity (for people generally as well as themselves) might be the motivation. For a communitarian, respect within their community, their self-image as someone who cares, and their taking part in the joint effort to save the world might be the issue.

The individualist has been steeped in the harsh logic of economic trade-offs and paying for one's living, and thinks in those terms. The communitarian has equally been submerged in empathy, guilt, and self-sacrifice, and thinks in those terms. They come to a different conclusion, sometimes, because they think differently, and have different priorities and values. They'd come to the same conclusion if they were all alone in the world, and nobody would ever know what they thought.

I don't think there's any doubt that people of either cultural inclination will sometimes say to themselves "what would my friends think?" But that's not all there is to it. That's all I meant.

March 9, 2013 | Unregistered CommenterNiV

Dan, to rephrase: Strategic behavior can create the persistent risk of antagonistic meanings that impede convergence—and thus the permanent risk that members of a pluralistic democratic society will actively choose risk management strategies they are comfortable with and may fail to recognize the validity of other strategies that scientific evidence shows to be more likely essential to their common welfare.

This is why I disagree with Joshua about the relevance of the history of climate change with the UN. One can almost word for word substitute the UN, climate change, etc. for your Merck example.

In the link to the WUWT post on the cooling scare are the beginnings of the proposition of policy that individualists would oppose, and the strategic employment of scare tactics, advocacy, and "one world government." Looking at climate change from this perspective, and noting that the original proposed climate change treaty mandated funding for the UN, it is little wonder that the US Senate voted 95-0 against Kyoto.

Unlike the Merck case, for CC we do not have a political entity that can resolve the issue by proclamation. This difference is why I think the history is important to the solution. Without disaggregation of policy and the UN, one should not expect the Senate to agree to such a treaty in our lifetimes. The UN and its policies are a cause célèbre for a large segment of the US voting public.

Another needed disaggregation is that of the "doom and gloom" neo-Malthusians from science or policy. It is their choice. However, trying to be both will mean that the public airing by mass communication will likely automatically generate that core group that will begin the poisoning of the discussion from the start. Those who insist on both will also provide the ammunition that the core group will need to "convert" others. The "doom and gloomers" will gravitate towards the policies that will ensure a vocal opposition. This goes back to one of your earlier posts about the structure of the SoSC and reliable experts. It is my contention that once one is both scientist and policy advocate, one needs to be excluded. This is what people are doing anyway. Examples abound on sites like WUWT, or the NRA for that matter.

You would not want Stephen Schneider, the "Club of Rome," and the UN to propose climate change action. But that is what happened.

The question becomes the same that Johnson&Johnson had to answer with the Tylenol scare.

March 10, 2013 | Unregistered CommenterJohn F Pittman

@NiV: to what extent does "count different things as externalities" refer to differences in (a) what people value, and to what extent differences in (b) what they *see* or believe about facts relevant to a common framework of valuation (or at least an overlapping portion of the diverse ones they have)?

If you are focusing on (b), I agree w/ you. That is, I would anticipate that even when all manner of hierarchs & individualists agree w/ egalitarian collectivists that a Pigouvian tax is the right mechanism for handling any harm externalized by a commercial activity to 3d parties (i.e., to parties other than the consumers of whatever the commercial activity is producing), the former might be less likely to perceive that there *is* an externality, or to see it as being as great in magnitude, as the latter.

But if you are saying (a), I'm inclined to disagree. The *main* debate is along the lines I described in addressing (b): one group says that saturation of the atmosphere w/ carbon by burning of fossil fuels is imposing or will be imposing very high 3d party costs; the other group is *contesting* the factual predicates of that claim.

There are some people saying "we should reduce carbon emissions b/c capitalism is evil -- climate change is our big chance to strike a blow for Marxism" etc but they are really fringe. I also don't see any big group on the other side saying "not only 'better dead than red' but 'better dead than pay a tax to internalize the climate change cost associated with my SUV!!!!'"

Now those on both sides do believe different things about the moral quality of markets & commerce. Those things are likely contributing to their respective factual perceptions. Indeed, *that* those commitments are influencing the *other* side's perceptions is part of why each believes that the other is being disingenuous when it focuses its attention on the predicate facts of some regulatory response -- any sort -- to climate change.

March 10, 2013 | Unregistered Commenterdmk38

Rant alert--

I'm not convinced that social cost is the primary motivation, since there's such an easy way to avoid it. I feel that people form voluntary social groups because they are like-minded, not that they become like-minded in order to fit into the social group.

I also don't think that social cost is the primary motivation - although from that I reach a different conclusion.

Look, for example, at the strength of the argumentation done anonymously on blogs. There is no social cost, but if anything the anonymity only heightens the polluted, vitriolic environment.

I don't think it is extrinsic factors that are the primary motivation. People identify with groups as the result of their formative influences. Once their identifications are formulated, they filter evidence in such a way as to reinforce their identification. Their motivation is primarily to be right (intrinsically motivated). They are trying to protect their identity for internal reasons. Sure, some concern about fitting in with their group is at play - but I don't think it is the primary motivation. Of course this will vary - some people are more motivated by intrinsic reasons and some people are more motivated by extrinsic reasons.

I don't think that these categories of communitarian, or whatever, really reflect fundamental differences in values or how people reason, interpret risk, etc. I see commonality in how people reason. I see commonality in values. Once again, I point to directionality in the cause-and-effect.

The fact that people reach different conclusions even though (IMO) they share values and attributes of reasoning is rooted in the patterns in how people express those values - not in the values themselves or the reasoning itself. The way that those factors shape conclusions is pre-defined by cultural or social patterns.

In other words:

As you can see, one's views on economics are apt to influence ones views on risk, and what counts as an externality.

Not how I see it. I believe it is how one identifies, culturally, with specific views on economics that influences values on risk and what counts as an externality. I see a difference in directionality. I don't think the direction is that views on economics shapes views on risk. I think it is that cultural self-identification, self-driven cultural identification, shapes both.

This is why you see so much logical incoherence in the arguments that many people make - my favorite example being how "conservatives" selected their economic "values" and "views on risk" on the healthcare mandate - putting together contradictory beliefs in logically inconsistent ways. At one point in time, their "values" and "views on economics" led them to support mandates. And a few years later, their "values" and "views on economics" led them to see mandates as un-American totalitarianism. But certainly, the pattern isn't unique to "conservatives," and it isn't limited to arguments about healthcare; that kind of logical inconsistency is abundant whenever you see controversies that set off ideological identifications. We could come up with a list of many examples, such as views on states' rights.

Ideological identification trumps values; people with the same values (on the different sides of the divide) construct oppositional arguments. And further, people of one particular cultural identification can interpret risk in completely contradictory ways from a logical perspective - because it isn't logic that is driving their conclusions, but instead cultural identifications. I will feel one way or another about a healthcare mandate because in one situation being anti-mandate protects my self-identity and in another situation being pro-mandate protects my self identity.

Issues serve as ways for us to fortify our sense of selves. "Values," IMO, have little to do with it. I share many, many basic values with people who I disagree with on almost all political issues. We take issues and turn them into Rorschach ink blots. We see our "values" in them - when in reality there is no intrinsic value in the issue. There is no "value" in climate change. We read values into the issue as it meets our needs. We can approach any particular side-argument from whichever side we want. Take accounting for externalities of emissions from fossil fuels. If we want, we could consider that issue to reflect a stance on "responsibility," such as might appeal to the self-identification of someone who calls themselves a "conservative." Or it might reflect a stance on "corporate welfare," as might appeal to someone who identifies as a "liberal." It is all about framing - not values. "Liberals" and "conservatives" share the value that people shouldn't get away with exploiting others. What I find particularly amusing is how combatants on both sides claim to be motivated by an interest in protecting the world's poor - because they uphold that value while those on the other side don't. I have met very few people, no matter what political persuasion, who are actually indifferent to exploitation of the world's poor.

March 10, 2013 | Unregistered CommenterJoshua

Dan, you say: "Now those on both sides do believe different things about the moral quality of markets & commerce." But isn't that just what NiV was talking about under your (a), differences in what people value? Of course you can exaggerate it so that it sounds absurd, or, as you say, "fringe": "climate change is our big chance to strike a blow for Marxism", etc., just as, as you say, not many are going to prefer death to paying more for gas. But don't those extremes sound like a way of avoiding the much deeper and harder problem of differences in underlying values, and the ramifications that ensue from such differences? Ramifications involving not just different views on what counts as an externality and what it costs (not to mention the uncertainties in the costs of attempting to internalize such externalities), but the whole issue of cultural/political groupings, which may turn on much more than mere group loyalty?

March 11, 2013 | Unregistered CommenterLarry


Perhaps I stopped answering NiV in the middle ... The summation was supposed to be -- "yes, they have different values & value different things; that's what makes them see different facts; but they are having & understand themselves to be having an argument only about whether CO2 emissions from fossil fuels are externalizing a very mundane and straightforward 'cost' (drowning under rising sea water, baking in a dust bowl, etc.) to a 3d party -- a question whose answer is both analytically independent of the differences in values & morally consequential under some overlapping portion of both of their opposing moral valuations of markets, the environment, communal living, etc."

I am guessing you would still put the same question to me. In fact, I think I agree with you & am inclined to disbelieve my answer & ask me the same question, but in a less respectful way. I'd say to me:

"Do you *really* think they only care about something like *that*? That's *it*? C'mon! Don't be obtuse! Don't play dumb! They are arguing about the best way of life! They are arguing about what's more beautiful and admirable -- human beings making blank meaningless empty space into something that accommodates and reflects the goals and intentions of self-defining rational actors; or human beings who recognize, perceive the meaning all around them -- in the unmolested, infinitely intricate processes of nature, in the needs of other people -- and who adopt toward those things stances that reflect appropriate reverence for them."

To which the person I am who wrote the answer to NiV replies:

"Oh please. The people we are *actually* talking about are not nearly that interesting -- or nearly that dangerous and obnoxious. Each is trying only to make sure that he or she & everyone else have food on the table -- not to ram his or her values down the other's throat. What sort of weird conspiracy of performance art would we have to imagine they have agreed to in order to believe that they are philosophical/religious zealots speaking in a code in which 'earth heats up' means 'love nature, love others' etc. & 'just a natural cycle, nothing to worry about' means 'the only thing of any value is the meaning that autonomous individuals create through private orderings' etc.? That's insane. *You* are insane!"

At which point, both of myselves shake their heads in disgust at the other & walk away.

Can you try being both me's & see if you can make the conversation come out somewhere more satisfying?

March 11, 2013 | Registered CommenterDan Kahan

I think that, behind all the exaggeration in the two you's responses, there's a third you that asks, in those famous, sad words, "Can't we all just get along?" It's a good question, based upon an admirable sentiment, but I think that here too it's naive. This third you seems to be a figure waving his hand impatiently at the possibility of deeply divisive values -- hence the reliance upon caricature or poetic licence in describing those variant values -- and always trying to get us back to two things: a) Just the facts, ma'am (a nod to Dragnet fans still alive), and b) the "overlapping portion of both of their opposing moral valuations ". Those two things are fine in themselves, but the problem lies in the apparent need to dismiss just how significant and deep are those divisions -- i.e., how powerful and far-reaching are the non-overlapping portions. And this leads in turn to two big stumbling blocks for the whole project: 1) a need to believe that there is an unimpeachable source of "just the facts", which will be accepted all round if we only pitch it the right way, and 2) that people aggregate in cultural/political values coalitions solely out of group loyalty -- and so can be convinced of "the facts" if we simply avoid threatening their group -- rather than out of the intellectual coherence of deeply held values, that lead them to reject arguments that seem inadequate, incoherent, exaggerated, one-sided, etc. just in themselves, quite aside from their effect on one's cultural group.

I doubt if that's a more satisfying addition to the conversation of the you's, which to my mind seem both to be more concerned with exaggerating the position of the other in order to dismiss it. I understand exaggeration -- it can be used to make a point that might otherwise be hard to see. But it can also obliterate finer or more nuanced points that may be the source of the real problem.

March 11, 2013 | Unregistered CommenterLarry

@Larry: The two me's might be a bit overdramatic in their modes of speech. But I think there is a finer & more nuanced question in their exchange that you still aren't answering (I say, speaking as the 3d me you describe). You seem (to the 3d me) to be saying that the 2d me (if we could just get him or her not to exaggerate) is correct. But if the 2d me regards the 1st's argument more charitably (if we can imagine getting that version of me to stop being so bombastic, then we can imagine getting him or her to actually try to get the other me's point), how would or should he or she answer? Wouldn't the 2d me agree the 1st me is also correct?

March 11, 2013 | Registered CommenterDan Kahan

"Wouldn't the 2d me agree the 1st me is also correct?"

Yes, partially correct. What you1 is saying, in time-honored negotiator fashion, is, more or less: let's focus on the facts, and then let's focus on what we need to do that's in our mutual interest given the facts, setting aside, as a practical matter, larger issues of group loyalty though without abandoning such loyalty. What the second you is saying, however, toned down to less hi-falutin terms, is that the question of whether we're all going to drown under rising seas or bake in a dust bowl or both is not a factual question that's actually at issue here, though under the influence of bombasts on both sides it can sometimes appear that way. What's actually at issue are much less decisive matters of relative costs, benefits, values, goals, etc. that are not mere matters of fact and are not "analytically independent of the differences in values", and are part of the non-overlapping moral valuations. And you're right that I'm more in agreement with the toned down version of you2, who is not putting out "some weird conspiracy of performance art" but simply pointing out the knotted connections of ideas and values, and the inherent ambiguity of what counts as facts when we get involved in complex matters of far-reaching policy decisions.

March 11, 2013 | Unregistered CommenterLarry

As usual, I'm lost trying to parse the discussion...but...

@Larry: The two me's might be a bit overdramatic in their modes of speech.

If you look around the blogosphere, you will see the following viewpoints dominate:

That "realists" are indifferent to the plight of millions of starving children in the developing world. All they care about is attacking capitalism so they can establish a one-world-government and impose their totalitarian fantasies to make sure no one has any more money than anyone else.

That "skeptics" are indifferent to the plight of millions of starving children in the developing world. All they care about is amassing as much wealth as possible, and exploiting anyone different from themselves in order to achieve that goal.

Although widely "seen" (by opposing combatants) in the blogosphere - neither of these caricatures is even remotely accurate except for a tiny minority of the combatants. The reality is that most "realists" and "skeptics" share values, for the most part.

Opinions that you read in the blogosphere are, in the main, outliers - consider why "lurkers" outnumber commenters by perhaps an average of 9-1. Most people don't view the other side with such animosity, but the vitriol in the blogosphere does reflect a more widespread pattern - each side on these issues tends to react to caricatures of the other side, and in a desire to validate their own identity. As a result, they ignore or completely miss nuance in what the other side says and believes. Creating a demonic "other" out of those who reach different political conclusions serves a fundamental human need.

As an example, I'm struck by data that show much commonality in views on the ideal distribution of income across society even across strong political divisions. But the common argument is that "liberals" demand perfect equality, "envy" those with more money than them, are only seeking "handouts," want the gubmint to control all aspects of our lives, etc., while "conservatives" are indifferent to inequalities, don't care if greedy capitalists make money from exploitation and graft, think that people of different races, class, or sexuality don't deserve the same rights as themselves, etc.

For the most part, we share values. Tribalism leads us to create false perceptions of the values of others, and to create false distinctions between their values and our own values.

March 11, 2013 | Unregistered CommenterJoshua

Dan -

I thought about this when looking at your paper on cultural cognition and rape.

The reaction to Zerlina Maxwell's comments about acquaintance rape seems consistent with your findings.

March 11, 2013 | Unregistered CommenterJoshua

"to what extent does 'count different things as externalities' refer to differences in (a) what people value and to what extent diffs in (b) what they *see* or believe about facts relevant to a common framework of valuation (or at least an overlapping portion of the diverse ones they have)?"

What I was referring to was the way values and background knowledge influence our assessment of the facts. What often happens is that there are arguments on both sides regarding a particular point, and people judge whether points are important, relevant, determinative, etc. or not, influenced in part by their values, methods, and expectations. People choose particular definitions, take particular assumptions as justified, accept particular methods and sources as valid. They regard statements and steps as 'obvious' or not differently. They pick a particular subset of observations and build a narrative from them. 'Facts' depend on context more than most people think.

It's quite difficult to describe without getting into specific examples, which risks shifting the discussion from the reasons why people differ about the facts onto arguing about what the facts are.



March 11, 2013 | Unregistered CommenterNiV

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of HPV vaccine, and the never-ending dispute over the efficacy of gun control.

But it's not an issue of "valid, compelling, widely available science ..."; the issues quoted are political, not scientific! The climate debate isn't about whether or not the earth is warming (it is); it's about causality, Man or Nature, with much money to be made by "green" researchers and businesses if this issue can be successfully exploited. And that can only be done if Man can be made the Villain; this, of course, looks to be scientifically un-provable, given the cyclical climatic history of the Earth, and can thus be exploited for reasons noble or ignoble, ad nauseam.

The debate is needed precisely because Scientists are not God, and are not "Holier Than Thou." If the Science was indeed clear cut, there would likely be a good deal less resistance, but it is not as clear cut as has been implied. This is true of all the issues used as examples.

When any issue becomes politicized, Science effectively goes out the window of public debate. The public simply does not have access to anything scientific on any of the issues quoted; all we get is media propaganda, and many of us have learned not to believe it when "The lady doth protest too much, methinks!"

Respectfully, John

October 17, 2013 | Unregistered CommenterJohn Meshkoff
