Bounded rationality, unbounded out-group hate
Saturday, April 30, 2016 at 12:04AM
Dan Kahan

  By popular demand & for a change of pace ... a guest post from someone who actually knows what the hell he or she is talking about!

Bias, Dislike, and Bias

Daniel Stone

Read this! Or you are just a jerk, like all the other members of your stupid political party!

Thanks Dan K for giving me the chance to post here.  Apologies - or warning at least - the content, tone etc might be different from what's typical of this blog.  Rather than fake a Kahan-style piece,[1] I thought it best to just do my thing.  Though there might be some DK similarity or maybe even influence.  (I too appreciate the exclamation pt!)

Like Dan, and likely most/all readers of this blog, I am puzzled by persistent disagreement on facts.  It also puzzles me that this disagreement often leads to hard feelings.  We get mad at - and often end up disliking - each other when we disagree.  Actually this is likely a big part of the explanation for persistent disagreement; we can't talk about things like climate change and learn from each other as much as we could/should - we know this causes trouble so we just avoid the topics. We don’t talk about politics at dinner etc.  Or when we do talk we get mad quickly and don’t listen/learn.  So understanding this type of anger is crucial for understanding communication.

 

It's well known, and academically verified, that this is indeed what's happened in party politics in the US in recent decades - opposing partisans actually dislike each other more than ever.  The standard jargon for this now is 'affective polarization'.  It actually looks like this is the type of polarization where the real action is, since it's much less clear to what extent we've polarized on policy/ideology preferences - though it is clear that politician behavior has diverged: R's and D's in Congress vote along opposing party lines more and more over time.  For anyone who doubts this, take a look at the powerful graphic in the inset to the left, stolen from this recent article.

So—why do we hate each other so much? 

Full disclosure: I'm an outsider to this topic.  I'm an economist by training, affiliation, and methods.  Any clarification/feedback on what I say here is very welcome.

The fingerprint(s) of polarization in Congress....

Anyway, my take from the outside is that the poli-sci papers on this topic focus on two things: "social distance" and new media.  Social distance is the social-psych idea that we innately dislike those we feel more "distance" from (which can be literal or figurative).  Group loyalty, tribalism, etc.  Maybe distance between partisans has grown as partisan identities have strengthened, and/or because of gridlock in DC, and/or because of real/perceived growth in the ideological gap between the parties.  New media includes all sorts of things: social media, blogs, cable news, political advertising, etc.  The idea here is that we're exposed to much more anti-out-party info than before, and it's natural this would sink in to some extent.

There's a related but distinct and certainly important line of work in moral psychology on this topic – if you’re reading this there’s a very good chance you’re familiar with Jonathan Haidt's book The Righteous Mind in particular.  He doesn't use the term social distance but talks about a similar (equivalent?) concept—differences between members of the parties in political-moral values and the evolutionary explanation for why these differences lead to inter-group hostility.

So—this is a well-studied topic that we know a lot about.  Still, we have a ways to go toward actually solving the problem.  So there’s probably more to be said about it.

Here’s my angle: the social distance/Haidtian and even media effects literatures seem to take it as self-evident that distance causes dislike.  The mechanism for this causal relationship is often treated as a black box.  And so, while it's often assumed that this dislike is “wrong” - and this assumption seems quite reasonable; common sense, age-old wisdom, etc. tell us that massive groups of people can't all be so bad, so something is seriously off when massive groups of people hate each other - this assumption of wrongness is both theoretically unclear and empirically far from proven.

Citizens of the Liberal Republic of Science -- unite against partyism!

But in reality when we dislike others, even if just because they’re different, we usually think (perhaps unconsciously) that they’re actually “bad” in specific ways.  In politics, D’s and R’s who dislike each other do so (perhaps ironically) because they think the other side is too partisan - i.e., too willing to put their own interests over the nation’s as a whole.  Politicians are always accusing each other of “playing politics” over doing what’s right.  (I don’t know of data showing this, but if anyone knows good reference(s) please please let me know.)

That is, dislike is not just “affective” (feeling) but is “cognitive” (thinking) in this sense.  And cognitive processes can of course be biased.  So my claim is that this is at least part of the sense in which out-party hate is wrong—it’s objectively biased.  We think the people in the other party are worse guys than they really are (by our own standards).  In particular, more self-serving, less socially minded. 

This seems like a non-far-fetched claim to me, maybe even pretty obviously true when you hear it.  If not, that’s ok too, that makes the claim more interesting.  Either way, this is not something these literatures (political science, psychology, communications) seem to talk about.  There is certainly a big literature on cognitive bias and political behavior, but on things like extremism, not dislike.

Here come the semi-shameless[2] plugs.  This post has already gotten longer than most I’m willing to read myself so I’ll make this quick.

In one recent paper, I show that ‘unrelated’ cognitive bias can lead to (unbounded!) cognitive (Bayesian!) dislike even without any type of skewed media or asymmetric information. 
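To give a flavor of how that can work, here is a toy simulation - my own back-of-the-envelope sketch for this post, with made-up numbers, and emphatically not the model in the paper - in which a small, seemingly 'unrelated' misperception of how diagnostic everyday signals are sends the (otherwise Bayesian) belief that the out-party is "bad" off without bound, even though in truth the out-party is no worse than the in-party:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (not the model from the paper): an agent updates a log-odds
# belief that the out-party is "bad" (purely self-serving) from noisy signals.
# Each signal points to the truth with probability q, but the agent treats
# "they're bad" signals as more diagnostic (q_bad_hat) than "they're fine"
# signals (q_good_hat) -- a small, seemingly unrelated misperception.
q = 0.60           # true accuracy of each signal
q_bad_hat = 0.75   # perceived diagnosticity of unfavorable signals
q_good_hat = 0.55  # perceived diagnosticity of favorable signals

truth_is_bad = False   # in truth, the out-party is no worse than the in-party
log_odds_bad = 0.0     # prior: 50/50

for t in range(5000):
    signal_bad = rng.random() < (q if truth_is_bad else 1 - q)
    if signal_bad:
        log_odds_bad += np.log(q_bad_hat / (1 - q_bad_hat))
    else:
        log_odds_bad -= np.log(q_good_hat / (1 - q_good_hat))

print(f"posterior log-odds that the out-party is 'bad': {log_odds_bad:.1f}")
# Expected drift per signal is
#   (1-q)*log(q_bad_hat/(1-q_bad_hat)) - q*log(q_good_hat/(1-q_good_hat)) ~ 0.32 > 0,
# so the belief that the out-party is "bad" grows without bound despite being false.
```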

In another, I show that people who overestimate what they know in general (on things like the population of California) - and who are thus likely to be overconfident in their knowledge more broadly, both due to, and driving, various more specific cognitive biases - also tend to dislike the out-party more (vs. the in-party), controlling carefully for one's own ideology, partisanship, and a bunch of other things.
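For concreteness, here is roughly the kind of overconfidence ("overprecision") scoring I have in mind - the items and the scoring rule below are hypothetical, just to illustrate the idea, not the measure actually used in the paper:

```python
import numpy as np

# Hypothetical general-knowledge items and (approximate) true values -- for
# illustration only, not the actual survey items.
TRUTHS = {
    "population of California (millions)": 39.0,
    "length of the Nile (km)": 6650.0,
    "year the printing press was invented": 1440.0,
}

def overprecision_score(intervals):
    """intervals: {question: (low, high)}, the respondent's 90% confidence
    intervals.  Returns nominal coverage (0.9) minus the realized hit rate;
    positive values mean the intervals miss more often than they 'should',
    i.e., the respondent thinks they know more than they really do."""
    hits = [lo <= TRUTHS[q] <= hi for q, (lo, hi) in intervals.items()]
    return 0.9 - np.mean(hits)

# Example respondent whose intervals are far too narrow:
respondent = {
    "population of California (millions)": (20, 25),
    "length of the Nile (km)": (6000, 7000),
    "year the printing press was invented": (1500, 1600),
}
print(overprecision_score(respondent))  # 0.9 - 1/3, i.e. about 0.57
```

A score in this spirit is then the right-hand-side variable of interest in the dislike regressions, alongside the ideology/partisanship and demographic controls.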

Feedback on either paper is certainly welcome; they are both far from published.

So—I’ve noted that cognitive bias very plausibly causes dislike, and I’ve tried to provide some formal theory and data to back this claim up and clarify the folk wisdom that if we understood each other better, we wouldn’t hate each other so much.  And dislike causes (exacerbates) bias (in knowledge, about things like climate change, getting back to the main subject of this blog).  Why else does thinking of dislike in terms of bias matter?  Two points.

1) This likely can help us to understand polarization in its various forms better.  The cognitive bias literature is large and powerful, including a growing literature on interventions (nudges etc).  Applying this literature could yield a lot of progress. 

2) Thinking of out-party dislike (a.k.a. partyism) as biased could help to stigmatize and as a result reduce this type of behavior (as has been the case for other 'isms').  If people get the message that saying “I hate Republicans” is unsophisticated (or worse) and thus uncool, they’re going to be less likely to say it. 

For a decentralized phenomenon like affective polarization, changing social norms may ultimately be our best hope. 

 


[1] Ed.: Okay, time to come clean. What he's alluding to is that I've been using M Turk workers to ghost-write my blog posts for the last 6 mos. No one having caught on, I’ve now decided that it is okay after all to use M Turk workers in studies of politically motivated reasoning.

[2] Ed.: Yup, for sure he is not trying to imitate me. What’s this “semi-” crap?

Update on Monday, May 2, 2016 at 10:32AM by Registered Commenter Dan Kahan

My [i.e., really me; *not M Turk worker!] 2 cents [what I saved by writing myself & not hiring M Turk worker!]:

Too little rationality? or Too much?!

Daniel--

These are great papers!

Am still working through JBM?; to understand in particular how the controls & instrumental-variable strategy etc. are contributing to the inferences you are drawing. Will take me a while, but for sure well worth the time.

But an interim question: what are we to make of your account in relation to the evidence we have on how identity-protective reasoning relates to dual process theories of cognition? To conscious, effortful processing of the sort that is generally resistant to cognitive bias vs. heuristic information processing of the sort that generally is vulnerable to it (Stanovich & West 2000)?

You in effect are attributing self-reinforcing types of group rivalries to cognitive bias -- that is, to imperfect or bounded rationality.

You identify "motivated reasoning" as an alternative explanation.

But I don’t think that that alternative -- or a conception of it -- is specified w/ enough precision. What sort of cognitive dynamic *is* motivated reasoning? What’s going on in it? Is it a form of bounded rationality? Why isn’t it a disposition to form overconfident judgments of a certain sort or under certain circumstances, etc?

I'm going to go through some effort here to fill things in in a way that suggests particular answers to these questions. You actually cite some of this work but I don't think you've extracted from it the position I'm going to spell out. I'm greedy & want to know what you think of that position & how it relates to yours!

So ....

I'd say that "motivated reasoning" is not really in itself a description of any mechanism. It is just a description of how information is being processed. Relative to a simple or at least normatively appealing Bayesian model, "motivated reasoning" involves assigning a likelihood ratio to new information on the basis of criteria that promote some goal collateral to the truth of the proposition that is the object of one’s priors (e.g., Kahan in press_b).
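In odds form (notation added here just to be concrete; it isn't taken from any of the cited papers):

```latex
% Bayes' rule in odds form, for a proposition H given new evidence E:
\frac{\Pr(H \mid E)}{\Pr(\neg H \mid E)}
  \;=\; \mathrm{LR}(E)\,\times\,\frac{\Pr(H)}{\Pr(\neg H)},
\qquad
\text{truth-seeking: } \mathrm{LR}(E)=\frac{\Pr(E \mid H)}{\Pr(E \mid \neg H)}.
% In motivated reasoning, the likelihood ratio assigned to E is determined (at
% least in part) by some goal g collateral to the truth of H -- e.g., protecting
% one's standing in an affinity group -- so that LR(E) = f(E, g) rather than the
% truth-seeking ratio above.
```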

Identity-protective cognition is a species of motivated reasoning in which the goal that determines the likelihood ratio is the stake an individual has in maintaining status within an important affinity group.

It is, I believe, the form of motivated reasoning that drives cultural or political polarization on risk and other policy-relevant facts (Kahan 2013, in press_a, in press_b).

Now we can ask whether it is plausible to understand *that* dynamic as a consequence of bounded rationality.

The evidence that’s relevant examines the relationship between identity-protective reasoning  & the use of effortful, systematic information processing ("System 2") and heuristic processing ("System 1").

The former is characterized by the use of dispositions that bring to conscious attention the information necessary to make valid inferences & by subsequent use of forms of analytical reasoning necessary to make valid inferences based on it.

The latter is characterized by lack of those features of information-processing—by failure to attend consciously to relevant information and by lack of motivation or ability to give it inferentially proper effect.

We know from observational studies that political polarization is greatest in those who are highest in "System 2" reasoning (Kahan et al. 2012; Kahan 2015).

We know from experiments that in fact such individuals *use* such reasoning to extract from information the parts of it most supportive of group-associated beliefs, and to dismiss the rest (Kahan 2013; Kahan et al. working).

This supports the idea that identity-protective reasoning is not a consequence of bounded rationality. It is a manifestation of rationality.

It is in the interest of people to form identity-expressive beliefs, b/c those beliefs generate affective responses that effectively convey to others on whom their status depends that they have the values and commitments that mark them out as reliable, trustworthy, admirable, etc. (Kahan in press_a, in press_b).

This account of identity-protective reasoning predicts that people will use their reason to form the judgment that members of other groups are stupid & evil.

Experimental evidence supports that prediction: people construe evidence of the open-mindedness of others in just that way; people highest in System 2 reasoning proficiency do it all the more (Kahan 2013).

I’m pretty sure this is *not* consistent with your view.

Because, as I said, you treat the sort of low estimation of the out-group that is involved here as a consequence of defects in rationality -- of bounded rationality. You think that is what drives conflict over facts in politics. But if that were so, then we would expect those most vulnerable to cognitive bias to be the most subject to the kind of information processing that generates political polarization on facts.

Yet, as I've explained, the opposite is true: this kind of information processing is associated with—magnified by—rationality, or at least is in all the ways we have been able to measure the association that would support inferences on this question.

Like I said, I’m not sure yet how to appraise your evidence; I might well come away convinced that it warrants the interpretation you give it—namely, that a form of bounded rationality aggravates low-estimation of the moral character of out-group members.

But I want to know what you make of the conflict between your basic hypothesis and the work I've described.

Is it possible that if your evidence is right, it really isn’t getting at what is driving political polarization on facts of policy significance, given that we have evidence that that sort of phenomenon doesn’t reflect bounded rationality?

Alternatively, do you think the body of work I've described doesn't really support the inference that identity-protective reasoning is *not* associated with bounded rationality?

Or maybe this is a situation where both bodies of evidence bear the inferential significance being attributed to them (by you in the case of your work, by me in the case of mine) & we just have to try to take all this on board & aggregate it in some Bayesian fashion?

If so, is there some set of observations we can make that will give us more reason than we have now that one or the other position is the right one?

 

Refs

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.

Kahan, D. M., Peters, E., Dawson, E., & Slovic, P. (working). Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change, 2, 732-735.

Kahan, D. M. (2015). Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 1-43.

Kahan, D. M. (in press_a). The Expressive Rationality of Inaccurate Perceptions of Fact. Behavioral & Brain Sciences.

Kahan, D. M. (in press_b). The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences.

Stanovich, K. E., & West, R. F. (2000). Individual Differences in Reasoning: Implications for the Rationality Debate? Behavioral and Brain Sciences, 23, 645-665.

Update on Monday, May 2, 2016 at 11:01AM by Registered Commenter Dan Kahan

Daniel Stone replies:

Ok - my long-awaited reply to DK - thanks again for the great comment.  (By the way, I appreciate your non-overconfident/info-seeking tone; appropriate for those of us studying this kind of thing, but still not always the case.)

Yes, the discussion of motivated reasoning (MR) in my paper is not as clear as it could be, and MR is a big issue.[1]

Your definition of MR is what I did mean to refer to. You say this is not a mechanism (for driving out-party feelings); is that because you mean it's a class of mechanisms, or a characteristic of a class of mechanisms? Either way, I'm with you; in addition to social-image-driven identity-protective cognition, I'd include internal motivations (ego, identity) as well, but this doesn't matter much for present purposes. For short I will refer to just MR in the rest of this post, and how about BR for bounded rationality (non-MR systematic biases).

You write:

Yet, as I've explained, the opposite is true: this kind of information processing is associated with—magnified by—rationality, or at least is in all the ways we have been able to measure the association that would support inferences on this question.

Wow, yes, this does seem at odds with my basic claim, so this is a really interesting/important discussion. That's my excuse for why this goes on for a while (brace yourself).

My impression of your work and the related literature was, to oversimplify, that you'd made strong arguments that identity (MR) is the dominant force driving biased beliefs on climate change and other similar topics, with some key evidence being that more numerate/educated/higher-cognitive-ability types tend to be more biased. First, I don't think you're saying this, but it's worth noting clearly that these factors (numeracy etc.) are not equivalent to rationality.[2]

Second, re your comment, I'm not aware of evidence that, holding everything else fixed (ideology, party strength, numeracy, etc.), beliefs about factual topics become more biased as a measure of rationality increases (say CRT, or even better, the biases I focus on in my papers, overprecision and the false-consensus bias). I checked the Hamilton papers you cite in your 2015 paper and it doesn't look like they quite do this. I saw a reference to CRT in your paper but did not see an analysis of just this (in yours or Hamilton's).

But I could definitely be missing something here (if so, pls let me know!). I do buy that aspects of S2/analytic reasoning can enhance MR-driven bias. But the claim that 'bias re factual topics is an increasing function of rationality (for topics with truth 'opposed' to motivation)' seems too strong. Maybe this isn't quite what you mean anyway. If it is what you meant, I might want to discuss the evidence/future investigation with you further 'offline'.

Either way, suppose the strong version of this claim is true: holding ideology, numeracy, etc. fixed, less BR bias means more climate (and other?) bias. A few other comments, then.

Would this mean we should expect BR to have an analogous effect on my outcome, out-party dislike? I'm not sure, but doubtful. BR (in particular overprecision, thinking you know more than you really do) may apply more to beliefs about people than to beliefs about more abstract/'scientific' topics like climate. If I'm an R, I might think I can judge a D even if I don't think I can form an intelligent opinion about GMOs. So it's possible the context is different enough that the relative importance of MR/BR factors could vary. This is likely worth thinking through some more.

But suppose, as you allude to, that more rational types should feel more hate, just as they are more biased re climate. How could we reconcile this with my results? One possibility is that my BR variable is actually so badly measured that its correlation with rationality is the opposite of what it's supposed to be. This is always possible, but I do think it's unlikely (especially since I get the strongest results for less educated respondents, Table 6; the mismeasurement story would then imply that hate increases with rationality mostly for the less educated, and I don't think the literature supports that).

A more likely scenario is that my BR variable (OC) is picking up omitted party-strength effects: those who have higher overprecision are stronger party identifiers (stronger than what's captured in the survey responses already controlled for), and these guys feel more hate. This would be consistent with the fact that my strongest result is that the instrumental effect of BR on dislike runs via party strength (that is, via identity/MR; Table 5). In this case, we'd all be at least partly right: hate is caused largely by strength of identity (MR), but this identity is caused in good part by BR (so BR still indirectly causes hate). Personally I don't think this is the whole story, but I don't think I can rule it out with my data. I should likely discuss this further in the paper; I'll try in the revision.
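To spell out the simplest version of that mediation logic (simulated data and hypothetical variable names below, purely to illustrate the bookkeeping; this is not the estimation strategy or the data from the paper):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for survey responses -- variable names are
# hypothetical.  The simulation builds in the story being discussed:
# overprecision (BR) raises party strength, and party strength (identity/MR)
# drives out-party dislike.
rng = np.random.default_rng(1)
n = 2000
overprecision = rng.normal(size=n)
ideology = rng.normal(size=n)
party_strength = 0.5 * overprecision + rng.normal(size=n)
therm_gap = 0.8 * party_strength + 0.2 * ideology + rng.normal(size=n)
df = pd.DataFrame(dict(overprecision=overprecision, ideology=ideology,
                       party_strength=party_strength, therm_gap=therm_gap))

# Step 1: does BR (overprecision) predict party strength?
step1 = smf.ols("party_strength ~ overprecision + ideology", data=df).fit()
# Step 2: total effect of BR on dislike vs. the effect controlling for party strength
total = smf.ols("therm_gap ~ overprecision + ideology", data=df).fit()
direct = smf.ols("therm_gap ~ overprecision + party_strength + ideology", data=df).fit()

print("BR -> party strength:        ", round(step1.params["overprecision"], 2))
print("BR -> dislike (total):       ", round(total.params["overprecision"], 2))
print("BR -> dislike (direct only): ", round(direct.params["overprecision"], 2))
# If the 'direct' coefficient is much smaller than the 'total' one, most of the
# BR effect on dislike runs through party identity -- i.e., BR would still cause
# hate, but indirectly, via MR.
```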

Last, to be clear: taking my results/interpretation as they are, I do not mean to imply that I am ruling out MR factors. I am just claiming that BR factors are (also) important. (So the title is a bit misleading; hopefully I get a little poetic license here, and I do try to clarify this in the text.) The MR measure I use in Table 9 is my best attempt to get at this issue (beyond party identity etc.), but it is far from a perfect measure.

 


[1] This is partly b/c I think the distinction between MR and non-MR overconfidence still isn't as clear as it could be, though I think there's been good progress here in recent yrs; the distinction between overprecision and overoptimism is very useful (see e.g. http://www.jstor.org/stable/43611009?seq=1#page_scan_tab_contents), though it still isn't often used in psychology (e.g., in the Noori paper you cite - thanks for that). Overprecision refers to the variance, overoptimism to the mean. But there are motivated aspects of overprecision - it's nice to think we know something better than we really do. I don't know of work on this (motivated overprecision); if anyone does, pls let me know.
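In symbols (my own shorthand, not notation from the linked paper): if a person's subjective belief about an uncertain quantity has some mean and variance, while a well-calibrated belief would have the true mean and an appropriately wide variance, then roughly:

```latex
% Overoptimism vs. overprecision (illustrative shorthand only):
% subjective belief ~ N(\hat{\mu}, \hat{\sigma}^{2}), calibrated benchmark ~ N(\mu, \sigma^{2})
\text{overoptimism:}\;\; \hat{\mu} \text{ shifted in the favorable direction relative to } \mu,
\qquad
\text{overprecision:}\;\; \hat{\sigma}^{2} < \sigma^{2}.
```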

[2] Here are a couple of recent papers from econ that I think find low correlations between cognitive ability and BR:

Victor Stango, Joanne Yoong, and Jonathan Zinman, We are all behavioral, more or less: Measuring the prevalence, heterogeneity and importance of multiple behavioral factors

Mark Dean & Pietro Ortoleva, Is it All Connected? A Testing Ground for Unified Theories of Behavioral Economics Phenomena

Update on Monday, May 2, 2016 at 7:48PM by Registered Commenter Dan Kahan

I certainly have nothing more to say -- or nothing that would possibly make anyone any smarter now that Daniel has responded in that thoughtful way (I encourage everyone to read his papers & think hard about them!).

But b/c everyone is soooooo caught up in Gelman Cup fever (haven't seen this sort of excitement among the site's 14 billion regular subscribers since the legendary MAPKIA 73), I thought I'd just toss in a mesmerizing graphic on CRT & motivated reasoning --

From Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

I think it gets at least a "meh" on the Gelman scale! (He likely would disagree, but since he's not here to contradict me, what the heck).

 

Article originally appeared on cultural cognition project (http://www.culturalcognition.net/).
See website for complete article licensing information.