Saturday, Apr 30, 2016

Bounded rationality, unbounded out-group hate

  By popular demand & for a change of pace ... a guest post from someone who actually knows what the hell he or she is talking about!

Bias, Dislike, and Bias

Daniel Stone

Read this! Or you are just a jerk, like all the other members of your stupid political party!

Thanks Dan K for giving me the chance to post here.  Apologies - or warning at least - the content, tone, etc. might be different from what's typical of this blog.  Rather than fake a Kahan-style piece,[1] I thought it best to just do my thing.  Though there might be some DK similarity or maybe even influence.  (I too appreciate the exclamation pt!)

Like Dan, and likely most/all readers of this blog, I am puzzled by persistent disagreement on facts.  It also puzzles me that this disagreement often leads to hard feelings.  We get mad at - and often end up disliking - each other when we disagree.  Actually this is likely a big part of the explanation for persistent disagreement; we can't talk about things like climate change and learn from each other as much as we could/should - we know this causes trouble so we just avoid the topics. We don’t talk about politics at dinner etc.  Or when we do talk we get mad quickly and don’t listen/learn.  So understanding this type of anger is crucial for understanding communication.

 

It's well known, and academically verified, that this is indeed what's happened in party politics in the US in recent decades - opposing partisans actually dislike each other more than ever.  The standard jargon for this now is 'affective polarization'.  It actually looks like this is the type of polarization where the real action is, since it's much less clear to what extent we've polarized re policy/ideology preferences - though it is clear that politician behavior has diverged: R's and D's in Congress vote along opposing party lines more and more over time.  For anyone who doubts this, take a look at the powerful graphic in the inset to the left, stolen from this recent article.

So—why do we hate each other so much? 

Full disclosure, I'm an outsider to this topic.  I'm an economist by training, affiliation, and methods.  Any clarification/feedback on what I say here is very welcome.

The fingerprint(s) of polarization in Congress....

Anyway, my take from the outside is that the poli-sci papers on this topic focus on two things, "social distance" and new media.  Social distance is the social-psych idea that we innately dislike those we feel more "distance" from (which can be literal or figurative).  Group loyalty, tribalism, etc.  Maybe distance between partisans has grown as partisan identities have strengthened, and/or because of gridlock in DC, and/or because of real/perceived growth in the ideological gap between parties.  New media includes all sorts of things: social media, blogs, cable news, political advertising, etc.  The idea here is that we're exposed to much more anti-out-party info than before, and it's natural this would sink in to some extent.

There's a related but distinct and certainly important line of work in moral psychology on this topic – if you’re reading this there’s a very good chance you’re familiar with Jonathan Haidt's book The Righteous Mind in particular.  He doesn't use the term social distance but talks about a similar (equivalent?) concept—differences between members of the parties in political-moral values and the evolutionary explanation for why these differences lead to inter-group hostility.

So—this is a well-studied topic that we know a lot about.  Still, we have a ways to go toward actually solving the problem.  So there’s probably more to be said about it.

Here’s my angle: the social distance/Haidtian and even media effects literatures seem to take it as self-evident that distance causes dislike.  And the mechanism for this causal relationship is often treated as a black box.  And so, while it’s often assumed that this dislike is “wrong” - and this assumption seems quite reasonable; common sense, age-old wisdom, etc. tell us that massive groups of people can’t all be so bad, so something is seriously off when massive groups of people hate each other - this assumption of wrongness is both theoretically unclear and empirically far from proven.

Citizens of the Liberal Republic of Science -- unite against partyism!

But in reality, when we dislike others, even if just because they’re different, we usually think (perhaps unconsciously) that they’re actually “bad” in specific ways.  In politics, D’s and R’s who dislike each other do so (perhaps ironically) because they think the other side is too partisan - i.e., too willing to put their own interests over the nation’s as a whole.  Politicians are always accusing each other of “playing politics” over doing what’s right.  (I don’t know of data showing this, but if anyone knows good reference(s), please please let me know.)

That is, dislike is not just “affective” (feeling) but is “cognitive” (thinking) in this sense.  And cognitive processes can of course be biased.  So my claim is that this is at least part of the sense in which out-party hate is wrong—it’s objectively biased.  We think the people in the other party are worse guys than they really are (by our own standards).  In particular, more self-serving, less socially minded. 

This seems like a non-far-fetched claim to me, maybe even pretty obviously true when you hear it.  If not, that’s ok too, that makes the claim more interesting.  Either way, this is not something these literatures (political science, psychology, communications) seem to talk about.  There is certainly a big literature on cognitive bias and political behavior, but on things like extremism, not dislike.

Here come the semi-shameless[2] plugs.  This post has already gotten longer than most I’m willing to read myself so I’ll make this quick.

In one recent paper, I show that ‘unrelated’ cognitive bias can lead to (unbounded!) cognitive (Bayesian!) dislike even without any type of skewed media or asymmetric information. 

In another, I show that people who overestimate what they know in general (on things like the population of California) - and who are thus more likely to be overconfident in their knowledge in general, both due to, and driving, various more specific cognitive biases - also tend to dislike the out-party more (vs. the in-party), controlling carefully for one’s own ideology, partisanship, and a bunch of other things.

Feedback on either paper is certainly welcome, they are both far from published.

So—I’ve noted that cognitive bias very plausibly causes dislike, and I’ve tried to provide some formal theory and data to back this claim up and clarify the folk wisdom that if we understood each other better, we wouldn’t hate each other so much.  And dislike causes (exacerbates) bias (in knowledge, about things like climate change, getting back to the main subject of this blog).  Why else does thinking of dislike in terms of bias matter?  Two points.

1) This likely can help us to understand polarization in its various forms better.  The cognitive bias literature is large and powerful, including a growing literature on interventions (nudges etc).  Applying this literature could yield a lot of progress. 

2) Thinking of out-party dislike (a.k.a. partyism) as biased could help to stigmatize and as a result reduce this type of behavior (as has been the case for other 'isms').  If people get the message that saying “I hate Republicans” is unsophisticated (or worse) and thus uncool, they’re going to be less likely to say it. 

For a decentralized phenomenon like affective polarization, changing social norms may ultimately be our best hope. 

 


[1] Ed.: Okay, time to come clean. What he's alluding to is that I've been using M Turk workers to ghost-write my blog posts for the last 6 mos. No one having caught on, I’ve now decided that it is okay after all to use M Turk workers in studies of politically motivated reasoning.

[2] Ed.: Yup, for sure he is not trying to imitate me. What’s this “semi-” crap?


Reader Comments (27)

I think that the perception of differences is, in itself, to a large degree, flawed.

People from opposing sides of the political aisle are fully convinced that they have a different set of "values" than their counterparts. They get that notion based on backwards engineering from different, identity-based, positions on specific issues. But in fact, their underlying values are not likely significantly different (there are always outliers, of course).

The concept of differentiating "positions" from "interests" and also I would add "values" is a fundamental plank of conflict resolution theory, and I think it is a very useful frame for informing the issues you're discussing.

April 30, 2016 | Unregistered CommenterJoshua

"So—why do we hate each other so much?"

I have a theory. In the same way that language is an evolved mechanism to enable us to plan and work together, so morals are an evolved mechanism to enable us to *live* together in close proximity. The interests of individuals who share turf often conflict, and it's mutually beneficial to have rules and conventions about who gets what without having to fight about it every time.

Both language and morals involve an overall structure that is fixed and in-built, but are highly adaptive and flexible with regard to the details. We negotiate the 'vocabulary' of our language through the constant interaction with one another, resolving the 'friction' of misunderstandings and miscommunication. A society negotiates its moral system in the same way. And just as language changes over time - each new generation introduces its own slang terms and usages - so do morals. The language has to be close enough to the collectively agreed one to be comprehensible, but you can vary quite a lot from the rules and still be understood - we call those dialects. And with morals, you have to stay close enough to the collective not to raise a mob, but there's freedom to interpret - people might be annoyed with you, but they'll let it go.

Moral systems are very effective in enabling us to cooperate peacefully with close neighbours - allowing very high population density societies. But to converge on a common moral system, the mechanism also has an enforcement component. People act positively to their neighbours complying with the social rules, but very negatively to them breaking them. This cost of dissent ensures that it remains in everybody's interests to fit in.

It's also part of the way morality evolved - it is essentially a gene that programs the organism to cooperate in a certain complex way with others who have the same morality gene and kill any individuals it comes across who don't. And it recognises whether an individual does have the gene by observing whether it follows the same cooperate/kill rules. (You can thus get killed not only for breaking the rules yourself, but also for not killing others who break the rules!) The genes for morality are thereby self-referentially self-preserving.

However, like languages, when people are socially isolated from one another, they can converge on different and incompatible moral systems. They are far enough apart that mutual comprehension breaks down. In the case of foreign languages, that's just frustrating. But in the case of morals, there is that in-built enforcement mechanism that turns incompatible moralities into violent conflict, as each tries to enforce its social bounds, and spread itself by fitting outsiders into its own 'clump'.

Instinctive morality is one of the major factors that makes human capabilities so remarkable compared to those of any other animal. Along with language, it is the adaptive flexibility of human morality that enables complex societies. But a consequence of its mechanism, essential to making it work, is that moral systems don't get on well with competitors. The hate is just us following our programming.

April 30, 2016 | Unregistered CommenterNiV

Thanks Joshua, thanks NiV

Joshua I think you're right that we under-estimate similarities in values - when we see the other side support X and we support Y, we assume they're doing it for 'wrong reason' and under-estimate importance of other (potentially legitimate) factors that could influence their support.. so this sounds to me like could also be a cognitive bias issue.. we are overconfident in our understanding and evaluation of other side's position. i'd also note that we also sometimes under-estimate *differences* in values (that's what a lot of Haidt's work is about - e.g. conservatives *really* value loyalty more than liberals, but neither side is fully aware of this - and what i talk about as false consensus bias in my work)

NiV, thanks for thoughtful, well-written comment. Sounds in line with standard evolutionary moral psychology - again Haidt is great reference. 'hate is just us following our programming' is exactly what i mean by black box mechanism. My goal is to go a little further and get into what we really *think* about other group (and how/why these beliefs might be biased - and we might at some pt correct them)

April 30, 2016 | Unregistered Commenterdan s

Dan S -

==> so this sounds to me like could also be a cognitive bias issue.. we are overconfident in our understanding and evaluation of other side's position.

I very much agree. But perhaps I disagree as to whether that is the locus of the conflict, or a by-product. It's like motivated reasoning. We start with our group identification. It's a fundamental component of our own sense of self. We then see an "other," which is a by-product of our group identification. Through a variety of cognitive biases, such as overconfidence (Dunning-Kruger, for example), our reliance on pattern-finding (and therefore sometimes essentially inventing patterns of differences) in order to reason, and motivated reasoning, we reinforce our sense of self and our perception of an "other." I wouldn't say that what I'm describing is uniform or completely explains this partisan animus, but I do think that there's probably more diversity in values among groups of like-identified folks than there are mutually exclusive differences across differently identified groups.

==> i'd also note that we also sometimes under-estimate *differences* in values (that's what a lot of Haidt's work is about - e.g. conservatives *really* value loyalty more than liberals, but neither side is fully aware of this - and what i talk about as false consensus bias in my work)

It has been hard for me to get past Haidt's more recent advocacy on the whole Heterodox Academy front, as I think a lot of it is pretty much crap... but his earlier work is certainly interesting. I will admit, however, that I am reflexively dubious about analyses that find important distinctions in values, or personality, or brain physiology, etc., across different ideological groups. Again, I go back to the greater diversity within groups than across groups. That isn't to say that no differences exist across groups, but to question the explanatory power of those differences. If the average conservative values "loyalty" more than the average "liberal," how much does that really explain about why conservatives and liberals hate each other, in comparison to other influences such as those I describe? Isn't it reasonable to expect that there would be a greater difference in levels of "loyalty" among individuals who identify as conservatives than exists on average between liberals and conservatives? Wouldn't we expect, then, a great deal of animosity from highly "loyal" conservatives towards less "loyal" conservatives?

April 30, 2016 | Unregistered CommenterJoshua

@Daniel--

I really like the papers.

Am still working through JBM? - to understand in particular how the controls & instrumental-variable strategy etc. are contributing to the inferences you are drawing. Will take me a while to grasp fully!

But a question in the meantime: what are we to make of your account in relation to the evidence we have on how identity-protective reasoning relates to dual process theories of cognition? To conscious, effortful processing of the sort that is generally resistant to cognitive bias vs. heuristic information processing of the sort that generally is vulnerable to it (Stanovich & West 2000)?

You in effect are attributing self-reinforcing types of group rivalries to cognitive bias -- that is to imperfect or bounded rationality.

You identify "motivated reasoning" as an alternative explanation.

But I don’t think that that alternative -- or a conception of it -- is specified w/ enough precision. What sort of cognitive dynamic *is* motivated reasoning? What’s going on in it? Is it a form of bounded rationality? Why isn’t it a disposition to form overconfident judgments of a certain sort or under certain circumstances, etc?

I'm going to go through some effort here to fill things in a way that suggests particular answers to these questions. I know you cite some of this work but I don't think you've extracted from it the position I'm going to spell out. I don't really care whether you cite the work-- I want to know how you think about the theory that guides it & how it relates to yours!

So ....

I'd say that "motivated reasoning" is not really in itself a description of any mechanism. It is just a description of how information is being processed. Relative to a simple or at least normatively appealing Bayesian model, "motivated reasoning" involves assigning the likelihood ratio to new information on the basis of criteria that promote some goal collateral to the truth of the proposition that is the object of one’s priors (e.g., Kahan in press_b).
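For what it's worth, the odds-form Bayesian model described here can be sketched in a few lines of toy code. Everything in this sketch (the discount factor, the particular numbers, the function names) is invented purely for illustration and is not drawn from any of the papers under discussion:

```python
# Toy model of Bayesian updating in odds form: posterior odds = LR x prior odds.
# A truth-seeker uses the evidence's actual likelihood ratio; a motivated
# reasoner shrinks the LR toward 1 when the evidence threatens group identity.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form."""
    return likelihood_ratio * prior_odds

def motivated_lr(lr, threatens_identity, discount=0.8):
    """Hypothetical 'identity-protective' adjustment: discount the evidence."""
    if not threatens_identity:
        return lr
    return 1 + (lr - 1) * (1 - discount)  # pulled toward LR = 1 (uninformative)

prior = 1.0          # 1:1 odds on the hypothesis
evidence_lr = 3.0    # evidence is genuinely 3:1 diagnostic

print(posterior_odds(prior, evidence_lr))                      # 3.0
print(posterior_odds(prior, motivated_lr(evidence_lr, True)))  # ~1.4
```

The point of the sketch is just that, on this definition, the collateral goal enters through the likelihood ratio rather than the prior.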

Identity-protective cognition is a species of motivated reasoning in which the goal that determines the likelihood ratio is the stake an individual has in maintaining status within an important affinity group.

It is, I believe, the form of motivated reasoning that drives cultural or political polarization on risk and other policy-relevant facts (Kahan 2013, in press_a, in press_b).

Now we can ask whether it is plausible to understand *that* dynamic as a consequence of bounded rationality.

The evidence that’s relevant examines the relationship between the use of effortful, systematic information processing ("System 2") and heuristic processing ("System 1").

The former is characterized by the use of dispositions that bring to conscious attention the information necessary to make valid inferences & by subsequent use of forms of analytical reasoning necessary to make valid inferences based on it.

The latter is characterized by lack of those features of information-processing—by failure to attend consciously to relevant information and by lack of motivation or ability to give it inferentially proper effect.

We know from observational studies that political polarization is greatest in those who are highest in "System 2" reasoning (Kahan et al. 2012; Kahan 2015).

We know from experiments that in fact such individuals *use* such reasoning to extract from information the parts of it most supportive of group-associated beliefs, and to dismiss the rest (Kahan 2013; Kahan et al. working).

This supports the idea that identity-protective reasoning is not a consequence of bounded rationality. It is a manifestation of rationality.

It is in the interest of people to form identity-expressive beliefs, b/c those tend toward affective responses that effectively convey to others on whom their status depends that they have the values and commitments that mark them out as reliable, trustworthy, admirable etc (Kahan in press_a, in press_b).

This account of identity-protective reasoning predicts that people will use their reason to form the judgment that members of other groups are stupid & evil.

Experimental evidence supports that prediction: people construe evidence of the open-mindedness of others in just that way; people highest in System 2 reasoning proficiency do it all the more (Kahan 2013).

I’m pretty sure this is *not* consistent with your view.

Because, as I said, you treat the sort of low estimation of the out-group that is involved here as a consequence of defects in or bounded rationality. You think that is what drives conflict over facts in politics. But if that were so, then we would expect those most vulnerable to cognitive bias to be the most subject to the kind of information processing that generates political polarization on facts.

Yet, as I've explained, the opposite is true: this kind of information processing is associated with—magnified by—rationality, or at least is in all the ways we have been able to measure the association that would support inferences on this question.

Like I said, I’m not sure yet how to appraise your evidence; I might well come away convinced that it warrants the interpretation you give it—namely, that a form of bounded rationality aggravates low-estimation of the moral character of out-group members.

But I want to know what you make of the conflict between your basic hypothesis and the work I've described.

Is it possible that if your evidence is right, it really isn’t getting at what is driving political polarization on facts of policy significance, given that we have evidence that that sort of phenomenon doesn’t reflect bounded rationality?

Alternatively, do you think the body of work I've described doesn't really support the inference that identity-protective reasoning is *not* associated with bounded rationality?

Or maybe this a situation where both bodies of evidence bear the inferential significance being attributed to it (by you in case of your work, by me in case of mine) & we just have to try to take all this on board & aggregate it in some Bayesian fashion?

If so, is there some set of observations we can make that will give us more reason than we have now that one or the other position is the right one?

Refs

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.

Kahan, D. M., Peters, E., Dawson, E., & Slovic, P. (working). Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 1-43 (2015).

Kahan, D.M. (in press_a). The expressive rationality of inaccurate perceptions of fact. Brain & Behav. Sci.

Kahan, D.M. (in press_b)The Politically Motivated Reasoning Paradigm.Emerging Trends in Social & Behavioral Sciences.

Stanovich, K.E. & West, R.F. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences 23, 645-665 (2000).

April 30, 2016 | Registered CommenterDan Kahan

@Daniel--

An addendum:

Here's a nice recent study on CRT & various biases, including overconfidence: Noori, M. Cognitive reflection as a predictor of susceptibility to behavioral anomalies. Judgment and Decision Making 11, 114 (2016). It uses the same testing strategy you do to measure overconfidence, and it finds CRT counteracts it.

If CRT predicts resistance to overconfidence, then wouldn't you expect it to predict less of the *kind* of biased processing of information that generates polarization on facts relating to issues like climate change, gun control, & like?

Whereas if individuals w/ highest CRT scores are most likely to display those effects, then that would seem to be evidence contrary to your theory?

April 30, 2016 | Registered CommenterDan Kahan

"This account of identity-protective reasoning predicts that people will use their reason to form the judgment that members of other groups are stupid & evil. Experimental evidence supports that prediction: people construe evidence of the open-mindedness of others in just that way; people highest in System 2 reasoning proficiency do it all the more (Kahan 2013)."

Be careful about the possibility of 'affirming the consequent' here. Making a prediction and finding it true doesn't necessarily add much support to the hypothesis. It's an example of the invalid reasoning: A implies B, B is true, therefore A is true.

""motivated reasoning" involves assessing likelihood ratio to new information on basis of criteria that promote some goal collateral to truth of the proposition that is the object of one’s priors (e.g., Kahan in press_b)."

An alternative goal one might have is limiting the amount of mental calculation one has to do to check a conclusion. People use heuristics and take shortcuts, knowing that they're unreliable, but finding them reliable enough to justify the risk of error.

For example, a physicist might approximate a mechanical problem by assuming rigid bodies subject to Newtonian dynamics. It's not true, but it's probably close enough and it's a lot easier to calculate with than more accurate models. Does this sort of trade off constitute what you would call "motivated reasoning", given that it involves goals other than the pure seeking of truth?

April 30, 2016 | Unregistered CommenterNiV

@NiV--

I have no idea what you are trying to say here w/ "affirming the consequent...." Would you have said this to Arthur Eddington when he confirmed predictions of general relativity by observing the bending of light during an eclipse?

Theories make predictions. Experiments either corroborate them or they don't. If they do, then no one says, "oh, you predicted that, so it doesn't add much." They say, was the experiment valid; and how much evidentiary weight does it have in relation to hypothesis vs. competitors.

And no, what you are describing doesn't fit my definition of MR -- viz., having a goal for information processing independent of figuring out the truth. One whose goal is to improve on her priors can make a decision about how much effort to expend to determine the likelihood ratio associated with new information; the goal is still to form a more accurate assessment of the probability of the hypothesis than one had before.

April 30, 2016 | Registered CommenterDan Kahan

Dan - wow - this is great, thanks so much. Will reply soon, hopefully tomorrow, also to Joshua, thanks to you too

May 1, 2016 | Unregistered Commenterdan s

"Would you have said this to Arthur Eddington when he confirmed predictions of general relativity by observing the bending of light during an eclipse?"

Yes.

"Theories make predictions. Experiments either corroborate them or they don't. If they do, then no one says, "oh, you predicted that, so it doesn't add much.""

This is the basis of the 'null hypothesis' method and Popper's falsifiability principle.

The following logical steps are invalid:
A implies B
B is true
Therefore A is true.

However, the following similar-looking sequence is valid:
A implies B
B is false
Therefore A is false.
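The two patterns can be checked mechanically by brute-forcing the truth table; a quick Python sketch, purely for illustration:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

rows = list(product([True, False], repeat=2))  # all truth assignments for (A, B)

# Affirming the consequent: from (A -> B) and B, infer A.
# Valid only if A holds in EVERY row consistent with the premises -- it doesn't.
affirming_consequent_valid = all(a for a, b in rows if implies(a, b) and b)

# Modus tollens: from (A -> B) and not B, infer not A.  This one does hold.
modus_tollens_valid = all(not a for a, b in rows if implies(a, b) and not b)

print(affirming_consequent_valid)  # False
print(modus_tollens_valid)         # True
```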

As Popper pointed out, this latter sequence is how modern science works: you never 'confirm the theory'; instead you 'eliminate all the alternatives'. (The former does have a respectable pedigree, though - it comes from the 'Inductivist' approach pioneered by Sir Francis Bacon, and is deeply embedded in scientific history.)

You propose a null hypothesis A which is the hypothesis you want to refute - that there is no effect or that the existing belief stands unmodified. You make a prediction B from it, which distinguishes it from the alternatives. You then perform the experiment and show that B doesn't happen, that the prediction is false, and the null hypothesis has been rejected.

But if the null hypothesis is not rejected, that does not imply that the null hypothesis is true. Failure to reject a null hypothesis - the thing used to generate predictions - generally "doesn't add much".

Thus Arthur Eddington's experiment took as the null hypothesis the existing belief that light would not be deflected by gravity (I'm simplifying), made a prediction that the apparent position of the star wouldn't move, and refuted it by observation. That constitutes real progress: the pre-existing pre-Einstein theory is rejected. But this does not show that general relativity is true! There are infinitely many alternative hypotheses that would predict such a shift, general relativity is only one of them. (And probably the wrong one, too. It was later found to be incompatible with quantum field theory. We're not sure which, if either, is correct, but the most likely answer is 'neither'.)

Science is like sculpture - the shape is created and the truth revealed by chipping away all falsehood, by testing each piece and removing it if it fails. If you test a piece and decide *not* to remove it for the time being, no immediate progress is made. Maybe you'll remove it later. Only when a hypothesis has been tested many times and survives all tests, over a long enough time that you would expect any flaws that existed to have been revealed, can you tentatively judge it to be a part of the final shape.

Confirmations only provide evidence to the extent that they eliminate alternatives. If no alternatives are eliminated - if you make a prediction from a hypothesis that all the alternatives would predict as well - then no evidence is provided. Similarly, if only a few alternatives are eliminated, then only weak evidence is provided. It doesn't matter how firmly your hypothesis predicts the effect - if the competitors can predict it as well, nothing is gained. The likelihood ratio is P(Obs|H_null) / P(Obs|H_alt). Confirmation only says P(Obs|H_null) is large, but that means nothing unless P(Obs|H_alt) is small.
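The point that confirmation is only as strong as the alternatives it rules out can be made numerically. A minimal sketch with invented numbers (a single lumped alternative hypothesis, for illustration only):

```python
def posterior(prior, p_obs_given_h, p_obs_given_alt):
    """P(H | Obs) by Bayes' rule, treating everything-but-H as one alternative."""
    num = p_obs_given_h * prior
    return num / (num + p_obs_given_alt * (1 - prior))

# H strongly predicts the observation, but so does the alternative:
# the posterior barely moves off the 0.5 prior.
print(posterior(0.5, 0.95, 0.90))  # ~0.514

# Same confirmation, but the alternative does NOT predict the observation:
# now the confirmation carries real weight.
print(posterior(0.5, 0.95, 0.10))  # ~0.905
```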

In this particular case, identity-protective reasoning is only one particular hypothesis about why someone might act as they do. Many more can be constructed, and in a number of the examples you give, I think there are others that seem to me more plausible.

It's good and necessary work. You're doing the right thing - trying to refute your own theory by applying tests, and showing that it survives them. I'm just cautioning you to be careful about seeing confirmed predictions as "support" for the identity-protective hypothesis, unless you can be certain that no other plausible explanations for the observations can be found.

Failed attempts to refute a theory do constitute support for it, but only very slowly.

May 2, 2016 | Unregistered CommenterNiV

"And no, what you are describing doesn't fit my definition of MR-- viz., having a goal for information processing independent of figuring out truth. One whose goal is to improve on her priors can make a decision about how much effort to expend to determine the likelihood ratio associated with new infomration; the goal is still to form a more accurate assessment of the probability of the hypothesis than one had before."

Thus there is an alternative hypothesis - that people decide how much effort to expend checking an argument depending on their prior beliefs or identity-protective motives or other politically-correlated factors. The decision not to expend effort checking confirmations of what you already know is aimed at obtaining a more accurate assessment of the truth - you are unlikely to change your mind, and you are less likely to find an error, reading the details of what appears at first glance to be correct. When you have limited resources, you make the most progress by expending them on the areas most likely to teach you something new. So you only thoroughly check arguments that contradict what you already believe. If you fail to identify anything wrong with them, you may then modify your belief.

Identity-protective motivations would apply whatever your degree of scientific literacy - if your aim is simply to avoid uncomfortable beliefs, then people who don't have any truth-seeking rationale for their beliefs can avoid them even more easily. Just reject it as wrong, simply because that's what the enemy tribe say. But they don't. People require a justification for rejecting an argument that supports an uncomfortable conclusion, which scientific literacy makes them more able to find, and this indicates their aim is still truth-seeking.

Or to summarise, it's not the content of their reasoning that is "motivated", but their decision whether they need to reason at all. The decision not to reason is still motivated by truth-seeking; it's just that they believe they already know the truth, and so don't need to.

May 2, 2016 | Unregistered CommenterNiV

Ok - my long-awaited reply to DK - thanks again for the great comment. (By the way, I appreciate your non-overconfident/info-seeking tone; appropriate for those of us studying this kind of thing, but still not always the case.)

Yes, the discussion of motivated reasoning (MR) in my paper is not as clear as it could be, and MR is a big issue.* Your definition of MR is what I did mean to refer to - you say this is not a mechanism (for driving out-party feelings); is this because you mean it's a class of mechanisms, or a characteristic of a class of mechanisms? Either way, I'm with you; in addition to social-image identity-protective cognition, I'd include internal motivations (ego, identity) as well, but this doesn't matter much for purposes here. For short I will refer to just MR in the rest of this post. And how about BR for bounded rationality (non-MR systematic biases)?

-You write: "Yet, as I've explained, the opposite is true: this kind of information processing is associated with—magnified by—rationality, or at least is in all the ways we have been able to measure the association that would support inferences on this question."

Wow, yes, this does seem at odds with my basic claim, so this is a really interesting/important discussion. That's my excuse for why this goes on for a while (brace yourself).

My impression of your work and the related literature was, to oversimplify, that you'd made strong arguments that identity (MR) was the dominant force driving biased beliefs on climate change and similar topics, with some key evidence being that more numerate/educated/higher cognitive ability types tend to be more biased. First, I don't think you're saying this, but it's worth noting clearly that these factors (numeracy etc) are not equivalent to rationality**. Second, re your comment, I'm not aware of evidence that, holding everything else fixed (ideology, party strength, numeracy, etc), beliefs about factual topics become more biased as a measure of rationality increases (say CRT, or even better, the biases I focus on in my papers, overprecision and the false consensus bias). I checked the Hamilton papers you cite in your 2015 paper and it doesn't look like they quite do this. I saw a reference to CRT in your paper but did not see an analysis of just this (in yours or Hamilton's). But I could definitely be missing something here (if so pls let me know!). I do buy that aspects of S2/analytic reasoning can enhance MR-driven bias. But the claim that 'bias re factual topics is an increasing function of rationality (for topics with truth "opposed" to motivation)' seems too strong. Maybe this isn't quite what you mean anyway. If it is what you meant, I might want to discuss the evidence/future investigation with you further 'offline'.

Either way, suppose the strong version of this claim is true: holding ideology, numeracy etc fixed, less BR bias means more climate (and other?) biases. A few other comments then.

-Would this mean we should expect BR to have an analogous effect on my outcome, out-party dislike? I'm not sure, but doubtful. BR (in particular overprecision, thinking you know more than you really do) may apply more to beliefs about people than to beliefs about more abstract/'scientific' topics like climate. If I'm an R I might think I can judge a D, even if I don't think I can form an intelligent opinion about GMOs. So it's possible the context is different enough that the relative importance of MR/BR factors could vary. This is likely worth thinking through some more.

-But suppose, as you allude to, more rational types should feel more hate just as they are more biased re climate. How could we reconcile this with my results? One possibility is that my BR variable is actually so badly measured that its correlation with rationality is the opposite of what it's supposed to be. This is always possible, but I do think it's unlikely (especially since I get the strongest results for less educated respondents, Table 6 - the mismeasurement story would then imply hate increases in rationality mostly for the less educated, and I don't think the literature supports that). A more likely scenario is that my BR variable (OC) is picking up omitted party-strength effects (those who have higher overprecision are stronger party identifiers - stronger than what's captured in the survey responses already controlled for - and these guys feel more hate). This would be consistent with the fact that my strongest result is that the instrumental effect of BR on dislike is via party strength (that is, via identity/MR; Table 5). In this case, we'd all be at least partly right - hate is caused largely by strength of identity (MR), but this identity is caused in good part by BR (so BR still indirectly causes hate). Personally I don't think this is the whole story, but I don't think I can rule it out with my data. I should likely discuss this further in the paper, and will try in revision.
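If it helps, the "indirect effect via party strength" story can be illustrated with a toy simulation (variable names and coefficients invented for illustration, not taken from the paper): even with no direct path from BR to dislike, BR shows a full total effect through the mediator, and the effect disappears once the mediator is controlled for.

```python
import numpy as np

# Toy mediation sketch (all coefficients invented): overprecision (BR)
# raises party-identity strength, and party strength raises out-party
# dislike. BR then has an indirect effect on dislike with no direct path.
rng = np.random.default_rng(0)
n = 100_000

overprecision = rng.normal(size=n)
party_strength = 0.5 * overprecision + rng.normal(size=n)  # a = 0.5
dislike = 0.8 * party_strength + rng.normal(size=n)        # b = 0.8, no direct path

def slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = slope(overprecision, dislike)  # total effect ~ a*b = 0.4
print(round(total, 2))

# Controlling for the mediator removes the effect (direct path ~ 0):
X = np.column_stack([np.ones(n), overprecision, party_strength])
direct = np.linalg.lstsq(X, dislike, rcond=None)[0][1]
print(round(direct, 2))  # close to 0: all of BR's effect runs through party strength
```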

-Last - to be clear - taking my results/interpretation as they are, I do not mean to imply I am ruling out MR factors - I am just claiming that BR factors are (also) important. (So the title is a bit misleading; hopefully I get a little poetic license here, and I do try to clarify in the text.) The MR measure I use in Table 9 is my best attempt to get at this issue (beyond party identity etc), but it is far from a perfect measure.


*This is partly b/c I think the distinction between MR and non-MR overconfidence still isn't as clear as it could be, though I think there's been good progress here in recent yrs; the distinction between overprecision and overoptimism is very useful (see e.g. http://www.jstor.org/stable/43611009?seq=1#page_scan_tab_contents, which still isn't often used in psychology, e.g. the Noori paper you cite - thanks for that). Overprecision refers to the variance, overoptimism to the mean. But there are motivated aspects of overprecision - it's nice to think we know something better than we really do. I don't know of work on this (motivated overprecision); if anyone does pls let me know.

**Here are a couple of recent ones from econ that I think find low correlation between cognitive ability and BR:
https://www.dartmouth.edu/~jzinman/Papers/WeAreAllBehavioralDraft1.pdf
http://www.columbia.edu/~po2205/papers/DeanOrtoleva_Relationship.pdf

May 2, 2016 | Unregistered Commenterdan s

Joshua, I'll keep my reply here short given I've likely already written way too much here.
-sounds like we're on the same page re the combo of various biases and motivation causing out-group animus (which is not a given!)
-your claim about greater diversity within than across groups is interesting, and off-hand I don't know of work that addresses this precisely, but it's something that should be done if it hasn't been already
-your pt about being reflexively dubious of differences across groups being large seems consistent with the claim that we intuitively under-estimate these differences :). Don't forget the false consensus bias, an underrated one I think. But your point about these differences seeming not to cause so much conflict within groups is a good one - though group members do not always get along, of course - look at the party nomination campaigns this yr
Thanks again

May 2, 2016 | Unregistered Commenterdan s

@DanS--

I think your reply is very helpful.

But I would say that it is hard for me to understand what it would mean to examine whether " as a measure of rationality increases (say CRT, or even better, biases I focus on in my papers, overprecision and the false consensus bias), beliefs about factual topics become more biased" after "holding everything else fixed (ideology, party strength, numeracy, etc)."

To begin, Numeracy is one measure of proficiency in the sort of reasoning that evinces resistance to cognitive bias-- or overreliance on heuristic reasoning. It is in fact a superior measure of it than CRT is-- so we wouldn't want to "control" for that.

But more importantly, I don't think we want to control for ideology or any other disposition we care to treat as representing the identity that is being protected by IDPMR.

The claim is that higher proficiency predicts motivated reasoning in the direction of one's disposition & in proportion to the strength of it. If we "control" for identity, there's nothing left of the claim that admits of measurement.

The right model doesn't hold "ideology fixed" & examine how CRT or Numeracy or anything else influences an outcome variable that reflects perceptions of policy-relevant facts (or information processing thereon). It is one that treats ideology & CRT as interacting w/ each other, so that CRT predicts greater "bias" (in the sense of directionality!) of belief or information processing in the direction associated with conclusions congenial to identity.
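For concreteness, here's a minimal sketch (invented variable names and coefficients, simulated data - not CCP data) of the interaction specification described above: the coefficient on the ideology x CRT product term is what carries the "polarization increases with proficiency" claim.

```python
import numpy as np

# Sketch of the interaction specification:
#   belief = b0 + b1*ideology + b2*crt + b3*(ideology * crt)
# If b3 != 0, higher CRT magnifies the ideological divide -- polarization
# grows with reasoning proficiency rather than shrinking.
rng = np.random.default_rng(1)
n = 50_000

ideology = rng.choice([-1.0, 1.0], size=n)  # -1 = left, +1 = right
crt = rng.uniform(0, 1, size=n)             # reasoning proficiency score
belief = 0.2 * ideology + 0.6 * ideology * crt + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), ideology, crt, ideology * crt])
coefs = np.linalg.lstsq(X, belief, rcond=None)[0]
b_interaction = coefs[3]
print(round(b_interaction, 2))  # recovers ~0.6: the partisan gap widens with CRT
```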

*But* ... what's making me pause here is your effort to model your way out of the endogeneity of using partisan id as a predictor of out-group dislike in JBM. I can see why you feel constrained to do that to test your hypothesis; and I don't think it is a mistake to try to do it!

Maybe partisanship is a kind of disease of the mind, a kind of virus that gets inside of reasoning proficiencies & makes them do its bidding. It's a "bias" that has the effect of making normally bias-resistant dispositions contribute in a biased way to id-protective information processing. *Real* rationality has to be measured in people who aren't already "biased" by being partisans... That might reconcile things.

Seems wrong, though! Certainly many of history's "best" reasoners were strongly opinionated; hard to believe that their political convictions-- right (Hayek), left (J.S. Mill)-- weren't themselves an expression of the power of their reason as opposed to an evil kind of mental cancer that was eating away at it...

So I'm confused about how to think about this. I'll be smarter when I dispel my confusion by working through the paper carefully-- or dumb enough that a smart person would be able to see what's wrong at that point & straighten me out.

On behalf of all 14 billion readers of the blog, thanks for a great post & reply!

May 2, 2016 | Registered CommenterDan Kahan

Dan S -


==> your pt about being reflexively dubious of differences across groups being large seems consistent with claim that we intuitively under-estimate these differences :).

Could be. But "is consistent with" doesn't really get us very far, as it's also consistent with my belief that most people over-estimate the differences in values across different groups.

==> ...though group members do not always get along of course - look at the party nomination campaigns this yr

Of course... but what does that tell us? For example, when Boehner says that Cruz is Lucifer in the flesh and a miserable SOB. (Actually, that immediately reminded me of the Republicans who say that Obama is the anti-Christ and a tyrant and an egocentric narcissist who displays values such as "hating America" and wanting to advance an Islamic state.)

But is Boehner's animosity because they value loyalty to different extents? I mean, no doubt, part of Boehner's animosity towards Cruz is explainable by Cruz's lack of loyalty to the Republican Party establishment - that Boehner is a part of - but how could we know if any part of it is attributable to differences in valuing loyalty? Who values loyalty more - Cruz, because he's more conservative? What about the Dems? Do Sanders supporters hate Hillary (and her supporters) because, in being more conservative, Hillary values loyalty more than Sanders? But if so, then why do Boehner supporters hate Hillary? Do they hate Hillary more than they hate Sanders, who, in being more liberal, so the theory goes, values "loyalty" less?

I dunno. The causal mechanism behind the identity-based hatred seems awfully messy to me, and I think not likely explained by something as simple as a clear differentiation in "values."

May 2, 2016 | Unregistered CommenterJoshua

Dan S -


W/r/t some of your comments to Dan K....(I should probably stay out of the discussion between you eggheads and stick to things that are easier to understand, but I often don't do what I should do and I'm hoping that I might be able to learn something if either you or Dan K might respond).

==> First, I don't think you're saying this but worth noting clearly that these factors (numeracy etc) are not equivalent to rationality**.

If I understand your point... it touches on one problem I have with Dan K's assertion that a propensity towards a particular kind of reasoning causes increases in polarization about climate change --- the problem being that I have questions about how generalizable (across different "domains" of reasoning) are the attributes that he measures with his tests for how people reason.

==> Second, re your comment, I'm not aware of evidence that holding everything else fixed (ideology, party strength, numeracy, etc) as a measure of rationality increases (say CRT, or even better, biases I focus on in my papers, overprecision and the false consensus bias), beliefs about factual topics become more biased.

And if I understand this point, this touches on another problem I have with the causality Dan K. asserts w/r/t the association between certain kinds of reasoning and/or knowledge of the science of climate change, and polarization. My own sense (not backed up with empirical evidence) is that increased proficiency in certain kinds of reasoning, and knowledge on certain subjects, are (at least potentially) associated with certain social or cultural identity groups. I suspect that it's quite possible that the factors that Dan asserts are causal are more likely a mediator (or moderator?) between the association with identity group and polarization on issues such as climate change.

May 2, 2016 | Unregistered CommenterJoshua

@Joshua--

At the risk of sending us back into if not a bottomless pit then one as deep as the Chicxulub crater, realize that @DanS is using observational data to draw inferences about causation of politically moderated biases.

He's doing his best to rule out that the correlations he is observing are being "caused" by something else. But that's all one can ever do (no matter what method one uses).

But in any case, if your view is that political identity *causes* reasoning styles, it shouldn't matter that he & I disagree about whether greater proficiency in reasoning mitigates (his view) or aggravates (mine) the sorts of biased forms of information processing associated with political polarization. We both have the causal arrow going the wrong way.


As for " certain kinds of reasoning ... [being] (at least potentially) associated with certain social or cultural identity groups w/ identity," neither CRT nor Numeracy is correllated meaningfully w/ political identity or cultural outlooks; nor is Ordinary Science INtelligence.

They are, very weakly, correlated w/ religiosity -- but since the motivated reasoning studies look at how identity influences beliefs or information processing *conditional* on reasoning proficiency (moderation), the results aren't attributable to such correlations.

I certainly accept that there could be correlations between cultural dispositions & substantive areas of knowledge such as climate change science. The trick is how to disentangle knowledge & identity so that one could actually figure that out.

And stop this "maybe I should stay out" business: the only 2 explanations for not getting something are (a) you aren't trying hard enough; & (b) the person explaining something to you is doing a shitty job.

I've ruled out (a) in your case. Although my evidence is all observational...

May 2, 2016 | Registered CommenterDan Kahan

Dan K. -

==> But in any case, if your view is that political identity *causes* reasoning styles, it shouldn't matter that he & I disagree about whether greater proficiency in reasoning mitigates (his view) or aggravates (mine) the sorts of biased forms of information processing associated with political polarization. We both have the causal arrow going the wrong way.

That isn't my view. "Political identity" is too restrictive to capture what I'm suggesting. And "cause" is too strong.

My guess is that there may be an association between social/cultural identity/affiliation and a tendency towards particular reasoning styles, especially when those reasoning styles are assessed with measurement instruments which may, in turn, be particularly sensitive to artifacts of specific cultural/social identities. And, in turn, those same social/cultural identities/affiliations may be associated with political identity and beliefs about issues such as climate change.

==> neither CRT nor Numeracy is correllated meaningfully w/ political identity or cultural outlooks; nor is Ordinary Science INtelligence.

Do you think that CRT, numeracy, or "ordinary scientific intelligence" - as measured using your instruments - correlate with the condition of being an East Coast Jew as compared to a Catholic in Oregon, or a Baptist in Oklahoma, or a Protestant in New Hampshire? I would guess that they do, at least to some extent, as would political identity, as would orientation on an issue such as climate change.

==> And stop this "maybe I should stay out" business: the only 2 explanations for not getting something are (a) you aren't trying hard enough; & (b) the person expalining something to you is doing a shitty job.

If you and another geek/professional analyst in this field are having a discussion and not having any trouble interpreting what each other are saying, and I'm having trouble understanding the convo, then there is a 3rd possible explanation.

I was reading a critique of Haidt's book on the "Righteous Mind," and came across this:

Haidt approvingly quotes Phil Tetlock who argues that “conscious reasoning is carried out for the purpose of persuasion, rather than discovery.” Tetlock adds, Haidt notes, that we are also trying to persuade ourselves. “We want to believe the things we are saying to others,” Haidt writes. And he adds, “Our moral thinking is much more like a politician searching for votes than a scientist searching for truth.”

Haidt criticized the author of that passage for inaccurate quotes... but I think there's something to work with there, anyway...

My guess is that the "purpose of persuasion" is (1) meaningfully varied in terms of importance in association with certain social/cultural orientations and (2) associated, stylistically, with certain social/cultural orientations. Yes, in trying to persuade other people, IMO, we're all trying to persuade ourselves that we are right and moral. But there are different styles in how people go about persuading ourselves and others, and different levels of importance placed upon developing the skills and abilities necessary to master those styles. For example, we differ culturally in terms of our comfort with conflict and disagreement versus consensus and uniformity. Someone from a cultural identity which prioritizes convincing oneself through a process of abstracting a problem, approaching it analytically, and arguing about it with someone as a way to reaffirm their beliefs and convince others has a particular kind of motivation for developing the skills necessary to carry out that goal. Someone from a cultural identity which prioritizes consensus and shared values as a way to convince themselves and others that they are right would have a different kind of motivation for developing abstract reasoning skills.

May 3, 2016 | Unregistered CommenterJoshua

-re DK Monday 7:48 pm update - that is darn cool, got to be a solid Gelman Cup entry. This may be motivated reasoning on my part, but I'm afraid I'm still a bit of a ways from you on the implications of this graph for the relation between BR and MR (and re this topic more generally). I see this graph and think: the gap may grow for higher CRT scores, but these scores may be picking up education/cognitive ability effects. And these effects are distinct from other BR biases (the various blinders we have - in particular, being overconfident in what we know). Within the set of Yale Law Profs who know stats (and get the same OSI score), some will tend toward more overprecision than others.

-re your comment on ideology, I think we're not far apart here; looking at, say, your 2013 JDM paper, you control for ideology as I was thinking, and I agree the interaction term (ideology-CRT) is key. But controlling for ideology does not eliminate the variation we're interested in. Within the group of Yale Law Profs who are staunch liberal Dems, there's still variation in BR (which you'd measure with OSI and I'd whine about?)

-more generally, I think there's more good work to be done in sorting out motivated/non-motivated biases! Thanks again for this discussion - I learned a ton from it

May 3, 2016 | Unregistered Commenterdan s

Hey Joshua, appreciate the humble/info-seeking tone, a few more quick pts.
-I do think intra-party conflict could be driven by differences (in values, broadly defined) in a way analogous to inter-party conflict. Bernie supporters 'hate' Hillary b/c they think she's cynical, self-serving, careerist, whereas Hillary/her supporters think pragmatism, compromise etc are necessary to achieve goals to improve society. Her choices and Bernie's differ b/c of different 'values', which lead to harsh judgments about the other's character
-I am with you re climate change being a pretty unique context and the effects of numeracy etc likely varying quite a bit across contexts (I hit on this in my first reply (the long one) to DK)
-Not sure what you mean in your very last pt, but afraid this really should be my last comment in this thread - communication is hard!!
Thanks again, best - Dan

May 3, 2016 | Unregistered Commenterdan s

@DanS--

The point of the plot is to pick up the "cognitive ability" effects measured by CRT -- the best measure of the disposition to use System 2 reasoning.

On "other things": the sample is a general population one. I wouldn't "control" for education or anything else related to cognitive ability. CRT is correlated w/ all of those in the real world, and will be correlated in the same way w/ them in the people in the study; if we partial them out, then we are no longer modeling real people, in whom such abilities come in signature combinations. We live in a world in which real people identified by differences in ideology disagree w/ each other on things. The question is whether those real people disagree b/c they vary in reasoning proficiency, for which CRT is one measure.

If you have non-representative samples, then I agree you might have confounds/biases if you don't try to remove the contribution that other covariates are making to the model estimates. But in a representative sample, leaving the covariates out is better if there is any reason to think they are indicators of the latent disposition that one is measuring with something like CRT.
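A toy simulation of that point (all numbers invented, not from any actual study): when two measures are both noisy indicators of the same latent disposition, "controlling" for one soaks up part of the disposition's signal and shrinks the other's coefficient, even with no confounding at all.

```python
import numpy as np

# Invented setup: a latent disposition drives the outcome; CRT and
# education are both noisy indicators of that same disposition. Adding
# education as a "control" partials out real signal and attenuates the
# CRT coefficient -- nothing confounding is going on.
rng = np.random.default_rng(2)
n = 200_000

latent = rng.normal(size=n)                          # the disposition itself
crt = latent + rng.normal(scale=0.5, size=n)         # noisy indicator 1
education = latent + rng.normal(scale=0.5, size=n)   # noisy indicator 2
outcome = latent + rng.normal(scale=0.5, size=n)     # driven by the disposition

def ols(X, y):
    """Return OLS coefficients (intercept first) for columns in X."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), *X]), y, rcond=None)[0]

alone = ols([crt], outcome)[1]                  # CRT coefficient, no controls
controlled = ols([crt, education], outcome)[1]  # CRT coefficient, education "controlled"
print(round(alone, 2), round(controlled, 2))    # the second is much smaller
```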

Understand this might be the last comment in the thread -- you've been super generous in helping us learn things -- but if you have a different philosophy, I would love to hear it, b/c I think a lot about this issue -- the appropriate specification of models for examining these effects & the problem w/ "overspecified" ones that control for things they shouldn't -- & wish there were more discussion of it in the literature; some decent ones listed below. But I might have to visit you in person to find out what you think & why!

Berry, W.D. & Feldman, S. Multiple regression in practice. (Sage Publications, Beverly Hills; 1985), p. 48.

Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Edn. 3rd. (L. Erlbaum Associates, Mahwah, N.J.; 2003), p. 419.

Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. (Cambridge University Press, Cambridge ; New York; 2007), p. 187.

Lieberson, S. Making it count : the improvement of social research and theory. (University of California Press, Berkeley; 1985), pp. 14-43

May 3, 2016 | Registered CommenterDan Kahan

@Joshua--

Speaking for myself, I'm not much interested in being part of a scholarly conversation that *you* as a reflective & curious person can't understand *if* you try hard enough. I don't care to discuss matters w/ scholars who feel differently.

I know you try hard. So if you don't get it, that's prima facie evidence that I need to try harder.

May 3, 2016 | Registered CommenterDan Kahan

Dan S -

Thanks for the convo. On the chance that you'll still read another (long) comment...

==> -I do think intra-party conflict could be driven by differences (in values, broadly defined) in a way analogous to inter-party conflict.

Theoretically possible, I'm sure. But I'm interested in something more substantial than theory. Perhaps it gets down to how "values" are defined. I consider "values" to be reflected in how people live their lives, interact with neighbors, treat family, etc. As such, I see little evidence that people are differentiated by "values" in ways that align with their ideological alignments. More often, I think that what I see is people starting with a differentiation in positions on certain issues, in accordance with group identity, and then falsely and over-confidently reverse engineering to impute high-minded "values" to themselves and the group they identify with and lowly "values" to those that they don't identify with.

Of course, the fact that people are over-confident in assigning values, differentially, based on group-identity differences, doesn't mean that there are no such differences in values in association with political identification. But even if such differences are real, the question remains quite open, at least w/r/t any solid evidence that I've seen, how much those differences actually explain ideological partisanship.

I certainly know that in my personal experience, I have encountered many people who have mistakenly thought that they knew what my values were, from a process of reverse engineering from the positions I took on polarized issues. And I believe that I see that wrong-headed heuristic being used all of the time. Libs think that cons don't value or care about the disadvantaged. Cons think that libs want a dictatorial, authoritarian state to lift the personal responsibility to work hard off of people. What nonsense!

But it gets even better. People switch in their orientation on "values" in accordance with their positions on specific issues. Cons switch from saying that a health insurance mandate is an important component of the value of "personal responsibility" to saying that a health insurance mandate is the epitome of statist values and government overreach. Cons see government entitlements as undermining the "value" of a work ethic or self-sufficiency until someone comes to take away Medicare or Social Security. They see the value of States' Rights as important to hedge against federal government overreach until a state wants to determine how its citizens' votes should be counted in a presidential election... and on and on... I could easily give a similar litany for the other side of the political aisle.

==> Bernie supporters 'hate' Hillary b/c they think she's cynical, self-serving, careerist, whereas Hillary/her supporters think pragmatism, compromise etc are necessary to achieve goals to improve society. Her choices and Bernie's differ b/c of different 'values', which lead to harsh judgments about the other's character

This looks to me like argument by assertion. What evidence do you have to show the direction of causality? Perhaps Bernie supporters "hate" Hillary because they are identified differently than she, and so they impute a different set of values to her than those they like to assign to themselves.

How does one come to see Bernie, someone running for president, as not being "self-serving" in comparison to someone who has spent a significant portion of her life advocating on behalf of others? Do Bernie's supporters value "pragmatism" any less than Hillary supporters, or do they see different goals as being more pragmatic? I might consider supporting Bernie because I think that his populist rhetoric is more "pragmatic" in the sense of differentiating his views from Republicans' and getting minorities and working class whites and young people to the polls to vote for a Democrat in contrast to the "pragmatism" of the New Democrats that try to out-Republican the Republicans by appealing to moderates. Are Bernie's views on gun control more "pragmatic" than Hillary's?

IMO, the value differentiation you are describing is mostly shallow, largely media- and meme-driven, and based mostly on policy stances derived from political calculation rather than from value differences. Saying that Bernie and Hillary supporters value "pragmatism" to different extents is easy, but I think that is something that would be hard to establish empirically. Libs and cons may answer polling questions in ways that differentiate themselves on questions of "loyalty," but are they really any different in how they manifest the "value" of loyalty to their neighbors, or their families?

Again, my ideas are based on the framework of conflict resolution, where the key is differentiating positions, on which there is disagreement, from interests - and, I would say, values - which are largely shared. Again, I think that as a general principle, when it comes to something like values, there is far more intra-group diversity than there is inter-group differentiation. I would be happy to learn that I'm mistaken if you could provide some evidence, but noting that Haidt has some evidence that cons value loyalty more than libs, or that he differentiates cons and libs by virtue of the other "moral" divides he outlines, even if it were true (I admit I'm dubious), leaves a lot to be desired as a way of explaining the mechanism behind partisan animosity. Thus, explaining inter-group animosity on the basis of value differences seems to me insufficient.

May 3, 2016 | Unregistered CommenterJoshua

Dan K.

==> I know you try hard. So if you don't get it, that's prima facie evidence that I need to try harder.

I appreciate the sentiment, but trust me, you wouldn't want to be tasked with the burden of having to explain all of your theory and analysis in a way I could understand. I, at least, have no such expectation. I'm content to glean what I can and ask for clarification when I can formulate a question that I think is decipherable.

May 3, 2016 | Unregistered CommenterJoshua

@Joshua--

Well, I did say "prima facie..."

May 4, 2016 | Registered CommenterDan Kahan

Joshua - thanks again - afraid I don't have time to respond in detail, but sounds like we have a ways to go before coming to consensus on what's meant by the term 'values'. Admit that the examples I gave were unsubstantiated, but they were just that, examples, to illustrate some mechanics of the ideas. And I understand your possible skepticism of what Haidt's up to nowadays, but I think it would be unfair to be dubious of his body of work showing fundamental differences in moral thinking across parties/ideological groups - I think that's pretty rock solid at this pt (of course I could be wrong), and not just the result of 1 or 2 small-N studies. That said, of course that work alone doesn't explain everything we want/need to know about partisan hostility (that's why I'm trying to hack away at this now myself). all best- Dan

May 4, 2016 | Unregistered Commenterdan s

Dan S -

Thanks for that response...

All fair points.

I'll try to go back and look at Haidt's work again - to see...

1) How it quantifies the magnitude of the differences he posits in moral values associated with political ideology.
2) Whether he addresses the question of comparing how values are manifest in people's lives, as opposed to how they are suggested by how people answer questions about their values (answers which, I would imagine, might often be a function of ideology and identity-protective cognition more than of values in and of themselves).
3) Whether he addresses the comparison of the magnitude of intra-group diversity to inter-group differentiation.

A couple more thoughts....

The Trump nomination is fascinating w/r/t questions of values and political identification.... so here we have a Republican nominee who...

1) Promotes interventionist government action to prevent anyone who traveled and/or worked in countries where there were cases of Ebola from entering the country...advocates that the government step in and prevent Muslims from entering the country...thinks we should build a wall to prevent people from crossing the border...says that women who have abortions should be punished...

who gains a high level of support from many Republicans who identify "small-government" as a defining, or at least fundamental, value.

2) Says that America is losing, has declined, is a mess, etc. (paraphrasing) and that he's going to "make America great again" and gains majority support from the same constituency that attacked Michelle Obama for having different values when she said that when Barack was gaining support for his candidacy, it was the first time in her adult life she felt proud of her country because "hope is finally making a comeback."

It is interesting that there seem to be quite a few Republicans who say they won't vote for Trump because they believe they have a different set of values than he does...but there is likely a significantly larger number of Republicans who will vote for him even though, while they agree with him on some of his positions, they would probably say that they have a different set of values than he does.

It certainly isn't coincidental that Cruz tried to play on that difference by labeling Trump as having "New York values."

But that strategy didn't seem to work for a lot of Republicans. Why?

Because they don't really think that they have different values than Trump?

Because there is some hierarchy of values where they see commonality with the most important ones even if they differ with him on others?

Because in reality, they align with Trump's identity signals and their group affiliation "trumps" their differentiation on "values?"

Because the calculus on how they define "values" is actually very difficult to unpack (e.g., he has a pretty wife, has a lot of money, seems to be nice to his kids, is willing to insult people in public and use curse words and not act differently in public than he might in private)...in other words, because they embrace "values" that wouldn't likely register in research paradigms, or at least wouldn't fall out along Haidt's clean and simplified "moral foundation" taxonomy?

Certainly, Haidt's "moral foundation" framework is only a model. It may be useful, but I think that its accuracy must be defined through an exploration of the extent to which it does or doesn't explain differences and similarities in how people live their lives (i.e., manifest their values).

May 5, 2016 | Unregistered CommenterJoshua
