Tuesday, July 16, 2013

Proof of ideologically motivated reasoning--strong vs. weak

A couple of weeks ago I posted the abstract & link to Nam, Jost & Van Bavel’s “Not for All the Tea in China!” Political Ideology and the Avoidance of Dissonance, and asked readers to comment on whether they thought the article made a good case for the “asymmetry thesis.”

The "asymmetry thesis"—a matter I’ve actually commented on about a billion times on this blog (e.g., herehereherehere,
 here  . . .)—is the claim that individuals who subscribe to a conservative or “right-wing” political orientation are uniquely or disproportionately vulnerable to closed-minded resistance to evidence that challenges their existing beliefs. 

The readers' responses were great.

Well, I thought I’d offer my own view at this point.  

I like the study. It's really interesting.  

Nevertheless, I don't think it supplies much, if any, evidence for the asymmetry thesis beyond what one would have had before the study. Consequently, if one didn't find the thesis convincing before (I didn't), then NJV-B doesn't furnish much basis for reconsidering.

One reason the study isn't very strong is that NJV-B relied on a Mechanical Turk sample.  I just posted a two-part set of blog entries explaining why I think MT samples do not support valid inferences relating to cultural cognition and like forms of motivated reasoning.

But even leaving that aside, the NJV-B study, in my view, rests on a weak design, one that defeats confident inferences that any ideological “asymmetries” observed in the study correspond to how citizens engage real-world evidence on climate change, gun control, the death penalty, health care, or other policies that turn on contested empirical claims.

NJV-B purported to examine whether “conservatives” are more averse to “cognitive dissonance” than “liberals” with respect to their respective political positions—a characteristic that would, if true, suggest that the former are less likely to expose themselves to or credit challenging evidence.

They tested this proposition by asking subjects to write “counterattitudinal essays”—ones that conflicted with the positions associated with subjects’ self-reported ideologies—on the relative effectiveness of Democratic and Republican Presidents.  Democrats were requested to write essays comparing Bush favorably to Obama, and Reagan favorably to Clinton; Republicans to write ones comparing Obama favorably to Bush, and Clinton favorably to Reagan.

They found that a greater proportion of Democrats complied with these requests. On that basis, they concluded that Republicans have a lower tolerance for actively engaging evidence that disappoints their political predispositions.

Well, sure, I guess.  If the two groups had demonstrated an equal likelihood of resisting the requests, then I suppose that would count as evidence of “symmetry”; by the same token, the Republicans’ greater unwillingness to comply is evidence the other way.

The problem is that it’s not clear that the intensity of the threat that the respective tasks posed to Republicans’ and Democrats’ predispositions was genuinely equal.  As a result, it’s not clear whether the “asymmetry” NJV-B observed in the willingness of the subjects to perform the requested tasks connotes a comparable differential in the disposition of Democrats and Republicans to engage open-mindedly with evidence that challenges their views in real-world political conflicts.

By analogy, imagine I hypothesized that Southerners were lazier than Northerners. To test this proposition, I asked Southerners to run 5 miles and Northerners to do 50 sit-ups. Observing that a greater proportion of Northerners agreed to my request, I conclude that indeed Southerners are lazier—more averse to physical and likely all other manner of exertion—than Northerners are.

This is obviously bogus.  One could reasonably suspect that doing 50 sit-ups is less taxing than running 5 miles. If so, then we’d expect agreement from more members of a group of people asked to do the former than from members of a group asked to do the latter—even if the two groups’ members are equally disposed to exert themselves.

Well, is it as “dissonant” for a Democrat to compare Bush favorably to Obama, and Reagan favorably to Clinton, as it is for a Republican to compare Obama favorably to Bush and Clinton favorably to Reagan? 

I think we could come up with lots of stories—but the truth is, who the hell knows? We don’t have any obvious metric by which to compare how threatening or dissonant or “ideologically noncongruent” such tasks are for the respective groups, and hence no clear way to assess the probative significance of differences in the willingness of each to engage in the respective tasks they were requested to perform.

So, sure, we have evidence consistent with “asymmetry” in NJV-B—but since we have no idea what weight or strength to assign it, only someone motivated to credit the “asymmetry” thesis could expect a person who started out unconvinced of it to view this study as supplying much reason to change his or her mind, given all the evidence out there that is contrary to the asymmetry thesis.

The evidence contrary to the asymmetry thesis rests on study designs that don’t have the sort of deficiency that NJV-B displays.  Specifically, the studies I have in mind use designs that measure how individuals of diverse ideologies assess one and the same item of evidence, and show that they are uniformly disposed to credit or discredit it selectively, depending on whether the researcher has induced the study subjects to believe that the piece of evidence in question supports or challenges, affirms or threatens, a position congenial to their respective group commitments.

One example involved the CCP study featured in the paper They Saw a Protest. There, subjects, acting as jurors in a hypothetical trial, were instructed to view a videotape of a political protest and determine whether the demonstrators physically threatened bystanders. Half the subjects were told that the demonstrators were anti-abortion activists protesting outside of an abortion clinic, and half that they were pro-gay/lesbian activists protesting “don’t ask, don’t tell” outside of a military recruitment center.

We found that what “Republicans” and “Democrats” alike reported seeing—protestors “blocking” and “screaming” in the face of “fearful” bystanders or instead noncoercive advocacy inducing shame, embarrassment, and resentment among those seeking to enter the facility—flipped depending on which type of protest they believed they were watching.

Are Republicans and Democrats (actually, we used cultural worldview measures, but we reported the results using partisan self-identification as well) “equally” invested in their respective positions on abortion and gay rights?

I don’t know.  But I don't need to in order to draw inferences from this design.  For however strongly each feels, they both were equally prone to conform their assessment of evidence to the position that was most congenial to their ideologies.

That’s evidence of symmetry in motivated reasoning. And I think it is pretty darn strong.

I’ve addressed this point more generally in previous posts that describe what counts as a “valid” design for an ideologically motivated reasoning experiment. In those posts, I’ve shown how motivated reasoning relates to a Bayesian process of information processing.

Bayesianism describes the logical operations necessary for assimilating new information or evidence with one’s existing views (which themselves reflect an aggregation of all the other evidence at one’s disposal).  Basically, one revises (updates) one’s existing view of the probability of a proposition (or hypothesis) in proportion to how much more consistent the new evidence is with that proposition as opposed to some other, alternative hypothesis—a property of the information known as the “likelihood ratio” (the ratio of how likely one would be to see that evidence if the proposition were true to how likely one would be to see it if the proposition were false).
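For readers who want the updating rule spelled out, this is the odds form of Bayes’ theorem that the paragraph above paraphrases (a standard statement, nothing specific to the studies discussed here):

```latex
% Odds form of Bayes' rule. H = the proposition (e.g., "the earth is heating up
% as a result of human CO2 emissions"); E = the new evidence.
\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
  \;=\;
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
  \times
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio (LR)}}
\]
% LR > 1 means the evidence supports H; LR < 1 means it supports not-H;
% LR = 1 means the evidence is uninformative.
```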

In Bayesian terms, the reasoning deficiency associated with motivated reasoning consists in the opportunistic adjustment of the likelihood ratio.  When they display ideologically or culturally motivated reasoning, individuals treat the new information or evidence as “more consistent” or “less consistent” with the proposition in question (the film shows the protestor “blocked entry to the building” or instead “made an impassioned verbal appeal”) depending on whether the proposition is one that gratifies or disappoints their motivating ideological or cultural commitments.

When people's reasoning reflects motivated cognition, their ideological commitments shape both their prior beliefs and the likelihood ratio they attach to new evidence.  As a result, they won't update their “prior beliefs” based on “new evidence,” but rather assign to new evidence whatever weight best "fits" their ideologically determined priors.  

Under these conditions, ideologically diverse people won’t converge in their assessments of a disputed fact (like whether the earth is heating up as a result of human CO2 emissions), even when they are basing their assessments on the very same evidence.
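A minimal simulation can make the contrast concrete. The sketch below uses invented numbers (it is illustrative only, not data from any study discussed here) to compare two Bayesian agents who give shared evidence the same weight with two “motivated” agents who each assign the likelihood ratio that flatters their own side:

```python
# Sketch: ordinary Bayesian updating vs. motivated (opportunistic-LR) updating.
# All numbers are invented for illustration.

def update(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def to_prob(odds):
    """Convert odds to a probability."""
    return odds / (1.0 + odds)

# Two agents with different priors about some proposition H.
skeptic_odds, believer_odds = 0.25, 4.0            # Pr(H) = 0.2 vs. Pr(H) = 0.8

# Case 1: both give each new piece of evidence the SAME weight (LR = 2).
for _ in range(5):
    skeptic_odds = update(skeptic_odds, 2.0)
    believer_odds = update(believer_odds, 2.0)
print("same LR:     ", round(to_prob(skeptic_odds), 3), round(to_prob(believer_odds), 3))
# -> 0.889 and 0.992: both move toward the evidence, and the gap between them shrinks.

# Case 2: motivated reasoning -- each side assigns the LR that fits its predisposition.
skeptic_odds, believer_odds = 0.25, 4.0
for _ in range(5):
    skeptic_odds = update(skeptic_odds, 0.5)       # discounts the evidence
    believer_odds = update(believer_odds, 2.0)     # credits the evidence
print("motivated LR:", round(to_prob(skeptic_odds), 3), round(to_prob(believer_odds), 3))
# -> 0.008 and 0.992: the same stream of evidence drives the two further apart.
```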

The study in They Saw a Protest involved a design aimed at testing whether individuals do this.  The information that the subjects received--the images displayed in the video--was held constant, while the ideological stake the subjects had in giving that information effect with respect to whether the protestors resorted to physical intimidation was manipulated.

The study found that subjects gave selective effect to the evidence--opportunistically adjusted the likelihood ratio in Bayesian terms--in a manner that gratified their ideologies.  Moreover, they did that whether their outlooks were "liberal" or "conservative."

So again, I believe that’s convincing evidence of “symmetry” in the vulnerability of ideologically diverse citizens to motivated reasoning--evidence that is a lot more probative (has a much higher likelihood ratio, in Bayesian terms!) than what NJV-B observed in their study given the relative strength of the respective study designs.

Nor is our Saw a Protest study the only one that used this kind of design to look at ideologically motivated reasoning. In a follow-up post, I’ll identify a variety of others, some by CCP researchers and some by others, that use the same design and reach the same conclusion.

All the studies I am aware of that use this design for testing motivated reasoning (one, again, that manipulates the ideological motivation that subjects have to credit or discredit evidence, or opportunistically adjust the "likelihood ratio" they assign to one and the same piece of information) reach the conclusion that ideologically motivated reasoning is symmetric.

The only studies that support the asymmetry thesis are ones that use designs that either are not valid or that suffer from a design limitation that defeats reliable comparison of the reasoning styles of subjects of opposing predispositions.

NJV-B is in the latter category. As a result, I give it a likelihood ratio of, oh, 1.001 in support of the asymmetry thesis.

Reader Comments (48)

I have to agree with you. Moreover, it's difficult to do what they hoped to do: pick pairs of presidents who represent 'symmetric' challenges for people to write both sincere and devil's-advocate-like endorsements. In this case they paired:

1) A sitting president during a period when he was campaigning for re-election (Obama) with a former president (Bush II).

2) A former president whose wife is in the current administration and who, with his wife, regularly makes appearances for politicians who are running for office (Clinton) with a former president who got Alzheimer's, dropped out of public view and has now died (Reagan). (Example of how recently Clinton has been out campaigning: a June 12, 2013 appearance for Ed Markey.)

Their paper describes these two choices as somehow 'equivalent', with the only metric of difference being Democrat vs. Republican, but there exist tangible differences that could affect how willing one might be to write up the 'pros' for whichever of the two presidents one dis-prefers. I suspect such differences will exist for any choice of "D" vs "R" president at any time. But in the present case, those who preferred the Republican were asked to write up an endorsement for the politician who was the more politically active at the time when the endorsement was to be written.

Beyond this, I can't help but wonder whether one can truly diagnose the motivation for refusing to write the endorsement. Is not accepting the invitation to write something flattering about Obama or Clinton due to "dissonance avoidance"? Would instructions like this really incline me to write up something I didn't want to write up:

"To foster compliance, the instructions mentioned that “an important aspect of general intelligence is the ability to craft logical arguments arguing positions you may not personally endorse.” This wording was used to encourage participants to attempt counter-attitudinal essays even when they were explicitly given the option to decline the request."

(Should I care whether the people doing the study think I'm generally intelligent? Do I? No! And as to the statement itself: There are many aspects of general intelligence. Those instructions might just make me laugh and feel someone was trying to manipulate me-- which would turn out to be an accurate diagnosis. Presumably making that diagnosis is also "an important aspect of general intelligence"! )

I'm also a bit curious about the content of the 'counter-attitudinal' essays and also those that ended up being pro-attitudinal (..."because they wrote pro-attitudinal essays ") Did the counter-attitudinal pro-Bush essays written by Obama supporters actually say substantive nice things about Bush? Or not? Because it seems to me that one can always go through the motions of complying with a request to write a "favorable" evaluation of someone one doesn't like but do so in a way that doesn't necessarily result in any cognitive dissonance. So merely writing something up doesn't present strong evidence that anyone is willing to experience cognitive dissonance-- it's only making a sincere attempt that can result in the cognitive dissonance. The only way to diagnose whether those who wrote counter attitudinal essays really tried to write endorsements of those they disliked is to read them!

July 16, 2013 | Unregistered Commenterlucia

When people's reasoning reflects motivated reasoning, their ideological commitments shape both their prior beliefs and the likelihood ratio they attach to new evidence. As a result, they won't update their “prior beliefs” based on “new evidence,” but rather assign to new evidence whatever weight best "fits" their ideologically determined priors.

I agree with the general point here, but question the implication that "ideologically motivated reasoning", as described above, is less rational than whatever alternative is offered. This assumes that "ideological commitments" are themselves irrational, but why must that be the case? And if they're not -- if in fact such commitments or beliefs are based in a more or less rational judgement about the general nature of things -- then wouldn't it be more rather than less rational to allow such a general apprehension to influence what one accepts as evidence -- i.e., to " shape both their prior beliefs and the likelihood ratio they attach to new evidence"? In fact, wouldn't failing to do this be merely naive credulity regarding what counts as "evidence"?

Note that this doesn't or needn't imply that one's "factual beliefs" are altogether immune to countervailing evidence -- only that such evidence, and the very notion of prior "factual beliefs", are both placed in the context of a more general view of the world.

July 16, 2013 | Unregistered CommenterLarry

@Larry:

Curious: what do you think implies that ideologically motivated reasoning is "irrational" or "less rational" than some alternative? I suppose it was that I contrasted "motivated reasoning" with Bayesianism.

But in fact I agree w/ you, as it turns out.

Or at least I'd say that it is impossible to say in the abstract whether that form of reasoning is 'rational' or not. I'd say that it's rational if that way of thinking promotes the ends that the person using it is trying to achieve by engaging w/ information. Sometimes it might, & sometimes it might not.

What's more, it might be individually rational for everyone to do this but still be a form of reasoning that makes everyone worse off when everyone does it all at once. Clearly it will if as a result democratic institutions fail to converge on the best evidence on how to secure various collective goods. In that situation, too, individuals still won't have any reason to change how they think--since doing so won't make any difference in how democratic institutions behave and will only make that individual's life awkward. How tragic!

One of the posts that I link to elaborates on these points.

So do:

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012);

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012); and

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. Cultural Cognition Working Paper No. 107 (2012).

Why exactly do you thin

July 16, 2013 | Unregistered Commenterdmk38

I probably should have been more specific in using the terms "rational" and "irrational" -- clearly they can have different meanings. In one sense, I suppose, "rational" can mean "what's in one's interest", and then, as you've said, individuals often find that simple loyalty to the beliefs prevalent in one's cultural group is what's in their interest (unless they happen to be scientists), and hence "rational". But there seems to be another notion of "rationality" implicit in your Protest study, and really implicit in the culture generally (as well as explicit in and among scientists), which pertains to beliefs that correspond to reason and evidence, and hence do "converge on the best evidence on how to secure various collective goods", which is distinct from, though may coincide with, a "way of thinking [that] promotes the ends that the person using it is trying to achieve by engaging w/ information". In any case, that's the meaning that I was referring to when I suggested that "ideological commitments", in the sense of general apprehension of the world, are not necessarily irrational. Of course, as you say, they may be, and some must be (since some are inconsistent with others), but that's simply the reason there are political/cultural debates.

July 16, 2013 | Unregistered CommenterLarry

@Larry:

I agree "rationality" can be source of confusion...

But I actually do mean to be saying that it can be "rational" in a cognitive sense & an expected-utility sense for someone to engage information in a manner that reliably connects his beliefs about how the world works to positions that are associated with some identity-defining group. That can be the best understanding of what someone is *doing* when he engages information; and it might be that the person is *doing* that b/c it is an adaptive thing to do.

Perfectly rational, fine. But that can still generate beliefs that are *wrong* about how the world works. The consequence to the individual of being wrong might be zero, for all practical purposes; but the collective consequences of enough individuals being *wrong* in that way might be disastrous.

You might, following usage in public choice economics, call this a conflict between individual & collective rationality. But that doesn't mean that individuals are usefully described as being "irrational" in that situation, either in cognitive or utilitarian terms.

But I really think it depends on context. If someone finds out that he is processing information in the manner I described in connection w/ gun control or climate change, he or she might not care all that much or might even be happy about it. But imagine that the same person discovers that he or she is making decisions about whether his or her daughter should get the HPV vaccine in a manner that reflects the unconscious motivation to conform his or her judgment to positions that are dominant in that individual's cultural group. Now the person might be mortified.

I myself, as an observer, can also have my own moral attitude toward how someone engages information. I might think that even if it suits someone to engage information in a manner that more reliably connects him or her to that person's group identity than to the truth, that is still a morally undesirable situation, particularly if lots of people are doing this. But I would be confused if I thought that I was then objecting to a defect in rationality on the part of the people whose reasoning style I objected to -- indeed, I might end up looking a bit foolish if I say that & it turns out that the style of reasoning I find undesirable is in fact most pronounced in individuals who display the highest capacities for critical thinking.

July 16, 2013 | Unregistered Commenterdmk38

How carefully do studies of motivated reasoning distinguish between strategic biases that serve to advance the status of a cultural group, or an individual's status within that group, versus a naive overestimation of that group's trustworthiness relative to other groups?

Have any of these studies used a right-wing authoritarianism scale, either instead of or in addition to the more common ideological scales?

July 17, 2013 | Unregistered CommenterConceptTinkerer

With specific regard to "They Saw a Protest," I wonder what happens if the order is reversed between when subjects see the video and when they see the explanation for who is in it. The order is then: read description of incident that doesn't specify who or where the protesters are, watch video, get explanation of who and where the protesters are, answer questions. In other words, if people are given a chance to form their observations first, do they revise them afterwards based on new ideological information? Does the effect size change compared to the original experiment?

July 17, 2013 | Unregistered CommenterConceptTinkerer

@Dan:

I might think that even if it suits someone to engage information in a manner that more reliably connects him or her to that person's group identity than to the truth, that is still a morally undesirable situation, particularly if lots of people are doing this. But I would be confused if I thought that I was then objecting to a defect in rationality on the part of the people whose reasoning style I objected to....

Yes, I get that. But let's set aside "rationality" altogether then, for now at least. And let's substitute "truth-seeking" or "truth-adhering" instead -- awkward, but hopefully less ambiguous. My point is simply that allowing ideology to influence one's beliefs about the truth and/or about what counts as valid evidence for the truth can be more truth-adhering than simply accepting evidence at its face value, and it would be so just in the case that the ideology itself is generally truth-adhering. By the way, this would imply that contrary evidence would affect not just one's prior factual beliefs, but also have some impact on one's ideology, group pressures notwithstanding -- an implication that can help explain fundamentalist or "true believer" sorts of phenomena.

July 17, 2013 | Unregistered CommenterLarry

@Larry:

Well, as usual, you have pushed & prodded & lured & steered me to the point where I see that you are thinking something more complicated than I am thinking.

I'm not sure how someone could believe that treating an "ideology" as normative for the weight to assign evidence is "truth seeking" in any sense that doesn't define "truth," analytically, as "consistent" with the tenets of the ideology.

Of course, if someone had access to & used *any* standard for assessing evidence that was *reliably* correlated with "truth-seeking" & that could be used to determine the LR or weight to be assigned evidence in a Bayesian framework, that wouldn't be "motivated reasoning" as I've defined it; motivated reasoning refers to the tendency of individuals to assess evidence in a manner that promotes some goal or interest independent of forming an accurate judgment--such as maintaining his or her standing in an identity-defining group.

I have to say, the idea that there's an "ideology" that someone whose goal is to form accurate understandings of facts could use to assess the weight to assign new information or evidence relevant to hypotheses about what those facts are strikes me as preposterous!

Ideologies, as systems of values, tell us which states of affairs to try to promote. They don't supply insight into empirical questions about how to achieve those states of affairs. History is filled with examples of spectacular, colossal instances of foolishness involving the failure to recognize the difference.

July 17, 2013 | Unregistered Commenterdmk38

@ConceptTinkerer:

1. There is a modest correlation between "hierarchy" & "authoritarian personality" -- just as there is a modest correlation between "conservatism" and "identification w/ the Republican party" & "authoritarian personality" scales. All of these, in my view, are just alternative latent variable measures for theories that try to connect one or another aspect of information processing to a latent motivating group-based disposition.

I myself haven't used the "authoritarian personality" scale in studies. But if the reason to do so is the claim that it uniquely identifies individuals who are vulnerable to ideologically motivated reasoning, or identifies ones who are disproportionately vulnerable (clearly Jost is espousing something like this), then that proposition is, in my view, demonstrably false, since individuals whose outlooks are negatively correlated w/ "authoritarian personality"-- "liberals," "egalitarians" & so forth -- are plainly, obviously, indisputably subject to ideologically motivated reasoning.

2. Interesting variation on design you propose for They Saw a Protest. It would help to get at the "mechanism" underlying the "mechanism."

The study results suggest that cultural cognition or some like variant of motivated reasoning is driving "perception" of the images in the film.

But one can always dig down further. What's the "mechanism" underlying the mechanism here? One possibility is that the subjects are "reconstructing" the contents of their recollections in a manner that fits their stake in forming perceptions that gratify their cultural or ideological predispositions.

Another possibility is that the motivation to fit the evidence to the congenial conclusion is penetrating into the processes that comprise interpretation of visual stimuli. Maybe the subjects are focusing their attention on parts of the video most likely to support their favored conclusion; or perhaps they are using a kind of biased virtual processing that "fills in" the gaps in their perception w/ information that fits their cultural or ideological predispositions.

Your design would help to sort this out. If the "mechanism underneath the mechanism" is reconstruction of the contents of memory, then your design should generate a result pretty close to the one observed in the original study. If the motivated-reasoning effects do not show up in your version of the design, then that would be evidence in favor of one of the other theories.

The other theories can be tested in various ways too. Indeed, there is a researcher at NYU, Emily Balcetis, who is using eye-tracking measurements to assess whether effects like the one in They Saw a Protest, & another CCP study, Whose Eyes Are You Going to Believe, involve motivated attention to elements of an event that generate identity-congruent conclusions. From conference presentations, I gather her results support the hypothesis that that is what's going on.

July 17, 2013 | Registered CommenterDan Kahan

@Dan:

I have to say, the idea that there's an "ideology" that someone whose goal is to form accurate understandings of facts could use to assess the weight to assign new information or evidence relevant to hypotheses about what those facts are strikes me as preposterous!

Okay, that's at least helping to flesh out the nature of the model that you're working within. It's consistent with a view that "facts" are just facts and clearly distinguishable -- off in their own box, in "fact" -- from ideology, that "evidence" is simply evidence, and that science is the one human endeavor immune to the influence of cultural cognition. In my view, though, it seriously oversimplifies the actual cultural and epistemological situation.

An alternative view of ideology is this: it is indeed a system of values, ideals, and objectives, but it's also, in varying degrees, implicated in the empirical world of facts and evidence. In this sense, it would help to distinguish it as a more conscious body of thought from the simpler "predispositions" that make up your cultural quadrants, which are no doubt more stable. Ideologies, on the other hand, can be seen as more systematic efforts to bring values, goals, etc., into alignment with the empirical world, aka reality. And they come in a wide variety of forms and scopes, from vast and ancient religious structures, to more recent and limited social/political forms -- the "Liberal Republic of Science", for example, would be one. It's certainly true that they've given rise to not just foolish but evil efforts to force reality to fit their own mold, rather than adjusting to fit reality, but we're not going to be able to avoid that sort of error by simply refusing to think systematically about values and goals in the real world -- that way, we simply become less aware of the cultural/political forces that act upon us.

July 17, 2013 | Unregistered CommenterLarry

My (non-systematic) impression is that there are enough high-RWA "liberals" to significantly obscure effects related to authoritarian outlooks. I think party identification would be even worse at picking up these effects than ideology scales. Most people don't aggressively assess their own views, so they often unknowingly pick up cultural precepts that aren't consistent with their underlying judgments. These have to be peeled back to make sure you put them in the right category, and modest correlations aren't really good substitutes.

I would be shocked if motivated reasoning is uniform across the entire population, or if it only varies with general intelligence.

July 17, 2013 | Unregistered CommenterConceptTinkerer

ConceptTinkerer

I would be shocked if motivated reasoning is uniform across the entire population, or if it only varies with general intelligence.

I wouldn't be shocked at all. I see motivated reasoning in lots of people. I sometimes even catch it in myself. :)

July 17, 2013 | Unregistered Commenterlucia

@Larry:

I agree that ideologies can be understood as dispositions that guide those who hold them to form beliefs about the world supportive of the sort of social ordering that is distinctive of the ideology in question. That is certainly the understanding of it from sociology; political science -- or the form of it that is dedicated to studying public opinion & that thinks "liberal" & "conservative" are "ideologies" -- has a much narrower view in mind.

I certainly understand 'cultural cognition' as an attempt to free the classic understanding of Ideology from its silly, pseudoscience functionalist roots & replant it in a soil rich with genuine psychological mechanisms (ones that are consistent with methodological individualism). Cf. J. Elster, Making Sense of Marx.

But I think the resulting, psychologically realistic picture of how systems of shared ideas interact w/ cognition doesn't imply that anyone is trapped inside such a system, or suggest any particular difficulties for a way of making sense of the world that treats science's way of knowing as correct, and does so for reasons unrelated to any such systems of thought, etc.

I do recognize that treating "science's way of knowing" as correct is a "cultural stance," too, and that a commitment to a liberal political regime as the form of political life most suited to knowing is in some fundamental sense deeply partisan.

These are all things, I think, that are informing your comments here, as elsewhere.

Now help me to see, though, how it could possibly be "truth seeking" to think in any way other than the way science thinks? And how could any way of thinking that fits what looks like the 2d figure in my post be understood as consistent with a thinking disposition geared toward aligning beliefs w/ "truth"? I'm not saying people are "irrational" to think in the way described in the 2d figure; we are past that. But I am still saying that I don't see how any way of thinking that looks like that can be described as "truth seeking" except in some extravagant way that defines "truth" in a manner opposed to what science treats as knowing.

July 17, 2013 | Registered CommenterDan Kahan

@ConceptTinkerer & @Lucia:

I'd say it is clear that cultural cognition & like forms of motivated reasoning vary, if they do, only in degree in relation to "general intelligence," and that the variance is in the direction of people who are more intelligent being more prone to display it. It's not easy, actually, to fit all the evidence to one's motivating commitments, and so those who are better at math, know more science, are more cognitively reflective, etc., are likely to do a better job. This is, of course, a point that I've made 943,290 times.

But I still do suspect that @concept is correct that the disposition to experience it -- under the conditions in which people generally do (those conditions can themselves be regulated, and reduced in frequency, I believe)--likely varies across the population.

But I at least don't know what the distinguishing features are of the makeup or identity of the people who are immune to or significantly less affected by this dynamic. I only know (believe strongly on the basis of evidence; show me contrary evidence & I'll reassess) that they aren't distinguished either by their political or cultural outlooks or by their capacity for critical thinking generally.

I hope we might be able to find these people who are immune, though, for maybe then we will be able to learn how to cultivate a similar immunity in ordinary people.

Of course, we might discover that the immunity is in fact a consequence of some deficit in some sort of valuable comprehension or perception. Something akin to autism. That wouldn't surprise me either, but in that case the idea of trying to propagate the sensibility or cultivate it would be misplaced.

As for RWA "liberals," at least in Jost's work, there is a very strong negative correlation between all the various measures of that disposition and liberal ideology. I agree w/ you, though, that there are many "liberals" who are quite "authoritarian" in their everyday political style (just as there are many conservatives, libertarians, etc. who are).

July 17, 2013 | Registered CommenterDan Kahan

Autism is where I was going next, followed by the dark triad. I specified general intelligence in part to exclude more specific sorts of social intelligence.

July 17, 2013 | Unregistered CommenterConceptTinkerer

Now help me to see, though, how it could possibly be "truth seeking" to think in any way other than the way science thinks?

No one denies the efficacy of "science's way of knowing" -- Marx certainly didn't, neither did Freud, neither do present-day creationists, nor "The End is Nigh" environmentalists. The problem -- and it's maybe the basic problem with your model, as I see it -- is that it isn't really detachable from ideology (unless we're talking about the Higgs boson or continental drift, say). That is, everyone's "prior factual beliefs", in areas that matter to them, are not detachable from the way they view the world, nor is the way they interpret or accept evidence. And that's not something that either can or should be separated out -- without a view of the world, necessarily involving values, ideals, etc., one is left just a credulous naif, flipping around like a weather vane with every puff of wind. I.e., I think that ideology, whether or not it's conscious, really does have a function (though I hope that doesn't invoke its "silly, pseudoscience functionalist roots"), and that lacking it would really be a kind of deficit, as you indicate in a slightly different context in another reply above.

July 17, 2013 | Unregistered CommenterLarry

Re the Bayesian explanation of human reasoning: you don't need to extend it to any complicated model of ideologically motivated reasoning to obtain lack of convergence when prior information is different - the 2nd diagram is misleading in that it suggests an extension of Bayes rule is at play. But in the standard Bayesian setting, convergence of posteriors when priors are different only happens in special cases - in general, having different prior information is in itself enough to remove expectation of convergence. This was pointed out effectively by E.T. Jaynes in his book: http://omega.albany.edu:8008/JaynesBook.html. See the section "Converging and diverging views" in Chapter 5, which seems very applicable to the argument you are making.

Basically, divergence happens for the sort of reasons you describe (information coming in as "person A said/did X", so prior information/opinion about A affects the direction in which opinions about other questions will change) - my point is that this is already built into the basic Bayesian framework.
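To sketch the mechanism for readers who don't click through (a paraphrase of the cited section, not a quotation): when the "evidence" arrives as a report from some source A, the weight it deserves depends on what one already believes about A.

```latex
% Paraphrase of the divergence mechanism in the cited Jaynes section.
% Let E = "source A asserts X". Then the likelihood ratio bearing on X is
\[
LR \;=\; \frac{P(A \text{ asserts } X \mid X \text{ true})}{P(A \text{ asserts } X \mid X \text{ false})},
\]
% which depends on one's prior view of A's reliability. Someone who trusts A
% gets LR > 1; someone who believes A misleads on this topic can get LR <= 1.
% The same report can therefore move two readers in opposite directions
% without either stepping outside ordinary Bayesian updating.
```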

July 18, 2013 | Unregistered Commenterkonrad

@Konrad:

I don't know whether the 2d Figure is consistent w/ Bayesianism. Probably it is, since nothing in Bayesianism says how to determine the likelihood ratio to assign new information.

But why don't we call Figure 1 "ordinary Bayesian information processing" and Figure 2 "motivated reasoning," to avoid getting bogged down in that issue, which isn't very interesting.

But there is an interesting point here, and I think you are misunderstanding what it is and why it helps to use the framework I am proposing as a way to get at it. Or in any case, let me try to set out those things in a bit more detail and then you can tell me what you think.

The question I'm addressing is, what sort of experimental design do we need to determine whether the failure of parties to "agree" on some disputed point reflects motivated reasoning & not different priors or any other process consistent with ordinary Bayesian information processing?

Bayesian updating will generate convergence among people w/ different priors if they assign the same weight (likelihood ratio) to the evidence. By "convergence" here, I mean simply a decrease in the differential in their revised or posterior odds.

But if their priors are very different, then obviously one instance of their agreeing on the weight to be assigned a new piece of evidence won't necessarily be enough to generate "agreement" -- i.e., equivalent posterior odds (or at least posterior odds that agree on whether a proposition is more likely than not to be true). They might have to see a lot of information before that happens. But it will happen eventually so long as they can find new evidence that has an LR ≠ 1.

If in fact, people display ideologically or culturally motivated reasoning, though, they will not converge in this sense when exposed to one and the same piece of information. If they are engaged in motivated reasoning, they will

a. polarize (end up w/ a greater differential in their posterior odds) if their priors were the same; &

b. never converge (i.e., never experience a decrease in the differential in their posterior odds) if their priors were different.

That will happen even if they are engaged in "Bayesian updating," b/c what motivated reasoning means, in Bayesian terms, is the adjustment of the likelihood ratio to fit one's predispositions. If people who disagree assign LR < 1 & > 1 respectively every time they are shown a new piece of information, then (a) & (b) will be the result no matter how much common evidence they are shown.
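To make (a) & (b) concrete (invented numbers, purely for illustration):

```latex
% (a) Same priors, opposing LRs -> polarization:
%     both start at prior odds 1:1; one assigns LR = 2, the other LR = 1/2.
\[
1 \times 2 = 2 \;(\approx 0.67), \qquad 1 \times \tfrac{1}{2} = \tfrac{1}{2} \;(\approx 0.33).
\]
% Each further item treated the same way widens the gap.
%
% (b) Different priors, opposing LRs -> no convergence:
%     prior odds 1:4 and 4:1; after one shared item of evidence,
\[
\tfrac{1}{4} \times \tfrac{1}{2} = \tfrac{1}{8}, \qquad 4 \times 2 = 8,
\]
% so the differential in posterior odds grows rather than shrinks.
```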

You are right that "one doesn't need" motivated reasoning to explain why people w/ different priors diagree even after being shown the same evidence.

But if we see polarized people, it is useful to know whether the problem is that they just haven't seen enough information yet or whether they are engaged in motivated reasoning.

We want to know that b/c if the problem is the former, the solution is just to give them more information, while if the problem is the latter, giving more information won't help. One will have to do something to change their disposition to attach opposing likelihood ratios to new information.

So -- I'm talking about what sort of design an experiment has to have to be able to conclude that the problem is the second type -- the motivated reasoning one.

I'm confident you can see why it would be useful -- in the midst of disputes like the ones over climate change, nuclear power, gun control, etc. -- to be able to distinguish these two sources of persistent disagreement.

The framework I am using is to help people see that the best design is one that allows us to observe whether study subjects are opportunistically adjusting the likelihood ratio -- or weight -- they assign to the evidence in response to experimental manipulations that change the significance of it, not w/ reference to the "truth" but w/ reference to their motivating dispositions.

Capisce?

July 19, 2013 | Registered CommenterDan Kahan

"But if we see polarized people, it is useful to know whether the problem is that they just haven't seen enough information yet or whether they are engaged in motivated reasoning.

We want to know that b/c if the problem is the former, the solution is to just to give them more information, while if the problem is the latter, giving more information won't help. One will have to do something to change their disposition to attach to new information opposing likelihood ratios."

The problem could also be poisoning of the well, which doesn't require motivated reasoning. People judge new information for how likely it is to be both reliable and honest, both at the source and in transmission. If you give them lots of information they judge to be dishonest, you're likely to see either no effect or a backlash. So, even without motivated reasoning, giving more information won't help. The solution seems to be the same, though. You have to present information in a way they'll judge to be honest and reliable.

July 19, 2013 | Unregistered CommenterConceptTinkerer

@Dan: "Bayesian updating will generate convergence among people w/ different priors if they assign the same weight (likelihood ratio) to the evidence."

The point is that, in the sort of real-world scenarios under discussion, people with different priors do _not_ assign the same likelihood ratio to the evidence, and this happens simply because of their different priors, not because of any bias in their reasoning process. The section from Jaynes I linked to describes it better than I can.

I am not talking about something that can be fixed by throwing more evidence at the reasoner - like you, I am talking about opinions that can diverge as more evidence comes in. I am saying that this scenario is well described by ordinary Bayesian updating and need not involve motivated reasoning.

July 19, 2013 | Unregistered Commenterkonrad

Dan

Bayesian updating will generate convergence among people w/ different priors if they assign the same weight (likelihood ratio) to the evidence. By "convergence" here, I mean simply a decrease in the differential in their revised or posterior odds.

Doesn't this depend on whether both priors assign a non-zero probability to "the truth"?

I know at my blog, some people appear to consider the probability that climate sensitivity (i.e., net warming due to doubled CO2) is greater than 0 to be literally zero. They don't consider the probability small -- they consider it to be zero. So

p(0 < S) = 0

That's their prior. They think S ≤ 0.

Meanwhile, there are many who think the probability that warming due to elevated CO2 is greater than 0 is 1. (Many think even S=0 is impossible. I'd say I'm pretty close to having that belief- though maybe not quite.)

So for these p(0 < S)=1 .

That's their prior.

As far as I can see the math, no amount of evidence could make the views of these people converge. Though, it's possible that if S=0, they might each slowly move closer and closer toward S=0 until they got to the point where the difference in their views was of no practical significance. So with this example they might 'converge' toward the specific answer that represents the boundary between answers each considers to be possible, provided the correct answer is S=0. But based on strict formalism, the first group can't converge on S=1 and the second can't converge on S=-1.

Right? (Or am I missing something?)

But of course, if I haven't bungled badly above, it seems to me the difficulty is that for the posteriors to converge to the correct value, the correct value has to have a non-zero probability in the prior. Otherwise, if the prior assigns a probability of '0' to the answer that is 'correct' (and, worse, to the entire neighborhood in the vicinity of 'correct'), you can't converge on it.

Of course with human beliefs all this happens informally and I'm pretty sure people to some extent don't really even know the precise dimensions of their internal priors. At least I don't. I only know the rough boundaries in most instances.

So it seems to me that if someone's 'internal prior' did not allow the possibility that something had a non-zero probability of being true, no amount of evidence would convince them it was. One would end up spending a huge amount of time trying to figure out what was wrong with the evidence. One's first reaction would be to down-weight the evidence. (Example: If I see a man walking on liquid water, before accepting the idea that a person is literally walking unaided on water, I'm going to look for the stones, the levitation device, some sort of hidden buoyant 'shoes' and so on.)

July 19, 2013 | Unregistered Commenterlucia

@Konrad:

Well, then do we not agree? But the idea that the source of the problem is *not* that people have different priors but rather that they are assigning different weight to the evidence is not as straightforward as you think. Or not so straightforward to others as it might be to you. It is a concept that is hard for some people to get. Moreover, even those who get it deserve *evidence* -- since the idea that that's what's going on is only a conjecture or hypothesis, as is the idea that people disagree b/c they haven't been "shown" the evidence. What sort of design is suited for testing these competing hypotheses is not simple -- studies that purport to show it can easily contain flaws. Hence this post.

I still wonder, though, whether you are grasping the psychological dimension of my argument.

The "disagreement" about "likelihood ratio" is something that happens not b/c there is heterogeneity in experiences or other sorts of exposure to information that people w/ different ideologies happen to have and that bears properly on logical processing of information by people trying to figure out the truth of a proposition.

People whose reasoning fits Figure 2 are *opportunistically adjusting* the weight they give in a manner that is responsive to something *other than figuring out the truth.*

As wonderful as Bayesianism is -- I truly do love it! It is the most elegant and ingenious truism ever invented -- there's nothing (zero, zip) that one can learn from it about the psychological mechanisms that I'm describing.

That's why I assumed you must be misunderstanding what I was trying to accomplish when you suggested that everything we need to think about here is "built into" the Bayesian framework. That framework can be used by us to structure how we figure out what we need to know & how we should go about learning it, but it doesn't itself tell us what the processes are that generate persistent conflict over risks and other policy-relevant facts that admit of scientific investigation.

July 19, 2013 | Unregistered Commenterdmk38

@Lucia:

If one starts out w/ priors of 0 probability, then, yes, evidence that one assigns a likelihood ratio ≠ 1 won't change one's priors.
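In symbols (a one-line statement of the same point, sometimes called Cromwell's rule; standard Bayes, nothing specific to this thread):

```latex
% A prior of exactly zero is immune to any finite likelihood ratio:
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; \frac{P(E \mid H)\cdot 0}{P(E)} \;=\; 0
\qquad \text{for any evidence } E \text{ with } P(E) > 0.
\]
% Equivalently, prior odds of 0 multiplied by any finite LR are still 0.
```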

I'm tempted to say that anyone who assigns probability 0 to anything that admits of empirical observation is a dogmatic ass as well as a fool.

But I'll confine myself to saying that it's uninteresting to discuss how to reason about evidence on matters that are properly assigned a probability of 0.

July 19, 2013 | Registered CommenterDan Kahan

The "disagreement" about "likelihood ratio" is something that happens not b/c there is heterogeneity in experiences or other sorts of exposure to information that people w/ different ideologies happen to have and that bears properly on logical processing of information by people trying to figure out the truth of a proposition.

People whose reasoning fits Figure 2 are *opportunistically adjusting* the weight they give in a manner that is responsive to something *other than figuring out the truth.*

This itself reads like a dogmatic assertion -- and one that seems almost designed to be self-illustrating. Why couldn't or wouldn't it be the case that disagreements about likelihood ratios are something that happens b/c people with different ideologies are differentially exposed to experience "that bears properly on logical processing of information by people trying to figure out the truth of a proposition"?

More generally, how exactly does the psychology of people who a) have different ideologies, but b) reason in ways other than in figure 2, actually work? Are the ideologies severed somehow from "prior factual beliefs" and/or from "evidence" -- i.e., cut off from the world? If so, why bother with them? In particular, e.g., why bother caring about "The Liberal Republic of Science", if it can have no connection with empirical reality?

July 19, 2013 | Unregistered CommenterLarry

Dan,

I'm tempted to say that anyone who assigns probability 0 to anything that admits of empirical observation is a dogmatic ass as well as a fool.

I'm tempted to say the same. But the probability that fools exist is not zero!

Also, sometimes different people think different sorts of things admit empirical observations. That said: I picked an example whose value ought to be affected by empirical observations (though we can't actually do the precise clearest experiment.)

it's uninteresting to discuss how to reason about evidence on matters that are properly assigned a probability of 0.

Maybe.

Except I guess I don't see you merely talking about how we ought to reason, but also about things that get in the way of proper reasoning. And for the latter, we might want to remember all the possible explanations for lack of convergence toward the same view as evidence for a particular view is presented.

I know you might want to wave this away as uninteresting. But sometimes failing to mention a mechanism explicitly can result in people either not knowing the mechanism exists or overlooking it even though they do know it exists. So I think flat declarations like "people's posteriors will converge" that fail to mention the caveat might result in people discounting or overlooking the possibility that someone's 'internal prior' assigns probabilities of 0 to things that ought not to fall in the range of 'not impossible'. And it is their odd (possibly not conscious) prior that can result in their inability to correctly incorporate the meaning of new data.

Why they came to have that odd/foolish prior is a question in and of itself. But deeming it foolish, odd or dogmatic isn't the same as saying 'bad priors' can't contribute to the inability of people to adjust their views as new information accumulates.

I don't want to go in a direction where my own thoughts are so fuzzy I can't even explain them to myself. But I'm thinking a bit about the Myers-Briggs personality types. I've taken a web test and I'm an ENTP. But many in my family end with "J", and it does seem to me that to some degree, there are people whose internal inclination is to narrow the range of 'what is possible' down to a small range. Often, this can be useful as one avoids navel gazing. Taken to an extreme -- particularly if unconscious -- it could result in a tendency to deem that things have p=0 when that's not literally true. (Mind you, I'm not saying all the "J" people suffer from any extreme form of this. They don't.)

July 19, 2013 | Unregistered Commenterlucia

What I am objecting to is the idea of "ideologically motivated reasoning" as opposed to (presumably) ordinary rational reasoning. The suggestion that divergence happens because people reason in a way that is dependent on what they want to be true rather than simply on what they already know/believe about the world. If that is not what you meant, we may well be in agreement, but it is the picture conveyed to me by the original post.

I do not think it is straightforward that Bayesian reasoning can lead to divergence just because priors are different - in fact it is a general misconception that it does _not_ - that's why I bothered to make the point and give a reference explaining it.

As to whether Bayesian probability theory is informative about psychology - it is informative exactly to the extent that one considers it a good model for human reasoning. So in a context where you have already committed to using it as a model for human reasoning, it should be considered informative, no?

July 19, 2013 | Unregistered Commenterkonrad

@Lucia:

Okay: I hereby retract the statement that it is uninteresting to discuss how people process information about propositions for which Pr = 0.

You have made it interesting by turning it into an empirical question: can we attribute the science communication problem -- the failure of citizens to converge on perceptions of risk and related policy-relevant facts in the face of ample, compelling, widely disseminated scientific evidence -- to those on both sides starting w/ "priors" of 0 within a Bayesian framework?

I think the answer is no. That is, I think that people on both sides recognize that the issues they are talking about are empirical and depend on evidence, that the evidence is fallible, and that they might not understand or be relying on all of it, etc. I think they are honestly open to changing their minds.

But the problem is, when someone shows them evidence, they assign an LR that reflects their priors. The sorts of studies that I'm citing -- the cultural cognition studies -- are like that: they show that the "action" is in the LR, not the priors.

Take a look, e.g., at

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009); and

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

The results in those studies can't be reconciled with Pr = 0. The reason is that in response to manipulation of the LR, those of opposing ideologies changed their posterior odds!

But the process of "changing their minds" was one that guaranteed "no convergence" -- in the absence of something that counteracted the tendency of those w/ competing cultural outlooks to assign opposing LRs to the evidence.
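
To put illustrative numbers on that (a rough sketch only -- these are assumed values, not figures from the studies): two groups that start with the same prior but assign opposing LRs to the same piece of evidence will both "update," yet end up further apart.

    # Sketch with assumed numbers: same evidence, opposing likelihood ratios
    def update(prior, lr):
        """Bayes's rule in odds form: posterior odds = prior odds * LR."""
        odds = prior / (1 - prior)
        post_odds = odds * lr
        return post_odds / (1 + post_odds)

    prior = 0.5                 # both groups start agnostic
    print(update(prior, 4.0))   # group A credits the evidence: posterior = 0.8
    print(update(prior, 0.25))  # group B discounts the same evidence: posterior = 0.2

Both groups changed their posterior odds in response to the evidence; the divergence comes entirely from the opposing LRs.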

July 19, 2013 | Registered CommenterDan Kahan

"The "disagreement" about "likelihood ratio" is something that happens not b/c there is heterogeneity in experiences or other sorts of exposure to information that people w/ different ideologies happen to have and that bears properly on logical processing of information by people trying to figure out the truth of a proposition."

I disagree - the disagreement about likelihood ratio happens precisely b/c there is heterogeneity in experiences or other sorts of exposure to information that people w/ different ideologies happen to have (these experiences/information having been integral in them forming those ideologies in the first place). And they do bear properly on logical processing of information by people trying to figure out the truth of a proposition. See Jaynes. (You can click through to his text - I'm not going to repeat it here.)

July 19, 2013 | Unregistered Commenterkonrad

@Konrad:

Obviously people can assign different LRs to the same evidence b/c of different experiences etc.


The point is that if they change the LR they give to one and the same piece of evidence in response to an experimental manipulation that changes something related to its significance for the status of their cultural group but unrelated to the "truth value" of the evidence for them, we can rule that hypothesis out. (E.g., if a Yankees fan sees the ball as fair when hit by Bucky Dent but foul when hit by Carl Yastrzemski, then she is assigning an LR to the evidence that reflects her motivation to see the world as Yankees fans want to see it; she isn't determining the LR based on criteria related to the truth of the proposition.)

Ditto on not agreeing if they start w/ different priors despite giving same weight to evidence: yes, that can happen; but no, it's not what's going on here.

You keep referring me to things that explicitly informed the experiment designs. If you "disagree" b/c you think the designs weren't any good, fine -- say so, & why. You might raise some important point in that case, one worth discussing.

But the points you say you don't want to repeat -- those are the ones I am saying are missing the point.

July 19, 2013 | Registered CommenterDan Kahan

Dan

The results in those studies can't be reconciled with Pr = 0. The reason is that in response to manipulation of the LR, those of opposing ideologies changed their posterior odds!

If that's what the evidence shows, that's what it shows then! :)

July 19, 2013 | Unregistered Commenterlucia

@Larry & @Konrad:

Here's a simple way to put it. Whenever priors & likelihood ratio are *correlated* (that's what the double-headed arrow in Figure 2 signifies), then the reasoning process in question will not be truth-seeking.

In that case, by definition, someone who is shown evidence that warrants revising her priors toward acceptance of a hypothesis other than the one she currently believes is true won't revise them; a person who is in error will never overcome that error -- no matter how much evidence that person is shown.

This is all a matter of simple logic.

There can be lots of things that cause such a correlation. One is simple confirmation bias, which we can define as the tendency to assign new information an LR that reflects one's priors.

Another is motivated reasoning, in which case a 3d influence -- the motivating disposition -- causes both the priors and the likelihood ratio. That's what is being displayed in Figure 2.

Is a process of reasoning in which priors & LR are correlated "Bayesian"? I suppose, b/c again, nothing in Bayesianism tells us how to determine the LR for a piece of evidence. Bayesianism just tells us what to do given priors & LR. But the point is, it's normatively undesirable to engage in this reasoning -- consistent w/ Bayesianism or not -- if one's goal is to form the most accurate beliefs possible.

Whether Figure 2 represents what's going on in disputes over climate change, nuclear power, guns, the HPV vaccine, etc. -- that's an empirical question. Answering it requires valid study designs.

To be valid, the study designs have to be able to figure out the difference between persistent disagreement consistent with Figure 1 and persistent disagreement that reflects Figure 2.

The point of the post is to explain why NJV-B doesn't do that & to identify study designs that do.
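
Here is a minimal simulation sketch of that difference (my own illustration; all the likelihood ratios are assumed, not estimated from any study). Under "Figure 1," people with different priors give each new piece of evidence the same LR and so converge; under "Figure 2," each side's predisposition drives the LR it assigns to the very same evidence, and the disagreement persists.

    # Sketch with assumed numbers: "Figure 1" (shared LR) vs. "Figure 2" (predisposition-driven LR)
    def posterior(prior, lrs):
        """Apply a sequence of likelihood ratios to a prior, in odds form."""
        odds = prior / (1 - prior)
        for lr in lrs:
            odds *= lr
        return odds / (1 + odds)

    # Figure 1: different priors, same LR (2.0) for ten pieces of evidence -> convergence
    print(posterior(0.1, [2.0] * 10))   # ~0.99
    print(posterior(0.7, [2.0] * 10))   # ~0.9996

    # Figure 2: each side's LR for the *same* ten pieces of evidence reflects its predisposition
    print(posterior(0.1, [0.5] * 10))   # skeptical group ends near 0.0001
    print(posterior(0.7, [2.0] * 10))   # believing group ends near 0.9996 -- no convergence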

July 19, 2013 | Unregistered Commenterdmk38

I get the impression you still haven't read the material I linked. That's ok, I'm not trying to force you - but it does mean you're not engaging with my argument. Specifically, you only seem to be thinking in terms of the prior on a single proposition - there are many other propositions which legitimately affect the reasoning process, and for which a reasoner also has priors. Once this is taken into account, it no longer makes sense to talk about correlation (correlation between which prior and which likelihood ratio?).

July 19, 2013 | Unregistered Commenterkonrad

@Konrad:

On the contrary. I took a look. It struck me as interesting but not on point. There is discussion of convergence when people have different priors & agree on the likelihood of E; also discussion of nondivergence when they can't agree on the likelihood of E. But nothing about the psychological dynamic that involves adjusting the likelihood -- or likelihood ratio, if one uses the notation I prefer -- in response to an unconscious motivation to form & maintain culturally congenial or cultural-identity-supportive beliefs. That you keep telling me to read this is part of how I can tell you aren't engaging with my point -- a point I'm trying to help you see not by telling you to read things but by spelling out my reasoning in terms that I keep hoping will get us past the misreading of the post that generated your initial comment.

How about your trying to spell out your reasoning? Tell me how the point that you think I keep missing relates to experiment designs that try to sort out Figure 2 from Figure 1.

Tell me, e.g., how you would explain everything going on in "They Saw a Protest" -- or the original "They Saw a Game" experiment (the basis of my Yankees/Red Sox hypo) -- as "built into" the Bayesian framework or elucidated by what you see in the book you cite (the relevant sections would be easier to read, btw, if they were not in PostScript form).

Tell me, too, how someone who sees things as you do -- all built into Bayesianism; no need to address whether the problem is different priors or different sources of information relevant to truth-seeking likelihood ratios vs. a form of biased perception that opportunistically bends whatever evidence is presented to fit a preconception; no need apparently either for empirical study on any of this -- can straighten out someone who says the key to dispelling public conflict over climate change is just to disseminate study findings on scientific consensus.

My hypothesis is that you won't be able to. What you say will be conceptually unclear, & will involve handwaving where evidence is what's required to make progress.

Prove me wrong.

July 19, 2013 | Registered CommenterDan Kahan

@Dan:


Here's a simple way to put it. Whenever priors & likelihood ratio are *correlated* (that's what the double-headed arrow in Figure 2 signifies), then the reasoning process in question will not be truth-seeking.

In that case, by definition, someone who is shown evidence that warrants revising her priors toward acceptance of a hypothesis other than the one she currently believes is true won't revise them; a person who is in error will never overcome that error -- no matter how much evidence that person is shown.

This again puts its finger on at least part of what seems to me to be wrong with your argument and model. I don't see how any kind of "definition" says or implies that the mere correlation of priors and LRs must necessarily assign an LR of 0 to evidence contrary to one's current hypothesis -- the correlation merely says that the assigned LR is less than what it would be for someone whose current hypothesis is confirmed by, or at least consistent with, said evidence.

We could set aside the case of someone who has no "current hypothesis" to be correlated, since that just seems a different matter altogether. But what about the case of someone who has a "current hypothesis" but either doesn't care about it one way or another, or is somehow able to disengage her care from her evaluation of the LR -- which comes to the same thing, as far as I can see, and which seems to be what you're proposing as the ideal of uncorrelated assessment of new evidence? I think, as I indicated before, that this situation would not be ideal but would instead represent a kind of deficit in one's mental processing, since it would mean throwing away an amount of accumulated and processed experience that is generally an important aid in the evaluation of new experience, even though, of course, it can also be a source of error. If you found a person able to do this, I think he would appear either frivolous or handicapped (or both).

July 20, 2013 | Unregistered CommenterLarry

@Larry:

I'm not describing a deficit in reason. I'm just describing a process of reasoning. If there is a correlation between priors & LR, then the process of reasoning is not one suited for figuring out the truth of a hypothesis. It's very suited -- perfectly so -- to any purpose that involves conforming everything one observes to one's existing beliefs. (It's also boring to argue about whether that style of reasoning is "consistent" w/ Bayesianism -- that turns the entire discussion into a semantic exercise. I'll accept whatever answer anyone prefers on that.)

July 20, 2013 | Registered CommenterDan Kahan

@Dan:
I think my point is essentially the same as konrad's from a different angle, but since you didn't reply directly, I'll state my last post again here:
"The problem could also be poisoning of the well, which doesn't require motivated reasoning. People judge new information for how likely it is to be both reliable and honest, both at the source and in transmission. If you give them lots of information they judge to be dishonest, you're likely to see either no effect or a backlash. So, even without motivated reasoning, giving more information won't help. The solution seems to be the same, though. You have to present information in a way they'll judge to be honest and reliable."

As to your more recent posts:
It's definitely important to be clear on which priors we're talking about. The same group of people who informed someone's priors related to an issue proposition also informed their priors related to reasoning about likelihood ratios. While this is certainly a source of error in reasoning, the clearest component of that is sampling error. Add in poisoning the well for some serious trouble with assigning likelihood ratios. Now, that doesn't fully address "They Saw a Protest," which I find mostly convincing as a case of motivated reasoning - but not fully convincing. If people over-economize sensory input in order to form approximate models of what is going on, they can borrow from prior understandings to fill in the gaps. Throw in polarized prior understandings, and you get different fill-ins, yielding polarized results. This is what I was getting at with the modified study design.

July 20, 2013 | Unregistered CommenterConceptTinkerer

DK's phrase: “likelihood ratio” (a ratio of how likely the proposition is to be true given the evidence and how likely it is to be false given the evidence)

Necessary modification: “likelihood ratio” (a ratio of how [probable] the proposition is to be true given the evidence and how [probable] it is to be false given the evidence)
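
For reference, the textbook formulation of the two quantities under discussion, in plain notation:

    likelihood ratio:          LR = P(E | H) / P(E | not-H)
    Bayes's rule (odds form):  P(H | E) / P(not-H | E)  =  [ P(H) / P(not-H) ] x LR

That is, the LR strictly concerns how probable the evidence is under each hypothesis; the ratio of how probable the proposition is given the evidence is the posterior odds, i.e., the product of the prior odds and the LR.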

July 21, 2013 | Unregistered CommenterPeter Tillers

@Concepttinkerer:

I take it that "poisoning the well" for you refers essentially to some tendency of people to credit evidence selectively based on whether it comes from a source that shares their cultural or political identity.

I agree that this happens -- indeed, that it is very much central to polarization on issues like climate, guns, etc. The HPV vaccine risk study looks at this specifically.

I'd say 3 things about that.

The 1st is that the tendency to adjust the LR based on the cultural identity of the source of information can be viewed as a particular instance of how the species of motivated reasoning in question operates -- one of the specific mechanisms that motivated reasoning comprises. It's "inside" the box labeled "predispositions" in Fig. 2.

The 2d, however, is that this tendency by itself is ambiguous. It could reflect the responsiveness of individuals to cues that reliably steer them toward beliefs that are consistent w/ their cultural identities independent of whether those beliefs are true. But it could also be a heuristic that reflects a truth-seeking objective: individuals might come to view (correctly or incorrectly) those who share their identities as more likely to be "right" about contested matters.

The 3d is that this source-credibility dynamic is only one of the myriad dynamics that reflect the reasoning process depicted in Figure 2. All the rest operate independently of any disposition to weight evidence based on its source. The "Saw a Protest" study is an example; the study on formation of nanotechnology risk perceptions linked above is another. The *convergence* of these results w/ ones like those in the HPV-vaccine-risk study (where source credibility was central) helps to support the inference that what's going on w/ "poisoning the well," as you call it (and assuming I'm understanding you correctly), more likely reflects the contribution of identity-protective motivations than reliance on a "truth-seeking" heuristic.

July 22, 2013 | Unregistered Commenterdmk38

@ConceptTinkerer:

Yes, that would be one representative example, but as Dan points out, not the only one.

@Dan:

Your distinction between truth-seeking and identity-protective (or other non-truth-seeking) reasoning is the heart of the matter. My view is that truth-seeking reasoning should be the default assumption; to take the idea of non-truth-seeking reasoning seriously we first need to have observations that are not already well explained by truth-seeking reasoning. So far you have not presented any such observations. Talking about convergence and divergence will not help in this regard, because both convergence and divergence are perfectly compatible with truth-seeking reasoning.

July 22, 2013 | Unregistered Commenterkonrad

But nothing about the psychological dynamic that involves adjusting the likelihood

That's because it's a book on probability, not psychology. As long as probability (truth-seeking reasoning) is enough to explain the observations presented, we have no need to look for anything beyond it.

How about your trying to spell out your reasoning? Tell me how the point that you think I keep missing relates to experiment designs that try to sort out Figure 2 from Figure 1.

You haven't spelled out exactly how the reasoning process works in Fig 2, so it's not even clear whether there _is_ a difference between the figures.

Tell me, e.g., how you would explain everything going on in "They Saw a Protest"

I'm puzzled why you assign importance to this example - it is not a new realization that what people see or hear is the outcome of a reasoning process where priors really matter - we see/hear what we expect to see/hear. This is abundantly clear from (e.g.) fields like computer speech recognition (to make it work at all you have to give the computer a decent idea of what to expect (a good prior, in other words)) and psycholinguistics (which shows that people hear what they expect to hear). So I see nothing to explain, or at all surprising, in the Protest example - people saw what they expected to see based on the prior information they had been given (purpose of the protest) in combination with the prior information they already had (whether people involved in protests with that purpose are inherently peaceful or disruptive). This is straightforward truth-seeking reasoning.
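
(A toy numerical illustration of that point, with assumed numbers rather than anything from the study: the same ambiguous observation is classified differently under different priors, by Bayes's rule alone.)

    # Toy sketch, assumed numbers: same ambiguous input, different priors,
    # different maximum-a-posteriori "percept" -- no motivated reasoning required.
    def map_percept(prior_peaceful, lik_peaceful=0.4, lik_disruptive=0.6):
        """Compare unnormalized posteriors for one ambiguous observation."""
        score_peaceful = prior_peaceful * lik_peaceful
        score_disruptive = (1 - prior_peaceful) * lik_disruptive
        return "peaceful" if score_peaceful > score_disruptive else "disruptive"

    print(map_percept(prior_peaceful=0.8))  # observer expecting a peaceful protest -> "peaceful"
    print(map_percept(prior_peaceful=0.3))  # observer expecting a disruptive protest -> "disruptive"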

no need to address whether the problem is different priors or different sources of information relevant to truth-seeking likelihood ratios vs. a form of biased perception that opportunistically bends whatever evidence is presented to fit a preconception; no need apparently either for empirical study on any of this

On the contrary, I see this as an interesting question. Certainly if we delve into psychology (at the individual level) we will find many examples of non-truth-seeking reasoning, and the question here is to what extent such examples exist at the level of societies. But before empirical study, the two options need to be delineated more clearly. You talk about "biased perception that opportunistically bends whatever evidence is presented to fit a preconception" - but it is the nature of all truth-seeking perception (and reasoning) that it will fit new evidence into the framework of what is already known (preconception, if you like, though I might point out that this is not a connotatively neutral word). So you first need to explain in precisely what way your alternative is non-truth-seeking.

Tell me, too, how someone who sees things as you do ... can straighten out someone who says the key to dispelling public conflict over climate change is just to disseminate study findings on scientific consensus.

If they were theoretically inclined I'd point them to Jaynes to show that increasing evidence can lead to divergence even with rational reasoning. If they were empirically inclined, I'd point them to your examples which demonstrate divergence empirically.

July 22, 2013 | Unregistered Commenterkonrad

@Dan
"Poisoning the well" is this: https://en.wikipedia.org/wiki/Poisoning_the_well
It's not a tendency to rate people with shared predispositions higher. It's a rhetorical device that convinces people to discredit information from particular sources on the grounds that those sources specifically are dishonest or unreliable. If this gets thrown in the box with motivated reasoning, then the term 'motivated reasoning' isn't specific enough to be useful. I take 'motivated reasoning' to refer to reasoning processes that seek something other than the truth.

Your post doesn't address my argument. You don't need a single explanation to link all of these studies together if there are other known mechanisms that explain them individually. The nanotech study has nothing to do with poisoning the well, but it can reasonably be explained by assuming participants have different priors as to how to evaluate individual risks and benefits when presented in a particular format. "They Saw a Protest" can be explained as an honest response to computational constraints (and I addressed it because it was the hardest example for me to explain without motivated reasoning). Without accounting for mechanisms that mimic or interact with motivated reasoning, it's impossible to tell what role motivated reasoning plays.

July 22, 2013 | Unregistered CommenterConceptTinkerer

@Konrad:

You are reasoning in a fallacious manner. Yes, people reason in a truth-seeking manner in many contexts. That doesn't mean that they do in all contexts; and does not rebut evidence that shows that they aren't reasoning in a truth-seeking manner in one or another context for one or another reason. You will bear out my prediction in this way, sure enough.

Any study that furnishes evidence consistent with the inference that likelihood ratio & priors are endogenous will be evidence of thinking that is not suited for truth-seeking.

There are many many many many studies that show such a phenomenon. Cultural cognition is one form.

I am convinced that you are motivated by a case of mistaken identity. You confuse me w/ someone who thinks Bayesianism is not useful, doesn't describe human thinking, etc. Your comments all suggest that. The section of Jaynes you cite, too, is taking on psychologists (Kahneman & Tversky, in particular) he sees that way.

Sorry-- wrong guy. Not my project; not even close to it.

I was using Bayesianism here only as an expositional heuristic -- to pin down the sorts of things one would want to investigate to figure out how the psychological dynamics work. You say now how important it is to do that -- so we can figure out whether patterns of belief we observe reflect processes that are consistent w/ Bayesianism or w/ truth-seeking etc. That's what I was doing -- and you still don't see that, apparently, b/c you are fighting w/ someone else doing some other thing entirely.

But in fact, I don't think what I'm describing in Fig. 2 is inconsistent w/ Bayesianism, even. As I said, Bayesianism only tells you what to do *after* you figure out the likelihood ratio. It doesn't tell you how to derive it.

I think it *should* be derived in a manner that is consistent w/ sound causal reasoning. It isn't when the LR is determined by an influence that also causes one's priors -- b/c then one necessarily will be stuck on current understandings that are susceptible of improvement w/ new evidence.

Are you not familiar w/ any evidence that such a thing happens? If so, then you are the one who is conforming what you see to your priors. I keep pointing it out to you, after all.

July 23, 2013 | Registered CommenterDan Kahan

@ConceptTinkerer:

One doesn't need a single explanation for seemingly diverse phenomena. But if a single one works, it is better than several.

July 23, 2013 | Registered CommenterDan Kahan

@Dan:

That only applies when you make a comprehensive model of a system. What we're modeling here is a very narrow slice. A single Ptolemaic epicycle might be a very simple explanation for a pattern in astronomy, but that doesn't make it better than a more complex Newtonian gravitational interaction. Critically, the motivated reasoning model is very easy to fit to a wide range of scenarios, which means narrower, harder-to-fit explanations (where model failures are easier to see) get priority. That is, unless the motivated reasoning model is made precise enough to be as hard to fit as those other explanations.

I take issue with your latest response to Konrad. Konrad has made reasonable arguments in support of a single, consistent claim: that non-truth-seeking reasoning is not needed to explain the effects of cultural cognition, or at least that you haven't demonstrated such a need here. My arguments have been a bit more meandering, but I'm mostly pointing at the same target. We don't need to demonstrate anything stronger than the negation of your claim. Ultimately, I'm sure you have a great deal of evidence to support your claim that simply hasn't been brought into this discussion. If you want to rest your case on that point, I see no need to argue further. However, to do justice to the broader body of evidence, you need a more sophisticated starting point than the analogy you've given in this post.

July 24, 2013 | Unregistered CommenterConceptTinkerer

Dan: not even close to a good response, and you are completely mischaracterizing me. I'll leave this discussion now.

July 24, 2013 | Unregistered Commenterkonrad

@ConceptTinkerer:

I don't see any genuine engagement in Konrad's argument with the evidence in the studies that demonstrate subjects are adjusting the likelihood ratio or weight assigned to one and the same piece of evidence in response to a manipulation of the ideological stake they have in crediting or discrediting the evidence. I don't even see genuine comprehension of it.

I don't understand what sort of truth-seeking "computational constraint" makes someone see a person as blocking another person from entering a building when the person doing the blocking is an abortion protestor but not a military-recruitment protestor. If people "see" what they "expect to see" or already believe -- that's exactly what I'm describing: they are adjusting their assessments of evidence to their existing beliefs, not adjusting their beliefs based on new evidence. It's *not* truth-seeking to process information in a manner that prevents you from ever seeing that you are wrong.

I don't see how it is truth-seeking for a scientist to look at the methods of a study & declare them sound when the study reaches one result (one he believed in before the study) but unsound if it reaches another.

Similarly, I don't see how it is truth-seeking for an ordinary citizen to credit sterling credentials as evidence that a scientist is an "expert" when that scientist takes a position on climate change that fits the one that obtains in that person's cultural group but not when that same scientist -- w/ the same credentials -- takes a position contrary to the one that prevails in his cultural group.

I don't see how it is truth-seeking for a person to treat a study on climate change as "valid" when someone 5 mins ago argued we need to make greater use of carbon-emission controls but *invalid* when told 5 mins ago that "geoengineering" is a solution to climate change.

Etc.

All of these studies use the design I described. But we aren't even getting to the interesting issues about these studies b/c we are stuck on a very tendentious debate about whether people are "Bayesians" etc-- something that I have explained is irrelevant to the question at hand: they could be Bayesians and still be engaged in reasoning that reflects an endogeneity between likelihood ratio & prior.

You can keep coming up w/ stories that "explain" everything as "consistent with" truth-seeking. I'm sure you can. It's precisely b/c story-telling never settles disputes over plausible competing conjectures that we look at evidence.

July 25, 2013 | Registered CommenterDan Kahan

I had a much more detailed response typed up, but I don't see value in continuing this exercise, and I take it you don't either. I do have one final comment to make that I hope you'll think about for the future.

I see three overarching claims in your argument, and I think it's useful to spell them out. The first is purely descriptive: there is a pattern of reasoning errors in which people polarize on their views of facts in correspondence with their cultural affiliations or ideological commitments. The second is that there is at least one reasoning mechanism behind this pattern that is tuned for a goal other than seeking the most accurate understanding of the facts. The third is that there is a dominant bias that specifically seeks to protect or elevate the reasoner's identity groups and to advance an ideological agenda. These are fundamentally different claims, and you've been opportunistically interchanging them in a fallacious shell game. Naturally, the result is more evasion than engagement.

Really, I'm surprised. The whole point of my argument was to get you to make your claims more precise, not less. As a layperson, this is how I get a clear picture of what (individual) experts really know under the surface. I hope in the future, you'll think through your core concepts more carefully before you post.

July 27, 2013 | Unregistered CommenterConceptTinkerer
