Thursday, March 26, 2015

On self-deception & motivated reasoning: Who is fooling whom?

From something I'm working on....

Who is fooling whom?

Identity-protective cognition is a species of motivated reasoning that consists in the tendency of people to conform disputed facts (particularly ones relevant to political controversies) to positions associated with membership in one or another affinity group. I will present evidence—in the form of correlational studies, standardized assessment tests, and critical-reasoning experiments—showing that identity-protective cognition is not a consequence of over-reliance on heuristic information processing. On the contrary, proficiency in one or another aspect of critical reasoning magnifies individuals’ tendency to selectively credit evidence in a manner that conforms to the position associated with their group identity. The question I want to frame is, Which of these two conclusions is more supportable: that individuals who engage in this form of information processing are using their reason to fool themselves; or that we (those who study them) are fooling ourselves about what these individuals are actually using their reason to do?



Reader Comments (25)

Dan -

==> "The question I want to frame is, Which of these two conclusions is more supportable: that individuals who engage in this form of information processing are using their reason to fool themselves; or that we (those who study them) are fooling ourselves about what these individuals are actually using their reason to do?"

This language, it seems to me, implies that all of "these individuals" can be characterized in the same manner. Although I know that you don't feel that they can, and that an examination of your accompanying analysis should make that clear, I would expect that such wording would elicit "You - talking to me? Are you talking to me?" responses.*

*Which raises an interesting question of whether, if you changed the wording, you might be just as likely to get those types of responses.

March 26, 2015 | Unregistered CommenterJoshua

Which of these ... or both?!

"The first principle is that you must not fool yourself — and you are the easiest person to fool"

as a certain famous physicist said.

March 26, 2015 | Unregistered CommenterPaul Matthews

@Joshua:

Fair enough. But I actually have no interest in the question whether researchers are "deceiving themselves" -- only in whether they are making a mistake. The mistake would be to assume that identity-protective cognition (IPC) is a bias b/c it diminishes the probability of forming true beliefs; if people form beliefs to do something other than "get the right answer," then a research program premised on that assumption is ill-founded.

I don't know that that's true. I think, in fact, probably there are instances in which IPC is best viewed as promoting some goal independent of truth seeking and others in which it frustrates an agent's own ends.

What I do know is that there is a problem in not questioning whether information processing related to facts should always be modeled as the performance of mental operations aimed at arriving at "true" conclusions about those facts.

March 26, 2015 | Registered CommenterDan Kahan

"The question I want to frame is, Which of these two conclusions is more supportable: that individuals who engage in this form of information processing are using their reason to fool themselves; or that we (those who study them) are fooling ourselves about what these individuals are actually using their reason to do?"

There's another hypothesis - that individuals who apply their reasoning do so rationally, but whether they apply it depends on whether the claim conforms to or contradicts their existing beliefs. They're all capable of sophisticated information processing, but they're lazy. Why go to a lot of effort to prove the obvious? Why do it the hard way when you already know the answer from other sources?

In fact, people who are more aware of how they can be fooled may be more likely to rely on what they already know rather than rational reasoning. It's what I call the Magician Effect. You watch the magician saw the lady in half - the clear and logical implication of what you're seeing is that the saw must be penetrating her body. But you don't believe it - your prior beliefs override what your senses seem to imply. Likewise, any mathematician will be able to show you a bunch of tricks to prove - absolutely prove - that two plus two equals five.

And when a political opponent presents an argument that appears to demonstrate that everything you believe in is wrong, you suspect a trick. Your belief in your own fallibility can cause you to override your own reasoning faculties.

If you know your own reasoning is fallible, what can you reasonably conclude?

March 26, 2015 | Unregistered CommenterNiV

@NiV:

Could be. But look at the relationship between beliefs on climate change & scores on a climate science literacy test.

But in any case, it is the *pattern* that needs to be explained: why is there a *bias* in favor of identity-protective beliefs relative to truth-seeking?

March 26, 2015 | Registered CommenterDan Kahan

@Joshua--

Occurs to me now that your point was about individuals as well as researchers -- or maybe only the former?

It's a kind of odd way of talking but I think that the usage is to say "huh, individuals experience identity-protective cognition; wonder why?" The ones who don't are noise.

When I talk to people who say they don't fit the pattern, I certainly am talking to noisy people!

March 26, 2015 | Registered CommenterDan Kahan

"...that identity-protective cognition is not a consequence of over-reliance on heuristic information processing. On the contrary, proficiency in one or another aspect of critical reasoning magnifies individuals’ tendency to selectively credit evidence in a manner that conforms to the position associated with their group identity."

I should think that's a given, statistically. Affect is a fundamental part of our thinking machinery, and will form the underlying tone, the direction, for critical reasoning to follow up on and extend and reinforce.

"...if people form beliefs to do something other than "get the right answer," then a research program is ill-founded."

Of course beliefs are not formed in order to 'get the right answer'. The above system was bequeathed to us by evolution as a means of continuing to operate in a societal form when there *is no* right answer, due to irreducible uncertainty. Or irreducible on any reasonable time-frame or economic threshold, at any rate. While in recent times we know a great deal more than throughout most of our evolutionary history, the system will take time to change. And besides, there is still irreducible uncertainty in many domains. Political controversies arise typically out of the underlying political philosophies, and there is still no 'right' answer at all regarding these, although the 'best' answer is to have a variety of them and ensure that their influence rotates and none get too extreme.

Strong group identity and the affect it implies in individuals will typically arise in the face of underlying uncertainty, although it also has to be uncertainty that encompasses serious societal impact, or *perceived* impact. There are very many complications, such as the fact that a 'once uncertainty', now resolved, can still underpin a now-undermined group identity that nevertheless coasts on cultural inertia for generations, or morphs into something more aligned to the modern uncertainty landscape. And as you have often noted, an apparent issue that invokes identity may duck the surface logic because it's actually about deeper issues that do touch on uncertain themes. But the concept is straightforward enough. Is this system a *net* evolutionary advantage? Yes. Can it go badly wrong? Yes.

Individuals are not islands, despite their personal skills and intelligence. Our minds are social minds (per neuroscientist Michael Gazzaniga). All individuals 'engage in this form of information processing' to some extent, and are not fooling themselves but taking part in normal societal activity (whether the end result works out good or bad). Investigators are only fooling themselves if they think individuals are disconnected entities, and do not look to cultural evolution and associated disciplines for deeper clues about why we might behave as we do.

March 26, 2015 | Unregistered CommenterAndy West

"Could be. But look at the relationship between beliefs on climate change & scores on climate science literacy test."

What about it?

"But in any case, it is the *pattern* that needs to be explained; why is there a *bias* in favor of identity-protective beliefs relative to truth-seeking."

Because according to a believer, the identity-related beliefs *are* the truth, in the same way that the magician's audience know that magicians don't really use magic to saw women in half. The magician can present me with as much incontrovertible evidence as he likes - I know that I can be fooled, so I know not to trust appearances. "This will make sense to anyone who’s ever read the work of a serious climate change denialist. It’s filled with facts and figures, graphs and charts, studies and citations. Much of the data is wrong or irrelevant. But it feels convincing. It’s a terrific performance of scientific inquiry." It feels convincing, but you know not to trust it and that it must somehow be wrong because it comes to a conclusion you 'know' to be false; it must be a trick of some kind. That's the feeling of your faith overriding your reason being described there.

March 26, 2015 | Unregistered CommenterNiV

Dan -

==> "Occurs to me now that your point was about individuals as well as researchers -- or maybe only former?"

Actually, I was thinking of the former. I was in part getting at what Paul was getting at - that "both" could be true, but also that either could be true for any particular individual.


==> "It's a kind of odd way of talking but I think that the usage is to say "huh, individuals experience identity-protective cognition; wonder why?" The ones who don't are noise."

Yeah. That's the way I think of it. Is that really odd?

March 26, 2015 | Unregistered CommenterJoshua

@NiV:

The idea that we can infer people are reasoning correctly, rather than using identity-protective cognition, when they get the right answer is called "the Krugman fallacy."

Scores on the "ordinary climate science intelligence" scale show that the vast majority of those on both sides are not bothering to use evidence to figure out their positions.

March 26, 2015 | Registered CommenterDan Kahan

==> "I should think that's a given, statistically."

I disagree because of the [that] in the statement referenced...

==> "On the contrary, proficiency in one or another aspect of critical reasoning magnifies individuals’ tendency to selectively credit evidence in a manner that conforms to the position associated with their group identity."

This wording, it seems to me, assumes a causality, and even more problematically a direction of causality, in a relationship where all we know, "statistically," is that there is an association.

I think that there could well be a causal mechanism that drives both greater critical reasoning skills and identity-related bias.

But even if we accept that there is a direct causal relationship between greater critical reasoning skills and identity-related bias, how do we know the direction of causality? I suspect that those who have a greater drive toward identity protection are more likely to develop critical reasoning skills. I don't think that there is any single cause; I assume that the causality is multifactorial (multifarious?) - but I think that there is at least some...

identity-protection ====> greater critical reasoning skills

causality going on.
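
To make the identification problem concrete, here's a toy simulation - purely illustrative, with invented variable names and effect sizes, not anything from the actual studies - in which three different causal stories each produce a comparable cross-sectional association between reasoning proficiency and polarization:

```python
# Toy sketch (invented numbers): three data-generating stories that all
# yield a positive cross-sectional correlation between reasoning
# proficiency and identity-protective polarization.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Story 1: proficiency drives polarization.
prof_1 = rng.normal(size=n)
polar_1 = 0.5 * prof_1 + rng.normal(size=n)

# Story 2: an identity-protective drive pushes people to develop proficiency.
polar_2 = rng.normal(size=n)
prof_2 = 0.5 * polar_2 + rng.normal(size=n)

# Story 3: a common cause (say, political engagement) drives both.
engagement = rng.normal(size=n)
prof_3 = 0.8 * engagement + rng.normal(size=n)
polar_3 = 0.8 * engagement + rng.normal(size=n)

for label, (x, y) in {
    "proficiency -> polarization": (prof_1, polar_1),
    "polarization -> proficiency": (prof_2, polar_2),
    "common cause -> both": (prof_3, polar_3),
}.items():
    print(f"{label}: r = {np.corrcoef(x, y)[0, 1]:.2f}")

# All three stories print a similar positive correlation, so the snapshot
# association alone can't say which story is true - that's what
# experimental manipulation or longitudinal data would be needed for.
```

A cartoon, obviously, but it's why I keep harping on longitudinal (or experimental) evidence.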

March 26, 2015 | Unregistered CommenterJoshua

@Andy West -- take a look at William Von Hippel & Robert Trivers, The Evolution and Psychology of Self-Deception, 34 Behavioral and Brain Sciences 1-16 (2011) for confirmation. Then try to figure out what Goedel would say about the idea that reading Trivers for confirmation of this position is a form of self-deception.

March 26, 2015 | Registered CommenterDan Kahan

"The idea that we can infer people are reasoning correctly, rather than using identity-protective cognition, when they get the right answer is called "the Krugman fallacy.""

We already agree, I think, that most people don't reason scientifically from the evidence, but instead use unreliable heuristics like "the expert consensus". People on different political sides recognise different people as "experts", but they're both using the same basic method. However, that doesn't mean they're not truth-seeking in their intentions when they do so.

You, I think, base your belief in dangerous climate change on "the expert consensus", yes? You're doing so because you think that's a more reliable way to get to the truth than your own personal understanding of climate physics, right? You're not deliberately subjugating your desire for truth to a need for political conformity - although I've no doubt there'd be unpleasant consequences for you personally and professionally if you 'came out' as a climate sceptic. But you're not a believer because you're scared of the social consequences, you're a believer because you genuinely believe that trusting these particular experts is your straightest path to truth.

And when you see the "experts" that the other side recognise as such - Steve McIntyre, Roy Spencer, Nic Lewis, Demetris Koutsoyiannis, Richard Lindzen, Judith Curry, etc. - I'm guessing you're looking for reasons why they might be saying things you 'know' not to be true, and perhaps finding them. You've got yourself good reasons to dismiss them. You're not rejecting them because you think it would be politically inconvenient, you're rejecting them because you think they're wrong.

People think their political beliefs are true. And so conformity with political dogma is conformity with truth. The body of political belief is the collective "expert consensus" of all right-thinking people, and if people subordinate their own reasoning to the collective's beliefs, it's only because they think the latter is more reliably the truth.

I don't think it's anything like that simple. Individuals do think about and disagree with their own political side on many points. But if we're going to generalise...

March 27, 2015 | Unregistered CommenterNiV

@Joshua--

I feel the experimental work supports the inference that it is indeed the *use* of such proficiencies that explains why those who score higher on CRT, science literacy, numeracy, etc. are more polarized. The experiments were done precisely to test that conjecture.

Obviously, I'd revise my views appropriately in light of additional evidence. But this is evidence & it is supportive of causation.

March 27, 2015 | Registered CommenterDan Kahan

Dan -

Thanks,

My ability to understand this stuff is limited, but I wasn't convinced of evidence of causality or direction of causality from that post the first time I read it, and I'm dubious that such causality can be shown w/o longitudinal evidence (arguing about causality without longitudinal data is kind of a pet peeve of mine)....but you understand this stuff better than I, so I'll try looking at it again.

March 27, 2015 | Unregistered CommenterJoshua

I think you’re looking in the wrong place. Even if we believe Von Hippel & Trivers as opposed to the critical papers that follow, as the paper points out, mechanisms for the detection of self-deception constantly keep evolutionary pace with the mechanisms for self-deception itself. This provides a (constant in the long-term) limit on the scope of self-deception as a primary social driver. Group loyalties / identity (and so the beliefs and individual affect they spawn) do not rest primarily in self-deception, but on more fundamental mechanisms rooted in the evolutionary primacy of social groups (and so tied to altruism too).

Bear in mind that in the examples you note above such as political controversies, there is no absolute truth in the underlying philosophies, albeit surface logic may be ducked when identity defense is aroused. And notwithstanding complications too like the cultural inertia mentioned above, there is no need for self-deception when there is no absolute truth; everything is a (group) stance. Also, group adherents will accommodate some inconsistencies for the sake of the overall benefit of membership. Very few folks say they agree with *everything* their political party promotes, even if they are strong supporters. Vanilla cultural bias, and especially emotive bias components, are sufficient to drive behavior. Extremist fringes may get more into self-deception, but I doubt they’d skew your stats much.

It looks like you’re trying to stack another layer onto the ‘knowing disbelief’ house of cards. I guess mass self-deception would come in very handy for that ;) Best make the foundations more stable first, or discard, whichever works out. I think some of the lower layers are dodgy. While the effect exists - I understand that your Pakistani Dr is a real case, for instance - there are about 2.5 doctors per 1,000 people in the USA and 0.1 per 1,000 in Indonesia (the most populous Muslim nation). Most countries are in-between. And I’d guess most Muslim doctors are dealing with evolution just like their Christian colleagues dealt with it over the last century, by simply redefining the religious literature to be metaphoric, not literal. No clash anymore. Nor would most ordinary General Practitioners have a true clash anyhow, unlike someone working on stem-cell research who actually needs to invoke evolutionary principles. But even if many Muslim doctors actually had a problem, it wouldn’t really register in the measurements. Moving to your Kentucky farmer, as noted previously the main paper you quote doesn’t support him. If you think it does support ‘knowing disbelief’, you have to explain his cousin Jacob and ‘unknowing belief’; but both are equally unsound as a formula for what’s going on in that data. I do not think ‘knowing disbelief’ is a mass effect; it requires very specific conditions to create and maintain.

The strong polarizations you uncover, stronger still for those more literate / numerate, indicate culture in play, as you've noted yourself before. And critical capability in service to cultural bias, as you mention to Joshua above. If you find out who is within which culture, I don't think you'd need to try and massage self-deception up to a solution. Not as easy to identify the cultures as one might think, if oneself is inside one or more of them.

I think Goedel would merely observe that our knowledge on the topic is incomplete.

March 27, 2015 | Unregistered CommenterAndy West

@NiV

"...The body of political belief is the collective "expert consensus" of all right-thinking people..."

Yes, and not just for political beliefs. It is a big part of the 'job' of culture to manufacture consensus about the unknowable. Works for other secular belief systems too, and of course religions. There are some pretty decent upsides, like civilization wouldn't have arisen without this mechanism. But the benefits are net, and there are some pretty heavy downsides too ):

March 27, 2015 | Unregistered CommenterAndy West

Scientists are being fooled by their own ideology, supposing that the search for a non-context-dependent truth is the normal use of reason. Mostly, people use argument to cement their social bonds and coordinate group action, which has generally been highly adaptive. A ring of truth is helpful, yes, but in everyday group disagreement, rigorous truth is usually unattainable anyway. As the Pakistani doctor, etc., shows, in those cases where beliefs have non-social consequences, people are more than capable of holding whatever logically inconsistent beliefs are necessary in order to address both their social and technical needs.

March 28, 2015 | Unregistered CommenterRob Maclachlan

@AndyWest-- I'm now confused about what your original position was. I thought you meant to be saying that "of course" we don't use our reason to "figure out the truth" -- b/c there was no evolutionary advantage in being "right," only in forming social connections. That's Trivers' position, pretty much.

I like Kurzban's reply. But I wish he had used focused, on-point evidence & not just imaginative syntheses and stylized recharacterizations of the work of others to back it up.

March 29, 2015 | Registered CommenterDan Kahan

Group identity / loyalty and the beliefs and affect they spawn are indeed not about truth. They are about maintaining a social consensus in the face of the unknowable. But evolution works on multiple levels at once. So there is group selection, individual selection, cell selection, gene selection, etc., all simultaneously. If we assume Trivers' position is correct, the co-evolutionary struggle that he references is in any case essentially at the individual level. Individuals are using self-deception as a means to enhance deception, so as to gain a little bit more from their group (along with other postulated benefits like enhanced self-image and enhanced pleasure or whatever). He points out that this would have a cost to truth. Sure, this is essentially hawk behavior in the hawk versus dove game; the practicing individual gains at a cost to other individuals and the group. Since some individuals may also lose out via personal interactions, part of the reason the detection of self-deception keeps pace is also at this level. But the important thing is that via other and much stronger mechanisms, group selection dominates, which is why we have civilization and altruism and millions will volunteer to die to protect their own people. The doves always win in any system featuring 'correlated interaction'*.

Self-deception may well be ubiquitous for all I know, at least in a mild form such as most folks puffing up their self-image. But even at the stronger end I figure it will be a second-order effect in the kind of cultural polarizations you are looking at. These are driven by more dominant mechanisms which arise from group selection. So looking at self-deception is most likely a distraction; at the power levels you need I don't think it will be a mass phenomenon. Maybe at extremist fringes. Maybe where cultural inertia has left a group way out there in la la land. But not for the main event. Given that detection of self-deception keeps pace with self-deception, there will always be a counterbalance limiting its social drive on the bigger stage. What other effects don't have a counterbalance? And you are essentially searching for things that drive *cultural loyalty* to a group (hence the polarization), not things that show disloyalty by stealing from the group as Trivers suggests.

Maybe it's last night's wine, but I'm not grasping the essence of Kurzban right now. However, I note that he says: "True beliefs are obviously useful for guiding adaptive behavior, so claims that evolved computational mechanisms are designed to be anything other than as accurate as possible requires a powerful argument." Well I'm happy with that. In-group competition for status or a better share of divided resources (via Trivers' self-deception = enhanced deception) is not a strong argument for *major* deviations from truth. This will threaten the whole group. But the mechanism that forms a viable group in the first place, in the face of many unknowables where there simply is no truth - a group which by combined efforts most certainly has a much increased chance of survival - *is* a powerful argument. And remember this mechanism is not really about deviating from 'the truth' (albeit surface logic can be ducked), it's about establishing an agreed position and so action, where there *is no truth*. While we're a bit better now at examining history to help rule out political philosophies that do not seem to have worked, there is still little to no absolute truth regarding the current crop of (mainstream at least) philosophies. Hence cultural positions in political controversy. And it's good to bear in mind also that while enforced social consensus has been a major *net* benefit thus far, it can have very major downsides too.

(*Darwinian Populations by Peter Godfrey-Smith has a good explanation of 'correlated interaction'. This and related insights belatedly provided the underpinning that group / multi-level selection needed, helped by the fact that competing theories could never achieve a viable explanation for altruism beyond kin groups).

March 29, 2015 | Unregistered CommenterAndy West

Dan -

I assume that my limited ability to understand is the limiting factor here, but I spent a bit more time looking at that paper and I still don't get why you think that causality is inferred. So maybe you can get me to stop bugging you about this if you give me a dumbed-down explanation. I'll lay out some of my confusion here. You often don't respond to these wandering narratives - so I don't really expect a response - but if you see a reason to read on and see a place where you could perhaps clarify my confusion, I would appreciate it.

I don't doubt that people who have greater CRT, numeracy, scientific literacy, etc. have more skills at their disposal to support polarized views, and that having those skills enables one to magnify any baseline propensity towards polarization...but....

To believe that greater CRT, numeracy, scientific literacy, etc. causes polarization, I would expect that longitudinal data would be necessary. I would expect the experiment to take subjects and measure, over time, whether as the "cause" (CRT, etc.) increased in individuals, so did the "effect" (polarization). I would expect to see that there was some clear proportional growth in the one as compared to the other over time (along with, perhaps, evidence that could explain why the growth rates, respectively, were not always proportional to each other).

Inference of causality would be strengthened if evidence showed that as CRT, numeracy, and scientific literacy weakened over time, so did polarization - but obviously that would be an impossible experimental paradigm (how could you decrease CRT, numeracy, and scientific literacy?). Similarly, I wouldn't expect to see that if you decreased polarization over time you would see a decrease in CRT, numeracy, and scientific literacy (again, how does someone become less scientifically literate?)....but I do think that it would be important to test whether people who are more politically engaged - because they are more invested in an identity-protective drive that can be fulfilled by greater mastery of the skills related to CRT, numeracy, and scientific literacy - are for that reason more likely to become more numerate.

I get that setting up a longitudinal design study here would be next to impossible, but....

It seems to me that level of education is an attribute that is pretty strongly associated with level of political engagement. Is that because people who are more educated see more reason to become, and have more skills needed to be, politically engaged? Is it because people who come from a more politically engaged culture (say, family of origin) are more likely (and have more reason) to become more educated? (Personally, I don't think that the causality runs only in one direction.)

I think of some highly educated Asian clients/students I've worked with (scientists, engineers, economists, high-level business execs) who were strikingly, to me, apolitical. They had very little investment in a political identity - in a way that seemed to me to be very different from what is the case with their American counterparts who were similarly highly educated (I'm speaking generally here, of course). Now that might mean that in their case, there isn't an inherent direction of causality that runs from

higher political identification ===> greater CRT, numeracy scientific literacy, etc.

But it also might be that, since they were not polarized along ideological lines, there isn't an inherent direction of causality that runs from

Greater CRT, numeracy, scientific literacy etc. ===> higher political identification (and thus, propensity towards polarization).

Now maybe, in comparison to a nationally representative cohort, more highly educated Asians are more likely to be politically engaged and polarized on various issues?

Or maybe the cause and effect that you're finding is culturally bounded.


Anyway, thanks for allowing the space for my rants. :-)

March 29, 2015 | Unregistered CommenterJoshua

Dan,

One answer is that your first choice poses an unnecessary dichotomy.
In the context of the climate debate it seems plausible that both sides are thinking reasonably. There are enough knowns vs. unknowns and uncertainties to justify several positions as reasonable. While we do selectively credit evidence following certain patterns, when the evidence allows for several plausible interpretations, what basis do we have to decide if we are fooling ourselves? From this frame, then, the parties fooling themselves are the ones holding to their interpretation strongly while rejecting others as unreasonable.

On the other hand: are [we] fooling ourselves about what these individuals are actually using their reason to do?
Probably. It's a pretty good null hypothesis anyway.

I suggest that the group identity theory is limiting and a possible source of fooling ourselves. Is it, for instance, group identity we are protecting, or have we selected identification with those groups partly because of a prior personal identity? So I would ask whether it's necessary to force the distinction between types of identities.
From my personal reflections the "group identity" theory is rather limiting. I am more often aware of a sense of identity rather than personal identity vs. group identity vs. non-group identity.

From a different point of view, I think your question sails very close to the wind of the big question of just what rationality is - a question about which a data-driven psychologist has little to say.

March 30, 2015 | Unregistered CommenterCortlandt Wilson

Cortlandt -

==> "In the context of the climate debate it seems plausible that both sides are thinking reasonably."

Not to speak for Dan - but one of the most common misperceptions I see about theories related to how group identity influences reasoning is an interpretation that it implies irrationality.

==> "From this frame then the parties fooling themselves are the one holding to their interpretation strongly while rejecting others as unreasonable.

IMO, that is clearly the dominant paradigm.

==> "From my personal reflections the "group identity" theory is rather limiting. I am more often aware of a sense of identity rather than personal identity vs. group identity vs. non-group identity"

I agree with this at the personal level. But on the other hand, I would guess that both you and I are outliers - outliers that have a higher level of personal investment than is the norm. Generalizing from such a sampling is, clearly, problematic.

March 30, 2015 | Unregistered CommenterJoshua

@Joshua

"Not to speak for Dan - but one of the most common misperceptions I see about theories related to how group identity influences reasoning is an interpretation that it implies irrationality."

Joshua - I think it's easy (but perhaps incorrectly?) to infer the "irrationality" from Dan's writing. As an editor I would encourage Dan to be more explicit about the limitations of the data from his studies as he sees them. Specifically what interpretations the data does not support.

But in the present case Dan, as I interpret his question, explicitly raises the issue of rationality & irrationality when he asks whether "individuals who engage in this form of information processing are using their reason to fool themselves . . . or that we (those who study them) are fooling ourselves".

How do you interpret Dan's question?


Also ... In thinking about your comment I'm more aware of the imprecision in my own conception of reason and rationality. I'm making reference here to the Age of Enlightenment, or Age of Reason, for instance.

My current thinking appears to me to be in line with some of Dan's recent posts about System 1 and System 2. That is, rationality as conceived in System 2 thinking partly relies on System 1 intuitions. Alternatively, concepts of rationality are subject to Goedel-like incompleteness.

I'm thinking that Dan pretty much holds a similar belief himself but I don't know that he has stated it. Some of the common misconceptions of his work might be dispelled if he did.

March 30, 2015 | Unregistered CommenterCortlandt Wilson

Cortlandt -

==> "Joshua - I think it's easy (but perhaps incorrectly?) to infer the "irrationality" from Dan's writing. As an editor I would encourage Dan to be more explicit about the limitations of the data from his studies as he sees them. Specifically what interpretations the data does not support."

Fair enough. I agree.

==> "How do you interpret Dan's question?

I think this goes back to your point above about clarity and explicitness and heading misconceptions off at the pass. I am a believer in writer-responsible text - where, within reason, the writer is responsible for misunderstanding on the part of the reader.

Anyway, knowing from my previous reading that Dan isn't arguing that people are acting "irrationally" if their reasoning is biased by factors such as cultural cognition, I have a perspective that leads to a different conclusion. Basically, rationality and biased reasoning are not necessarily mutually exclusive.

Hopefully, Dan will respond to your further comments on that issue.

March 30, 2015 | Unregistered CommenterJoshua