Tuesday, June 30, 2015

Self-deception at L'université Toulouse: an encore!

In an earlier post, I offered a report on my presentation at the fun "self-deception" symposium sponsored by the Institute for Advanced Study at L'université Toulouse Capitole (UT Capitole). I also described my ambivalence toward characterizing identity-protective cognition--the species of motivated reasoning that is at work in public conflict over societal risks & like facts--as a form of "self-deception."

These reflections have now inspired/provoked a report from another of the symposium participants, Joël van der Weele, who presented really cool study results on the dynamics of self-deception in job interviewing. In addition to summarizing the study highlights, Joël's post widens the lens to take in how "self-deception" has figured more generally in behavioral economics. Having read & reflected on the post, I would definitely now qualify my own ambivalence: I think "self-deception" fits more comfortably when the "self" is the object as well as the subject of the asserted "deception" than it does when the objects are societal risks... But I'm perplexed, which is good!

Strategic self-deception

Joël van der Weele

 (with thanks to Peter Schwardmann for input)

Like Dan, I attended the workshop on self-deception in Toulouse, and like Dan, I will focus on my own talk. Unlike Dan's, my viewpoint is that of a behavioral economist, with the associated convictions and blind spots, of which I am happy to be reminded.

[Photo caption: Joël van der Weele, steely resisting self-deception]

Most of the empirical literature on motivated cognition and self-deception is focused on establishing the existence of this phenomenon. Social psychologists in particular have made great progress in showing that people will systematically bias their beliefs and their information processing in a self-serving manner, and end up believing that they are smarter, nicer and more beautiful than they really are, and that the world is a safer, more just and more manageable place than it really is.

As usual, behavioral economists arrived in this research area a few decades after the psychologists, and are now confirming some of these results in economic contexts, using their own experimental and theoretical paradigms. While they have questioned whether some of the overconfidence evidence is really inconsistent with rationality (Benoît and Dubra, 2011), they also find that much of it does seem to reflect a truly self-serving bias.

At the workshop, several talks were dedicated to summarizing or adding to the evidence of when and where this kind of motivated cognition may occur, for example in the domain of information seeking about stock performance (George Loewenstein), scientific but politicized beliefs about gun control and climate change (Dan Kahan), trust in others (Roberto Weber), and self-inferences from test scores (Zoë Chance).

At the same time, economic studies are showing that overconfidence is expensive. Both in real-world data (Barber and Odean, 2000) and in experiments (Biais et al., 2005), overconfident traders tend to trade too much and make less money. There is more anecdotal evidence from other domains: I am sure you all know people who think they are really good at something they are, in fact, not that good at, with embarrassing or painful results.

Given these costs, why would people deceive themselves? A popular account in both psychology and economics is that people simply like to think well of themselves, or like to think that things will turn out well for them in the future, but this is not a very satisfactory explanation. Why wouldn't evolution or the market weed out those sentimental souls in favor of more hard-boiled types? Where, in other words, are the material benefits that self-deception can bring?

The answer to this question is still mostly in the hands of theorists. Roland Bénabou, who gave the opening talk at the conference, has, together with his co-author Jean Tirole, proposed an explanation in terms of motivation (Bénabou and Tirole, 2002). If people suffer from laziness or have other difficulties following through on their plans, overconfidence may be a helpful "anti-bias" that gets them out of their seats and into action. I don't know of experiments testing this idea, but if you do, I am happy to hear about them.

Another influential idea has been put forward by a biologist, Robert Trivers, in several publications since the mid-'80s (most prominently Von Hippel and Trivers, 2011), including this book. Trivers argues that self-deception enables you to better deceive others and thus achieve social gains. If you truly believe you are great, you will do a much better job of convincing others that you are. This will help you impress potential sexual partners, close sales, land jobs, etc. Self-deception is useful because if you are not aware of lying about being great, you are less likely to feel bad about your deception, give yourself away, or face retribution in case of a subsequent failure to live up to your proclaimed greatness.

This hypothesis is strikingly consistent with the folk wisdom peddled in the popular self-help literature. Just search for "success" and "confidence" on Amazon, and you will find a score of books telling you that if you just believe in yourself (no matter the evidence), riches will soon be yours. While this may be true for the authors of these books, the kind of evidence cited in this literature is not very convincing to someone trained in scientific inference ("Look at person X: she is confident and rich. So if you become as confident as X, you are sure to be rich too.").

So my co-author Peter Schwardmann and I decided to subject the folk wisdom to a proper experimental test. We brought about 300 people into the lab to perform a cognitively challenging task. We then split the group in two. Our treatment group was told that they would be able to earn about 15 euros ($17) if they could persuade others in a face-to-face "interview" that they were amongst the top performers on the task. The control group was not told anything.

Before actually conducting the interviews, participants in both groups privately reported their beliefs about the likelihood of being in the top half of performers on the task, and we paid them for the accuracy of these reports. We found that the treatment and control groups were both overconfident on average, with the average belief of being in the top half at 60%, i.e. 10 percentage points higher than the true rate of 50%.

In line with Trivers' hypothesis, the shadow of the future interactions increased overconfidence by about 50%, from 8 to 12 percentage points. The effect did not go away after we gave participants some noisy information about their actual performance, as the prospect of future deception responsibilities also reduced responsiveness to new information about performance. Thus, the anticipation of future deception opportunities indeed caused more optimistic self-assessments amongst our participants: a case of strategic self-deception.
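To make the arithmetic behind these figures concrete, here is a minimal sketch in Python, using made-up belief data purely for illustration (the numbers and variable names below are not from our dataset), of how group-level overconfidence and the "about 50%" relative increase are computed from elicited beliefs:

    import numpy as np

    # Hypothetical elicited beliefs (probability of being in the top half),
    # one entry per participant; illustrative only, not the experimental data.
    control_beliefs = np.array([0.56, 0.60, 0.50, 0.66, 0.58])
    treatment_beliefs = np.array([0.70, 0.60, 0.55, 0.68, 0.57])

    TRUE_RATE = 0.50  # exactly half of participants are in the top half

    def overconfidence(beliefs):
        """Average elicited belief minus the true base rate, in percentage points."""
        return 100 * (beliefs.mean() - TRUE_RATE)

    oc_control = overconfidence(control_beliefs)      # 8 pp with these illustrative numbers
    oc_treatment = overconfidence(treatment_beliefs)  # 12 pp with these illustrative numbers

    # The "about 50%" figure is the relative increase in overconfidence:
    relative_increase = (oc_treatment - oc_control) / oc_control  # 0.5, i.e. +50%
    print(oc_control, oc_treatment, relative_increase)

In other words, the increase is 4 percentage points in absolute terms; "50%" describes that increase relative to the control group's baseline overconfidence of 8 points.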

Our next question was whether self-deception paid off in the interview phase, i.e. whether increased confidence made a participant more likely to be flagged as a good performer, conditional on real performance. The interactions followed a speed-dating protocol, where we promoted the control group to interviewers, tasked with assessing the performance of the treatment group.

The results in this phase of the experiment crucially depended on the details of the environment. We had given some of the interviewers a short tutorial in lie detection. It turned out that these interviewers were pretty good at spotting the true good performers, and the self-deceptive strategies of the interviewees were ineffective against them. Against untrained interviewers, however, the average level of self-deception in our experiment (i.e. the increase in overconfidence of our treatment group) led to a substantial increase in the chance of being flagged as a top performer and in the associated earnings.
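For readers who want to see what "conditional on real performance" amounts to in practice, one standard way to operationalize it is to regress the interviewer's verdict on the interviewee's stated confidence while controlling for the interviewee's true score, separately for trained and untrained interviewers. The sketch below illustrates that idea on simulated data; the variable names are hypothetical, and this is not our actual specification or dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 400

    # Hypothetical interviewee-level data: true task score, stated confidence,
    # and whether the interviewer received the lie-detection tutorial.
    df = pd.DataFrame({
        "true_score": rng.normal(size=n),
        "confidence": rng.uniform(0.3, 0.9, size=n),
        "trained_interviewer": rng.integers(0, 2, size=n),
    })

    # Simulate verdicts: trained interviewers weight the true score heavily,
    # untrained interviewers respond mainly to displayed confidence.
    index = np.where(df["trained_interviewer"] == 1,
                     1.5 * df["true_score"] + 0.5 * df["confidence"],
                     0.3 * df["true_score"] + 2.5 * df["confidence"]) - 1.0
    df["flagged_top"] = (rng.random(n) < 1 / (1 + np.exp(-index))).astype(int)

    # Does confidence predict being flagged as a top performer,
    # conditional on real performance?
    X = sm.add_constant(df[["confidence", "true_score"]])
    for trained, sub in df.groupby("trained_interviewer"):
        fit = sm.Logit(df.loc[sub.index, "flagged_top"], X.loc[sub.index]).fit(disp=0)
        label = "trained" if trained == 1 else "untrained"
        print(label, "coefficient on confidence:", round(fit.params["confidence"], 2))

A sizable positive coefficient on confidence for untrained interviewers, alongside a much smaller one for trained interviewers, would correspond to the pattern described above.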

All of this is somewhat preliminary, as we are currently refining the results and putting them on paper. As far as we know, there are no other studies showing causal evidence for strategic self-deception in social contexts, although some are suggestive of it (Burks et al. 2013, Charness et al. 2014). If this finding holds up in a wider array of settings, we may find that the pop-psychology literature is not that wrong after all.

References

Barber, B. M. and T. Odean. 2000. "Trading is hazardous to your wealth: Common stock investment performance of individual investors", Journal of Finance, 55:2, 773-806.

Bénabou, Roland and Jean Tirole. 2002. “Self-confidence and Personal Motivation”, Quarterly Journal of Economics, 117:3, 871-915.

Benoît, J.-P. and J. Dubra. 2011. "Apparent Overconfidence", Econometrica, 79:5, 1591-1625.

Biais, B., D. Hilton, K. Mazurier and S. Pouget. 2005. "Judgemental overconfidence, self-monitoring, and trading performance in an experimental financial market", Review of Economic Studies, 72:2, 287-311.

Burks, S. V., J. P. Carpenter, L. Goette and A. Rustichini. 2013. "Overconfidence and Social Signaling", Review of Economic Studies, 80:3, 949-983.

Charness, G., A. Rustichini, and J. van de Ven. 2014. “Self-confidence and strategic behavior”, Amsterdam University mimeo.

Von Hippel, W. and R. Trivers. 2011. "The evolution and psychology of self-deception", Behavioral and Brain Sciences, 34:1, 1-16.

 
