Friday, March 11, 2016

"Monetary preference falsification": a thought experiment to test the validity of adding monetary incentives to politically motivated reasoning experiments

Basically an amplification of a point from Kahan (in press)

1.  Monetary preference falsification

Imagine I am solicited and agree to participate in an experiment by researchers associated with the “Moon Walk Hoax Society,” which is dedicated to “exposing the massive fraud perpetrated by the U.S. government, in complicity with the United Nations, in disseminating the misimpression that human beings visited the Moon in 1969 or at any point thereafter.”  These researchers present me with a "study" containing what I’m sure are bogus empirical data suggesting that a rocket the size of Apollo 11 could not have contained a sufficient amount of fuel to propel a spacecraft to the surface of the moon.

After I read the study, I am instructed that I will be asked questions about the inferences supported by the evidence I just examined and will be offered a monetary reward (one that I would actually find meaningful; I am not an M Turk worker, so it would have to be more than $0.10, but as a poor university professor, $1.50 might suffice) for “correct answers.”  The questions all amount to whether the evidence presented supports the conclusion that the 1969 Moon-landing never happened.

Because I strongly suspect that the researchers believe that that is the “correct” answer, and because they’ve offered to pay me if I claim to agree, I indicate that the evidence—particularly the calculations that show a rocket loaded with as much fuel as would fit on the Apollo 11 could never have made it to the Moon—is very persuasive proof that the 1969 Moon landing for sure didn't really occur.

If a large majority of the other experiment subjects respond the way I do, can we infer from the experiment that all the "unincentivized" responses that pollsters have collected on the belief that humans visited the Moon in 1969 are survey “artifacts,” & that the appearance of widespread public acceptance of this “fact” is “illusory” (Bullock, Gerber, Hill & Huber 2015)? 

As any card-carrying member of the “Chicago School of Behavioral Economics, Incentive-Compatible Design Division” will tell you, the answer is, "Hell no, you can't!" 

Under these circumstances, we should anticipate that a great many subjects who didn’t find the presented evidence convincing will have said they did in order to earn money by supplying the response they anticipated the experimenters would pay them for.

Imagine further that the researchers offered the subjects the opportunity, after they completed the portion of the experiment for which they were offered incentives for “correct” answers, to indicate whether they found the evidence “credible.”  Told that at this point there would be no “reward” for a “correct” answer or penalty for an incorrect one, the vast majority of the very subjects who said they thought the evidence proved that the moon landing was faked now reveal that they thought the study was a sham (Khanna & Sood 2016).

Obviously, it would be much more plausible to treat that "nonincentivized" answer as the one that finally revealed what all the respondents truly believed.
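To make the logic concrete, here is a minimal simulation sketch of the thought experiment. The code and every parameter value in it are illustrative assumptions of my own, not data from any of the studies cited: it simply shows that if most subjects guess which answer the experimenters will pay for and report it regardless of what they believe, the incentivized responses say almost nothing about the underlying distribution of belief, while the unincentivized responses recover it.

```python
import random

random.seed(42)

# Hypothetical simulation of the "monetary preference falsification" thought
# experiment. All parameter values below are illustrative assumptions.

N = 1000                       # simulated subjects
true_belief_rate = 0.05        # fraction who genuinely find the bogus evidence persuasive
guess_experimenter_rate = 0.9  # fraction who guess the "correct" (rewarded) answer is
                               # the one the experimenters favor

incentivized_yes = 0    # "evidence proves the landing was faked," reward offered
unincentivized_yes = 0  # same question, no reward or penalty attached

for _ in range(N):
    believes = random.random() < true_belief_rate
    plays_along = random.random() < guess_experimenter_rate

    # With a reward for "correct" answers, subjects report whichever answer
    # they expect the experimenters to pay for, regardless of belief.
    if believes or plays_along:
        incentivized_yes += 1

    # With no reward or penalty, subjects simply report what they believe.
    if believes:
        unincentivized_yes += 1

print(f"Incentivized 'moon landing faked' responses:   {incentivized_yes / N:.0%}")
print(f"Unincentivized 'moon landing faked' responses: {unincentivized_yes / N:.0%}")
```

On these made-up numbers, roughly 90% of subjects "agree" the landing was faked when paid for "correct" answers, while only about 5% say so once the incentive is removed; the incentivized figure tracks subjects' guesses about the experimenters' expectations, not their beliefs.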

By their own logic, researchers who argue that monetary incentives can be used to test the validity of experiments on politically motivated reasoning invite exactly this response to their studies.  These researchers might not have expectations as transparent or silly as those of the investigators who designed the "Moon walk hoax" public opinion study.  But they are furnishing their subjects with exactly the same incentive: to make their best guess about what the experimenter will deem to be a "correct" response--not to reveal their own "true beliefs" about politically contested facts.

Studies as interesting as Khanna and Sood (2016) can substantially enrich scholarly inquiry. But seeing how requires looking past the patently unpersuasive claim that "incentive compatible methods" are suited for testing the external validity of politically motivated reasoning experiments (Bullock, Gerber, Hill & Huber 2015).

Refs

Bullock, J.G., Gerber, A.S., Hill, S.J. & Huber, G.A. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10, 519-578 (2015).

Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press). 

Khanna, K. & Sood, G. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (2016), available at http://www.gsood.com/research/papers/partisanlearning.pdf.


Reader Comments (1)

In my opinion, the most interesting part of this analysis would be how people arrive at what are presumably "their own" "true beliefs" about politically contested facts. The absorption of those "facts" comes, I believe, as a result of carefully targeted messaging that attempts to position new ideas in ways that are amenable to preexisting belief systems, so as to create a base group of people who will support politicians with those positions.

This is reinforced by our belief-segmented media outlets, as well as the online delivery mechanisms by which many people receive that information. All of these tend to accentuate an environment in which "people like us" support policy "x," and they mean that people one intended to hear from who don't fit that consensus may never be seen.

"Rather than strictly reverse chronological, Instagram will order posts “based on the likelihood you’ll be interested in the content, your relationship with the person posting and the timeliness of the post.”"
"This is essentially how Facebook’s feed works, and how Twitter recently reconfigured its feed to work."

http://techcrunch.com/2016/03/15/filteredgram/

All of which could be easily manipulated by those that control the feed.

And then, data analytics can be used to develop entertainment programming that fits the mold.

"How is Netflix getting it so right? By meticulously gathering and analyzing data on customer preferences, including not just what people watch but what they search for, what they like and even where they pause, rewind and fast forward. What’s more, Netflix has broken down its content into nearly 80,000 specific genres and subgenres — everything from Emotional Independent Dramas for Hopeless Romantics to Witty Dysfunctional-Family TV Animated Comedies. Yes, those are real categories."

http://techcrunch.com/2016/02/27/netflix-the-force-awakens/

All of this strikes me as way ahead of where pollsters like Pew are when it comes to determining popular opinion. And also speaks to how such opinions are shaped in modern times.

March 16, 2016 | Unregistered Commenter Gaythia Weis