Saturday, January 2, 2016

"Don't jump"--weekend reading: Do judges, loan officers, and baseball umpires suffer from the "gambler's fallacy"?

I know how desperately bored the 14 billion regular subscribers to this blog can get on weekends, and the resulting toll this can exact on the mental health of many times that number of people due to the contagious nature of affective funks. So one of my NY's resolutions is to try to supply subscribers with things to read that can distract them from the frustration of being momentarily shielded from the relentless onslaught of real-world obligation they happily confront during the workweek.

So how about this:

We were all so entertained last year by Miller & Sanjurjo's "Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers," which taught us something profound about the peculiar vulnerabilities to error that super smart people can acquire as a result of teaching themselves to avoid common mistakes in interpreting random events.

So I thought, hey, maybe it would be fun for us to take a look at other efforts to "expose" the non-randomness of events that smart people might be inclined to think are random.

Here's one:

Actually, I'm not sure this is really a paper about the randomness-detection blindspots of people who are really good at detecting probability blindspots in ordinary folks.

It's more in the nature of a study of how expert judgment can be subverted by a run-of-the-mill (of the "-mine"?) cognitive bias involving randomness--here the "gambler's fallacy": the expectation that independent random events will behave interdependently in a manner consistent with their relative frequency; or more plainly, that an outcome like "heads" in the flipping of a coin can become "due" as a string of opposing outcomes--"tails" on previous tosses--grows in length.

CMS (Chen, Moskowitz & Shue) present data suggesting that immigration judges, loan officers, and baseball umpires all display this pattern.  That is, all of these professional decisionmakers become more likely than one would expect by chance to make a particular determination--grant an asylum petition; disapprove a loan application; call a "strike"--after a series of previous opposing determinations ("deny," "approve," "ball," etc.).

If you liked puzzling over the M&S paper, I predict you'll like puzzling through this one.

In figuring out the null, CMS recognize that it is a mistake, actually, to model the outcomes in question as reflecting a binomial distribution if one is sampling from a finite sequence of past events.  Binary outcomes that occur independently across an indefinite series of trials (i.e., outcomes generated by a Bernoulli process) are not independent when one samples from a finite sequence of past trials.

In other words, CMS avoid the error that M&S showed the authors of the "hot hand fallacy" studies made.
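For the skeptics among you, the M&S point is small enough to verify by brute force. The following is my own toy sketch (not anything from either paper): enumerate every equally likely sequence of three fair coin flips and average, within each sequence, the frequency of "heads" on flips that immediately follow a "heads." If such flips were still 50-50, the answer would be 0.5. It isn't.

```python
from itertools import product

def avg_freq_heads_after_heads(n=3):
    """Average, across all 2**n equally likely sequences of n fair coin
    flips, of the within-sequence frequency of heads (1) on flips that
    immediately follow a head.  Sequences containing no such flip are
    dropped, just as they'd be missing from any empirical sample."""
    props = []
    for seq in product([0, 1], repeat=n):
        follows = [seq[i] for i in range(1, n) if seq[i - 1] == 1]
        if follows:
            props.append(sum(follows) / len(follows))
    return sum(props) / len(props)

print(avg_freq_heads_after_heads(3))  # 5/12 ≈ 0.417, not 0.5
```

Selecting flips conditional on a streak from a finite sequence builds in a bias; that is the "truth in the law of small numbers" of the M&S title.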

But figuring out how to do the analysis in a way that avoids this mistake is damn tricky.

If one samples from a finite sequence of events generated by a Bernoulli process, what should the null be for determining whether the probability of a particular outcome following a string of opposing outcomes was "higher" than what could have been expected to occur by chance?

One could figure that out mathematically....  But it's a hell of a lot easier to do it by simulation.
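Here's an illustrative sketch of what such a simulation might look like (my own toy version, emphatically not CMS's actual procedure; the sequence length, streak length, and outcome labels are all made up for illustration): generate lots of finite sequences of fair Bernoulli outcomes and compute, within each sequence, how often outcome 1 follows a run of k opposing outcomes.

```python
import random

def simulated_null(n=10, k=2, trials=100_000, seed=1):
    """Simulate the null for a gambler's-fallacy test on finite data:
    across sequences of n fair Bernoulli(0.5) outcomes, the average
    within-sequence frequency of outcome 1 ("grant," "strike," etc.)
    immediately following a run of k opposing outcomes (0)."""
    rng = random.Random(seed)
    per_seq = []
    for _ in range(trials):
        seq = [rng.randint(0, 1) for _ in range(n)]
        hits = [seq[i] for i in range(k, n)
                if all(seq[i - j] == 0 for j in range(1, k + 1))]
        if hits:  # sequences with no k-run of 0s contribute nothing
            per_seq.append(sum(hits) / len(hits))
    return sum(per_seq) / len(per_seq)

print(simulated_null())  # noticeably above 0.5 -- and that's the null!
```

Note what the simulation shows: in finite samples the chance-expected frequency of a "reversal" after a streak is itself above 0.5. So a naive comparison against 0.5 would "detect" the gambler's fallacy in decisionmakers who are perfectly well behaved--which is exactly why getting the null right matters.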

Another tricky thing here is whether the types of events these decisionmakers are evaluating--the merits of immigration petitions, the creditworthiness of loan applicants, and the locations of baseball pitches--really are i.i.d. ("independent and identically distributed").

Actually, no one could plausibly think "balls" and "strikes" in baseball are.

A pitcher's decision to throw a "strike" (or attempt to throw one) will be influenced by myriad factors, including the pitch count--i.e., the running tally of "balls" and "strikes" for the current batter, a figure that determines how likely the batter is to "walk" (be allowed to advance to "first base"; shit, do I really need to try to define this stuff? Who the hell doesn't understand baseball?!) or "strike out" on the next pitch.

CMS diligently try to "take account" of the "non-independence" of "balls" and "strikes" in baseball, and like potential influences in the context of judicial decisionmaking and loan applications, in their statistical models. 

But whether they have done so correctly--or done so with the degree of precision necessary to disentangle the impact of those influences from the hypothesized tendency of these decisionmakers to impose on outcomes the sort of constrained variance that would be the signature of the "gambler's fallacy"--is definitely open to reasonable debate.

Maybe in trying to sort all this out, CMS are also making some errors about randomness that we could expect to see only in super smart people who have trained themselves not to make simple errors?

I dunno!

But b/c I love all 14 billion of you regular CCP subscribers so much, and am so concerned about your mental wellbeing, I'm calling your attention to this paper & asking you-- what do you think?

 
