Wednesday, December 24, 2014

"Anyone who doesn't agree must be a Marxist!" Plus "bans," "decibans," & Turing & Good on "evidentiary weight"

Maybe this (like the honeybadger) will turn out to be one of those discoveries on my part that everyone else already knows about, thereby revealing my disturbing remoteness from the zeitgeist, but the underscored sentence struck me as sooooooo hilarious I thought I should take the risk and share it, just in case it really is a hidden gem:

Actually, the paper (Good 1994) is not nearly so esoteric as it looks. Good was a brilliant writer, whose goal was to help curious people understand complicated things--as opposed to the sort of terrible writer whose goal is to be understood as brilliant by people he knows won't be able to comprehend what he is saying (which usually is nothing very interesting). 

I came across this paper while looking for accessible accounts of Turing's use of "bans" and "decibans," precursors of the Bayes factor, as a heuristic for making the concept of "weight of the evidence" tractable (in my case for a paper on the conceit that rules of evidence can be used to correct for foreseeable cognitive biases on the part of factfinders in legal proceedings).

A "ban," essentially, is a likelihood ratio of 10. That is, we would say that a piece of evidence has a weight of "1 ban" when it made some hypothesis 10x more probable (necessarily in relation to some other hypothesis) than we would have had reason to view it without that evidence.

Turing, working on decryption at Bletchley Park in WWII, selected the ban as a unit to guide the mechanized search for solutions to codes generated by the German "Enigma" machine. Actually, Turing advocated using "decibans," each 1/10 of a ban, to assess the probative value of potential matches between sequences of code and plain text that poured out of the "bombe" decoders, electromechanical proto-computers that rifled through the zillions of combinations formed by the interacting Enigma rotors, the settings of which determined the encryption "key" for Enigma-encrypted messages.

Turing judged a deciban -- again, 1/10 of a ban, or a likelihood ratio of about 1.26:1, roughly 5:4 -- to be pretty much the smallest difference in relative likelihood that a human being is able to perceive (Good 1979).
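And because weights are logarithms, the weights of independent pieces of evidence simply add -- which is what made decibans so handy for tallying evidence by hand at Bletchley. A hedged sketch, with likelihood ratios invented purely for illustration:

```python
import math

# One deciban is a likelihood ratio of 10**(1/10) ~= 1.2589 -- roughly 5:4,
# Turing's estimate of the smallest humanly perceptible shift.
print(10 ** 0.1)  # 1.2589...

# Hypothetical likelihood ratios for four independent pieces of evidence
# (values invented for illustration; an LR < 1 cuts against the hypothesis).
lrs = [1.26, 2.0, 0.8, 3.0]
total_decibans = sum(10 * math.log10(lr) for lr in lrs)

# Starting from even (1:1) prior odds, the posterior odds follow directly:
posterior_odds = 10 ** (total_decibans / 10)
print(round(total_decibans, 2), round(posterior_odds, 2))  # 7.82 6.05
```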

That's an empirical claim about cognition, of course.  What evidence did Turing have for it?  None, except the vast amount of experience that he and his fellow code-breakers accumulated as they dedicated themselves to deciphering Enigma messages.  That certainly counts for something -- but for how much? See the value of having some system of "evidentiary weight" units here?

Good -- a 24-year-old, freshly minted Cambridge mathematician -- was part of Turing's team.

After the war, he wrote prolifically for decades on probability theory, and on Bayesian statistics in particular. He had lots of informative things to say about the concept of "evidentiary weight" (Good 1985).  He died in 2009.

Turns out he was really funny too.

Or at any rate I'd say that this sentence is at least 10 bans' worth of evidence that he was.

References

Good, I. J. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley & A. F. M. Smith (Eds.), Bayesian Statistics 2: Proceedings of the Second Valencia International Meeting (pp. 249-270). Amsterdam: North-Holland.

Good, I. J. (1994). Causal tendency, necessitivity and sufficientivity: An updated review. In Patrick Suppes: Scientific Philosopher (pp. 293-315). Springer.

Good, I. J. (1979). Studies in the history of probability and statistics. XXXVII. A. M. Turing's statistical work in World War II. Biometrika, 66(2), 393-396.

[Photo: Good circa 1974 (at Va. Tech.)]


Reader Comments (4)

The Bayes factor has a lot of problems, as we discuss in chapter 7 of BDA (3rd edition); see also my 1995 paper in Sociological Methodology with Rubin. The short story is that the Bayes factor or marginal likelihood typically depends strongly on aspects of the model that are untestable and are typically set up in a casual manner. The Bayes factor can work well for certain discrete problems such as Good's codebreaking example, but I think it's hopeless for the sorts of theory testing that we do.

December 24, 2014 | Andrew Gelman
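A minimal sketch of the sensitivity Andrew describes -- a point-null test of a normal mean in which the Bayes factor swings with an arbitrarily chosen prior scale (the model and numbers are invented for illustration, not taken from BDA):

```python
from math import sqrt, pi, exp

def normal_pdf(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

# One observation x ~ Normal(mu, 1). Test H0: mu = 0 against
# H1: mu ~ Normal(0, tau**2), so that marginally x ~ Normal(0, 1 + tau**2).
x = 1.5

for tau in (0.5, 1.0, 10.0, 100.0):
    bf01 = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, sqrt(1 + tau ** 2))
    print(f"prior scale tau = {tau:6.1f}   BF(H0:H1) = {bf01:.2f}")
```

As tau grows, the Bayes factor favors the point null without bound, even though the data haven't changed -- only the untestable prior scale has.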

@Andrew:

Marxist!

December 24, 2014 | Dan Kahan

I would think Turing was alluding to the Bel and decibel used in electronic engineering, where I think the decibel was selected on a similar basis of being minimally detectable by the human ear. Given the strong relationships between the subjects, I often refer to the unit of information as a decibel anyway.

It's indubitably the right answer to part of the question, but it's only part of the question. What if the trials are not independent? How do you select the models to test?

December 25, 2014 | NiV

@NiV:

You are right about the deciban-decibel analogy (although the term also alludes to the town of Banbury, where large, exactingly perforated sheets of paper used in the deciphering process were manufactured).

And yes, Bayesian updating is the easy part; how to calibrate our decibans is where we usually come to grief...

December 25, 2014 | Dan Kahan
