## New paper: "Laws of cognition, cognition of law"

This teeny weeny paper is for a special issue of the journal *Cognition*. The little diagrams illustrating how one or another cognitive dynamic can be understood in relation to a simple Bayesian information-processing model are the best part, I think; I am almost as obsessed with constructing these as I am with generating the multi-colored "Industrial Strength Risk Perception Measure" scatterplots.

## Reader Comments (7)

The thing that assigns likelihood ratios to different hypotheses is what I would call the 'statistical model', and it is indeed a matter of choice - either fixed by a priori assumption or subject to its own separate hypothesis-evidence update process.

The 'narrative templates' sound more like what I'd call the 'choice of hypotheses', which again can be done in different ways, and that choice can affect the outcome. Sometimes a particular context will suggest adding hypotheses that one wouldn't normally include. Sometimes it will suggest a particular division of cases. And while these can technically be represented using priors, we don't think of it that way. We pick a set of likely hypotheses that we then approximate as being exhaustive, to simplify the process. Near certainties we round up to certainties, and so on.

While one can argue about whether a choice of hypotheses should be explicitly represented in the Bayesian framework or treated as a set of priors on the union of all possibilities, I do think that the statistical model definitely ought to be represented explicitly.
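The distinction being drawn here can be made concrete with a minimal sketch (all names and numbers below are my own illustrative choices, not anything from the paper): if the 'statistical model' is represented as an explicit, swappable likelihood function, it becomes visible as a modelling choice separate from the priors and from the update rule itself.

```python
# Minimal sketch: Bayes' rule with the "statistical model" (the likelihood
# function P(evidence | hypothesis)) passed in as an explicit component,
# separate from both the prior and the update rule.

def bayes_update(priors, likelihood, evidence):
    """priors: dict mapping hypothesis -> prior probability.
    likelihood: the 'statistical model', a function (evidence, hypothesis) -> P(e|h).
    Returns the normalized posterior over the same hypotheses."""
    unnormalized = {h: p * likelihood(evidence, h) for h, p in priors.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Toy choice of hypotheses: a coin is either fair (p=0.5) or biased (p=0.8).
def binomial_model(evidence, hypothesis):
    heads, flips = evidence
    p = {"fair": 0.5, "biased": 0.8}[hypothesis]
    return p ** heads * (1 - p) ** (flips - heads)

# 8 heads in 10 flips, starting from 50:50 priors.
posterior = bayes_update({"fair": 0.5, "biased": 0.5}, binomial_model, (8, 10))
```

Swapping in a different `likelihood` function, with the same priors and the same evidence, yields a different posterior - which is the sense in which the statistical model is a separate locus of choice.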

@NiV:

Well, real people rarely "explicitly represent" their "statistical models." Or maybe a better way to put it: real people just are implicit statistical models, and the interesting thing is to try to parameterize them...

I have found, as a descriptive matter, that trying to tease out the influences that, in conceptual terms, alter the "likelihood ratio" associated with information is a profitable way to test empirically how people form perceptions of risk & similar facts.

"Well, real people rarely "explicitly represent" their "statistical models.""Oh, I agree. They rarely explicitly represent their priors, or their likelihood ratios either.

Bayesians, on the other hand, do.

"Or maybe a better way to put it, real people just are implicit statistical models, and the interesting thing is to try to parameterize them..."I assumed that with your diagrams you was trying to draw an analogy between Bayesian formalism (priors and likelihood ratios) and the way people think (beliefs and evidence). The analogy is not exact - there have been many demonstrations that human expertise is not consistent with Bayesian Belief - but to the extent that it works, the element that generates likelihood ratios from hypotheses - corresponding to that mental model of the world by which we predict the likely consequences of the different hypotheses - is called the statistical model in the formal Bayesian analysis. I thought since you was labelling the other modules with their corresponding Bayesian terms, you might like to label that one too.

Its significance is, unfortunately, often ignored. There are usually several choices that can be made, or some uncertainty about which is proper, but people without a deep understanding will often assume one implicitly without realising it, and then wonder why other people come to different conclusions on the same evidence. There have, for instance, been several famous examples in climate science. So I'm always happy to see someone recognise that there is more to Bayes than "posterior equals prior times likelihood ratio", giving what many seem to regard as a mathematically inescapable and unarguable conclusion. Even with the same priors and the same evidence, there is still room for disagreement.
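That last point lends itself to a two-line illustration (numbers purely hypothetical): in odds form, posterior odds equal prior odds times the likelihood ratio, so two analysts with identical priors and identical evidence diverge if their statistical models assign different likelihoods to that evidence.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior_odds(prior_odds, p_e_given_h1, p_e_given_h0):
    return prior_odds * (p_e_given_h1 / p_e_given_h0)

shared_prior_odds = 1.0  # both analysts start at 50:50

# Analyst A's statistical model says the evidence is 4x likelier under H1;
# Analyst B's says it is only half as likely under H1. Same prior, same
# evidence, opposite conclusions.
odds_a = posterior_odds(shared_prior_odds, 0.8, 0.2)  # favours H1
odds_b = posterior_odds(shared_prior_odds, 0.3, 0.6)  # favours H0
```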

@NiV

Thanks. I will indeed consider revisions along these lines.

You are right, of course, that there is myriad evidence that human decision-making is not Bayesian. I find the simple Bayesian framework to be a useful heuristic - start with it & then explicate how any particular mechanism of cognition relates to it, so that we can be clear about its operation & significance. In the course of this, too, I think we find that although many mechanisms featured in cognitive science relate to defects in the capacity to process information in Bayesian terms, many more involve the impact of information & other influences in shaping the inputs into Bayesian processing.

Obviously, too, those influences can be normatively appraised. If motivated reasoning is best understood as the impact that some goal or interest external to truth-seeking has on the likelihood ratio assigned new information, then in many (but probably not all) contexts this will be normatively undesirable.
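One way to make that definition concrete (a sketch of my own, with the bias term and all numbers hypothetical, not a claim about the paper's model): treat motivated reasoning as a multiplier applied to the likelihood ratio, distorting the weight given to information independently of its evidentiary value.

```python
import math

def update_log_odds(log_odds, likelihood_ratio, bias=1.0):
    # bias = 1.0 is truth-seeking; bias < 1 discounts identity-threatening
    # information; bias > 1 overweights congenial information.
    return log_odds + math.log(likelihood_ratio * bias)

evidence_lr = 2.0  # the evidence genuinely favours the hypothesis

unbiased  = update_log_odds(0.0, evidence_lr)             # belief moves toward it
motivated = update_log_odds(0.0, evidence_lr, bias=0.25)  # belief moves away
```

On this toy picture the same information can move a motivated reasoner in the opposite direction from a truth-seeker, which is the normatively undesirable case described above.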

@NiV: What source would you suggest that refers to "likelihood ratios" as part of "the statistical model" presupposed by or outside of Bayesian inference? I have to admit that I've not seen decision theorists give much thought to "where do likelihood ratios come from?" *Priors*, sure, b/c there all one needs to say is: who cares about them? So long as we have access to enough information & update properly, we'll converge on the best estimate regardless of where we started.

But of course, that assumes we assign the proper likelihood ratio to new information. (What likelihood ratio to assign that information might itself be something someone else is investigating in a process that is appropriately Bayesian - but somewhere this must end.)
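The prior-washout claim, and its dependence on the likelihood ratio, can be sketched in a few lines (illustrative numbers only): with a correctly specified likelihood ratio, repeated updating overwhelms even wildly different priors; the same arithmetic run with a miscalibrated ratio would converge just as confidently on the wrong hypothesis.

```python
def repeated_update(prior_odds, likelihood_ratio, n_observations):
    # Each independent observation multiplies the odds by the likelihood ratio.
    return prior_odds * likelihood_ratio ** n_observations

# Two agents with very different priors, each seeing 20 observations whose
# true likelihood ratio is 2 (the evidence genuinely favours H1).
skeptic  = repeated_update(0.01, 2.0, 20)   # started at 1:100 against H1
believer = repeated_update(100.0, 2.0, 20)  # started at 100:1 for H1

# In probability terms, both now sit near certainty in H1: the priors
# have washed out, but only because the likelihood ratio was right.
p_skeptic  = skeptic / (1 + skeptic)
p_believer = believer / (1 + believer)
```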

Dan,

I'm not sure what the origin of the terminology is - you're right that it's not ordinarily defined or discussed explicitly, but rather taken for granted.

One reference to the general Bayesian inference approach that discusses models extensively is Burnham, K.P., and Anderson, D.R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd ed. Springer-Verlag. ISBN 0-387-95364-7.

For example:

http://www.mun.ca/biology/quant/ModelSelectionMultimodelInference.pdf

I'm told a lot of other people have referenced this book as an authoritative source for statistical models, but I wouldn't want to say there isn't a better one. (It may be worth noting that the book often calls it a "probability model" as well.)

The comparison of likelihoods in the Bayesian updating process is effectively a 'likelihood ratio test' - you might find more if you check some references to that.

@NiV-- many thanks!