
"Integrated & reciprocal": Dual process reasoning and science communication part 2

This is the second in what was to be a two-part series on dual process reasoning and science communication.  Now I’ve decided it must be three!

In the first, I described a conception of dual process reasoning that I don’t find compelling. In this one, I’ll describe another that I find more useful, at least for trying to make sense of and dispel the science communication problem. What I am planning to do in the 3rd is something you’ll find out if you make it to the end of this post.

A brief recap (skip down to the red type below if you have a vivid recollection of part 1):

Dual process theories (DPT) have been around a long time and come in a variety of flavors. All the various conceptions, though, posit a basic distinction between information processing that is largely unconscious, automatic, and more or less instantaneous, on the one hand, and information processing that is conscious, effortful, and deliberate, on the other. The theories differ, essentially, over how these two relate to one another.

In the first post I criticized one conception of DPT, which I designated the “orthodox” view to denote its current prominence in popular commentary and synthetic academic work relating to risk perception and science communication.

The orthodox conception, which reflects the popularity and popularization of Kahneman’s appropriately influential work, sees the “fast,” unconscious, automatic type of processing—which it refers to as “System 1”—as the default mode of processing. This conception, which you can find all over the place, goes like this:

System 1 is tremendously useful, to be sure. Try to work out the optimal path of evasion by resort to a methodical algorithm and you’ll be consumed by the saber-tooth tiger long before you complete your computations (etc).

But System 1 is also prone to error, particularly when used for assessing risks that differ from the ones (like being eaten by saber-tooth tigers) that were omnipresent at the moment of our evolutionary development during which our cognitive faculties assumed their current form.

Our prospects for giving proper effect to information about myriad modern risks—including less vivid and conspicuous but nevertheless highly consequential ones, like climate change; or more dramatic and sensational but actuarially less significant ones like those arising from terrorism or from technologies like nuclear power and genetically modified foods the benefits of which might be insufficiently vivid to get System 1’s attention—depends on our capacity, time, and inclination to resort to the more effortful, deliberate, “slow” kind of reasoning, which the orthodox account labels “System 2.”

This is the DPT conception I don’t like.

I don’t like it because it doesn’t make sense.

The orthodox position’s picture of “reliable” System 2 “monitoring” and “correcting” “error-prone” System 1 commits what I called the “System 2 ex nihilo fallacy”—the idea that System 2 creates itself “out of nothing” in some miraculous act of spontaneous generation.

Nothing makes its way onto the screen of consciousness that wasn’t instants earlier floating happily along in the busy stream of unconscious impressions.  Moreover, what yanked it from that stream and projected it had to be some unconscious mental operation too, else we face a problem of infinite regress: if it was “consciously” extracted from the stream of unconsciousness, something unconscious had to tell consciousness to perform that extraction.

I accept that the sort of conscious reflection on and re-assessment of intuition associated with System 2 truly & usefully occurs.  But those things can happen only if something in System 1 itself—or at least something in the nature of a rapid, automatic, unconscious mental operation—occurs first to get System 2's attention.

So the Orthodox DPT conception is defective. What’s better?

I will call the conception of DPT that I find more compelling “IRM,” which stands for the “integrated, reciprocal model."

The orthodox conception sees “System 1” and “System 2” as discrete and hierarchical.  That is, the two are separate, and System 2 is “higher” in the sense of more reliably connected to sound information processing.

“Discrete and hierarchical” is in fact clearly how Kahneman describes the relationship between the two modes of information processing in his Nobel lecture.

For him, System 1 and 2 are "sequential": System 1 operations automatically happen first; System 2 ones occur next, but only sometimes. So the two are necessarily separate. 

Moreover, what System 2 does when it occurs is check to see if System 1 has gotten it right. If it hasn’t, it “corrects” System 1’s mistake. So System 2 “knows better,” and thus sits atop the hierarchy of reasoning processes within an ordering that ranks their contribution to rational thought.

IRM sees things differently. It says that “rational thought” occurs as a result of System 1 and System 2 working together, each supplying a necessary contribution to reasoning. That’s the integrated part.

Moreover, IRM posits that the ability of each to make its necessary contribution is dependent on the other’s contribution. 

As the “System 2 ex nihilo” fallacy helps us to see, conscious reflection can make its distinctive contribution only if summoned into action by unconscious, automatic System 1 processes, which single out particular unconscious judgments as fit for the sort of interrogation that System 2 is able uniquely to perform.

But System 1 must be selective: there are far too many unconscious operations going on for all to be monitored, much less forced onto the screen of conscious thought, which would be overwhelmed by such indiscriminate summoning! In being selective, though, it has to pick out the "right" impressions for attention & not ignore the ones that, if relied on unreflectively, would defeat an agent's ends.

How does System 1 learn to perform this selection function reliably? From System 2, of course.

The ability to perform the conscious reasoning that consists in making valid inferences from observation, and the experience of doing so regularly, are what calibrate unconscious processes and train them to select some impressions for the attention of System 2, which is then summoned to attend to them.

When it is summoned, moreover, System 2 does exactly what the orthodox view imagines: it checks and corrects, and on the basis of mental operations that are indeed more likely to get the “right” answer than those associated with System 1.  That event of correction will itself conduce to the calibration and training of System 1.

That’s the reciprocal part of IRM: System 2 acts on the basis of signals from System 1, the capacity of which to signal reliably is trained by System 2.
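To make the "integrated & reciprocal" idea concrete, here is a toy sketch of the loop in code. It is entirely my own illustration, not anything drawn from Kahneman or the decision-science literature: a cheap, error-prone cue (System 1) decides when to summon an expensive, reliable check (System 2), and each summons recalibrates how readily the cue sounds the alarm.

```python
# Toy model of IRM: "System 1" is a fast, crude cue; "System 2" is a slow,
# reliable check. System 2 runs only when System 1 summons it, and each
# summons recalibrates how readily System 1 sounds the alarm.

def system2_check(x):
    """Slow, effortful, reliable judgment: is x prime?
    (A stand-in for conscious, algorithmic reasoning.)"""
    if x < 2:
        return False
    return all(x % d for d in range(2, int(x ** 0.5) + 1))

class Reasoner:
    def __init__(self, threshold=0.5):
        # How suspicious an impression must be before System 2 is summoned.
        self.threshold = threshold

    def system1_cue(self, x):
        """Fast, automatic, error-prone impression: odd numbers 'look' prime."""
        return 0.9 if x % 2 else 0.1

    def judge(self, x):
        cue = self.system1_cue(x)
        if cue < self.threshold:
            # System 2 never summoned: the fast impression stands.
            return cue > 0.5
        # System 2 summoned: it checks, corrects, and -- the reciprocal
        # part -- recalibrates System 1's readiness to summon it again.
        answer = system2_check(x)
        if answer != (cue > 0.5):
            self.threshold = max(0.05, self.threshold - 0.05)  # cue erred: monitor more
        else:
            self.threshold = min(0.95, self.threshold + 0.05)  # cue was right: trust it more
        return answer
```

Judging 9 ("looks" prime, isn't) summons System 2, which corrects the error and lowers the threshold, so the cue is monitored more closely afterward. The prime test, the 0.9/0.1 cue, and the ±0.05 adjustment are all arbitrary stand-ins chosen only to make the two-way dependence visible.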

I do not by any means claim to have invented IRM!  I am synthesizing it from the work of many brilliant decision scientists.

The one who has made the biggest contribution to my view that IRM, and not the Orthodox conception of DPT, is correct is the brilliant social psychologist Howard Margolis.

Margolis presented an IRM account, as I’ve defined it, in his masterful trilogy (see the references below) on the contribution that “pattern recognition” makes to reasoning.

Pattern recognition is a mental operation in which a phenomenon apprehended via some mode of sensory perception is classified on the basis of a rapid, unconscious process that assimilates the phenomenon to a large inventory of “prototypes” (“dog”; “table”; “Hi, Jim!”; “losing chess position”; “holy shit—those are nuclear missile launchers in this aerial U-2 reconnaissance photo! Call President Kennedy right away!” etc.).

For Margolis, every form of reasoning involves pattern recognition.  Even when we think we are performing conscious, deductive or algorithmic mental operations, we are really just manipulating phenomena in a manner that enables us to see the pattern in the manner requisite to an accurate and reliable form of unconscious prototypical classification. Indeed, Margolis ruthlessly shreds theories that identify critical thinking with conscious, algorithmic or logical assessment by showing that they reflect the incoherence I've described as the "System 2 ex nihilo fallacy."

Nevertheless, how well we perform pattern recognition, for Margolis, will reflect the contribution of conscious, algorithmic types of reasoning.  The use of such reasoning (particularly in collaboration with experienced others, who can vouch through the use of their trained pattern-recognition sensibilities that we are arriving at the “right” result when we reason this way) stocks the inventory of prototypes and calibrates the unconscious mental processes that are used to survey and match them to the phenomena we are trying to understand.
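A crude way to picture the pattern-recognition step itself is a nearest-prototype classifier. The sketch below is my own illustration, not Margolis's model: each label stores one prototype vector; recognition is a fast match to the closest prototype; and "training" -- the part Margolis assigns to conscious reasoning with experienced others -- just stocks and adjusts the inventory.

```python
import math

# Minimal nearest-prototype classifier: a stand-in for an "inventory of
# prototypes" that unconscious classification surveys and matches against.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PrototypeInventory:
    def __init__(self):
        self.prototypes = {}  # label -> prototype feature vector

    def learn(self, label, features):
        """Stock or calibrate the inventory (the slow, effortful side):
        a new example nudges the stored prototype toward itself."""
        if label in self.prototypes:
            old = self.prototypes[label]
            self.prototypes[label] = tuple(
                0.8 * o + 0.2 * f for o, f in zip(old, features))
        else:
            self.prototypes[label] = tuple(features)

    def recognize(self, features):
        """The fast, automatic match: return the label of the closest
        stored prototype."""
        return min(self.prototypes,
                   key=lambda label: distance(self.prototypes[label], features))
```

After learning a "dog" prototype at, say, height/weight (30, 15) and a "horse" at (160, 200), recognize((35, 20)) matches "dog". The 0.8/0.2 update weights and the feature encoding are arbitrary; the point is only that recognition is a fast match while the inventory is stocked and adjusted by something slower.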

As I have explained in a previous post (one comparing science communication and “legal neutrality communication”), this position is integral to Margolis’s account of conflicts between expert and lay judgments of risk. Experts, through a process that involves the conscious articulation and sharing of reasons, acquire a set of specialized prototypes, and an ability reliably to survey them, suited to their distinctive task. 

The public necessarily uses a different set of prototypes—and sees different things—when it views the same phenomena.  There are bridging forms of pattern recognition that enable nonexperts to recognize who the “experts” are—in which case, the public will assent to the experts’ views (their “pictures,” really).  But sometimes the bridges collapse; and there is discord.

Margolis’s account is largely (and brilliantly) synthetic—an interpretive extrapolation from a wide range of sources in psychology and related disciplines.  I don’t buy it in its entirety, and in particular would take issue with him on certain points about the sources of public conflict on risk perception.

But the IRM structure of his account seems right to me.  It is certainly more coherent—because it avoids the ex nihilo fallacy—than the Orthodox view.  But it is also in better keeping with the evidence. 

That evidence, for me, consists not only of the materials surveyed by Margolis. It includes, too, the work of contemporary decision scientists.

The work of some of those decision scientists—and in particular that of Ellen Peters—will be featured in Part 3.

I will also take up there what is in fact the most important thing, and likely what I should have started with: why any of this matters.

Any “dual process theory” of reasoning will necessarily be a simplification of how reasoning “really” works.

But so will any alternative theory of reasoning or any theory whatsoever that has any prospect of being useful.

Rather than simplifications, we should say such theories are, like all theories in science, models of the phenomena of interest.

The success of theories as models doesn’t depend on how well they “correspond to reality.”  Indeed, the idea that that is how to assess them reflects a fundamental confusion: the whole point of “modeling” is to make tractable and comprehensible phenomena that would otherwise be too complex and/or too remote from straightforward ways of seeing to be made sense of.

The criteria for judging the success of competing models of that sort are pragmatic: How good is this model relative to that one in allowing us to explain, predict, and formulate satisfying prescriptions for improving our situation?

In Part 3, then, I will also be clear about the practical criteria that make the IRM conception so much more satisfying than the Orthodox conception of dual process reasoning.

Those criteria, of course, are ones that reflect my interest (and yours; it is inconceivable you have gotten this far otherwise) in advancing the scientific study of science communication--& thus perfecting the Constitution of the Liberal Republic of Science.



Reader Comments (10)

You are probably already familiar with this, but I find this appealing: "Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making" by Glockner & Betsch, and then there is the "global workspace" discussed by Dehaene. I look forward to reading some of the work of Howard Margolis.

July 24, 2013 | Unregistered CommenterSteve Speise


In fact, the article is among the many things I don't know. Many thanks for pointing it out

July 25, 2013 | Unregistered Commenterdmk38

IRM is interesting.
From what we are doing at the cellular level, things look a little different. From our point of view, there is no stable prototype such as "dog". There are objects that contain patterns for which an output may be saying the word 'dog' out loud, but, for another person, the same input pattern may yield the output 'dog', 'dog or cat', or 'unknown four legged, small, furry animal', or 'who knows?'.
Since I don't calibrate the patterns for which I respond with 'dog' to the patterns for which you respond with 'dog', it is likely that our patterns are different a little or a lot. We, except by extensive 'expert' practice, do not have a way to calibrate our input patterns to our output words.
As an example, my friend's dog barks at the TV when the dog sees or hears another dog, or sometimes a cow or horse, or, less often, certain vehicles; but she does not bark at other things. We do not correct her when she barks at things that we do not categorize as dogs. As far as we can tell, since the bark sounds the same, her prototype 'dog' is a well defined set of objects, most of which are dogs.
From the point of view of cultural cognition, we may want to realize that the same emitted sounds by two different people may not have the same internal 'meaning.' To have productive conversations, sometimes, we each may need to get a better view of each others' internal meaning. For output words that I have investigated, such as 'genome,' experts and non-experts have different meanings. For this word in particular even experts routinely have different and contradictory meanings. This makes sense if 'genome' is a voiced output for an unknowable internal concept.
In conversations about polarizing topics, I am much more sensitive than I used to be in probing whether my 'prototypes' are the same as the other person's 'prototypes'. Where the prototypes are close, conversation is easy. When the prototypes differ substantially, we have to work through the differences.
The conversation may acquire more inherent complexity if my concept of 'dog' is strongly connected to 'pet', 'happy', and 'beach' but yours is connected to 'outside', 'working,' 'replaceable,' and 'dangerous.' Each of our 'prototypes' is intimately connected with idiosyncratic other prototypes.

July 25, 2013 | Unregistered CommenterEric Fairfield


Are you familiar with the work of Paul Churchland & in particular his book The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain (1995)?

It presents an extended account of the neural basis of the sort of pattern-recognition theory that Margolis treats as foundational for all of cognition. It is a bit dated; also very synthetic & openly conjectural in many places. I am wondering how it compares & how it holds up in light of the work you describe.

July 25, 2013 | Unregistered Commenterdmk38

I am not familiar with either Margolis or Churchland. We have just reached the point where we can extrapolate from what we think we know to what they say. Clearly, I have to read both of them.
My surprise for today was that when you said 'prototype' I could translate this into cellular terms in general and then person by person. The discussions on this blog are helping me tremendously in determining when I am talking sense and when I am not.

July 25, 2013 | Unregistered CommenterEric Fairfield

I'm so happy that you are bringing attention to Howard Margolis. He was a friend, and one of the people I most admired. I wish he hadn't died so soon. To say that he was a "social psychologist" is not quite right. Yes, he was, sort of, but he was also (I recall) a CIA employee, an editor and writer of the news section of Science, and a professor of political science at Chicago. His book on "Selfishness, altruism, and rationality" anticipated many ideas that became fashionable 20 years later. He was rarely given sufficient credit. And, yes, "Patterns ..." is a great book.

July 25, 2013 | Unregistered CommenterJon Baron

You no doubt know more than I do about this, but my sense is that pattern-recognition & prototypical reasoning were the major building blocks of theories of neural functioning in psychology before fMRIs arrived on the scene (i.e., when Churchland wrote his book & Margolis his). The measurements that can most easily be made w/ fMRIs are suited for testing hypotheses that posit modularity of cognitive operations, different ones of which are understood to be uniquely assigned to particular brain "regions." But my understanding -- I'm sure you'll correct me if I'm wrong -- is that the only neuroscientists who see things this way are psychologists who study humans w/ fMRIs. Neuroscientists who, say, study monkeys & who measure electrical activity in individual neurons use models that posit the sort of spatially separated, nonparametric interaction of diverse brain regions that one would expect to see if neural activity really consisted in -- neural networks! Neural network models are used in computer science & other fields in which researchers investigate prototypical (as opposed to deductive or algorithmic) classification schemes. I talked to a very good psychologist who uses fMRIs to conduct studies of moral reasoning and he told me that techniques were being developed that might enable them to be used to test hypotheses involving pattern recognition & cognition in human beings. I hope that is true.

July 25, 2013 | Registered CommenterDan Kahan


Sadly, although he was on the faculty at Chicago when I was, I never met him!

I find his argument w/ Slovic (developed in Dealing w/ Risk but carried on elsewhere too) fascinating. I am persuaded variously by one & then the other. Margolis would recognize this as a pattern-recognition conundrum.

July 25, 2013 | Registered CommenterDan Kahan

Dan’s preference for IRM over the dichotomy of system 1 and system 2 needs to confront two basic features of the human brain, and explain them, if it is to be preferred.

First, of the three main structures of the human brain, reptilian, limbic and neocortical, the limbic brain is evolutionarily older than the neocortex. That alone would suggest that the dichotomous characterization is on firmer ground. See a brief description at

Second, this evolutionary morphology is also mirrored in the development of the brain from birth to adulthood. The prefrontal cortex is among the last structures to mature in humans (around age 25 in males), and is the locus of reason and judgment. It is well known that adolescents suffer from impetuosity and make ill-considered decisions, another way of saying systems 1 and 2 are different and that system 1 runs the show until checked by system 2. In these situations Jonathan Haidt’s preferred terms of the elephant (system 1) and the rider (system 2) seem much more descriptive.

Anyone who has raised a child knows how easily “what were you thinking” springs from one’s lips when a teenager commits a thoughtless act. And the answer is that they weren’t. At least one research team has made this a focus of their work:

So Dan’s proposal needs at least to address why IRM does a better job of describing and explaining these facts to be considered a contender. It also needs to make full reference to experimental evidence that supports it. It would seem that the “orthodox” view is better.

July 29, 2013 | Unregistered CommenterHaynes

fMRI is a wonderful tool. For the things that I try to understand, it is much more fine-grained than previous tools and can be done on living subjects, but it is far too coarse-grained for deep understanding. So, if I extrapolate from our neuron-by-neuron and synapse-by-synapse model of the brain to the scales of fMRI images (millions of neurons), I predict that there have to be evolutionarily stable regions that yield the observed fMRI images. On the other hand, the meanings that we assign to these metabolically active regions during some psychological test are often very different from the meanings that psychologists assign to the same results.
On System 1/2 versus IRM, I think that the jury is still out. I don't know of any experiments whose results clearly distinguish between the two. As to the three layers of the brain and the late-developing neocortex, those facts, while developmentally true, seem mostly like a comfortable red herring that will turn out not to be right once we understand exact firing pathways through all these regions for well-controlled tasks.
I liked your references and will read them and others if you can supply them. Thanks.
For definitive conclusions on the questions that you both discuss, it will be a while. There are a lot of critical experiments that still need to be done. We will do some of these experiments. For most of them, we will collaborate widely with others.

July 29, 2013 | Unregistered CommenterEric Fairfield
