Tuesday, April 14, 2015

Why it is a *mistake* to draw inferences about the (in domain) reasoning of experts from studies of motivated reasoning on the part of the general public

This is an excerpt from “ ‘Ideology’ or ‘Situation Sense’? An Experimental Investigation of Motivated Reasoning and Professional Judgment.” That paper reports the results of a CCP study designed to test whether judges are vulnerable to motivated reasoning.

As described in more detail in a previous post, the answer turned out to be "yes and no": yes when they assessed societal risks like climate change and marijuana legalization, on which judges, like members of the public, polarized along cultural lines; but no when those same judges analyzed statutory interpretation problems that were designed to and did trigger ideologically biased reasoning in members of the public who shared those judges' values.

This excerpt discusses the implications of this finding for the question whether scientists should be viewed as vulnerable to ideologically motivated reasoning when they are making in-domain judgments relating to climate change and other societal risks.

 

* * *

C. Motivated reasoning, professional judgment & political conflict 

... Sensibly, citizens tend to treat “scientific consensus” on environmental risk and other highly technical matters as a reliable normative guide for decisionmaking, collective and individual. But what makes it sensible for them to do so is that the method of inquiry scientists themselves use does not afford existing “scientific consensus” any particular weight. On the contrary, the entitlement of any previously supported proposition to continued assent is, for science, conditional on its permanent amenability to re-examination and revision in light of new evidence.

If, then, there were reason to believe that scientists themselves were being unconsciously motivated to discount evidence challenging “consensus” positions on issues like climate change, say, or nuclear power or GM foods, by their cultural outlooks, that would be a reason for treating apparent scientific-consensus positions as a less reliable guide for decisionmaking.

Various commentators, including some scientists, now assert that identity-protective reasoning has pervasively distorted the findings of climate scientists, making their conclusions, as reflected in reports like those issued by the Intergovernmental Panel on Climate Change, the National Academy of Sciences, and the Royal Society, unreliable.

Obviously, the best way to test this claim is by conducting valid empirical studies of the scientists whose findings on risk or other policy-relevant facts are being challenged on this basis. But we believe our study, although confined to judges and lawyers, furnishes at least some evidence for discounting the likelihood of the hypothesis that climate scientists or other comparable experts are being influenced by identity-protective reasoning.

The reason is the connection between our study results and the theory of professional judgment on which the study was founded.

As explained, the theoretical basis for our study design and hypotheses was the account of professional judgment most conspicuously associated with the work of Howard Margolis. Margolis treats professional judgment as consisting in the acquisition of specialized prototypes that enable those possessing the relevant form of expertise to converge on the recognition of phenomena of consequence to their special decisionmaking responsibilities.

Margolis used this account of professional judgment among scientists to help explain lay-expert conflicts over environmental risk. Nonexperts necessarily lack the expert prototypes that figure in expert pattern recognition. Nevertheless, members of the public possess other forms of prototypes—ones consisting of what expert judgments look like—that help them to recognize “who knows what about what.” Their adroit use of these prototypes, through the cognitive process of pattern recognition, enables them to reliably converge on what experts know, and thus to get the benefit of it for their own decisionmaking, despite their inability to corroborate (or even genuinely comprehend) that knowledge for themselves.

Nevertheless, in Margolis’s scheme, the bridging function that these “expertise prototypes” play in connecting lay judgments to expert ones can be disrupted. Such sources of disruption create fissures between expert and lay judgment and resulting forms of public conflict over environmental risk.

Identity-protective cognition can be understood to be a disrupting influence of this character. When a fact subject to expert judgment (Is the earth heating up and are humans causing that? Does permitting citizens to carry handguns in public make crime rates go up or down? Does the HPV vaccine protect adolescent girls from a cancer-causing disease—or lull them into sexual promiscuity that increases their risk of pregnancy or other STDs?) becomes entangled in antagonistic cultural meanings, positions on that fact can become transformed into badges of membership in and loyalty to opposing groups. At that point the stake people have in protecting their status in their group will compete with, and likely overwhelm, the one they have in forming perceptions that align with expert judgments.

As we have noted, there is a striking affinity between the account Margolis gives of pattern recognition in expert judgment among scientists and other professionals and Karl Llewellyn’s account of “situation sense” as a professionalized recognition capacity that enables lawyers and judges to converge on appropriate legal outcomes despite the indeterminacy of formal legal rules. We would surmise, based on this study and previous ones, a parallel account of public conflict over judicial decisions.

Lacking lawyers’ “situation sense,” members of the public will not reliably be able to make sense of the application of legal rules. But members of the public will presumably have acquired lay prototypes that enable them, most of the time anyway, to recognize the validity of legal decisions despite their own inability to verify their correctness or comprehend their relationship to relevant sources of legal authority.

But just like their capacity to recognize the validity of scientific expert judgments, the public’s capacity to recognize the validity of expert legal determinations will be vulnerable to conditions that excite identity-protective reasoning. When that happens, culturally diverse citizens will experience disagreement and conflict over legal determinations that do not generate such disagreement among legal decisionmakers.

This was the basic theoretical account that informed our study. It was the basis for our prediction that judges, as experts possessing professional judgment, would be largely immune to identity-protective cognition when making in-domain decisions. By access to their stock of shared prototypes, judges and lawyers could be expected to reliably attend only to the legally pertinent aspects of controversies and disregard the impertinent ones that predictably generate identity-protective cognition in members of the public—and thus resist cultural polarization themselves in their expert determinations. That is exactly the result we found in the study.

Because this result was derived from and corroborates surmises about a more general account of the relationship between identity-protective reasoning and professional judgment, it seems reasonable to imagine that the same relationship between the two would be observed among other types of experts, including scientists studying climate change and other societal risks. Public conflict over climate change and like issues, on this account, reflects a reasoning distortion peculiar to those who lack access to the prototypes or patterns that enable experts to see how particular problems should be solved. But since the experts do possess access to those prototypes, their reasoning, one would thus predict, is immune to this same form of disruption when they are making in-domain decisions.

This is the basis for our conclusion that the current study furnishes reason for discounting the assertion that scientists and other risk-assessment experts should be distrusted because of their vulnerability to identity-protective cognition. Discount does not mean dismiss, of course. Any judgment anyone forms on the basis of this study would obviously be subject to revision on the basis of evidence of even stronger probative value—the strongest, again, being the results of a study of the relevant class of professionals.
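The "discount, but don't dismiss" logic can be expressed in Bayesian terms: each piece of evidence carries a likelihood ratio that multiplies the prior odds of a hypothesis, so evidence with modest probative value shifts one's assessment without settling it. The sketch below is purely illustrative; the numbers are hypothetical assumptions, not estimates derived from the study:

```python
def update_odds(prior_odds, likelihood_ratios):
    """Bayesian updating in odds form: multiply the prior odds by each
    evidence item's likelihood ratio (LR > 1 supports the hypothesis,
    LR < 1 cuts against it; LR = 1 is uninformative)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# H: "experts' in-domain judgments are distorted by identity-protective
# cognition." All numbers below are made up for illustration.
prior = 1.0  # even odds before considering any evidence

# Suppose the judge study supplies an LR of 0.5 with respect to H:
# it discounts H (halves the odds) without dismissing it (odds stay > 0).
posterior = update_odds(prior, [0.5])

# A direct study of the relevant professionals would carry a stronger
# LR in either direction and could dominate the earlier update.
revised = update_odds(posterior, [4.0])
```

A study with no probative value, by contrast, would have an LR of 1 and leave the odds unchanged, which is the sense in which general-public studies say nothing either way about expert in-domain reasoning.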

At a minimum, though, this study shows that existing work on the impact of identity-protective cognition on members of the public has no probative value in assessing whether the in-domain judgments of climate scientists or other risk-assessment professionals are being distorted by this form of bias. Generalizing from studies of members of the public to these experts would reflect the same question-begging mistake as generalizing from such studies to judges. The results of this study help to illustrate that commentators who rely on experiments involving general-public samples to infer that judges are influenced by identity-protective cognition are making a mistake. Those who rely on how members of the public reason to draw inferences about the in-domain judgments of scientists are making one, too.

* * *

Now here is one more thing that is worth noting & that is noted (but perhaps not stressed enough) in a portion of the paper not excerpted here: the conclusion that professional judgment insulates experts from identity-protective cognition (the species of motivated reasoning associated with ideologically biased information processing), either in whole or in part, does not mean that those experts are not subject to other cognitive biases that might distort their judgments in distinct or even closely analogous ways! There is a rich literature on this. For a really great example, see Koehler, J.J., The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality, Org. Behavior & Human Decision Processes 56, 28–55 (1993).

Dynamics of cognition need to be considered with appropriate specificity--at least if the goal is to be clear and to figure out what is actually going on.

 


Reader Comments (10)

Dan -

==> "If, then, there were reason to believe that scientists themselves were being unconsciously motivated to discount evidence..."

I might suggest that you speak there of influenced rather than motivated?


==> "At a minimum, though, this study shows that existing work of the impact of identity-protective cognition on members of the public has no probative value in assessing whether the in-domain judgments of climate scientists or other risk-assessment professionals is being distorted by this form of bias. "

Maybe this is just semantics, but I gotta say, that's a bit of a tough nut for me to crack. I'm having some trouble reconciling that comment with the one that follows in bold: Discount does not mean dismiss.

While I agree that your findings should raise the bar for people to cross before drawing conclusions about political bias influencing climate scientists, I wouldn't agree that studies of motivated reasoning with samples drawn from the general public have no, as in zero, nada, zilch, probative value for evaluating political influence on climate scientists. If we know that motivated reasoning is a baseline human tendency, then findings among a sampling of judges can help inform our understanding of reasoning among different experts in a different domain of expertise, but, as you state, we need to recognize that the cross-domain applicability is limited to some degree.

I also think a lot of this relates to another problem - how we define who does or doesn't have climate science expertise. If we look at the parallel, where judges have been to law school and have either been appointed or elected to serve at a court, that's a pretty clear definition. With climate science, how do we determine who qualifies based on their expertise? Does someone like NiV, who as near as I can tell has a great deal of domain-relevant expertise, belong in a different grouping? How about Dana Nuccitelli? Of course, that all goes back to the discussions related to "consensus" and whether or not we can see a large degree of uniformity in opinions of climate science "experts" w/r/t the influence of ACO2 emissions on the climate - where the magnitude of the uniformity that people see is largely a function of how they define who does or doesn't have climate science expertise.

April 14, 2015 | Unregistered CommenterJoshua

"With climate science, how do we determine who qualifies based on their expertise? Does someone like NiV, who as near as I can tell has a great deal of domain-relevant expertise, belong in a different grouping?"

Thank you!

I agree with the point you're making, which is almost the same as the one I was planning to make while I was reading Dan's post. If Dan's reasoning is correct, then I as a scientist formally trained in the methods of coming to objective conclusions and knowing a great deal about climate science am shown to be immune to motivated reasoning and similar cognitive biases! Hooray! So we can relax, and all trust my judgement now? ;-)

No, I don't think so!

As Feynman put it: "Science is the belief in the ignorance of experts." We have to include ourselves in that. Which is why it is not enough to just have a job as "a scientist", to have training and qualifications and so on. You have to have strong principles - to know what standards science requires of you and to resist the temptation to take short-cuts and make compromises, even for what seems like a good cause. It's the same way it takes more than a deep knowledge of the Bible and Church doctrine to be a good Christian. And not every Christian is. It's the same with scientists.

I am certainly impressed that judges can be so like-minded in their professional work, and have (provisionally) revised my beliefs accordingly. Although I agree with Mill that "we are not therefore obliged to conclude that all its consequences must be beneficial."

It does persuade me that there are ways to immunise oneself against certain sorts of biases. It doesn't persuade me that the most notorious of the climate scientists have been using them. Not when they've explicitly said things indicating that they're not. It certainly doesn't convince me that I'm immune! Which is why I'll keep on checking my thinking against the thoughts of people with different biases.

And for Dan to stick to his conclusion despite climate scientists saying stuff like "Each of us has to decide what the right balance is between being effective and being honest", I'm not convinced that he's immune, either!

Nevertheless, a very interesting result! Now, you just need to find out how the judges do it...

April 14, 2015 | Unregistered CommenterNiV

==> "If Dan's reasoning is correct, then I as a scientist formally trained in the methods of coming to objective conclusions and knowing a great deal about climate science am shown to be immune to motivated reasoning and similar cognitive biases! "

Well, that was my point, but only sort of. There's also some reverse TravisBickleism going on there. The point being, the generalization may hold true for the majority of climate scientists, but not all. And that was the point about how we then get into a definitional discussion. Who is a climate scientist.

BTW - FYI, (and much to your surprise I'm sure, :-) ), I think that your reading of Schneider's meaning is flawed.

April 14, 2015 | Unregistered CommenterJoshua

@NiV:

I think it's a non sequitur to say that we can trust the judgment of any individual or even group of professionals simply b/c the exercise of professional judgment is characterized by resistance to identity-protective cognition for in-domain decisions.

What am I missing?

April 14, 2015 | Registered CommenterDan Kahan

Dan,

"I think its a non sequitur to say that we can trust the judgment of any individual or even group of professionals simply b/c the exercise of professional judgment is characterized by resistance to identity-protective cognition for in-domain decisions."

When you say:

If, then, there were reason to believe that scientists themselves were being unconsciously motivated to discount evidence challenging “consensus” positions on issues like climate change, say, or nuclear power or GM foods, by their cultural outlooks, that would be a reason for treating apparent scientific-consensus positions as a less reliable guide for decisionmaking.

What do you mean by "reliable guide for decisionmaking"? If it turns out that they're not motivated to discount evidence, wouldn't that mean there is more reason for treating their consensus as a reliable guide? What am I missing?

"BTW - FYI, (and much to your surprise I'm sure, :-) ), I think that your reading of Schneider's meaning is flawed."

Sounds interesting. How do you interpret it?

April 16, 2015 | Unregistered CommenterNiV

The thread starting at his link would be the place to start the convo.

http://judithcurry.com/2014/12/17/ethics-and-climate-change-policy/#comment-656355

Although I'm not sure what else needs to be said, actually. If you are inclined to do so, know that there is no point in bothering to take the time writing a comment where you lecture me as to what Schneider actually meant. I'm certainly willing to engage in a discussion about what he might have meant, but if you think that my view is only explainable by my ignorance or lack of understanding, there's really not much of anywhere else to go.

April 16, 2015 | Unregistered CommenterJoshua

@NiV:

Yes, if there were reason to believe that professional judgment can be subverted by identity-protective cognition on issues that generate the same in the public, there'd be less ground for confidence in "expert consensus" on those issues.

But it obviously doesn’t follow, logically or practically, that “we can relax, and all trust [your] judgement” now.

1st, you are an individual (as far as I know). It is not in the nature of findings like these to help us evaluate the quality of the reasoning of any individual, whether she is a member of the public, an expert, or a house pet.

2d, there are, as pointed out in the paper and in this post, reasoning defects other than identity-protective cognition, and we know that professional judgment is indeed vulnerable to them in some conditions.

3d, the form of "professional judgment" being exercised, even if uninfluenced by identity-protective cognition in the in-domain judgment, might be unworthy of confidence if it is invalid. I don't think that is so for the sort of expert knowledge climate scientists possess. But I think it is a very very big concern for the sort possessed by judges and lawyers.

There is a difference between reliability and validity. “Situation sense” is the former – and nothing more.

April 16, 2015 | Registered CommenterDan Kahan

"If you are inclined to do so, know that there is no point in bothering to take the time writing a comment where you lecture me as to what Schneider actually meant."

Well, actually, I was just curious. My original intention was to let you answer and let it pass. My sincere thanks for answering - I appreciate it. I didn't know that Schneider had said that, so I've learnt something interesting today. But what Schneider said is so outrageous that I just can't resist commenting on it. So please take this as a comment on Schneider rather than yourself.

"I would think that people who are interested in what Schneider meant would take the time to research what Schneider says he meant."

"I say we should execute all the greens by firing squad and when I say "execute all the greens by firing squad" I mean give them money and massages from beautilicious masseuses..."

It's perfectly clear what he meant. It's perfectly clear why he got into trouble over it. It's perfectly clear why he would want to back away from it.

I don't know how anyone can think "On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we'd like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public's imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have." really meant "What I was telling the Discover interviewer, of course, was my disdain for a soundbite-communications process that imposes the double ethical bind on all who venture into the popular media." There's no hint of "disdain" in that to me.

Anyway, there's no double ethical bind involved. If you don't like media soundbites, don't give them. As scientists you're ethically bound not to give them. End of story. All the other stuff is just being 'unethical'.

Dan,

Yes, I agree that it doesn't follow that you can all trust me and take my word for it - that was sort of my point. As Joshua keeps pointing out, my views on climate change conform to my political views. Is that a coincidence? I don't think it is. My view might be that it's the other guys who are biased and wrong, but either way you slice it I do think it can be agreed by both sides that political bias exists.

And by the same argument you use to say we can't evaluate individuals, I don't think we can use this statistical argument to evaluate individual climate scientists. When a climate scientist declares that they're making data up, that they know it will corrupt the scientific record, but that it's what scientists in their group always do and nobody cares, I don't see how we can use collective statistical arguments about professionals extrapolated from judges to dismiss or ignore this.

April 16, 2015 | Unregistered CommenterNiV

==> "I didn't know that Schneider had said that, so I've learnt something interesting today."

I hadn't realized the extended context either until Lars Karlson provided it. That's because I had only heard about what Schneider had said from "skeptical" blogs where, ironically, due skeptical diligence was not given to evaluating what he meant (by at least evaluating, even if later dismissing, what he later said about what he meant).

==> "But what Schneider said is so outrageous that I just can't resist commenting on it."

I can certainly understand that, viewing what he said in the truncated context. However, I have a different take on his explanation (further context). I could see how someone could see it as a walk back, but IMO it looks like a clarification and further explanation. What he later said makes logical sense to me.

==> "It's perfectly clear what he meant."

As I said in the thread at Judith's, IMO there's no way for someone else to actually know what he meant, so where I fault "skeptics" is in their confidence that they know for certain what he meant. Seems starkly unskeptical, IMO - not the least because, of course, their certainty could easily be predicted by even a cursory consideration of the notion of confirmation bias.

At any rate, consider how often you say in these threads that if you want to know what "skeptics" mean when they answer questions within a truncated context, then you should ask them what they meant to help get a more robust evaluation.

==> "I don't know how anyone can think..."

I'm simply never impressed with arguments from incredulity.

==> "Anyway, there's no double ethical bind involved."

We can search all over the Interwebs to find scientists talking about the complicated nature of getting accurate representations of their scientific conclusions portrayed in the media. Judith has talked about it a number of times, for example. We recently saw Dan allude to that problem.

Judith gives incomplete information in public fora and can argue till the cows come home about her intent when doing so. In fact, it is an impossibility to express highly technical information for public consumption without some form of modification.

==> "If you don't like media soundbites, don't give them"

If you don't want to give media soundbites, then you are perfectly justified in not doing so. Meanwhile, there are plenty of scientists who, on a regular basis, feel a reason to try to balance the ethical tension between divergent goals. I suspect that we could come up with any number of scientists that you would not criticize for presenting modified technical information to the media.

==> "All the other stuff is just being 'unethical'."

I guess that more than you, I think that questions of ethics are quite complex and intrinsically subjective.

April 16, 2015 | Unregistered CommenterJoshua

"I hadn't realized the extended context either until Lars Karlson provided it. That's because I had only heard about what Schneider had said from "skeptical" blogs where, ironically, due skeptical diligence was not given to evaluating what he meant"

:-)
That's why I like talking to people who disagree with me. Different blindspots.

" I could see how someone could see it as a walk back, but IMO, it looks like a clarification and further explanation. What he later said makes logical sense to me."

I think it's possible that when Schneider used the word "honesty" to contrast against "effectiveness" he picked the wrong word. (Or perhaps unintentionally picked the right word...) Possibly what he meant to contrast was "technically accurate" versus "effective". Sceptics naturally read the opposite of "honesty" to be "dishonesty", especially in this context, while believers I'm sure would resist this interpretation.

But even with this change, it still doesn't address the main criticism which is that Schneider's purpose in simplifying the language is not to aid his audience's understanding at the highest level they can cope with, but to be more effective politically.

He may well be decrying the fact that he is forced to use soundbites to be politically effective, but it's not his use of soundbites that is the issue, but that in doing so his aims are political. Thus, he doesn't address the criticism. His original statement said several things, only one of which could, somewhat unconvincingly and with a drastic change of wording, be interpreted the way he says. But his original statement was not "just" about that.

Political activists naturally have a "the ends justify the means" attitude to political effectiveness, they seem to take it as a moral given that is beyond question, and as a result they have a blindspot to criticisms of it. I think it's entirely possible that Schneider didn't even understand what it was about his original statement that people thought was so bad. I think it's likely that it's why he made the statement in the first place. He didn't really understand what the ethical imperative on scientists to tell the whole truth and nothing but was all about.

April 17, 2015 | Unregistered CommenterNiV
