Friday, September 26, 2014

Are military investigators culturally predisposed to see "consent" in acquaintance rape cases?

This is the last (unless it isn't) installment in a series of posts on cultural cognition and acquaintance rape. The first excerpted portions of the 2010 CCP study reported in the paper Culture, Cognition, and Consent: Who Perceives What, and Why, in "Acquaintance Rape" Cases, 158 U. Pa. L. Rev. 729 (2010). The next, drawing on the findings of that study, offered some reflections on the resurgence of interest in how to define "consent" in the law generally and in university disciplinary codes.

Below are a pair of posts. The first is by Prof. Eric Carpenter, who summarizes his important new study on how cultural predispositions could affect the perceptions of the military personnel involved in investigating and adjudicating rape allegations. The second presents some comments from me aimed at identifying a set of questions--some empirical and methodological, and some normative and political--posed by Carpenter's findings.

Culture, Cognition & Consent in the U.S. Military


The American military is in a well-publicized struggle to address its sexual assault problem. In 1991, in the wake of the Tailhook scandal, military leaders repeatedly and publicly assured Congress that they would change the culture that had condoned sexual discrimination and turned a blind eye to sexual assault.

Over the past two decades, new sexual assault scandals have been followed by familiar assurances, and Congress's patience has finally run out. As a result, the Uniform Code of Military Justice (UCMJ) is currently undergoing its most significant restructuring since it went into effect in 1951. The critical issue is who will make the decisions in these cases: commanders, as is the status quo, or somebody else, such as military lawyers or civilians.

What does any of that have to do with the Cultural Cognition Project, you might ask? Well, I was serving as a professor at the Army's law school when I read Dan's article, Culture, Cognition, and Consent.

Those of us at the school were working very hard to train military lawyers and commanders on the realities of sexual assault and to dispel rape myths. At a personal level, I was often frustrated by the resistance many people showed to this training, particularly the military lawyers. I suspected this was because rape myths are rooted in deeply held beliefs about how men and women should behave, and I could not reasonably expect to change those beliefs in a one-hour class.

One of Dan's findings, broadly summarized, was that those who held relatively hierarchical worldviews agreed to a lesser extent than those with relatively egalitarian worldviews that the man in a dorm-room rape scenario should be found guilty of rape. 

My reaction to his finding was a mixture of "ah-ha" and "uh-oh."  The military is full of hierarchical people. 

Continue reading

 
Is military cultural cognition the same as public cultural cognition? Should it be?


I’m really glad Eric Carpenter did this study.  I have found myself thinking about it quite a bit in the several weeks that have passed since I read it.  The study, it seems to me, brings into focus a cluster of empirical and normative issues critical for making sense of cultural cognition in law generally.  But because I think it’s simply not clear how to resolve these issues, I'm not certain what inferences—empirical or moral—can be drawn from Eric’s study.



 


Reader Comments (13)

I'll ask the same question I usually ask in a different context. Given that the experiment has identified that people's views on the 'facts' differ depending on their cultural/political background, and given that experimenters have cultural/political backgrounds too, what makes an experimenter so confident that a particular portion of the subjects were 'wrong' or that the 'rape myths' they believed in were 'wrong', such that we need to actively do something to change this state of affairs?

If people are influenced by their political/cultural preconceptions, then isn't it just as true that the people who thought it was rape are being affected too? We have noted the effect on those on the right of the graph, should we have a look at those on the left?

And if there is a possibility that this proposed shift in the burden of proof and a priori assumptions is actually culturally/politically motivated, what's the right way of making the decision? If bipartisan agreement cannot be reached, is it 'right' to overrule or bypass those of the other cultural persuasion?

September 27, 2014 | Unregistered CommenterNiV

@NiV:

You don't see how my post addresses your question?

September 27, 2014 | Unregistered Commenterdmk38

Dan,

No, I don't. You discuss whether the study provides any evidence of researchers in general being biased, and I agree it doesn't, and you discuss the question of whether the population average necessarily defines what is "right", and I agree it doesn't. But neither do you comment (so far as I can see) on the one-sided moral judgements being made or whether the researchers are any better able to define what is "right" than the population average.

I would regard it as a form of "partisan science". It advocates for a policy change on a politically controversial topic.

That's not necessarily a bad thing, if everybody recognises it for what it is. Free speech does not allow us to set limits on what tools policy activists are allowed to use, and they can therefore use science if they like. However, I was interested in whether researchers do actually recognise that it is a politically and morally partisan position they are working from?

Have I missed something? Or was it your point that from the evidence presented here there's no way to tell?

September 27, 2014 | Unregistered CommenterNiV

@NiV:

The point about professional habits of mind applies to those who study human cognition the same as it does to those who do climate science, law, medicine, paleontology, geology, chick sexing, etc.

I acknowledge that the issue is one that admits of empirical testing. But the best available evidence, as I understand it, persuades me that professional habits of mind do generate convergence among culturally diverse professionals when they are engaged in decisionmaking in their domain.

Show me the evidence you rely on to the contrary. Although at that point, it will be you, not me, who is stuck in your logical trap, for it is your position, not mine, that says the study of motivated reasoning is itself bound to implode in contradiction b/c of motivated reasoning; if you show me evidence that you are "right," won't that evidence give me reason to doubt your impartiality?

I think the question is just a sport, really. There are all kinds of reasons for researchers to fall prey to self-serving and wishful thinking biases already before you add cultural cognition. The process of science -- the competitive interplay of conjecture and refutation engaged in by many, many people -- is the best check.

September 27, 2014 | Unregistered Commenterdmk38

"Show me the evidence you rely on to the contrary."

How about this or this? This one doesn't observe bias as such, but it does find inconsistency. It would appear scientific assessments are not objective. This one is on a different motivation for bias. I'm sure I've posted such evidence here before, although I can't find the post now. Not enough studies have been done in my view, but a few have, and so far as I can see on a brief perusal, all the ones I've found report that scientists are just as biased as anyone else - I've yet to come across one that has tested the question and found no biases.

Of course, the scientists here might well be biased in what they report - after all, you only want to publish interesting and surprising results, and who would be surprised to find scientists were unbiased?

Actually, given your results, I'd be highly surprised if scientists weren't strongly affected by polarisation due to motivated reasoning, and that their expertise made them all the more expert at fooling themselves. The history of science is full of such cautionary tales. And of course I've witnessed it at first hand in the climate debate.

It seems to me very unlike you to be taking such an assertion on faith without empirical evidence. You've talked often enough about how things have to be based on empirical evidence "all the way down". Of course, as a scientist, your identity is wrapped up in a self-image of scientists as paragons of rationality, so motivated reasoning would predict this. It's truly impressive that you would go so far to maintain consistency! ;-)

Not only that, it seems to me that such a result, besides being interesting in its own right, would be a powerful confirmation of your theory. The more scientifically educated, the more polarised, and it's clearly not any deficit in reasoning ability or scientific education that causes the problem. And if they're not, it would surely be interesting to find out what specific aspect of their thinking/training enables them to overcome it. I was taught when testing a theory to always check the extremes and corner cases. They are often the most enlightening and most likely to reveal exceptions.

"if you show me evidence that you are "right," won't that that evidence give me reason to doubt your partiality?"

Of course I'm partial! I have biases and blind spots, just like everyone else. That's why I go to efforts to talk to people who have different blind spots. This is why the scientific method does not rely only on strict adherence to procedure, but demands that results be critically challenged from a diversity of viewpoints. And even that's not infallible, as we all know. Diversity, not convergence, is the basis of our confidence in the reliability of science.

This thesis is of course confirming what I already believe, so it behooves me to be cautious. That's why I'm so lucky to have an expert in the field on hand, who I'm sure will point out the counter-evidence if I'm wrong! :-) That's what scientific conversations are about, after all.

"I think the question is just a sport, really. There are all kinds of reasons for researchers to fall prey to self-serving and wishful thinking biases already before you add cultural cogntion. The process of science -- the competitive interplay of conjecture and refutation engaged in by many many people -- is the best check."

I agree! I'd like to think that's what we were doing.

September 27, 2014 | Unregistered CommenterNiV

Sorry, messed up the second link somehow. Try this:
http://www.researchgate.net/publication/227683692_We_Value_What_Values_Us_The_Appeal_of_IdentityAffirming_Science/file/50463527acef61494b.pdf

September 27, 2014 | Unregistered CommenterNiV

NiV - the first and fourth links didn't work for me.

September 27, 2014 | Unregistered CommenterJoshua

NiV -

I'm not sure exactly what point you're making with those links (and might not be even if I could see the articles that each of them reference)...but anyway, I thought you might get a kick out of this:

"Abstract:
The purpose of the present study was the investigation of interaction effects between functional MRI scanner noise and affective neural processes. Stimuli comprised of psychoacoustically balanced musical pieces, expressing three different emotions (fear, neutral, joy). Participants (N=34, 19 female) were split into two groups, one subjected to continuous scanning and another subjected to sparse temporal scanning that features decreased scanner noise. Tests for interaction effects between scanning group (sparse/quieter vs continuous/noisier) and emotion (fear, neutral, joy) were performed. Results revealed interactions between the affective expression of stimuli and scanning group localized in bilateral auditory cortex, insula and visual cortex (calcarine sulcus). Post-hoc comparisons revealed that during sparse scanning, but not during continuous scanning, BOLD signals were significantly stronger for joy than for fear, as well as stronger for fear than for neutral in bilateral auditory cortex. During continuous scanning, but not during sparse scanning, BOLD signals were significantly stronger for joy than for neutral in the left auditory cortex and for joy than for fear in the calcarine sulcus. To the authors' knowledge, this is the first study to show a statistical interaction effect between scanner noise and affective processes and extends evidence suggesting scanner noise to be an important factor in functional MRI research that can affect and distort affective brain processes."

September 27, 2014 | Unregistered CommenterJoshua

@NiV:

1. The Morton et al. article is of exactly the sort that I say in my post doesn't support any inferences about professionals making in-domain decisions. It studies general public samples reacting to forms of science on which they don't have professional expertise.

2. The Wilson study is great. I'm sure I've linked to it many times. Also to Koehler's classic on confirmation bias in scientists.

I said I agree that scientists are prone to these sorts of effects. That's what I said gets washed out by competitive interaction of conjecture & refutation.

But in any case, these sorts of biases in the exercise of professional judgment operate independently of the cultural predispositions that affect how members of the public assess science that threatens or affirms their identity.

3. Researchers should for sure do well-designed studies to investigate that. I'm betting that such studies will show that scientists making in-domain assessments of evidence are not nearly so vulnerable to these effects as members of the public. I'm betting that based on my best understanding of how cultural cognition works (usually w/o generating polarization) to enable nonexperts to rationally discern what's known by experts. But if someone shows otherwise, I'll happily update. I don't feel any big stake in the answer being the one I happen to believe is more likely true.

4. The 2d point of my post is also responsive to your question. I say very clearly I don't think it is possible to derive any normative conclusions from evidence of the influence of cultural predispositions on fact perceptions. Such evidence doesn't show that the perceptions in question are wrong; it doesn't show that those who hold opposing ones are reasoning in a manner that is any less influenced by cultural predispositions. Believing that, why would it be affirming of my *cultural identity* to find one thing or another?

I am sure that some people will read the conclusions of the CCP date rape study as supporting their political or moral position on what the law should be here. That happens every time we do a study. It doesn't make me happy; it demoralizes me. It demoralizes me no matter which side the person making that mistake is on.

If I were finding that those who disagree do so b/c they are a stupid class of people, then I think your point would be more of a challenge to me. There are researchers who report finding that, of course. But for the reasons I have given about professional judgment, I don't myself think there's much basis for concluding that the researchers who advance this position are doing so on account of motivated reasoning.

September 28, 2014 | Registered CommenterDan Kahan

"I said I agree that scientists are prone to these sorts of effects. That's what I said gets washed out by competitive interaction of conjecture & refutation."

Yes, if there is a diversity of viewpoints within the field to provide that competition. What if there isn't?

"But in any case, these sort of biases in the exercise of professional judgment operate independently of cultural predispositions that affect members' of the public's assessments of science that threatens or affirms their identity."

Scientists are members of the public, and as human as the rest of them. For sure, there are many other types and sources of bias, but confirmation bias based on political/cultural preconceptions is surely included.

" I'm betting that such studies will show that scientists making in-domain assessments of evidence are not nearly so vulnerable to these effects as members of the public."

That strikes me as something which ought to require positive evidence for it to be believed. Why? By what mechanism? And if there is such a mechanism, wouldn't it provide the perfect antidote to a polluted science environment? Wouldn't we want to isolate it, refine it, and teach it to all our citizens?

If there is such a technique, can you think of anything more important to understand?

"I'm betting that based on my best understanding of how cultural cognition works (usually w/o generating polarization) to enable nonexperts to rationally discern what's known by experts."

Hmm. This is slightly different to your usual phrasing: "to rationally discern what's known to science." Don't you think there's an important difference?

But I wonder if the non-polarised version is really any more reliable. We know that when people come to diametrically opposed positions, at least one of them must be wrong. But if people all converge on the same position, who's to say it's right? The problem is not so obvious - there being no noisy arguments to highlight it - but given the way science works, and indeed, considering the Enlightenment arguments for open debate, why should we think less diversity of views should be more reliable in finding the truth?

Maybe, if we've already had the debate - if the matter was controversial but is no more - we can talk of convergence. But that's a rather different criterion. Is it the one people apply in their judgements?

"Believing that, why would it be affirming of my *cultural identity* to find one thing or another?"

TBH I don't think you do. But I think your guest author does, and I suspect you didn't notice. Or at least, you gave no sign you noticed - although you might just be being polite.

Consider: "At a personal level, I was often frustrated by the resistance many people showed to this training, particularly the military lawyers." Frustration? But if the views causing this frustration are part of cultural worldviews where both sides are legitimate viewpoints, or at least, such that we cannot draw normative conclusions, should we be trying to change them through "training"? There is such a thing as freedom of belief.

Consider "My reaction to his finding was a mixture of "ah-ha" and "uh-oh."" Is "uh-oh" the correct reaction? From the point of view of people of a hierarchical persuasion, it would mean the outcomes of trials were safer, and less likely to be miscarriages of justice convicting the innocent.

Consider "But at the micro-level, when deciding a particular case, these individuals may unconsciously rely on a cognitive process that interferes with their ability to accurately perceive the relevant information. And when those cases are aggregated, we see a system that is not taking the sexual assault problem seriously." The first sentence is correct, but the second sentence isn't supported by the evidence - we see a system that is either taking the sexual assault problem too seriously because of those on the left of the graph, or not seriously enough, because of those on the right. We can't say which - or rather we can, but speaking only from our own worldview's moral preconceptions, not from the evidence.

Consider: "These findings have important public policy implications. Foremost, something has to change." That's specific policy advocacy. As you say, "some people will read the conclusions of the CCP date rape study as supporting their political or moral position on what the law should be here."

My point is that this study draws explicit conclusions that support one side of the policy debate, and will undoubtedly be used for that purpose by policymakers. But this is not a conclusion that can be drawn from the evidence - all it shows is that people's opinions differ, it doesn't say who's right, or that things have to change - those aspects of the conclusion came from the researcher's own moral preconceptions and policy preferences.

And like I said, there's nothing wrong with that so long as everyone recognises this for what it is: advocacy. My question is, does the community recognise it?

"It doesn't make me happy; it demoralizes me. It demoralizes me no matter which side the person making that mistake is on."

I think that answers my question. Thank you.

September 28, 2014 | Unregistered CommenterNiV

@NiV: But you wouldn't say, would you, that coming to a normative conclusion based on what one's own empirical research suggests is true means the empirical research was "advocacy"? I can be motivated to examine a question empirically by some value or another; based on what I figure out, I might conclude that a certain course of action is justified. But if someone else isn't bothered, or values other things that are at odds with the recommended course of action, then there's nothing in the empirics that speaks to that. Indeed, I am very gratified when people who disagree with me on the normative implications of my findings still find the work valid. If they embed the findings in an argument that has normative implications I disagree with, that's even better -- for my sense that I'm doing valid empirical research, although obviously I will be prepared to disagree and argue with the person based on what strikes me as the best moral perspective.

September 29, 2014 | Registered CommenterDan Kahan

NiV -

Which of the "rape myths" that the author refers to, do you think aren't myths?

September 29, 2014 | Unregistered CommenterJoshua

"But you wouldn't say, would you?, that coming to a normative conclusion based on what one's own empirical research suggests is true means the empirical research was "advocacy"?"

The issue in this case is that the normative conclusion is not "based on the empirical research". It's an assumption on the part of the researcher.

There's nothing in the empirical research that says it is the people on the right hand end of the graph that are making the wrong choice, rather than the left hand end. The empirical research only concludes that people differ in their opinions, and this is correlated with their cultural alignment. All the stuff about the system not taking sexual assault seriously and that the foremost policy implication is that something has to change doesn't come from the empirical evidence.

There's no problem with having a normative reason for asking some question, and then doing an empirical test to answer that question. The issue is around letting your normative preferences colour your interpretation of the empirical evidence, or giving people the impression that the evidence itself supports the normative position where it doesn't.

My original question was: "Given that the experiment has identified that people's views on the 'facts' differ depending on their cultural/political background, and given that experimenters have cultural/political backgrounds too, what makes an experimenter so confident that a particular portion of the subjects were 'wrong' or that the 'rape myths' they believed in were 'wrong', such that we need to actively do something to change this state of affairs?" Again, the empirical evidence only tells us that people's views differ depending on their cultural/political background. It doesn't tell us any of them are wrong to do so, or that we need to change this state of affairs. So why is the author drawing that as a conclusion - let alone the "foremost" conclusion? It's not science, so what is it?

September 30, 2014 | Unregistered CommenterNiV
