Wednesday, Aug 31, 2016

Law & Cognition 2016, Session 1 recap: Whence the Likelihood Ratio?

I'm going to do my gosh darned best to recap each session of the seminar this yr. Here's Session 1 ...

The objective of session 1 was two-fold: first, to introduce Pennington & Hastie’s “Story Telling Model” (STM) as a mechanism of jury information processing; and second, to establish the “missing likelihood ratio” (MLR) as the heuristic foundation for engaging mechanisms of jury information processing generally.

The “Self-defense?” problem puts the MLR in stark terms.

In the problem, we are presented with a series of facts whose significance is simultaneously indisputable and highly disputed. What’s undeniable is that each of these facts plainly matters for the outcome. What’s unclear, though, is how.

Rick paused after exiting the building and watched Frank approach him from across the street. Was Rick frozen in fear? Adopting a stance of cautious circumspection? Or was he effectively laying a trap, allowing Frank to advance close enough to enable a point-blank fatal shot and to lend credibility to his claim that he needed to fire it?

Likewise, Rick emerged from a secured building lobby accessible only by use of an electronic key. Was his failure to seek immediate refuge in it upon spying Frank evidence of his intention to lure Frank close enough to make a deadly encounter appear imminent—or would it have put Rick in deadly peril to turn his back on Frank in order to re-enter with the electronic key?

Were Frank’s words—“What are you looking at, you freak? I’m going to cut your damned throat!”—a ground for perceiving Frank as harboring violent intentions? Or was the very audacity and openness of the threat inconsistent with the stealth that one would associate with an actor intent on robbing another?

Frank had begun to lurch toward Rick moments before Rick fired the shot. Was Frank’s erratic advance grounds for viewing him as a lethal risk or for seeing him as too stupefied by drink to reach Rick at all, much less apprehend him had Rick made any effort to escape?

Rick immediately called 911; doesn’t that show he harbored law-abiding intentions? But doesn’t the calm matter-of-fact tone of his communication show he wasn’t genuinely in fear for his life?

What if we roll back the tape?  Rick had read of the string of robberies in his neighborhood; didn’t that give him grounds for fearing Frank? But grounds to fear what, exactly? One cannot lawfully resort to deadly force to repel the taking of one's property, even the forcible taking of it.

Rick started to carry a concealed gun after reading of the robberies.  Was that the reaction of a person who honestly feared for his life—or that of a person who lacked regard for the supreme value of life embodied in the self-defense standard, which confines use of deadly force to protection of one’s own vital physical interests?

In the face of these competing views of the facts, “Bayesian fact-finding” is an exercise in cognitive wheel spinning.

Formally, Bayes’s Theorem says that a factfinder should revise his prior estimate of some factual proposition or hypothesis (expressed in odds) by multiplying it by a factor equivalent to how much more consistent a new piece of information is with that proposition than with an alternative one: posterior odds = prior odds x likelihood ratio.
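Here is a minimal sketch of that rule in code (Python, with all the numbers invented for illustration):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes's Theorem in odds form: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

# Hypothetical example: the factfinder starts at even odds (1:1) that Rick
# honestly feared for his life, then learns he immediately called 911. If
# that fact is judged 3x more consistent with "honest fear" than with the
# alternative hypothesis, its likelihood ratio is 3.
posterior = update_odds(prior_odds=1.0, likelihood_ratio=3.0)
print(posterior)                    # 3.0, i.e., 3:1 odds
print(posterior / (1 + posterior))  # 0.75 expressed as a probability
```

The arithmetic is trivial; the hard part, as the problem is meant to show, is where that "3" comes from.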

Legal theorists argue about whether this is a psychologically realistic picture of juror decisionmaking in even a rough-and-ready sense.

But as the “Self-defense?” problem helps to show, the Bayesian fact-finding instruction is bootless in a case like this.

There the decisionmaking issue is all about what “likelihood ratio,” or weight, to assign to all the various pieces of evidence in the case.

Do we assign a likelihood ratio “greater than 1” or “less than 1” to Rick’s behavior in buying the gun, in standing motionless outside the building as Frank approached, in failing to seek protection inside the lobby, in placing a call to 911 in the manner he did; ditto for Frank’s tottering advance and his bombastic threat?

Bayes’s Theorem tells us what to do with the likelihood ratio but only after we have derived it—and has nothing to say about how to do that.

This is the MLR dilemma.  It’s endemic to dynamics of juror decisionmaking.  And it’s the problem that theories like Hastie and Pennington’s Story Telling Model (STM) are trying to solve.

STM says that jurors are narrative processors. They assimilate the facts to a pre-existing story template, one replete with accounts of human goals and intentions, the states of affairs that trigger them, and the consequences they give rise to.

In a rational reconstruction of jury fact-finding, the story template is cognitively prior to any Bayesian updating. That is, rather than being an outcome constructed after jurors perform a Bayesian appraisal of all the pieces of evidence in the case, the template exists before the jurors hear the case, and once activated functions as an orienting guide that motivates the jury to conform the individual pieces of evidence adduced by the parties to the outcome it envisions.

Indeed, it also operates to fill in the inevitable interstitial gaps relating to intentionality, causation, and other unobservables that form the muscle and sinew necessary to transform the always skeletal trial proof into a full-bodied reconstruction of some real-world past event.

Schematically, we can think of the story template as shaping jurors’ priors, as supplying information or evidence over and above what is introduced at trial, and as determining the likelihood ratio or weight to be assigned to all the elements of the trial proof.
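To make that endogeneity concrete, here is a toy contrast (a sketch with invented likelihood ratios, not a model drawn from P&H) between a factfinder whose item-by-item weights are fixed in advance and a "story model" factfinder who discounts whatever cuts against the template she has already adopted:

```python
def bayesian_juror(prior_odds, item_lrs):
    """Truth-convergent updating: each item's likelihood ratio is
    fixed independently of the hypothesis currently favored."""
    odds = prior_odds
    for lr in item_lrs:
        odds *= lr
    return odds

def story_model_juror(prior_odds, item_lrs, discount=0.25):
    """Template-driven updating: any item that cuts against the
    currently favored story has its weight dragged toward 1."""
    odds = prior_odds
    for lr in item_lrs:
        favors_story = (odds > 1) == (lr > 1)
        if not favors_story:
            lr = lr ** discount  # hypothetical discounting of dissonant proof
        odds *= lr
    return odds

items = [3.0, 0.2, 0.2, 2.0]  # invented LRs for four pieces of trial proof
print(bayesian_juror(1.5, items))     # ~0.36 -- the evidence wins
print(story_model_juror(1.5, items))  # ~4.0  -- the template wins
```

Same evidence, opposite verdicts: once the template controls the likelihood ratios, updating merely ratifies the starting story.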


Whence the template? Every juror, P&H suggest, comes equipped with an inventory of templates stocked by personal experience and social learning.

The trial is not a conveyor belt of facts presented to the jury for it to use, one-by-one, to fabricate a trial outcome.

It is a contest in which each litigant endeavors to trigger as quickly and decisively as possible selection of the most favorable case-shaping template from the jury’s inventory . . . .

Or so one would gather from P&H.

The questions for us about such an account are always three: (1) is it true; if so, (2) what use is it for a lawyer; and (3) what significance does it have for those intent on making the law work as well as it possibly can?

What are the answers?

You tell me!

 


Reader Comments (19)

Very useful. Thanks.

August 31, 2016 | Unregistered CommenterEric Fairfield

Whence the template? Every juror, P&H suggest, comes equipped with an inventory of templates stocked by personal experience and social learning.
This is a new idea... how?
I'm pretty sure this is precisely *why we have juries*.

September 1, 2016 | Unregistered Commenterlucia

@Lucia--

1. I'm not sure this is "new" or "news" to a lawyer. Although in fact, I think they are less likely to *know* than simply *be* this theory of how cases get litigated. That is, they don't have to bother thinking about it; they just do it naturally.

2. But far from being "what we have juries for," if STM is right, then maybe this is why we shouldn't have them?

The key is not "whence the templates"; it's this --

In a rational reconstruction of jury fact-finding, the story template is cognitively prior to any Bayesian updating. That is, rather than being an outcome constructed after jurors perform a Bayesian appraisal of all the pieces of evidence in the case, the template exists before the jurors hear the case, and once activated functions as an orienting guide that motivates the jury to conform the individual pieces of evidence adduced by the parties to it.

Indeed, it also operates to fill in the inevitable interstitial gaps relating to intentionality, causation, and other unobservables that form the muscle and sinew necessary to transform the always skeletal trial proof into a full-bodied reconstruction of some real-world past event.

Schematically, we can think of the story template as shaping jurors’ priors, as supplying information or evidence over and above what is introduced at trial, and as determining the likelihood ratio or weight to be assigned to all the elements of the trial proof.

If this is true, then jurors aren't selecting outcomes based on weight of the evidence. They are weighing evidence based on what they happen to think is true based on some incomplete & arbitrary portion of the evidence, or maybe not based on the evidence at all but on some other arbitrary thing (who makes a more convincing opening statement).

If your Dr made decisions this way, you'd sue for malpractice so fast his or her head would be spinning.

Then yours would be spinning too, b/c the jury would evaluate your case against your preconception-driven Dr by relying on its own preconception-driven reasoning.

September 1, 2016 | Registered CommenterDan Kahan

This sounds like what I call the 'statistical model', which we use to assign probabilities to outcomes under each hypothesis.

I think I've mentioned before, in our discussions on Bayesian reasoning, that one reason people can come to different conclusions on the same evidence is not only that they are using different priors, but also that they use different statistical models to calculate likelihoods.

Any doctor also has a set of templates - they're in the "Big book of diseases" she gets handed while studying for her medical degree. These symptoms occur often with this disease, rarely with that one, never with the other. That background knowledge assigning a probability to each symptom, under the hypothesis of each disease, is a statistical model built up from book learning, medical experimentation and case histories, and personal experience.

In a way, it's a part of the prior knowledge.

September 1, 2016 | Unregistered CommenterNiV

@NiV--

Yes, that's right: the divergent likelihoods are part of the problem.

But the bigger one is the endogeneity between the template and the likelihood. Rather than updating template-based priors based on new information, new information is being given the effect dictated by the story template (akin to confirmation bias).

Tests of professional judgment in medicine are sensitive to this sort of error. Yes, the templates are the priors; those who lack good hypotheses about what the possible diagnoses are get nowhere with testing. But those who don't test the hypotheses associated with the disease-type templates get nowhere too -- or they get somewhere, namely, the wrong answer. See Patel, V.L. & Groen, G.J. Knowledge Based Solution Strategies in Medical Reasoning. Cognitive Science 10, 91-116 (1986).

STM is the strategy of the bad dr who conforms his or her assessment of all the information to the first diagnosis that pops into his or her head.

September 1, 2016 | Registered CommenterDan Kahan

Dan -

==> if STM is right, then maybe this is why we shouldn't have them?...==>

What would you suggest as an alternative?

==> If your Dr made decisions this way, you'd sue for malpractice so fast his or her head would be spinning. ==>

Seems to me that it would be unrealistic to think that it isn't a significant factor in how doctors do diagnose problems, à la Jerome Groopman and Kahneman?

September 1, 2016 | Unregistered CommenterJoshua

@Joshua--

I'd prefer professional factfinders & I'd abolish the adversary system. If it were up to me.

Presumably, yes, some Drs have this sort of problem in reasoning. But as I mentioned to @NiV, there are professional-judgment tests you can administer that examine whether Drs are avoiding this problem.

September 1, 2016 | Registered CommenterDan Kahan

"Rather than updating template-based priors based on new information, new information is being given the effect dictated by the story template (akin to confirmatin bias)."

Yes, that's what a statistical model does.

To calculate the likelihood ratio, you need to be able to calculate functions P(O|H1) and P(O|H2) for each observed outcome and each hypothesis. To do so, you tell causal stories - H1 causes effect E1, which prevents effect E2, which allows E3 to occur, which results in observation O. H2 prevents E3... oh, hang on a sec, that's inconsistent with what we observed, so that's pretty unlikely. A causal model takes an initial state (Frank was drunk) and constructs plausible consequences of that (he became aggressive) to determine the likelihood of the chain and hence the end state (he got shot). Those paths through the causal model are what we call 'stories' or 'narratives'.
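In toy code, that chain calculation looks something like this (every step probability here is invented):

```python
from math import prod

def chain_probability(step_probs):
    # P(O | H): the product of the step probabilities along one causal
    # path -- the "story" version of a likelihood.
    return prod(step_probs)

# Two invented stories for the same observation ("Frank lunged at Rick"):
p_o_h1 = chain_probability([0.8, 0.6, 0.7])  # drunk -> aggressive -> lunged
p_o_h2 = chain_probability([0.8, 0.1, 0.5])  # drunk -> stuporous -> lunged anyway
print(p_o_h1 / p_o_h2)  # likelihood ratio of 8.4 in favor of the first story
```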

The problem is that the real world is enormously complicated, with billions of potential paths and unknowable influences to kick events off track, and with our finite minds we have to approximate. We store a subset of the most common/likely narrative steps - human motivations, stereotypes, cliches, things-we-read-in-newspapers, and so on - and then try to put them together like a jigsaw puzzle. It's a fast heuristic approach to a computationally intractable problem.

The 'template' is some background fact about the world like 'drunk people often become aggressive'. It's based on prior experience, and everyone's experiences are different. Someone who goes out on the town every night and rarely has any trouble will tend to discount the conclusion. Someone who once nervously went through the town centre late at night and saw three fights will be firmly convinced of it.

So when our jurors are told (with evidence) that Frank was drunk, they calculate, based on their templates, how plausible they find Rick's claim that he became aggressive. To "update template-based priors based on new information" they would have to instead modify their belief that drunk people tend to become aggressive.

We don't want them to do that. If a lawyer becomes aware that the jurors have an erroneous model, he may try to modify that with separate evidence. But when it comes to the evidence about the events they're supposed to be judging, they need to keep their prior models fixed.

The big problem behind the concerns you raise, I think, is not that jurors have and use such models, nor necessarily that they differ or may be inaccurate. It's that the heuristic search is not deep enough. There is a tendency, once one plausible solution has been identified, to respond to new information by looking for small modifications of that solution, ones that are "close" to it in scenario-space. It's an effective strategy - it's what mathematicians call a 'hill-climbing optimisation' - but it gets trapped too easily by local solutions. You find the peak of the small hill closest to where you started, missing the mountain that's a little further away.

The mathematicians get round that problem by changing how much they alter the best solution they've found so far. You start off making big changes to the story - crazy what-ifs, unlikely alternatives, strange events - and gradually reduce the step size as it becomes apparent where the most plausible stories are clustered.
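A toy version of that schedule (a greedy search with a shrinking step size over an invented one-dimensional 'plausibility landscape'):

```python
import math
import random

def plausibility(x):
    # Invented landscape: a small hill near x = 0.5, a taller one near x = 4.
    return math.exp(-(x - 0.5) ** 2) + 2 * math.exp(-(x - 4.0) ** 2)

def search(start=0.0, step=5.0, cooling=0.97, iters=300):
    """Make big random changes to the current 'story' at first, keep any
    change that is more plausible, and shrink the step size as you go."""
    best = start
    for _ in range(iters):
        candidate = best + random.uniform(-step, step)
        if plausibility(candidate) > plausibility(best):
            best = candidate
        step *= cooling  # smaller and smaller revisions over time
    return best

random.seed(0)
# With large early steps the search usually escapes the small hill near
# 0.5 and settles near the taller one at 4; with small fixed steps it
# would get stuck on whichever hill it started closest to.
print(round(search(), 2))
```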

Lawyers get round it by finding a local peak that gives the answer they want, and trying to get the jury to lock in on that. Whichever lawyer picks the taller hill (according to the jury's plausibility assessment function) wins the case.

September 2, 2016 | Unregistered CommenterNiV

NiV,
Nicely stated. We start from the cellular level and use slightly different terminology but get to the same place. We also have a reason why it is hard to change jurors' rapid conclusions, how to change them, and how to prevent jurors from rapidly going back to their previous views.

September 2, 2016 | Unregistered CommenterEric Fairfield

Interesting reflection on what we do and do not know about Zika:

"Unfortunately, the world seems to need repeated reminders that the absence of diagnostic capability is not the same as the absence of disease."

http://www.slate.com/articles/health_and_science/medical_examiner/2016/09/zika_started_in_sub_saharan_africa_and_it_may_be_as_harmful_to_that_region.html

September 2, 2016 | Unregistered CommenterGaythia Weis

Sorry, comment to wrong post!

September 2, 2016 | Unregistered CommenterGaythia Weis

For this post: From the perspective of someone who has both served as a juror and been rejected from juries, I think that we need to back up a step and look at the jury selection process. First, the available pool from which juries are selected, then the reasons for which they can get excused from duty, and then finally the lawyer interview process and their ability to reject some jurors. This shapes the deliberations in which, collectively, juries, often influenced by more dominant members, may weave a collective story, or alternative sides to the story. Since all of this is connected to individual perspectives and stories as noted above, it is a big deal if the jury is culturally homogeneous, in a manner not reflective of the diversity represented by the victim or accused.

If professional factfinders are relied upon, a great deal would depend on how professional facts are established and how those become embedded or are changed over time with increasing knowledge. We already have a problem in criminal forensics with over-reliance on new tools which turn out to be not as definitive as originally thought. In my opinion, there are times when a non-expert jury with the reasoning of "just plain common sense" can do better at seeing fallacies in analyses.

September 2, 2016 | Unregistered CommenterGaythia Weis

Related - (Dan has a starring role):

http://www.thenewatlantis.com/publications/hard-to-believe

September 3, 2016 | Unregistered CommenterJoshua

@NiV--

This is interesting enough to warrant a follow-up post -- & likely an update of the materials for next yr -- but the dr vs. juror contrast definitely warrants being pushed harder on.

Both the factfinder & the dr need templates: those supply priors that should then be tested w/ evidence. So Dr says, "could be disease x, y, or z... hmmmm..." Juror says, "could be situation p, q or r.... huh...."

But then what? The dr performs some diagnostic tests that generate LRs that indicate the relative probability of x, y & z. The jury gets evidence that it should be using to determine the relative probability of p, q & r.

Both will be relying on "extraevidentiary" information, too, to determine how to update. In the Dr's case, she has access to information that tells her what significance to give to the diagnostic tests. The jurors have life experience that determines the weight to give to evidence.

This could all work out fine. But not if dr & jury use one of the hypotheses in the way that jurors use the story template.

If Dr is reasoning this way, she says, "could be x, y or z but really, based on the description in the intake file, x has got to be it!" She runs an x vs. y test but when the result comes in suggesting y is more likely, she concludes the test instrument must be broken. She does an x vs. z test, and when that comes in suggesting z is more likely, she concludes that in fact such a reading *ought* to be treated as evidence that x is more likely -- after all, "x has got to be it!" -- & makes a note to read results that way in the future. She cancels any tests for y vs. z -- unnecessary! -- & based on a mistaken recollection of the interview w/ the patient notes to the file symptoms that the patient didn't actually have that are consistent with x ...

That's the STM equivalent of what a jury does: after hearing the opening argument -- which is just a summary of what the parties *assert* the evidence will show -- the jury becomes convinced that "p" is the right outcome. It therefore discounts the credibility of all the witnesses who testify in terms consistent with q & r. It also misreads evidence that suggests q & r are just as likely as p as suggesting that p is in fact much more likely than q & r. Finally, it simply imagines a bunch of nonexistent pieces of evidence also consistent w/ p & forgets various q- & r-supporting ones ...

Note that these are errors independent of your "statistical model," if I'm understanding it properly. The statistical model is what tells the Dr what hypotheses to start w/ & what effect to give to diagnostic tests properly construed. Under the "model," the priors & likelihood ratios for tests, though, are independent; that's what makes it possible for the Dr to update -- to revise her priors at every stage from intake to final test result -- in a manner that is truth-convergent rather than preconception-reinforcing.

The jury, too, could have a "model" of how the world works that includes priors & instructs them how to make sense of pieces of independent evidence. But the model, if it's truth-convergent, won't involve an endogeneity between priors & the effect to be given to independent pieces of evidence; it won't be preconception-reinforcing. Or the jury could have a "model" that is preconception-reinforcing -- but since the world doesn't work that way, that would make jurors and their models incompetent factfinders, the same way preconception-reinforcing strategies for diagnosing an illness would make a dr incompetent.

September 3, 2016 | Registered CommenterDan Kahan

"Note that these are errors independent of your "statistical model," if I'm understanding it properly."

Sort of. They're using the hill-climbing algorithm I described, but their problem with it is different to the one I described above.

Essentially they regard the evidence as conditional on the reliability of the expert witness (or other contrary evidence). When the expert witness contradicts their current best solution, they have a choice of either making many complex changes to revise their model to make it consistent with the testimony, or they can achieve the same consistency with only one single small alteration:- to suppose that the expert witness is wrong. Computationally, the latter alternative is far easier, and there is a tendency to confuse the mental effort required to build a model of events with its likelihood.

Generally, I think, people will feel the need to come up with some additional justification for dismissing the testimony. If they try to find a reason and can't, they'll be uncomfortable with doing so (which translates to a likelihood ratio shift, albeit a smaller one than the evidence merits). But in most cases plausible reasons are easy enough to find - and the more you know about the subject the easier that is.

You *do* always have to consider the truth of testimony as conditional (especially in a court case!), but you have to evaluate its reliability properly - you can't take the mere mental difficulty of abandoning your current hill and finding a higher one as a valid reason not to do so.

It's one reason why it's important that expert witnesses not rely on arguments from authority to make their case, but clearly explain the actual evidence in comprehensible detail. Discounting an expert's authority can be done in one step, but rationalising the negation of a dozen parallel lines of evidence is a lot harder. Eventually, the effort needed to do so exceeds the effort needed to jump to another hill; making the jump the easier option.

"Under the "model," the priors & likelihood ratios for tests, though, are independent; that's what makes it possible for Dr to update"

The same principle applies. If the blood test reports that Mr Jones is eight months pregnant, any doctor will discount the test's accuracy rather than calling the newspapers. The difference is only in the threshold of inconsistency needed. Duff test results are a lot more common, and much more likely, than finding a pregnant man. But a duff result is not more likely than a doctor getting their initial diagnosis wrong (as most doctors are well aware). Of course, their assessment is also largely based on having built up a lot of experience over the relative reliability of diagnoses and test results, which jurors - usually attending their first ever trial - don't have.

The problem is not that jurors *can* discount evidence that disagrees with their current beliefs, it is that they do so too easily, and for the wrong reasons.

--
People depend on statistical models to support their interpretation of evidence, and the models are themselves based on prior evidence and information; often they even have hierarchies of many different models for the same problem to choose from, with varying levels of detail and reliability. Their validity and applicability to the current problem is a *separate* set of hypotheses with confidences in them that can also be revised. It is logically correct and practically unavoidable that this be so, but the complexity of handling the myriad alternatives in a Bayesian network correctly is so mind-boggling that it's no surprise that people often do it wrong - their built-in heuristics are insufficient for more than relatively short/simple logical chains.

September 3, 2016 | Unregistered CommenterNiV

I think that the medical analogy as raised by NiV could be useful. Although many of the structural changes needed in the legal system are beyond issues related to juries.

Medical Professionals are operating within a pretty set system of protocols. Whereas our legal system has huge discrepancies, depending on locales, between how something is defined as criminal, and what actions to take if a crime is committed. One example would be the incarceration map shown here: http://www.nytimes.com/2016/09/02/upshot/new-geography-of-prisons.html?src=me&ref=general&_r=0. I think we could also agree that if the gender and racial identities of Rick and Frank were varied above, along with those of the jury, we'd also be likely to see a different set of responses as to what seemed appropriately threatening. Or what actions were appropriate to take. It's not straightforward, but there are steps that can be attempted to get people to consider their biases before proceeding with an assumption of criminal intent. The neighborhood social networking website, NextDoor, often used by law enforcement agencies as a vehicle for communication with the public, is working on “stop and think” mechanisms: http://fusion.net/story/340171/how-nextdoor-reduced-racial-profiling/. Judges issue instructions to juries, which may include parameters for the trial agreed to by the attorneys in the case, but in my experience those are limited, and usually done without any actual exchanges with the jury to see how those instructions are interpreted by individual jurors.

Doctors have medical charts. They don't just run around with entire Bayesian analysis schemes for each patient in their heads. They walk into the exam room and flip on the computer to review the chart. If they are good at patient relations, they can launch into a greeting that at least implies they're up to speed on this patient's concerns. But maybe they didn't even really remember the patient's name. They proceed to see if the patient has received the appropriate treatment and assessment protocols for their age group. They may have blood tests or other test results from a previous visit. They can inquire of the patient to find out the effects of any medications previously prescribed. Then they can proceed to the next steps. I think that what a real doctor is likely to do is not as bad as portrayed by Dan in his comment to NiV in the paragraph starting with If Dr is reasoning this way, she says, …, but on the other hand, I think that an average doctor might be highly influenced by the story told by the drug salesman who just stopped by, and decide that the new wonder drug sounds like just the thing to try next. Two checks and balances available here, but not (at least prior to appeal) in a criminal case: a patient can always get a second opinion or change doctors entirely, and a doctor can refer a patient on to a specialist in the field of what seems to be the patient's problem.

The problems for jurors start with the fact that, unlike doctors' professional organizations, they have a much weaker set of societal norms for what constitutes criminal behavior. They may have very little knowledge as to the details of the law under which the defendant was charged. Individual jurors' cultural norms may differ widely. A black male walking down the street may immediately seem threatening to some. It may be perceived that women always say no when they mean yes. One person's definition of rape may be another's (as Brock Turner's father put it) “20 minutes of action”. Drug use might be seen as outlandish or routine. Is killing never justified? Or do Second Amendment rights extend to defending one's "person", which might include such things as theft? If a jury were really chosen in a manner that resulted in being representative of the full diversity of the community from which it were drawn, there would be some community similarities but also many diverse ideas of initial starting stories. But to the best of their abilities, lawyers try to select the venues, and the jurors, that they feel will best buy into the case they wish to present.

Then what happens is, in my opinion, sort of a spiraling self-fulfilling prophecy. Lawyers present a storyline, present witnesses and cross-examine them in ways that support that storyline, and give a summary emphasizing only the key points fitting that storyline. In my opinion, problems start to arise not just because jurors begin with their prior biases and initial leanings towards a storyline, but also because the entire trial may be directed that way. Also, jurors have no charting abilities. Thus, a really storyline-disruptive bit of new information might cause a juror to jump out of a storyline entirely and start a new one. But a point that might raise a few doubts might not get connected to a similar point with similar doubts presented in testimony the day before. So one of my suggestions for tweaks to improve the current trial system would be to give jurors data charting abilities.

I also think that it is likely to be true that lawyers are pretty much operating in the dark, except for body language cues, as to how the jury might be responding to their presentation. In my opinion, there should be more opportunity for members of the jury to ask questions of the two legal teams and the judge. This would have to be in writing, so as not to slow the system down to a grinding halt. But giving the jury members an opportunity to convey such things as thoughts about an entirely different, third storyline they were contemplating, or questions of witnesses they thought ought to be asked, would give the judge and the attorneys the opportunity to more fully grasp where the jury's perceptions and misconceptions stood and change their presentations accordingly. Hopefully this would lead to a more broadly structured, and thus more fair, trial.

Joshua's link is a good one. “There must be a place for informed non-experts to contribute meaningfully to the discussion” speaks to the role of juries. Our legal system is an important supporter of social norms and also plays a leading role in moving towards new ones. Active participation by the public in legal enforcement is an important component of democracy.

September 4, 2016 | Unregistered CommenterGaythia Weis

Gaythia -

Reading your discussion of protocols for doctors compared to influences on juries, I thought of Adam Benforado and his work on biases in our legal system:

http://www.npr.org/2015/07/06/418585084/the-new-science-behind-our-unfair-criminal-justice-system

And I also thought of how doctors may well be influenced by similar biases, for example:

http://news.utexas.edu/2012/04/23/bias_african_american_health


------------

Some free association...

Dan suggests a system where professional fact-finders might replace juries of peers. But how could it be ensured that professional fact-finders wouldn't be subject to the same biases as non-professional juries? Perhaps it would be equally useful to focus on bias in the way that Benforado suggests - through employing direct intervention and, perhaps more importantly (particularly given the potential for direct interventions to have a counterproductive effect of enhancing bias), taking steps to ensure diversity.

Also, NiV mentions jury nullification downstairs.... Well, I would imagine there would be a strongly negative reaction to overt exercise of jury nullification as a way to address intrinsic biases in our legal system (although that was an outlier kind of example - just look at the strong reaction in the white American public to the possibility that the jury in the O.J. Simpson case might have effectively manifested jury nullification to address the racial biases in our legal system. In general, my guess is that most Americans consider jury nullification to be in direct contradiction to the legal principles of our judicial system)...but it is interesting to think about Paul Butler's argument that jury nullification should be embraced as a way to address biases in our legal system. I can't find a non-paywalled link to his original article, but there are other, related references...

http://www.npr.org/templates/story/story.php?storyId=242990498

and

https://www.youtube.com/watch?v=e8eQ_EYwQQI


Relatedly, I thought that this is fascinating:

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2584168

September 4, 2016 | Unregistered CommenterJoshua

My 1.5 cents.
I think that we are missing some important things.
I think that we are assuming that the priors are stable when they are not.
I think that we assume that we know the complete set of priors for a jury when we don't.
I think that we may be making overly restrictive assumptions about the shape and time variance of likelihood functions.
So our predictions are not as good as we would hope.
We can make better predictions.

September 4, 2016 | Unregistered CommenterEric Fairfield

"Also, NiV mentions jury nullification downstairs.... Well, I would imagine there would be a strongly negative reaction to overt exercise of jury nullification as a way to address intrinsic biases in our legal system"

From who? The point of allowing jury nullification is to deal with instances where the law is obviously wrong and unjust, according to the popular morality. So by definition, the popular judgement of it is that it's the right thing to do. Clearly there's a risk that the jury's morality might also be out of step with society's (if it can happen to legislators and prosecutors, it can certainly happen to juries), but that's often argued to be an acceptable price for the safeguard on justice. Judges and prosecutors can be corrupted, too.

"In general, my guess is that most Americans consider jury nullification to be in direct contradiction to the legal principles of our judicial system"

Depends which principles you mean. The issue arises over the situations when the law stands in direct contradiction to justice. Do you apply an unjust law, or apply your own justice in place of the existing laws agreed?

So which principle do you mean? Law or justice?

To take an extreme example, suppose you lived in 1940s Germany and the law said that any hidden Jews had to be handed over to the Gestapo. Is it morally *wrong* to disobey such a law? Would it not be in "direct contradiction" to the legal principles of the German judicial system? The 'criminals' who did, at severe risk to themselves, are nowadays regarded as heroes.

Would "most Americans" argue that even an unjust law should always be enforced? Because "It's the law"? Should people be allowed and expected to vote their own conscience, or do they get to say: "I was only following (the judge's) orders"?

And I know you don't like to be called 'authoritarian', but this is one of those issues where the "authoritarian" thinking is that "the law is the law" and the State has the absolute right to decide, and it is opposed to "liberal" thinking, that jury nullification is an important safeguard for justice and freedom - just like the right to jury trials. (After all, if it's simply a matter of fact whether the law was broken, why do you need a jury? As Dan says, professional factfinders would be more reliable at that.)

There are plenty of authoritarian Americans, I agree. But is it really "most"? I find it depressing, if so.

September 10, 2016 | Unregistered CommenterNiV