Monday, Oct 24, 2016

Law & Cognition 2016, Session 7.5: probing the SSK data

No sooner had I finished saying “one has to take a nice statistical bite of the results and see how much variance one can digest!” than I was served a heaping portion of data from David Schkade, coauthor of Schkade, Sunstein & Kahneman, Deliberating about Dollars: The Severity Shift, Columbia Law Rev. 100, 1139-1175 (2000), the excellent paper featured in the last Law & Cognition course post.

That study presented a fascinating glimpse of how deliberation affects mock juror decisionmaking in punitive damage cases.  SSK discovered two key dynamics of interest: first, a form of group polarization with respect to judgments of culpability, whereby cases viewed as low in egregiousness by the median panel member prior to deliberation gravitated toward even lower collective assessments, and cases viewed as high by the median panel member gravitated toward even higher collective assessments; and second, a punitive-award severity shift, whereby all cases, regardless of egregiousness, tended toward awards that exceeded the amount favored by the median panel member prior to deliberation.

The weight of SSK’s highly negative normative appraisal of jury awards, however, was concentrated on the high variability of the punitive damage judgments, which displayed considerably less coherence at the individual and panel levels than did the culpability assessments.  SSK reacted with alarm over how the unpredictability of punitive awards arising from the deliberative dynamics they charted would affect rational planning by lawyers and litigants.

My point in the last post was that the genuinely odd deliberation dynamics did not necessarily mean that there were no resources for trying to identify systematic influences to reduce the unpredictability of the resulting punitive awards.  In a simulation that generated results like SSK’s, I was still able to construct a statistical model that explained some 40% of the variance in punitive-damage awards based on jurors’ culpability or “punishment level” assessments, which SSK measured with a 0-8 Likert scale.
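For concreteness, here is a minimal Python sketch of the kind of exercise described above. It is not the simulation I actually ran; the distributions and parameters are invented purely for illustration, so the variance explained will not match the 40% figure.

```python
# Hypothetical sketch only -- not the simulation reported in the last post.
# It pairs 0-8 punishment-level verdicts with right-skewed dollar awards and
# asks how much of the award variance the verdicts explain on the raw $ scale.
import numpy as np

rng = np.random.default_rng(1)
n_panels = 400                                   # roughly the number of SSK verdicts

punishment = rng.integers(0, 9, size=n_panels)   # 0-8 punishment-level verdicts
# dollar awards rise with the punishment verdict, with heavy lognormal-style noise
awards = np.exp(10 + 1.1 * punishment + rng.normal(0, 1.5, size=n_panels))

# ordinary least-squares fit of raw awards on punishment level
slope, intercept = np.polyfit(punishment, awards, 1)
residuals = awards - (intercept + slope * punishment)
print(f"R^2 on the raw dollar scale: {1 - residuals.var() / awards.var():.2f}")
```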

It was in response to my report of the results of this simulation that Schkade sent me the data.

SSK's actual results turned out to be even more amenable to systematic explanation than my simulated ones. The highly skewed punitive awards formed a nicely behaved normal distribution when log-transformed.

A model that regressed the transformed results against SSK’s 400-odd punishment-level verdicts explained some 67% of the variance in the punitive awards. That’s an amount of variance explained comparable to what observational studies report when real-world punitive damages are regressed on compensatory damage judgments (Eisenberg, T., Goerdt, J., Ostrom, B., Rottman, D. & Wells, M.T., The Predictability of Punitive Damages, Journal of Legal Studies 26, 623-661 (1997)).
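For readers who want to see the mechanics, here is a hedged sketch of that kind of regression in Python. The file name and column names are hypothetical stand-ins (the SSK data are not posted with this entry), so the point is the procedure, log-transforming the skewed awards and regressing them on the 0-8 punishment verdicts, rather than reproducing the 67% figure.

```python
# Illustrative only: 'ssk_awards.csv' and its column names are hypothetical
# stand-ins for the SSK data, which are not posted with this entry.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ssk_awards.csv")          # assumed columns: punishment (0-8), award ($)
df = df[df["award"] > 0].copy()             # log transform requires positive awards
df["log_award"] = np.log(df["award"])

# regress log-transformed awards on the punishment-level verdicts
model = smf.ols("log_award ~ punishment", data=df).fit()
print(f"variance in log awards explained: R^2 = {model.rsquared:.2f}")
```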

Schkade made this important observation when I shared these analyses with him:

You’re right that the awards do seem more predictable with a log transformation, from a statistical point of view.  However, the regression homoscedasticity assumption is imposed on log $.  The problem is that in reality where $ are actually paid you have to unlog the data and then the error variance increases in proportion to the estimate.  Worse still, this error is asymmetric and skewed toward higher awards.

So e.g. if the predicted punishment verdict is >=4 you must tell your client that the range of awards they face is exp(10) ~ $22,000 to exp(20) ~ $500,000,000.  This range is so vast that it is pretty much useless for creating an expected value for planning purposes.  In other words, $ payments are made in the R^2 == .10 world.  Of course if you factor in estimation error in assessing the punishment verdict itself, this range is even wider, and the effective R^2 even lower.

This is a valid point, to be sure.
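To see its force, run Schkade's back-of-the-envelope numbers. The snippet below does nothing but the arithmetic on the log-scale endpoints he quotes above; everything else in his argument is taken as given.

```python
# Back-transforming a log-dollar prediction interval into dollars, using the
# endpoints (10 and 20 on the natural-log scale) from Schkade's example above.
import numpy as np

log_low, log_high = 10.0, 20.0                    # interval on the log-$ scale
low, high = np.exp(log_low), np.exp(log_high)     # back to dollars

print(f"low end:  ${low:,.0f}")                   # ~ $22,000
print(f"high end: ${high:,.0f}")                  # ~ $485,000,000
print(f"the dollar range spans a factor of {high / low:,.0f}")   # ~ 22,000x
```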

But I still think it understates how much more informative a statistically sophisticated, experienced lawyer could be about a client’s prospects if that lawyer used the information that the SSK data contain on the relationship between the 0-8 punishment-level verdicts and the punitive-damage judgments.

Ignoring that information, SSK describe a colloquy between a “statistically sophisticated and greatly experienced lawyer” and a client seeking advice on its liability exposure. Aware of the simple distribution of punitive awards in the SSK experiment, the lawyer advises the client that the “median” award in cases likely to return a punitive verdict is “$2 million” but that “there is a 10% chance that the actual verdict will be over $15.48 million, and a 10% chance that it will be less than $0.30 million” (SSK p. 1158).

But if that same “statistically sophisticated and experienced” lawyer could estimate that the client’s case was one likely to garner the average punishment-level verdict of “4,” she could narrow the expected punitive award range a lot more than that.  In such a situation, the median award would be $1 million, and the 10th and 90th percentile boundaries $250,000 and only $5,000,000, respectively.

To be sure that’s still a lot of variability, but it’s a lot less—an order of magnitude less—than what one would project without making use of the data’s information about the relationship between the punishment-level verdicts and the punitive damage awards.
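Putting the two sets of figures side by side makes the comparison easy to see; the numbers below come straight from the figures quoted above, not from any new estimate.

```python
# The 10th-90th percentile award ranges discussed above: unconditional
# (SSK p. 1158) vs. conditional on a predicted punishment-level verdict of 4.
ranges = {
    "unconditional":           (0.30e6, 15.48e6),
    "punishment verdict of 4": (0.25e6, 5.00e6),
}

for label, (lo, hi) in ranges.items():
    print(f"{label:>24}: ${lo:,.0f} to ${hi:,.0f} (width ${hi - lo:,.0f})")
```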

Is it still too much?  Maybe; that’s a complicated normative judgment.

But having been generously served my curiosity-sating helping of data, I can attest that there is indeed a lot of digestible variance in the SSK results after all, the weird dynamics of their juror subjects notwithstanding.

It should also be abundantly clear that the size of Schkade’s motivation to enable others to learn something about how the world works is as big as any award made by SSK’s 400 mock jury panels.  I am grateful for his virtual “guest appearance” in this on-line course!


Reader Comments (4)

Schkade seems to panic at the prospect of damages needing to be estimated as, e.g., "from tens of thousands to hundreds of millions" of dollars, but my impression is that a lognormal distribution of damages really isn't something that an actuary would panic at. A lognormal distribution of damages is actually fairly well constrained, and as long as the risk were insurable for other reasons, is probably itself insurable. Loss functions that are Pareto-distributed, like many natural disasters, are even heavier-tailed, and insurance products exist for those as well. So I think that the observed variance is quite operationally tractable.

October 24, 2016 | Unregistered Commenter dypoon

@Dypoon-- I am inclined to agree that the unpredictability of awards is overstated, but in fact in many states (e.g., Calif.) insurance for punitive damages is legally prohibited. There's still the self-insurance option for big actors. For small ones, being unable to pay "hundreds of millions" makes the prospect of being sued for that amount less likely. Of course, one can object to wildly divergent awards on grounds independent of their unpredictability -- like their inherent unfairness &, assuming in fact they are excessive in relation to expected harm, their inefficiency.

October 25, 2016 | Registered Commenter Dan Kahan

"In some states, insurance for punitive damages is legally prohibited."

Did not know this! Thanks! Can you legally use put-call parity to engineer equivalent coverage?

Example: let's say you expect you may be fined gigantic $$$. If buying insurance for it were legal, the standard arrangement would be to pay the insurer a premium, and if the judgment came down heavy, they would pay. If light, they just keep the premiums. You have paid someone for their liquidity.

But if that's illegal, then instead, you go to the bank and say, "I need to be pre-cleared for a certain gigantic line of credit and am willing to pay on the spot for access to this money later." If judgment comes down heavy, then you take out the loan and make payments on it, as pre-agreed. If it's light, the bank has already been paid your spot payment. Either way, you're purchasing liquidity.

You're saying the first is illegal, okay; but is the second illegal? and if so, why?

October 28, 2016 | Unregistered Commenter dypoon

@dypoon--

I'm not sure whether the device you mention is lawful. But obviously the lender will need to factor the borrower's risk of liability for a huge award into its assessment of the borrower's creditworthiness, taking into account priority of claims in the event the award is big enough to bankrupt the borrower. Probably if your assets are large enough to secure a loan like that before any award, they are large enough to secure the loan after any award is entered.

October 31, 2016 | Registered Commenter Dan Kahan
