Sunday, November 1, 2015

Weekend update: talking it up & listening too

Reports on road shows:

1. Carnegie Mellon PCR series:

Great event! Passionate, curious, excited audience eager to contribute to the project of fixing the science communication problem.

This is the future of the Liberal Republic of Science: a society filled with culturally diverse citizens whose common interest in enjoying the benefit of all the knowledge their way of life makes possible is secured by scientists, science communication professionals, educators, and public officials using and extending the "new political science" of science communication.

 

Slides here.

2. 10th Annual Conference on Empirical Legal Studies:

I did a presentation on "'Ideology' or 'Situation Sense'?," the CCP study on the interaction of cultural worldviews and legal reasoning in members of the public, law students, lawyers & judges, respectively. Lots of great feedback.

Slides here.

A small selection of other papers definitely worth taking a look at (a very frustrating element of a conference like this is having to choose between concurrent sessions featuring really interesting stuff):

Chen, Moskowitz & Shue, Decision-Making Under the Gambler's Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires
Thorley, Green et al., Please Recuse Yourself: A Field Experiment Exploring the Relationship between Campaign Donations and Judicial Recusal
MacDonald, Fagan & Geller, The Effects of Local Police Surges on Crime and Arrests in New York City
Ramseyer, Nuclear Power and the Mob: Extortion and Social Capital in Japan
Scurich, Jurors’ Presumption of Innocence: Impact on Cumulative Evidence Evaluation and Verdicts
Sommers, Perplexing Public Attitudes Toward Consent: Implications for Sex, Law, and Society
Robertson, 535 Felons? An Empirical Investigation into the Law of Political Corruption 
Baker & Malani, Do Judges Really Care About Law? Evidence from Circuit Split Data 

 


Reader Comments (16)

"What additional studies can we do that..."

But then you just get into an argument over whose additional studies are using a correct method.

Although sometimes there's an argument about whether there even *is* a correct method. What if one side argues that there isn't one?

November 5, 2015 | Unregistered Commenter NiV

@NiV:

Then they are just buttheads.

Scholarly conversation is for people who insist on valid forms of empirical study but who don't think any single method is the "gold standard," etc.

November 5, 2015 | Unregistered Commenter dmk38

Sure, but what counts as a "valid form of empirical study"? People disagree.

For example, do experiments on computer models (already known not to match observations) count as "empirical study"?

One side says model outputs aren't "empirical", and that the scientific method demands that if a hypothesis doesn't match observation then it's wrong. The other side insists that models *embody* our current best knowledge of the conclusions of empirical study, and that while wrong in some details the models are "good enough". Or at the least, the best we've got available, and since politics means we *have* to have an answer to the scientific question, an imperfect answer is better than none.

Both sides are right, according to their own standards and values. On the one hand, a model clearly isn't "empirical", but on the other it's equally clear that we don't yet have access to the sort of empirical observation that would resolve the issue (and which may not be possible, requiring controlled experimentation with the climate system), so if you're going to provide any sort of answer you have no choice but to use unvalidated models. What else are we supposed to do? Demanding rigorous empirical evidence about every part of the model sets an impossible standard. All models are wrong, but some are useful.

It's the difference between "Can I believe this?" versus "Must I believe this?"

So you get one side proposing that we don't trust any models that don't fit observations, and keep working until one has been developed that demonstrably does, and only then claim to have answered the question. No model is perfect, but you have to quantify the uncertainties to show that your conclusion is safe despite them, to claim it as a 'scientific' conclusion. And you get the other side proposing further experiments on the models we've currently got, claiming ever-improving certainty as both models and experiments improve. Where rigorously quantifying uncertainties is impossible, "expert judgement" can be substituted. Each side disagrees fundamentally with the 'correctness' of the other's approach.

It's a complicated issue generally - not just with regard to the politically fraught area of climate science. Even "gold standard" science has a lot more fuzziness than most people seem to think. People sometimes compare climate change to Newton's laws of gravity, and yet even Newton himself thought they were probably incomplete/wrong, and we now know there are huge gaps and ambiguities in them. Physicists argued for years over the subtleties. 'Validity' of methods isn't a binary yes-or-no proposition, but a spectrum. What standards do we set, who sets them, and on what basis?

November 6, 2015 | Unregistered Commenter NiV

==> "One side says model outputs aren't "empirical", and that the scientific method demands that if a hypothesis doesn't match observation then it's wrong. "

Then all methods of empirical study are wrong. Because all empirical study utilizes models where outputs are not 100% consistent with the underlying hypothesis.

November 6, 2015 | Unregistered Commenter Joshua

I don't understand. How so?

November 6, 2015 | Unregistered Commenter NiV

Imperfect measurements and unrecognized confounding variables. Or the system is so simplified - in order to make measurements more exact by sharply limiting the possibility of confounding variables - that observations no longer resemble (or "model") the actual, much more complex, real-world case.

November 6, 2015 | Unregistered Commenter Gaythia Weis

Imperfect measurements and unknown confounding variables are included in the hypothesis as standard. You just hypothesise that the measurement errors have a spread smaller than some set bound.

When the measurement errors are small enough, they're often neglected. (They complicate things without adding value.) But in situations where they're large compared to the things being observed, the measurement apparatus has to be included as part of the physics of the experimental situation being observed. Where there are elements of the experimental situation that are unknown - whether in the measurement apparatus or the system being measured - the hypothesis is instead constructed to place bounds or constraints on it.

That's what I mean by "quantifying uncertainties". No measurement process is perfect - we cannot, for example, measure the average surface temperature of the globe exactly because we can't place a thermometer at every point of the surface, and thermometers are imperfect anyway. But if I can show that temperature varies geographically within certain bounds, and then take a sample, I can place error bounds on the result. I can then offer as my hypotheses either "stable temperature plus less than x units of measurement error" or "rising temperature plus less than x units of measurement error" as the options to distinguish between, and gather empirical evidence of one over the other. I don't need to know the exact magnitude or distribution of the measurement error, I only have to be confident that it is significantly smaller than the temperature rise I'm trying to observe.
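
To make that concrete, here's a toy sketch in Python (all numbers made up for illustration - the point is the logic of bounding the error, not any actual climate data):

```python
# Toy illustration: we don't need the exact error distribution,
# only an upper bound on its magnitude.
import random

random.seed(1)

ERROR_BOUND = 0.1   # hypothesised bound on measurement error (degrees)
TREND = 0.02        # degrees/year, used only to generate the fake series

# Fake observations: a rising signal plus noise within the hypothesised bound
obs = [TREND * year + random.uniform(-ERROR_BOUND, ERROR_BOUND)
       for year in range(50)]

observed_rise = obs[-1] - obs[0]

# Worst case, error alone can shift the end-to-end difference by 2 * bound
if abs(observed_rise) > 2 * ERROR_BOUND:
    print(f"Rise of {observed_rise:.2f} exceeds what error alone could produce")
else:
    print("Change is within the error bound; the hypotheses can't be distinguished")
```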

Even so, quantifying measurement error is far from trivial, and when it comes to climate change I'm not convinced they've managed it yet. (Nor are climate scientists, if they're being honest.) But that doesn't mean it's impossible in principle.

November 7, 2015 | Unregistered Commenter NiV

What climate modelers are doing is approaching an absolutely correct answer by increasing points measured and variables controlled. My point is that this process always relies on models, since they are never going to get there. The models steadily improve.

So we started with a situation over 50 years ago in which climate scientists started to gain traction in warning politicians that global warming was a serious issue. See for example: http://www.theguardian.com/environment/climate-consensus-97-per-cent/2015/nov/05/scientists-warned-the-president-about-global-warming-50-years-ago-today. So at that time, the scientists didn't really have the means to project future global temperature changes. Nor did they take all possible confounding variables into account. They had what they felt was enough evidence to change course with regards to fossil fuel use. Now, 50 years later we've amassed considerably more data in support of this hypothesis.

I think that the case for climate change is incredibly strong. Still, I think that it is impossible, in principle, to completely quantify the measurement error.

But NiV says above: "I don't need to know the exact magnitude or distribution of the measurement error, I only have to be confident that it is significantly smaller than the temperature rise I'm trying to observe." Since that can be extrapolated to all of the other numerous measurements that go into climate models, or into analyses of both the historic and geologic records, I'm not sure anymore what he is arguing about with Joshua above.

My point is that it is always a somewhat imperfect model. Science is always a work in progress. People are still best advised to take action based on the best available knowledge.

November 8, 2015 | Unregistered Commenter Gaythia Weis

"What climate modelers are doing is approaching an absolutely correct answer by increasing points measured and variables controlled. My point is that this process always relies on models, since they are never going to get there. The models steadily improve."

Agreed. The question is over whether you should claim to know the answer before you've shown that the models are accurate enough to answer it.

"They had what they felt was enough evidence to change course with regards to fossil fuel use."

Feelings?!

"Now, 50 years later we've amassed considerably more data in support of this hypothesis."

Which is where? Because even after many years of studying this subject, I still don't know.

"I think that the case for climate change is incredibly strong."

Why?

Have you looked at it yourself, or are you taking someone's word for it? How did you decide whose word to take?
People keep on saying this mass of evidence exists, but nobody I talk to seems to have actually seen it themselves, or know where it is. It's just what they've been told.

"Still I think that it is impossible, in principle to completely quantify the measurement error. "

Depends whether you mean "quantify" as in "set an upper bound on" or "determine exactly". I agree you can't do the latter, but why can't you do the former? And if you can't, how can you have "incredibly strong" evidence?

Suppose I claim the length of the Emperor of China's nose is 23 inches, give or take some unquantified uncertainty. My measurement process might be plus or minus an inch, or plus or minus a foot, or plus or minus a billion miles. "Unquantified" means I don't know. It could be any of those, or none of them. So regarding the question "Is the Emperor of China's nose more than 20 inches long?" we have no way to answer. If we can't even be sure we're within a billion miles of the right answer, our 'measurement' tells us absolutely nothing!

On the other hand, if I can show that my measurement process is accurate to better than 1 inch, although I don't know whether it's half an inch, or a tenth of an inch, or a billionth of an inch, then I *can* answer the scientific question, even though I don't know the exact accuracy. The measurement uncertainty *is* quantified, because we've set an upper bound on the error, and this bound means that the measurement accuracy is good enough to distinguish 23 inches from 20 inches. (It wouldn't be good enough to distinguish it from 22.8 inches, though. 'Validity' depends on the question you're asking.)
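
The same logic, as a few lines of Python using the nose numbers above:

```python
# A measurement answers "is X above the threshold?" only when the error
# bound is smaller than the gap between the measurement and the threshold.
def resolves(measured, threshold, error_bound):
    return abs(measured - threshold) > error_bound

print(resolves(23, 20, 1))        # True: 23 +/- 1 inch clearly exceeds 20
print(resolves(23, 22.8, 1))      # False: +/- 1 inch can't rule out 22.8
print(resolves(23, 20, 6.3e13))   # False: +/- a billion miles (in inches) tells us nothing
```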

Quantifying the uncertainties, and showing that the measurement process is accurate enough to answer the question, is absolutely essential in science. It has to be demonstrated empirically, and the empirical evidence validating your measurement process is part of the empirical evidence supporting your conclusions.

Until then, the only scientifically justifiable answer you can give is "we don't know".

"I'm not sure anymore what it he is arguing about with Joshua above."

I was alluding to this: https://www.youtube.com/watch?v=b240PGCMwV0
Joshua was (apparently?) arguing that Feynman was wrong, because observation itself depends on models of the observation process. (True. It does.) I was just explaining how science deals with that issue.

"My point is that it is always a somewhat imperfect model. Science is always a work in progress."

It is. But that doesn't mean there's no difference between a mature science with accurate, well-validated models and a young science that's just in the initial stages of exploration. Until the models have been validated, they can't give reliable answers to questions.

You can always use unvalidated models to answer questions, but that's not 'Science' and doesn't have science's characteristic authority. It's like doing fortune telling by spinning test tubes on a superconducting Ouija board while wearing a white coat and shouting 'Science!'

"People are still best advised to take action based on the best available knowledge."

Sometimes the best available answer is: "we don't know".

If I throw these two dice, what will their total be? ~ We don't know. ~ But if you had to guess anyway? ~ Well, I'd guess '7' was most likely, if the dice are fair. ~ Are they fair? ~ We don't know. Recent observations don't quite match what we'd expect to see if they were. But it's the only model we've got. ~ OK, we'll gamble the economy of the free world, trillions of dollars of other people's money, on it coming up 7.

Does that sound sensible? If you know the only answer available is wrong, but is the best you've got?
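
(For what it's worth, the arithmetic behind guessing '7' - a quick check in Python, assuming fair dice:)

```python
# Totals of two fair dice: 7 is the single most likely outcome, yet it
# still comes up only 6 times in 36 - you'd lose that bet 5 times out of 6.
from collections import Counter

totals = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in sorted(totals):
    print(total, f"{totals[total]}/36")
```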

November 8, 2015 | Unregistered Commenter NiV

Dan, was the following meant as a "This" referring to CMU, or just a good idea in general? "This is the future of the Liberal Republic of Science: a society filled with culturally diverse citizens". When I first saw it I thought of an article which my acquaintance Pramila Jayapal (WA State Senator, Seattle) strongly feels is a must-read: http://www.nytimes.com/2015/11/01/magazine/has-diversity-lost-its-meaning.html?smprod=nytcore-iphone&smid=nytcore-iphone-share&_r=0. "What's so irritating about the recent ubiquity of the word 'diversity': It has become both euphemism and cliché, a convenient shorthand that gestures at inclusivity and representation without actually taking them seriously." What I know of CMU, its surrounding neighborhoods, and the more distant commute into the City suburbs is that there are a lot of differences between communities - Shadyside, Squirrel Hill, Oakland, Liberty and Wilkinsburg, Mt Lebanon and even North Versailles - but not that much interaction. Richard Florida, from his time at CMU, has some pretty astute analysis of the problems Pittsburgh created for itself with urban renewal and stadium building. And I especially like his response to Thomas Friedman's "The World is Flat", which is about parallel economic communities: "The World is Spiky" http://www.theatlantic.com/past/docs/images/issues/200510/world-is-spiky.pdf. We can't have a liberal republic in segregated isolation.

November 8, 2015 | Unregistered Commenter Gaythia Weis

NiV, I assume, after much experience in comment discussions with you, that you are not in need of hand holding while walking through the IPCC website, http://ipcc.ch/, and drilling down through the layers of reports http://ipcc.ch/report/ar5/wg1/, to arrive at relevant research papers, and their data sets. Our knowledge of climate change does not hinge on the accuracy of our various mechanical thermometers; there is a whole host of physical, biological and chemical measurements involved.

Sure, my own direct experience is either too regional (Western US Water Supply, Pacific Northwest Ocean Acidification, home gardening and time of first frost) or on completely unrelated topics (numerical modeling of real world systems, laboratory and industrial gases, analytical measurement methods and related statistical evaluation). We're well past the time when any one human could lay claim to wrapping their own mind around all the science humans claimed to know. We have to rely on our ability to evaluate the experts, or on our ability to astutely select those guides whose evaluation of the experts we rely on. What I think is convincing about the hypothesis of climate change is the diversity of studies from different fields of science which converge in support of it, as well as the liveliness of genuine skeptics and peer reviewers who contribute to refining methods and opening up new areas of research.

The economic cost of doing nothing to change our current trajectory is quite high.

November 9, 2015 | Unregistered Commenter Gaythia Weis

@Gaythia-- not CMU, but the project of PCR

November 9, 2015 | Registered Commenter Dan Kahan

"NiV, I assume, after much experience in comment discussions with you, that you are not in need of hand holding while walking through the IPCC website, http://ipcc.ch/, and drilling down through the layers of reports http://ipcc.ch/report/ar5/wg1/, to arrive at relevant research papers, and their data sets."

No indeed. I'm very familiar with it.

But the IPCC has a method for expressing uncertainty that will be unfamiliar to most casual readers. The uncertainty in a finding is split into the "likelihood", which is the probability of a conclusion being true given that the current methods, models, data, and physical understanding are correct, and the "confidence", which is the expert's judgement as to whether the current methods, models, data, and physical understanding are correct.

There's an explanation of the distinction in AR4, although not a very good one. The explanation in AR5 is even less clear, but if you want to see it, it's in section 1.4 of WG1.

So you have to be extremely careful when reading the IPCC report to notice whether they are talking about "likelihood" or "confidence". They use distinct language for each.

Now, the question of whether observed climate change is anthropogenic is discussed in chapter 10 of AR5, the executive summary of which contains a number of very strong statements in support of the connection. But note that all of these statements are phrased in terms of the likelihood, i.e. the probability calculated on the assumption that the climate models are correct. They say this explicitly: "Robustness of detection and attribution of global-scale warming is subject to models correctly simulating internal variability." The problem is, as everyone knows, they don't. Virtually all the models predict that pauses in warming of the length of the currently observed hiatus should not occur, with greater than 95% or in some cases greater than 99% confidence! "It doesn't make a difference how beautiful your guess is. It doesn't make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it's wrong."

The IPCC immediately go on to discuss this. "The observed recent warming hiatus, defined as the reduction in GMST trend during 1998–2012 as compared to the trend during 1951–2012, is attributable in roughly equal measure to a cooling contribution from internal variability and a reduced trend in external forcing (expert judgement, medium confidence)."

Note those words in the parentheses at the end - they're very important! The statement concludes that the failure of the models to match observation can be explained, but they have only medium confidence in this (interpreted in the AR4 lexicon as roughly 50%), and the evidence for it isn't empirical, quantified observations, but "expert judgement". That is to say, their opinions are the evidence.

AR4 was even more explicit about this. They say in chapter 9: "The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change." I think it was Ben Santer (an IPCC author) who said of all this: "It's unfortunate that many people read the media hype before they read the chapter. I think the caveats are there. We say quite clearly that few scientists would say the attribution issue was a done deal."

So that, so far as I know, is the state of the science. The models do indeed show that the observed rate of warming cannot be explained by the models without the influence of rising CO2. They also show that the hiatus cannot be explained by the models with or without the influence of rising CO2. If the models are right, the former statement implies with high *likelihood* that much/most of the warming is caused by CO2. But *confidence* that the models are not falsified by the hiatus is only "medium", and based not on quantified empirical evidence but "expert judgement"!
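
Put crudely in probability terms (my own back-of-envelope framing in Python, not anything the IPCC itself computes):

```python
# Crude decomposition (my own illustration, not IPCC methodology):
# P(conclusion) = P(conclusion | framework ok) * P(framework ok)
#               + P(conclusion | framework wrong) * P(framework wrong)
# Taking the last term as unknown (>= 0), the first product is a floor.
likelihood = 0.95   # "extremely likely", conditional on the models being right
confidence = 0.50   # "medium confidence" read as roughly 50%, per the AR4 lexicon

floor = likelihood * confidence
print(f"Unconditional probability is somewhere above {floor:.0%}, "
      "depending on what happens when the framework is wrong")
```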

The caveats are indeed there (and I'm sure in the distant future they're going to hang their hats on that), but they're written in a way that only another scientist would notice them. They don't have solid quantified scientific evidence, and they know it.

Hence my question. I'm sure if the IPCC had the evidence, they'd point to it. They haven't. So how come everyone is so certain that there's this ever growing mountain of evidence without ever having seen it themselves?

It was Tom Wigley, former head of the CRU and very much on the consensus side, who said: "No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves." (The circumstances in which he said this are interesting - do go look at the context.) But it appears that a lot of scientists haven't. Isn't that something we need to be aware of?

"The economic cost of doing nothing to change our current trajectory is quite high."

Possibly so. The economic cost of doing what many people are proposing is arguably even higher.

Incidentally, I've no objection in principle to 'doing something'. If AGW was a genuine threat, an effective response would have to apply to everyone (as the Byrd-Hagel resolution points out), we would first go nuclear as fast as we could (like rapidly building hundreds of reactors...), we would encourage lower-carbon fossil fuels like unconventional gas (fracking) as a stepping stone towards it, and we'd use markets to fund and motivate it. We would also take the quality of the science more seriously. But the politicians of the climate movement don't, instead resisting any technology that would work and only backing technologies that don't, while proposing partial responses that would have no effect on the climate but would serve primarily to redistribute wealth from the developed nations to the developing ones.

If climate change was really a problem, then the approach of the climate campaigners would be a serious concern - squandering limited public tolerance on useless trivia like banning incandescent lightbulbs or building windmills while completely ignoring or opposing any practical solutions. If I was a believer, I'd be extremely angry and quite scared about that.

November 9, 2015 | Unregistered Commenter NiV

==> "The economic cost of doing what many people are proposing is arguably even higher."

Sure. Arguably. And monkeys may arguably fly out of Dan's butt in any given minute.

In the meantime, what's your expert confidence in the likelihood, and on what do you base your confidence?

I would think that to get some handle on that, you'd need to have a reliable and validated economic modeling methodology for your projections, which of course, would necessarily need to account for the long-term ratio of negative and positive externalities.

In lieu of you getting back to me with those validated data (and of course, your validated model), I'll go with a different approach.

Instead of saying that the cost of what people are proposing IS "arguably" higher (which I think is pretty much a useless framework - after all, we could say that about anything), I'll say that the cost of what people are proposing may be higher, or it may be lower, than the impact of non-action. Accordingly, it seems to me that it makes sense not to expect models to prove anything, but to use them to inform us in our attempts to assess policies for risk avoidance and risk mitigation. After all, all knowledge comes from problematic modeling output, as all knowledge relies on the imperfect modeling processes that take place in our imperfect brains.

November 10, 2015 | Unregistered Commenter Joshua

==> "Incidentally, I've no objection in principle to 'doing something'. If AGW was a genuine threat, an effective response would have to apply to everyone (as the Byrd-Hagel resolution points out), we would first go nuclear as fast as we could (like rapid-building hundreds of reactors...), we would encourage lower-carbon fossil fuels like unconventional gas (fracking) as a stepping stone towards it, and we'd use markets to fund and motivate it."

It's always interesting to watch "skeptics" in their approach to the law of unintended consequences.

November 10, 2015 | Unregistered Commenter Joshua

"Sure. Arguably. And monkeys may arguably fly out of Dan's butt in any given minute."

If you say so Joshua.

Sounds rather uncomfortable...

"In the meantime, what's your expert confidence in the likelihood, and on what do you base your confidence?"

I only said 'arguable' because I knew you'd argue about it, and therefore inserted the necessary caveat, in the hopes that would stop you blowing up. Didn't work, though, did it? :-)

By 'arguable', I was referring to the "Copenhagen consensus" instigated by Bjorn Lomborg, of course.
http://www.copenhagenconsensus.com/

Economic modelling is an even less reliable art form than climate modelling. But even the Stern report, which is pretty much state-of-the-art for the consensus side, wasn't able to make a strong argument for action without cheating on discount rates. In my country, we already pay more in green taxes than the carbon tax rate he was proposing - and his costs are probably a gross overestimate (because he's overestimating the amount of warming, overestimating the damage caused by taking the most alarmist opinions, and because of the aforementioned non-standard discount rate). So if people are still saying we need to do more (and they are) then the action those people are proposing by definition costs more than climate change will. It's higher than the amount we already pay which is higher than the rate Lord Stern said we ought to pay to balance the effects of climate change, which are almost certainly higher than the true costs of climate change.
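
To see how much the discount rate alone drives these numbers, a toy calculation (illustrative figures only; Stern's effective rate was around 1.4%, while more conventional analyses use something nearer 3-5%):

```python
# Present value of a fixed cost in 2100 under different discount rates.
# The choice of rate changes the answer by more than an order of magnitude.
def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

damage_in_2100 = 1000   # arbitrary units of damage, ~85 years out from 2015
for rate in (0.014, 0.03, 0.05):
    pv = present_value(damage_in_2100, rate, 85)
    print(f"discount rate {rate:.1%}: present value = {pv:.0f}")
```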

So yes, I'm pretty certain. What's your basis for thinking Dan's butt-monkey medical problem is a more likely outcome than me possibly having researched what I'm saying?

"It's always interesting to watch "skeptics" in their approach to the law of unintended consequences."

Which 'unintended' consequences are you talking about?

I actually *intend* to encourage humanity to build massive nuclear power plants, frack, and move to a more market-based style of government. Those are all policies I like politically - just as conventional climate campaigners like redistributive authoritarian government solutions and anti-technology Luddism.

How did you know what I 'intended', anyway? Mind-reading again, Joshua...? :-)

November 10, 2015 | Unregistered Commenter NiV
