Tuesday, August 13, 2013

So what is "the best available scientific evidence" anyway?

A thoughtful person in the comment thread emanating (and emanating & emanating & emanating) from the last post asked me a question that was interesting, difficult, and important enough that I concluded it deserved its own post.

The question

... in your initial post you mention "best available evidence" no less than six times. And you may also have reiterated the phrase in some of your comments.

Perhaps you have identified your criteria for determining what constitutes "best available evidence" elsewhere; but for the benefit of those of us who might have missed it, perhaps you would be kind enough to articulate your criteria and/or source(s) for us. 

It is a rather nebulous phrase; however, I suppose it works as a very confident, if not all-encompassing, modifier. But as far as I can see, your post doesn't tell us specifically what "evidence" you are referring to (whether "best available" or not!).

Is "best available evidence" a new, improved "reframing" of the so-called "consensus" (that is not really holding up too well, these days)? Is it simply a way of sweeping aside the validity of any acknowledgement/discussion of the uncertainties? Or is it something completely different?!

My answer:

Well, to start, I most certainly do think there is such a thing as "best available scientific evidence." Sometimes people seem to think “cultural cognition” implies that there “is no real truth” or that it is "impossible for anyone to say because it all depends on one's values" etc. How absurd!

But I certainly don't have a set of criteria for identifying the “best available scientific evidence.” Rather, I have an ability, one that is generally reliable but far from perfect, for recognizing it.

I think that is all anyone has—all anyone possibly could have that could be of use to him or her in trying to be guided by what science knows.

For sure, I can identify a bunch of things that are part of what I'm seeing when I perceive what I believe is the best available scientific evidence.  These include, first and foremost, the origination of the scientific understanding in question in the methods of empirical observation and inference that are the signature of science's way of knowing.

[Image caption: Basic technique for recognizing the best available scientific evidence]

But those things I'm noticing (and there are obviously many more than that) don't add up to some sort of test or algorithm. (If you think it is puzzling that one might be able reliably to recognize things w/o being able to offer up any set of necessary and sufficient conditions or criteria for identifying them, you should learn about the fascinating profession of chick sexing!)

Moreover, even the things I'm seeing are usually being glimpsed only 2nd hand.  That is, I'm "taking it on someone's word" that all of the methods used are the proper and valid ones, and have actually been carried out and carried out properly and so on. 

As I said, I don't mean to be speaking only for myself here.  Everyone is constrained to recognize the best available scientific evidence.

That everyone includes scientists, too. Nullius in verba--the Royal Society motto that translates to "take no one's word for it"--can't literally mean what it says: even Nobel Prize winners would never be able to make a contribution to their fields -- their lives are too short, and their brains too small--if they insisted on "figuring out everything for themselves" before adding to what's known within their areas of specialty.

What the motto is best understood as meaning is don't take the word of anyone except those whose claim to knowledge is based on science's way of knowing--by disciplined observation and inference-- as opposed to some other, nonempirical way grounded in the authority of a particular person's or institution's privileged insight.

Amen! But even identifying those people whose knowledge reflects science's empirical way of knowing requires (and always has) a reliably trained sense of recognition!

So no definition or logical algorithm for identification -- yet I and you and everyone else all manage pretty well in recognizing the best available scientific evidence in all sorts of domains in which we must make decisions, individual and collective (and even in domains in which we might be able to contribute to what is known through science).

I find this recognition faculty to be a remarkable tribute to the rationality of our species, one that fills me with awe and with a deep, instinctive sense that I must try to respect the reason of others and their freedom to exercise it.

I understand disputes like climate change to be a consequence of conditions that disable this remarkable recognition faculty.

Chief among those is the entanglement of risks & other policy-relevant facts in antagonistic cultural meanings.

This entanglement generates persistent division, in part b/c people typically exercise their "what is known to science" recognition faculty within cultural affinity groups, whose members they understand and trust well enough to be able to figure out who really knows what about what (and who is really just full of shit). If those groups end up transmitting opposing accounts of what the best available scientific evidence is on a particular policy-relevant fact, those who belong to them will end up persistently divided about what expert scientists believe.

Even more important, the entanglement of facts with culturally antagonistic meanings generates division b/c people will often have a more powerful psychic stake in forming and persisting in beliefs that fit their group identities than in "getting the right answer" from science's point of view, or in aligning themselves correctly w/ what the "best scientific evidence" is.

After all, I can’t hurt myself or anyone else by making a mistake about what the best evidence is on climate change; I don’t matter enough as consumer, voter, “big mouth” etc. to have an impact, no matter what "mistake" I make in acting on a mistaken view of what is going on.

But if I take the wrong position on the issue relative to the one that predominates in my group, I might well cost myself the trust and respect of many on whose support I depend, emotionally, materially, and otherwise.

The disablement of our reason – of our ability to recognize reliably (or reasonably reliably!) what is known to science --not only makes us stupid. It makes us likely to live lives that are much less prosperous and safe. 

It also has the ugly consequence of making us suspicious of one another, and anxious that our group, our identities, are under assault, and our status put in jeopardy by the enactment of laws that, on their face, seem to be about risk reduction but that are also regarded as symbols of the contempt that others have for our values and ways of life.

Hence, the “pollution” of the “science communication environment” with these toxic cultural meanings deprives us of both of the major benefits of the Liberal Republic of Science: knowledge that we can use to improve our lives, individually and collectively; and the assurance that we will not, in submitting to legal obligation, be forced to acquiesce in a moral or political orthodoxy hostile to the view of the best life that we have the right as free and reasoning beings to choose for ourselves!

Well, I want to know, of course, what you think of all this.

But first, back to the questions that motivated the last post.

To answer them, I hope I've now shown you, you won't have to agree with me about what the "best available scientific evidence" is on climate change.  

Indeed, the science of science communication doesn't presuppose anything about the content of the best decision-relevant scientific evidence.  It assumes only two things: (1) that there is such a thing; and (2) that the question of how to enable its reliable apprehension by people who stand to benefit from it admits of and demands scientific inquiry. 

But here goes:

Climate skeptics (or the ones who are acting in good faith, and I fully believe that includes the vast majority of ordinary people -- 50% of them pretty much -- in our society who say they don't believe in AGW or accept that it poses significant risks to human wellbeing) believe that their position on climate change is based on the best available scientific evidence -- just as I believe mine is!

So: how do they explain why so many of their reasonable fellow citizens reject their view of what the best evidence on climate science is?

And what do they think should be done?

Not about climate change! 

About the science communication problem--by which I mean precisely the influences that are preventing us, as free reasoning people, from converging on the best available scientific evidence on climate change and a small number of other consequential issues (nuclear power, the HPV vaccine, the lethality of cats for birds, etc)? Converging in the way that we normally do on so many other consequential issues--so many many many more that no one could ever count them!?

I hope they have answers that aren't as poor, as devoid of evidence, as the ones in the blog post I critiqued, in which a skeptic offered a facile, evidence-free account of how people form perceptions of risk--an account that turned on the very same imaginative, just-so aggregation of mechanisms that gets recycled, without the benefit (or hindrance) of empirical study, by those trying to explain why so many people don't accept scientific evidence on the sources and consequences of climate change.

I hope that they have some thoughts here, not because I am naive enough to think they -- any more than anyone on the other side -- will magically step forward and use what they know to dispel the cloud of toxic partisan confusion that is preventing us from seeing what is known here.

I hope that because I would like to think that once we get this sad matter behind us, and resume the patterns of trust and reciprocal cooperation that normally characterize the nonpathological state in which we are able to recognize the best available scientific evidence, there will be some better science of science communication evidence for us all to share with each other on how to negotiate the profound and historic challenge we face in communicating what's known to science within a liberal democratic society.

 


Reader Comments (85)

Would it be terribly, terribly unfair of me to summarise that more briefly as...?

A. I don't know how I know, I just do.
B. Trust the experts.

I know! I know! But if you're summarising your viewpoint for a new and sceptical audience, you probably need to put in a bit more background. Aren't you talking about how somebody without the time or training to look at the science for themselves might decide? You'll no doubt remember what Tom Wigley said about scientists endorsing scientific statements...

I know you don't want to get dragged into the tar pit of the climate debate itself, but given that you've indicated several times that you think the consensus constitutes the "best available scientific evidence," you might like to say a little more about how you came to that decision. Given that everyone's 'science recognition capability' is disabled on that particular question, shouldn't you suspend judgement? Isn't that what your science is telling you?

August 13, 2013 | Unregistered CommenterNiV

@NiV:

Terribly unfair? I dunno.

But cartoonish & uninteresting.

Being able to know what's known when it exceeds the capacity to reproduce it on one's own is a fascinating, big matter. Science can't be understood w/o an account of how this happens -- for as I said here & in the linked materials, no, I'm not "talking about how somebody without the time or training to look at the science for themselves might decide." I'm talking about climate scientists themselves on climate science -- along w/ scientists of every sort in every field.

But the world is filled w/ big interesting questions & one can't expect everyone to find interesting the same big things that one oneself finds mysterious and fascinating. But since you ask, I'll tell you that your restatement of my position is devoid, certainly, of all the things that I find interesting & am excited to share an appreciation of w/ others.


As for skeptics:

1. I have no more interest in introducing them to my thought than I have in introducing anyone else to it. But no less, either. I am happy to talk w/ anyone who shares my interests & who might have something interesting to say in return (there is a noise-signal ratio one has to deal w/, of course).

2. This post is no more in medias res for skeptics than for anyone else. I have tried to set forth an account that "fits" the scope of the conversation one might have w/ another in these circumstances -- & supplied references. How much "more background" should I supply in a blog post that I can expect very few will have the patience to read even at this length? In any case, who could possibly expect what I say here to be "complete," etc.? They should go to Wikipedia if they want that.

If people want more detail b/c this isn't enough -- of course & great! I have tried to point them down the path (my links are paths, but only to other paths) on which they might find more that is interesting.

Now on suspending judgment ... Why in the world should I?

I admit that I'm fallible; I admit that I'm doing the best I can & certainly relying on others, who themselves could be mistaken; I have already said that I resent conditions that I have reason to believe diminish the reliability of my usual abilities to figure things out--as they do the ability of others, whose mistakes, if I think they are making any, I assume are a consequence of exactly those same conditions.

But I believe what I believe, and expect others to respect my right to use and rely on my own reason as I see fit.

I have in fact tried to make clear to others who disagree with me that I most emphatically *don't* expect them to suspend any of their judgments on decision-relevant science as a condition of having a conversation w/ me about the science of how to communicate decision-relevant science.

Because the latter doesn't depend on the content of the former, asking them to engage in that sort of demeaning gesture -- say you really don't know what you know when you say you disagree w/ me about issues (a), (b), (c) & (d)!-- would mock the claim I made about the importance to me of respecting their capacity and freedom to reason for themselves.

August 13, 2013 | Registered CommenterDan Kahan

The human brain is unsurpassed at pattern recognition. That's how we tell a good paper from a bad one. We look at it and ask, "do I smell a rat?"

The human brain is equally unsurpassed at fooling itself. Nobody can convince me that black = white as well as I can.

Authors have every incentive to dress up a bad paper as good.

So, while I agree with Gelman that rigor without intuition is not worth much, I would add that intuition without rigor is not worth much either.

August 14, 2013 | Unregistered CommenterRichard Tol

50% of people accept the non-skeptic position?

More to your point, yes, there can be a science of science communication. Many may not want to talk about it because it is your area of expertise and, like normal, usual people, they believe there is information and they imbibe/consume it. They don't like the idea of an intermediary?

My college English teacher used to remind us (a science stream class) that the people who make lots of money are not the engineers and the doctors but the English graduates who write copy.

With that said, climate advocates have worked for years, with a compliant media, to incorporate global warming concern into the moral fabric of Western society. It nevertheless has only a tenuous foothold. Skeptics work against a head start, against the grain of common conversational mores. So moral shocks like Climategate and Glaciergate, which transpired wholly on account of the existence of climate skeptics, become flashpoints when the veil is ripped.

The modes of communication of climate skeptics are defined by the baseline state of the climate debate.

August 14, 2013 | Unregistered CommenterShub Niggurath


Dan -

I hope that you read that post by "pointman" I linked in the previous thread, lest you might be tempted to think that the post that you linked in the previous thread, or the comments from "skeptics" in the previous thread, are somehow not a representative sample of the views of "skeptics" who are active in the "skept-o-sphere."

I think that it is important to note, however, that "skeptics" are not monolithic, and there are many reasons to believe that the sample you're looking at may not be representative of the phenomenon you're interested in--a "skeptical" view on science communication--because folks who are active commenters in on-line forums are not likely very representative (although, interestingly, they certainly consider themselves to be--a kind of projection that to me is rather ironically very much not-skeptical).

Still, however, I go back to a belief that actually, what you have seen is representative in a sense. It is representative in the sense that, at least within many western cultures, there is a common perception that the effective way to communicate in adversarial situations and controversies, is through what amounts to "culturally assaultive advocacy."

We see that play out in issue after issue, and we even see it when we boil things down to something more basic--as you can see in the concept in conflict resolution that the key is to guide people to differentiate between "positions" and "interests." Without such a focus, people see these exchanges as zero-sum scenarios--and think that the way to victory is to vanquish the opponent. The overwhelming identification as victim is what fuels these situations--and it is no different in the climate wars, in essence, than when a couple fights and can't reach resolution because they are too busy justifying their victimhood to get out of their own way.

"Skeptics" and realists alike, I predict, will see your questions about effective communication as denying their victimhood, and as a result, categorize you as among their victimizers.

In conflict resolution, to be an effective mediator, you have to convince both parties that you are neutral and that your goal is to help them identify synergies and maximize common interests. But to acquire that status in the eyes of the disputants, you need to have an organizing structure that puts you outside of the conflict.

Good luck with that!

August 14, 2013 | Unregistered CommenterJoshua

@Shub:

On "what's the pct who believe/reject" etc-- I 'd say rsearchers who treat particular questions -- "do you believe human beings are responsible for warming? do you? do you?!!!!" -- as important don't really have a good theory of what they are measuring. People -- ordinary ones, the ones who wouldn't be found w/ 1,000,000 miles (or kilometers; take your pick) of this blog don't have any real sense of particular element of the science involved. But they do -- astonishingly, nearly all of them, no matter how little time they spend thinking about politics or science or anything other than sports or royal babies or entertainers' sex lives -- have an orientation, an affective "yay or boo" attitude on climate. Accordingly, anything you ask them -- "earth heating up?" "humans causing?" "ice caps meltiing?" "we all gonna die?" -- will correlate about 0.8 w/ each other. So what we have is some latent or unobserved pro-con attitude, and the sensible thing to do is come up w/ valid observable indicators (what people say when you ask them these sorts of questions) that can be combined into a reliable scale. Then we can use it to see how much variance there is across a particular population & what explains that. Do that & you'll see that people in US -- & UK & Australia too, and likely other places -- are intensely divided, and along lines that reflect commtiment to familiar types of outlooks & styles of living. The divisions are super deep & super persistent.

But yeah, the answer is 50% (and anyone who tells you that there is a more precise way to characterize the division based on one or another particular question is the sort of person who I said doesn't really get what he or she is measuring).

Also, beware anyone who runs up to you & excitedly says, "hey -- things are about to shift! A majority is now saying [believe/disbelieve]! It's just a matter of time before our position will prevail! See?! We've been predicting this! I knew it would happen & it finally has started!"

This is a weird cyclical reaction that you see on both sides of the climate debate.

You also see a variant of it in people who believe in things like the imminent second coming of Jesus or the impending end of the world etc.

In any case, it is invariably not based on valid evidence but rather on some mix of self-deception &, in the case of advocacy groups that do "communication," an effort to claim credit for "moving the needle" of public opinion.

August 14, 2013 | Unregistered Commenterdmk38

Since I'm riffing, a couple more thoughts along similar lines...

It is interesting just how much energy gets spent arguing about "the consensus."

It strikes me that one way to look at that is that it is perfectly in line with the character of the climate wars, which is about identification and self-identification.

Is arguing about the consensus really just arguing about who is "us" and who is "them" and how big each group is, respectively? And I think of the unreflective use of "us" and "we" in that last thread--over and over. And I think of the constant battles about the term "denier" and the compulsive need to use terms like "warmist" or "alarmist" or "statist"....

Isn't that all evidence that a great deal of what is going on is about identification and drawing the lines that categorize groups?

August 14, 2013 | Unregistered CommenterJoshua

@Richard:

Agree 100% (+/- 5%, 0.95 LC)! I suspect Gelman does too.

There's got to be some sensible way to combine professional judgment & evidence-based practices that help to validate & fine tune judgment.

One of my interests is to try to persuade professions whose members share a very robust craft sense that they ought to develop the practice of collecting evidence suitable for testing conjectures that their members form on issues that divide them. Journalists, lawyers, educators--they all fall into this class. If they developed this practice, they'd not be relying any less on professional judgment; I think you & I agree, based on the pattern-recognition dynamic we are both referring to, that it would be absurd to think they could or should! Still, the common stock of prototypes that they acquire and refine would then be in contact w/ evidence that can serve to validate its content & would no doubt evolve in a manner consistent with that. So the benefits (necessity) of professional judgment (as a form of perception, really) would be conserved, but it would be enriched by the validated insight that only disciplined empirical observation can assure....

I think something like this is the story, actually, of the practice of medicine. I sometimes think that the idea of the "science of science communication" is just a way to try to create, inside the loose profession of science communication, the sort of productive dynamic interplay of professional judgment and empirical calibration that is part of that field.

Well, those are conjectures, of course! I'd like to examine them & see others do the same.

August 14, 2013 | Registered CommenterDan Kahan

@Joshua:

I'm done w/ that thread! Whatever more can be learned from it is nothing that relates to what I'm interested in.

August 14, 2013 | Registered CommenterDan Kahan

OK - apologies.

August 14, 2013 | Unregistered CommenterJoshua

@Joshua--

to me? No need! I was trying to extricate you. Appreciate too your motivation to lower my entropy (what more meaningful gesture of friendship can there be among people trying to enlarge one another's knowledge?!)

August 14, 2013 | Registered CommenterDan Kahan

Fascinating discussion.
I, as an experimental biochemist, seem to view things a bit differently. I start with the experimental design and the type of equipment being used. I first ask whether the data being presented are likely to be precise and accurate and the experiments well controlled. Then I ask whether the methods used to draw meaning from the data are being used properly. Next, I ask whether the conclusions drawn from the data satisfy Ockham's razor. Finally, I ask whether the data are consistent with other reliable data that I know and, if not, why not.
Carefully gathered data, with proper understanding of uncertainty, and Ockham's-razor-bounded conclusions constitute what I consider 'best scientific evidence.'
I also put weighting factors on data from different techniques and different experimenters, but that is a longer discussion.
Examples of bad data or evidence include failing to calibrate a pH meter (so that all the pH measurements underlying the data could be wrong, invalidating the data) or not picking the sample population well enough that the data are actually representative of the phenomenon the scientist says they are. Invalid use of mathematical methods to draw meaning from the data is also a red flag to me.

August 14, 2013 | Unregistered CommenterEric Fairfield

It's a lot simpler than you're making it, Dan.

Eric Fairfield has some of the right criteria. But I'd raise that up a notch: Peer Review.

Yes, I know that there are some crummy journals that claim peer review. But by following the major journals in a field, and the overarching Science and Nature, you can figure out the crummy ones.

And then there are the peer reviews of peer review: NAS studies and special task forces like the IPCC.

August 14, 2013 | Unregistered CommenterCheryl Rofer

"But the world is filled w/ big intgeresting questions & one can't expect everyone to find interesting the same big things that one oneself finds mysterious and fascinating. But since you ask, I tell you that your restatmeent of my position is devoid, certainly, of all the things that I find interesting & be excited to share an appreciation of w/ others."

I agree. It's a big and important question, and very interesting, which is why I find it so odd that you would use that argument. (Assuming I'm understanding what you intended correctly.)

How do people recognise good/bad science? Isn't that a critical issue in this sort of debate? Wouldn't it tell us a lot about what might go wrong, and how to do it better? And isn't it something of a cop out to say no more about it than "I don't know how we do it, we just do"?

There's a lot more that *could* be said. I think, though, that if one was to get into specifics, it would become quite difficult to maintain the position that the consensus constitutes the "best available scientific evidence". That could just be my point of view, of course, but you offer nothing here to prove me wrong.

"I have tried to set forth an account that "fits" the scope of the conversation one might have w/ another in these circumstances-- & supplied references. How much "more background" should I supply in a blog that I can expect very few will have the patience to read even at this length?"

I was thinking of something that concentrated more 'content' in the given length. We have here a really interesting phenomenon. Different people look at the same facts, and come to opposite conclusions about whether it is good or bad science. Climate scientists complain that it will be hard to dismiss a sceptical paper because the mathematics in it appears to be correct, or that the data they are generating is meaningless because it uses incompatible interpolation algorithms, or that inserting made-up data will corrupt the databases but nobody cares enough to do anything about it. Some people (like me) look at that and think "bad science", while some people look at the same things (like you, apparently) and think "good science". How does that work? How do we go from being an individualist or communitarian to deciding that corrupting the databases with made-up numbers is "good" or "bad"? It really doesn't help to say: "I don't know how we do it, we just do."

"But I believe what I believe, and expect others to respect my right to use and rely on my own reason as I see fit."

Agree absolutely! That's something sceptics have argued long and hard for.

"Because the latter doesn't depend on the content of former, asking them to engage in that sort of demeaning gesture -- say you really don't know what you know when you say you disagree w/ me about issues (a), (b), (c) & (d)!-- would mock the claim I made about the importance to me of respecting their capacity and freedom to reason for themselves."

I wasn't asking you to say you disagree with views you obviously hold, I was interested in you explaining why you hold them. (Not that you have to explain if you don't want to, but in a post purporting to be an explanation/definition of what "best available" means in this context it was what I was expecting.)

But one of the views you obviously hold is that in these circumstances people's ability to recognise good/bad science is fatally impaired. At the same time, you obviously recognise it as good science. And you are unable to define precisely by what criteria you do so. Don't you find this combination curious? As you say, you can talk about climate science communication science as a separate subject to climate science, but what do you do when your beliefs in communication science (judgement is impaired) conflict with your beliefs about climate science (the science is recognisably good)? Don't you think that's an interesting question?

"I'm done w/ that thread! Whatever more can be learned from it is nothing that relates to what I'm interested in."

:-) That's certainly a sentence that could be interpreted both ways!

Well, I found it one of the most fascinating threads you've had in quite a while. Again, it's interesting how different people interpret things in such different ways, isn't it?

Thanks anyway.

August 14, 2013 | Unregistered CommenterNiV

@Cheryl:

I agree w/ Eric's approach; I think anyone who reads a study -- even one published in Nature or Science or PNAS -- w/o doing those things is actually not really reading it; one has to think about the methods & inferences to know how much weight to assign a study finding -- and that's all a scientific study does: furnish evidence that has a particular degree of force; it doesn't "prove" or "disprove" anything w/ probability 1.0.

But (a) ordinary people don't have time to read Nature & Science & even if they did, and did it in the way Eric says, they'd not be becoming acquainted w/ nearly as much decision-relevant science as they need to become acquainted with -- and rely on -- to live a good life, so necessarily they will be doing something else; (b) even the people who write the Science & Nature & PNAS articles don't have time to comprehend, much less think critically about in the way Eric described, all the premises & taken-for-granted conclusions on which their own study rests -- often, in fact, individual authors on those papers have full command of only some but not all elements of the study methodology; & (c) reviewers, if you accept Gelman's point, are also going to be in a situation in which, even when using critical reasoning, they end up relying on professional judgment that has elements themselves not tested & validated in the same way as what they are reviewing (has peer review itself ever been empirically tested & validated, e.g.?)!

Whether it's done by laypeople or professionals, recognition of reliable evidence is one of the most complicated "simple" things around. It is simple; and it is nearly incomprehensible. And at the same time it is for sure a form of rational thinking -- which shows that reason involves more than meets the eye: seeing more things than we can satisfactorily put into words.

BTW, don't try to tell Gelman that he should see publication in Nature & Science (not sure how he feels about PNAS, or Proceedings of RS for that matter) as a good heuristic for "valid science." He calls them the tabloids.

Actually, doesn't the disagreement between him & many other experts on empirical methods, on the one hand, & you & Eric & many others who are also experts, on the other, suggest that things aren't so simple?

August 14, 2013 | Registered CommenterDan Kahan

Well, I guess it depends on what you are trying to do.

My sense of the question at the top of the post is that it is from a layman trying to make sense of what scientists have to say. If that is the case, it will be very difficult for her to go through Eric's procedure. But others have in the process of peer review. And material that hasn't been peer reviewed is likely not to be worth considering.

So peer review is a nice proxy for the non-scientist to start with. And if a non-scientist wants to dip into science, Science and Nature are not bad places to start. I strongly disagree with your point (b) in that paragraph; it's a scientist's job to question her own assumptions. If they're in there, they're the best she's got now. And the reviewers accepted them.

I can offer up a critique of peer review, too; most scientists can. And some scientists enjoy exaggeration to call attention to themselves. But is that useful to the questioner?

It looked to me like this was about finding reliable information. Peer review is the scientist's means of quality control. It's the best we've got now.

August 14, 2013 | Unregistered CommenterCheryl Rofer

@Cheryl

1. Imagine I am the "average American" & have an ear infection. What do I do? Go to a Dr, you say. I could at that point say -- what's the evidence that that is the thing to do? But I just go. She prescribes me an antibiotic. Do I read up first before taking it? Say the studies that suggest it works were published in Nature or Science. Great. But first show me some evidence that it makes sense for me to rely on that heuristic to determine that that's the best available evidence; and tell me where to find the evidence that that's a sensible thing to do -- is it in Nature? Science? If at this point you are saying, "Oh, please! Don't be dense; you know full well that Nature & Science are authoritative & can be relied on" etc. -- then stop. Because that's my point: I know that b/c I know that you & others w/ your professional insight know what you are talking about; identifying who knows what about what -- not reading Nature & Science -- that's how I manage to figure out what's known to science. But if you aren't convinced yet, then simply imagine the studies on the drugs weren't published in Nature & Science. So I guess I have to read them. But they make no sense to me. I could spend 1000 hours doing so but I'll get fired from work if I do that. Plus I have to drive somewhere & I need to spend 10000 hrs understanding GPS technology before I head out; or 25000 making sure airplanes can fly. Well, I won't do any of that, actually, because that's all ridiculous. I'll figure out who knows what they are talking about and do what they say. I'll end up taking the antibiotics & doing a billion other things that make sense because they reflect insights gleaned by science. I can, very rationally, observe plenty of things around me that reliably orient me toward what's known to science w/o being able to comprehend those things (& w/o ever turning a page of Nature or Science or any of a billion other journals that I can't understand).
Now if I can't do that on something like climate change, then the problem isn't in what I can and can't comprehend by engaging in critical reasoning of the sort you would do or by following some set of simple heuristics like "what does Nature say?" It's something out of whack w/ the usual cues that enable me reliably to recognize so much more than I could possibly ever figure out following the strategies you are describing.

2. I agree w/ you on peer review. But do you see no irony in our "knowing" that a system that has never been empirically validated is the "best we've got" -- or in accepting that it's "good enough" apparently that we don't even have to bother testing it against plausible alternatives? I sure as hell wouldn't want you to be my dr if that's the way you figure out what medicine I should take! That's the whole point of Gelman's essay. The enterprise of science is filled with paradoxes of these sorts. That's not some sort of internal contradiction or anything. But you won't convince anyone who is curious to figure out how all this works that the answer is "simple": "just go w/ Nature & Science b/c they are peer reviewed & we all know they are the best journals"!

August 14, 2013 | Registered CommenterDan Kahan

Dan -

I sure as hell wouldn't want you to be my dr if that's the way you figure out what medicine I should take!

??

I am confused.

My assumption is that is exactly the logic you'd want your doctor using - in other words, that in lieu of being able to personally evaluate all the empirical evidence, a careful assessment of the range of peer reviewed analysis is the next best alternative. That would mean, precisely, determining what is "good enough" without examining all the alternatives - via the shortcut of relying on peer review.

There is a logic to the notion that peer review journals are the best we've got. It may not be validated by empirical analysis, but it seems to me that it relies on the same logic as the one you talk about:

The logic is precisely the same as saying: "Yes, the prevalence of a view among experts isn't dispositive, and I should be open to all evidence, and I am aware of biases that come with group identification and groupthink and the like, but because I can't evaluate all the empirical evidence, I will place some importance on the fact that there are some views that are more prevalent among those that seem to be the most qualified."

What am I missing (realizing it would be hard to capture that in anything less than a 10,000 word essay - just give me the highlights)?

August 14, 2013 | Unregistered CommenterJoshua

@Cheryl,
Thanks for bringing up peer review and its strengths and weaknesses. And also that for a non scientist to follow the details of what I proposed for best evidence might take a lot of work.
For my work and that of colleagues whom I trust, there are levels above peer review. The first level is whether anyone in our lab can find anything wrong with a piece of work, not just the authors but anyone else in the lab. The second level can be called 'enhanced peer review', which is sending the work to trusted colleagues who will be critical and balanced. They will find things that those of us in the lab missed. Lab and colleague review is much more stringent than standard peer review.
The scientists that I trust all do this intensive self criticism before releasing results. That does not mean that we distrust all of our assumptions, but we double check everything. OK, quadruple check.
On the idea of this process being hard for non-scientists to follow: I must remember that point. I have been giving a series of discussions on how to analyze ancient textiles. I assumed that the audience understood the role of controls. I have to explain these things better.

August 14, 2013 | Unregistered CommenterEric Fairfield

Well, I am trying to answer the question up top. I found your discussion extremely confusing and don't have time to work it all through, so I didn't get to Andrew Gelman's paper.

So I took Ockham's route and am sticking with that first question. I am a practical sort of person, and if I were asking that question from a non-scientific background, I would want to know a fairly simple answer. As far as I can see, that first question has little to do with your doctor's office example.

You seem to be tracking off from that question to considering more philosophical questions, and I take it to be practical.

If the questioner wants a practical guide to finding the best science, and if she is sincere, she will take a look at NAS and IPCC studies and at Science and Nature. The primary literature relating to climate change is simply too voluminous, and, good as Eric's suggestions are, applying them to any significant part of that literature will be a full-time job. "I know it when I see it" simply doesn't fly for someone outside the field, and perhaps even not within.

Whether or not peer review has been empirically evaluated, it seems to me hard to argue that it isn't the best we've got. Again, this is a practical statement. "The best we've got" is different from "the best of all possible worlds."

August 14, 2013 | Unregistered CommenterCheryl Rofer

@Joshua--

Was unclear: meant wouldn't want a "dr" whose professional mentality was to shrug her shoulders & say "best we got" when I asked why she was prescribing for my illness a medicine that had been in use for yrs & yrs & yet never tested for efficacy. I would want her -- and her profession! -- to test the goddam medications they are prescribing.

Or how would you like a criminal justice system that shrugs its shoulders & says "best we got" about forensic "tests" that have been used for decades w/o empirical validation? (You can answer that question: it's the one you have, which says exactly that about fingerprints.)

So why do I & Cheryl feel that it's okay to shrug our shoulders & say "best we got" about peer review? It's "best we got" -- but only b/c we assume it's "good" & in fact "good enough" not even to bother testing it -- for decades.

Why shouldn't whoever it is who is the equivalent of the patient or criminal defendant say to us -- "you guys are scientists? WTF?! Test your procedure for testing the quality of article selection to see if it has validity! & then test it again -- & again & again -- against whatever plausible alternatives people propose, so we can remain confident that the scientific evidence that our system of publication yields is really the best available!"

We know that anyone who says that about peer review is an idiot. We know that b/c we have been professionalized in a manner that makes us see the validity of peer review -- "for Christ's sake -- Nature & Science use it!" -- as plainly as the chick sexer discerns the sex of the chick.

But "reliability" of shared professional intuition or perception is "not worth much," @Tol says, w/o "rigor" of the sort one gets w/ empirical validation.

Fine, you say. Validate peer review already -- it's about time! But trust me, this is just the beginning of something that will never end if you really don't want to trust me.

August 15, 2013 | Registered CommenterDan Kahan

This whole post is as information-free as it can possibly be. This is wordsmithing 1000 words to convey nothing much more than that you should know who to trust.

We don't have to trust the experts or recognize who to trust or any of that nonsense, even when we absorb "established" science, without having to reinvent the wheel all over again. The way that happens is that when we build on top of existing "established" science -- be it scientific theories or observations or engineering gizmos, using either existing established science or the parts you add on top of it -- some of us (not just one or two people, but a significant number of us) WILL see failures if the existing established science isn't robust. For example, even as you learn the basics and take the Newtonian formula for gravitational force between objects, and Newton himself as a giant in the field, for granted, every school and college student is doing experiments that either prove or disprove Newtonian gravitational force calculations in varying degrees. So we just don't trust Newton. But rather we constantly keep confirming that it works. Once in a while we find out there are areas where established science doesn't work well. That is where you go add on top of existing science and address those exceptions.

But before you get there, established science in many disciplines doesn't become established until a substantial number of observations match what a reasonably well-developed theory or model predicts. Before it becomes established, a sufficiently significant number of folks verify that observations match predictions. Anyone who comes after that has to be able to verify this, if they choose to. And some people always do, as I explain above, in some form. The problem for climate science is that the cart is in front of the horse. Before the science is settled, they should show that a large enough number of observations of sufficient length match what the models or theory (there really is no theory here to begin with) predict. Before they ask us to believe 100-year predictions, get 20-year predictions right. Until then, don't ask us to go turn the world economy upside down based on no evidence. Best available is good enough if you know for a fact that you are facing an imminent danger and you have to take some action right away -- such as an asteroid hurtling down to earth that our well-established Newtonian science says will hit the earth. Not when you have to constantly adjust the models to even hindcast correctly.

So it isn't just someone saying trust Newton, you imbecile, since you will never know as much physics as Newton knows. It is just that it hasn't failed in a long time. When it does, it is big news and time for adjustments or development of additional elements to Newton's theory to include the parts that fail. If there is no way to fit them, then you have invented a brand new discipline, such as quantum mechanics. It still doesn't negate Newtonian mechanics, because your observations didn't lie. Just that Newtonian mechanics can't be applied to sub-atomic particles.
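[A hypothetical sketch of the kind of everyday check described above -- mine, not the commenter's: plugging the textbook constants into Newton's law F = G·m1·m2/r² for a mass at the Earth's surface recovers the familiar free-fall acceleration g ≈ 9.8 m/s², the number any student's dropped-object experiment can test against.]

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
# Evaluated for a 1 kg mass at the Earth's surface, it should recover
# the textbook free-fall acceleration g ~ 9.8 m/s^2.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, r):
    """Force in newtons between point masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

# The force on a 1 kg mass is numerically equal to the acceleration g.
g = gravitational_force(1.0, M_EARTH, R_EARTH)
print(f"predicted g = {g:.2f} m/s^2")  # close to the measured 9.81
```

[Running this kind of sanity check, over and over, against independent measurements is the "constantly keep confirming that it works" the comment describes.]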

August 15, 2013 | Unregistered CommenterShiv

It is interesting to me (as a doctor) to see the repeated use of medical metaphors. The term 'evidence-based' comes from medicine, and we emphatically do not apply it to questions about judging the character of authors or gut feelings about papers. There are now a number of evidence hierarchies, and we often refer to terms such as 'level one evidence'.

There are also criticisms of evidence- based medicine as well of course, but many would agree that a focus on the quality of evidence has improved medicine in general.

I wonder whether some of this sort of effort would improve climate science as well. Do we agree on the value of the evidence of the results of general circulation models? What about historical records or paleo-based temperature reconstructions? Surely we could come up with some sort of rating of the values of different kinds of evidence that goes beyond consensus or gut feeling.

August 15, 2013 | Unregistered CommenterMichael Lowe

@Dan:

I most certainly am not shrugging my shoulders that peer review is the best we've got. That's a statement of my best judgment. Is your latest comment sarcasm? It's sometimes hard to understand sarcasm in this medium, without the body-language cues, and I tend to be awfully literal-minded.

It would indeed be nice to have a study, or actually it would require a rather large set of studies, to prove that peer review works. But I don't see those studies being funded any time soon, and I don't see proposals for alternatives, which would also have to be tested by similar studies. There are a few natural experiments starting on the Web, but it will be years before we can evaluate them. Meanwhile, science and life will go on, and we need some way to evaluate the science.

It's not all cultural cognition. Certainly that's a factor. But it's possible for the rest of us to consider the assumptions and actions of our tribes and come to the conclusion that that's the best we've got. When I read your top post, I see you veering toward "trust me." Peer review is "trust a group of scientists conferring among themselves." I prefer the latter, which appears to be able to get past some of the things that can go wrong with one person's judgment. And, as Shiv says, "It is just that it hasn't failed in a long time." I would put that in a more positive way: peer review has helped to move science forward in many areas, over many years. And I'm wondering about the alternatives you would propose.

The problem with attributing all our thinking to cultural cognition is that, on the one hand, it puts us all into little boxes that can never change. My observation is that there's a lot of change in people's thinking going on all the time, as Michael points out for medicine. Going in the other direction, if everything we're thinking is due to cultural cognition, then nobody can believe anything, and it's all "trust me." Which seems to be the gist of your initial post.

And, for Michael (and Dan), "that sort of effort" is precisely what the IPCC is doing. A great many scientists are coming together to evaluate general circulation models, isotopic evidence, glacier measurements, and many other aspects of climate science to throw out the studies that aren't reliable and to bring together the evidence to see if it all makes sense. It's peer review of peer review, as Eric said.

August 15, 2013 | Unregistered CommenterCheryl Rofer

Best available is good enough if you know for a fact that you are facing an imminent danger and you have to take some action right away.

It is "good enough" in other cases as well. It is good enough when you are evaluating risk in the face of uncertainty. It isn't perfect, but it is good enough. We run that calculation on a daily basis, probably.

So on the one hand we have a problem when someone confuses good enough with perfect, and on the other hand we have a problem when people selectively expect perfection and exclude the value of good enough.

August 15, 2013 | Unregistered CommenterJoshua

To me 'best available' and 'good enough' are very different.
For instance, during the second world war, the British could spot German planes by leaning out the window, hoping there were no clouds and looking into the sky. That technology was 'best available' but nowhere near 'good enough.' The Americans and British then invented radar, which was not previously available but turned out to be 'good enough' to win the war.

August 15, 2013 | Unregistered CommenterEric Fairfield

@Michael:

I agree 98% (+/- 4%, 0.95 LC) that the practice of professional science communication needs to adopt the "evidence based" orientation of medicine. I understand Gelman to be advocating this, too, for the teaching of statistics & science (actually, this is the philosophy of SENCER). @Richard, above, seems to express a sentiment along these lines.

My point about peer-review is that an "evidence based" philosophy applied to it would say -- "well, what's the evidence that this is the right way to do it? We've been doing this for decades on the assumption 'it works'-- we would have tested that assumption by now if we had been being sufficiently evidence based, so let's fix that omission & get to it."

Now the best part -- "gut feelings" & medicine!

Shouldn't we be evidence based in our attitude toward the role that pre-conscious intuitions, perceptions, feelings etc. might usefully play in the judgment of professionals -- whose training & experience will give them distinctive perceptions, intuitions, feelings?

Modeling for us what it really means to be evidence-based, medical science has used empirical methods to test the relative effectiveness of perceptive or affective forms of reasoning -- ones that involve rapid, unconscious perception guided by "pattern recognition" of the sort @Richard was referring to above -- vs. conscious, deliberative, analytical types of reasoning.

The conclusion: both make a contribution, certainly -- & so it would in fact be a big mistake not to recognize that part of medical training involves training physicians, like chick sexers, to be "good" at "recognizing" patterns that are associated with important decisions in the practice of being a doctor.

Consider:


These research studies [ones showing, among other things, that experienced medical specialists are more likely to correctly diagnose diseases than less experienced ones b/c their intuitions guided them more reliably to possible causes of illness that can then be tested] illustrate a view of heuristics that contrasts with the view in the classical decision-making paradigm. In this research, expert strategies, which include a range of heuristics, are associated with high levels of accuracy. Experts represent the problem in such a way that recognizable patterns emerge from the data thereby minimizing extraneous search through a myriad of irrelevant information and extraneous hypotheses. Interestingly, some of the expert heuristics are suggestive of biases that would be labeled as problematic according to standards of decision research. It is certainly conceivable that a confirmation bias may occasionally prejudice the most seasoned practitioner and lead them to misdiagnose a problem. In the cases reported in many studies of expertise, however, heuristics serve to generate the correct decision in an economical manner. In this sense, expert strategies are immensely adaptive. There is a substantial body of research on medical problem solving [62], and more generally in other domains that illuminate the ways in which solution strategies are instantiated in diverse contexts. This research also suggests a continuum of skill acquisition that could serve as benchmarks for instruction and training. In general, problem-solving studies are more “diagnostic” in specifying potential sources of error as well as characterizing the productive roots of expert performance.

Patel, V.L., Kaufman, D.R. & Arocha, J.F. Emerging paradigms of cognition in medical decision-making. Journal of Biomedical Informatics 35, 52-75 (2002).

Those trained in empirical methods learn forms of critical thinking -- ones essential to valid causal inference -- that ordinary people usually have trouble with.

But this tends to make empiricists (really really good ones, who unquestionably display fine-tuned critical reasoning skills) instinctively, reflexively dismissive (!) of forms of cognition that involve appropriately trained forms of affective or intuitive perception (when they hear about research on "pattern recognition," they say things like "know it when you see it, huh? sounds like you are talking about pornography!").

Well, Drs, b/c they are so ruthlessly evidence based, have overcome this.

There are a whole bunch of professions that need to be evidence-based & follow the path where it takes them.

But one place that following that path will likely take them to is an informed understanding of the contribution that recognition -- based on intuitive apprehension of patterns -- plays (along with other forms of reasoning, certainly) in reliable & accurate decisionmaking, professional & nonprofessional!

August 15, 2013 | Registered CommenterDan Kahan

Glaciergate?

A typo in a massive report is worthy of the hackneyed suffix "- gate"?

I suppose we could expatiate on Billboardgate, HeartlandFundinggate, WegmanReportgate, OregonPetitiongate, MichaelMannLibelgate too, and that would be a really positive contribution.

August 15, 2013 | Unregistered CommenterToby

Re peer review, I only partially agree. Scientists consider peer review necessary but not sufficient. When people in the public cite peer-reviewed work, so often they cite articles where the only review was that the check cashed with no problems. Even in more credible journals, a good portion of the work doesn't make it past a second layer of review, and in some fields of science, almost no article makes it past a second layer of review. Much work that is published in Science, etc., is incomprehensible to people outside the field, and we in other fields certainly have trouble judging its worth.

The sources that make sense, which Dan uses, are the uber reports out of communities that begin with peer review. While we don't ask much of the public, we can certainly ask those who take the time to research whether their own gut reaction is correct to check if the sources they find overlap well with National Academy of Sciences, the major UN science organizations, and so on. If not, as is so often the case, perhaps the sources they have been relying on are inadequate. I do realize, from much experience, that the question has to be asked often. One person I only knew through FB told me that she had never heard of the concept of uber reports before and she stopped relying on her schlock sources and changed her mind on a couple of issues (nuclear power and GM food). After my 50th mention of uber reports.

While that is rarer than we would like, over time there is a shift. Clearly a sizable percentage of people remain wedded to ideas congenial to their world view. But when there is a shift in groups, and the shift is away from the culturally pleasant take, there is a shift to agnosticism or even heresy among other members of the group.

It seems to me that clarity about which sources are OK to use must be part of every discussion for those of us who are trying to shift thinking. In my own personal presentations, I include information about my own shifts in sources and beliefs with "I once was lost, but now am found, Was blind, but now I see." Or a secular version of same.

August 15, 2013 | Unregistered CommenterKaren Street

Journal peer review definitely is not "the best we've got". It isn't even very good - it mostly consists of 2-3 unpaid experts reading it in their spare time between all their other jobs, usually with none of the data, calculations, lab notes, or any attempt at replication.

Journal peer review is not a check on the accuracy of the science, and is not part of the scientific process. It is an editorial function - the journal wishes to help its readers keep up to date with work in the field, and therefore wants to present only relevant papers worth spending time on. Peer reviewers are looking for papers that are on topic, interesting, new, at least reasonably competently done, that provide a level of evidence sufficient to match the unexpectedness of the claims, that fit into the existing framework of knowledge or explain why they don't, and provide sufficient information for the interested researcher to replicate it.

They do not, in general, check that the work is correct, or non-fraudulent. All peer review tells you is "we think this paper is worth your time looking at".

The scientific process occurs in the next stage, when the community of scientists read the paper and try to verify it, replicate it, extend it, debunk it, and critically challenge it. If it survives this process, it gains credibility. The longer and more determined the attack, the more credibility the survivors accrue.

Scientific laymen should not be looking to the recently published peer-reviewed literature for reliable science. They need to look to the older literature, that has stood the test of time, for reliable science. The gold standard in science is not the shiny peer-reviewed journal, which is but a tentative work-in-progress, but the well-thumbed, dog-eared textbook on the professor's shelf.

---

Dan, I think you may be underestimating how much people can understand about science for themselves. Sure, if you don't want to know, I'll just give you the pills and not tell you. But if anyone asked their doctor how they can know that the pills will work, I'd not expect the doctor to tell them to go read the literature, I'd expect them to get a couple of plates of agar out of the lab, infect them both with pus from the ear, add antibiotic to one plate, and tell the person to take them home and keep them warm. The bacteria will grow on the clean plate but will die on the treated one. Or they could be introduced to some previous patients.

If someone asks me how GPS works, I tell them. Each satellite broadcasts a very accurate time signal. The time taken for the signal to reach you tells you how far away each satellite is, and then you triangulate. You can tell where the satellites are by using the same process in reverse from the controlling ground stations, whose true positions are known. You can easily illustrate the principle to the innumerate with a few lengths of wood or plastic. It doesn't take 10,000 hours. If someone is *really* interested in the maths, I can go through that in about half an hour. It's just basic geometry.
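[The "basic geometry" NiV mentions can be sketched in a few lines. This is a hypothetical 2D toy of my own, not anything from the thread -- real GPS works in 3D and uses a fourth satellite to solve for the receiver's clock error -- but the idea is the same: each measured distance defines a circle around a known beacon, and subtracting the circle equations pairwise cancels the squared terms, leaving a small linear system for the receiver's position.]

```python
import math

# Hypothetical 2D trilateration sketch: three beacons at known positions
# and a measured distance to each pin down the receiver's location.

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three beacon positions and distances to each.

    Subtracting the circle equations (x - xi)^2 + (y - yi)^2 = di^2
    pairwise cancels the x^2 + y^2 terms, leaving two linear equations.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a11 * a22 - a12 * a21  # nonzero when beacons aren't collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Receiver is actually at (3, 4); simulate the measured distances.
beacons = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist(b, (3, 4)) for b in beacons]
print(trilaterate(beacons[0], dists[0],
                  beacons[1], dists[1],
                  beacons[2], dists[2]))  # -> approximately (3.0, 4.0)
```

[It really is just basic geometry -- the half-hour version NiV offers is essentially the derivation in the docstring.]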

And if anyone asks whether aeroplanes can fly, I tell them to go to an airport and stand outside it for a while.

The marvel of science is that even untrained amateurs can do it. A lot of stuff is accessible to everyday observation and simple deductions, and it is a crying shame that people go through life thinking science is an arcane mystery that takes thousands of hours of study to do. (Some bits are, but a lot is not.) Science is a tool applicable to daily life -- its methods are the best way we know of for separating truth from myth and should be provided to everyone.

Even worse, there are those people taught that science is about trusting the experts, and memorising lists of facts. Without understanding, they repeat what they've been told - and frequently get it wrong. Considering the history of science, it is shocking; it really is.

Science is kind of like sport - OK, the average man or woman on the street cannot play to the same level as a professional athlete, but they can still go down to the recreation ground and kick a ball around, do a few tricks. You don't have to be a complete couch potato, only watching them on TV. And it makes you appreciate the professional's talent even more when you know a little bit about how it's done.

August 15, 2013 | Unregistered CommenterNiV

Journal peer review is not a check on the accuracy of the science, and is not part of the scientific process. It is an editorial function

This is an artificial construction. The real world ain't so black and white. Gray exists. It is at once a check on accuracy, part of the scientific process, and an editorial function. Those features are not mutually exclusive of one another.

During the process of preparing for peer review, someone wishing to publish generally checks and rechecks their work in getting it ready for publication and in anticipation of the review of peers. Might they undergo a similar process if they weren't going to try to publish their work, or published it absent any peer review? Perhaps. And probably, very often, not - as I would say anyone who has published work through peer review would attest.

That isn't to say that there isn't a downside. Sometimes in anticipation of peer review a scientist might hide flaws because without doing so the work wouldn't get published. Sometimes time is wasted altering a document so as to pass peer review - in ways that aren't really pursuant to the "best possible." And sometimes the process of peer review actually diminishes the range of interesting or alternative takes on science because the function of "gatekeeping" can be a double-edged sword.

Unless someone tries to scientifically approach the counterfactual, I guess it is not "scientific" to say whether we'd be better off without peer review processes. I guess I am content with my personal assessment of the probabilities - with the consideration of analyses that I've seen that rather comprehensively address the high prevalence of potential errors in peer-reviewed publications. Maybe the confidence I expressed earlier was overstated - but clearly, NiV, there was much left out in your assessment.

Journal peer review definitely is not "the best we've got". It isn't even very good


Absent a scientific approach to quantification, "not even very good" is basically, completely subjective.

August 15, 2013 | Unregistered CommenterJoshua

Joshua,

Phil Jones at the Parliamentary enquiry into Climategate:

"Jones's general defence was that anything people didn't like – the strong-arm tactics to silence critics, the cold-shouldering of freedom of information requests, the economy with data sharing – were all "standard practice" among climate scientists. "Maybe it should be, but it's not."

And he seemed to be right. The most startling observation came when he was asked how often scientists reviewing his papers for probity before publication asked to see details of his raw data, methodology and computer codes. "They've never asked," he said."

http://www.theguardian.com/environment/cif-green/2010/mar/01/phil-jones-commons-emails-inquiry

Nobody checks.

Nobody checked Phil Jones's work - or they'd have caught the fact that he claimed Chinese weather stations hadn't moved when he had no data to support that. (And a lot had.) Nobody checked Michael Mann's work, or they'd have spotted that he mislabelled series, stuck them in upside-down, made errors implementing standard algorithms, and had published a temperature reconstruction that he knew was not correlated (R^2 = 0.02) to temperature. Nobody checked Jones and Mitchell's CRU TS 2.1 climate database that not even CRU, who generated it, were able to replicate (as 'Harry' reported, at great length). All of these were published in the peer-reviewed literature. All of these were reproduced in the super-reviewed uber-report by the IPCC. At no stage were any of these actually checked.

I could go on and on and on, endlessly. Nobody checked the thermometers being used to monitor climate change were not being sited next to aircon units and trash burners. Nobody checked how many trees the spike at the end of the Yamal paleoclimate series relied on. Nobody checked Steig's maths on the Antarctic peninsula warming. Nobody checked the Himalayan glacier claim (and no, it wasn't a simple typo. Funded research projects had been founded on the claim). Nobody checked the Amazonian drought claim. Nobody checked the African food production claim. Nobody checked the Marcott upspike.

Nobody checks. The author comes from a good university, the message is about what we expected, why would we check? Nobody knew for decades that Phil Jones had thrown away the information on the homogenisation he had done on CRUTEM. Nobody could possibly have checked it, because not even Phil Jones knew how to reconstruct it again from the raw data, and he created it! The results in the journals are waved past, with a nod and a smile. (If they fit in, of course.)

The first question you ought to ask is "Has it been checked?" And ask for the details.

I personally think it's unfair to throw up all the many failures of peer review, because truly it was never intended as the arbiter of true science. It is, as I say, a mere editorial function. But it has been boosted into something more by people with an interest in other people thinking that, since they control the journals. (Or at least, like to think they do.) And of course the journals have gone along with it, because it boosts their prestige. But I think it's a grave mistake in the long run, because eventually they'll be found out. And it will potentially cast more discredit on the product of real scientific method because this cargo cult science has been sold as the real deal for so long.

August 15, 2013 | Unregistered CommenterNiV

Sorry, NiV - not buying the logic. First, what you identify as errors are disputed as such. Second, assuming that they could all be fairly identified as errors, expecting perfection and damning all peer review on the basis of it not being perfect is binary thinking. Third, you have no idea whether some other errors were checked and corrected. Fourth, even assuming they all were what could objectively be considered errors, and that there was no checking (and as a result no other errors were found and corrected), you are damning the entire peer review process on the basis of a few cases.

Have you seen some evidence that shows that no errors are corrected via the peer review process (including author-identified errors as a part of the process of preparation for peer review)? Have you even seen some evidence that in balance, more errors pass through peer review than those that are corrected?

I have personally seen events occur that are not consistent with what would have to be true for your logic to hold up.

Would the end results of the process be better if raw data were supplied and comprehensively evaluated? Sure. Would there be some downsides to implementing such a blanket requirement? Probably. Would there be a net benefit from such a requirement? I think so. But again, just because the process could be improved, has flaws, and mistakes have been made, does not justify a conclusion that we'd be better off if the process didn't exist.

August 15, 2013 | Unregistered CommenterJoshua

I personally think it's unfair to throw up all the many failures of peer review, because truly it was never intended as the arbiter of true science.


I'm not sure that anyone has argued that they think it is, or should be, the arbiter of science. I would certainly argue that it couldn't possibly fulfill such a role.

But saying that it isn't, or shouldn't be the arbiter of science is not the same thing as saying that it is possible, or even likely, that the product of science is improved in balance by the existence of peer review.

August 15, 2013 | Unregistered CommenterJoshua

Man, that was some seriously convoluted syntax in that last sentence, but I'm guessing that you can figure out what I was intending to say.

August 15, 2013 | Unregistered CommenterJoshua

"First, what you identify as errors are disputed as such."

Yes, and that's the really interesting phenomenon here that I'd like to understand more about.

How do we get from individualism/communitarianism to thinking that a temperature reconstruction that is not correlated with observed temperatures is not an error? (Or is one, if you take the other view.)

"Second, assuming that they could all be fairly identified as errors, expecting perfection and damning all peer review on the basis of it not being perfect is binary thinking."

I don't expect perfection. I don't expect a couple of weeks part-time unpaid work with no access (as Phil Jones reports) to the data or calculations to have a hope of picking up more than the most superficial errors.

And I'm not damning it. It serves its actual purpose admirably. I'm just saying it shouldn't be oversold as something it's not.

"Third, you have no idea whether some other errors were checked and corrected."

That's not the issue. Peer review, like other editorial processes such as spell-checking, clearly does filter and correct some errors. But it's a low bar, and I'd not try to claim that a piece of science had been 'checked' on the basis that I'd run it through the spell-checker. "You've spell-checked it? Oh that's all right then. Let's use it to justify upturning the global economy."

"Fourth, even assuming they all were what could objectively be considered errors, and that there was no checking (and as a result no other errors were found and corrected), you are damning the entire peer review process on the basis of a few cases."

No, I'm damning the entire peer-review process on the basis of the scientific community's reaction to it. Their reaction is not to correct the errors and the processes that allowed them to pass multiple layers of review, it's to make excuses and deny that there's a problem. And to carry on using and citing the evidence that is now known to be flawed. It's to say in effect "the errors don't matter".

The problem is not the specifics of the few cases we've found, it's the fact that even quite simple and unsubtle errors passed the current process - the real problem is all the errors still in there that we haven't yet found. In an industrial process, if you discover a quality control failure has passed flawed products, you not only fix the quality control, you also go back and re-check all the products that passed the flawed process.

The basic problem is that many climate scientists are sloppy. They're careless and lax. They lack rigour. And it's not by the use of such methods that science won its reputation for reliability.

But that's off-topic for this discussion. The question here is not about who is right, but how and why we disagree. How does someone recognise good/bad science, and how does being an individualist or communitarian affect that recognition.

I was interested to know what thought processes Dan went through when deciding that this was "the best available scientific evidence", in the face of the controversy and what he knows about people's judgements on such topics being impaired. You have given me some valuable insights into yours with the points you raise, but we have long been participants in this debate and I've seen it all before. I was particularly interested in Dan's thought processes, because he's not so deeply into the debate. He says, if I understand him correctly, that he trusts the experts, but is that a purely communitarian thing? Does it take priority over the more direct arguments? How does one recognise expertise, except by evaluating the arguments they use? Does Dan trust climate scientists because they share the same cultural tendencies as him? I'm still very unclear on the reasoning here.

It seemed to me like a golden opportunity for someone with insight into the psychology of belief in science to introspect on their own reasons for taking a side in a controversy. Can they identify where the cultural factors enter their own thinking? There are none of the usual problems of trying to figure out what a person is thinking from the outside. But perhaps it doesn't work that way. Perhaps the reasons for belief are truly as mysterious to introspection as they are to outside observers.

August 16, 2013 | Unregistered CommenterNiV

Glaciergate a typo: An example of the low-information type of supporter of climate orthodoxy.

With toby as example, it is quite possible that the supposed 50% who 'reject' the sceptics are none but those who accept the standard position without much reflection or inquiry (as they rightly should and not waste time on this). Those who do start looking because one thing or another didn't make sense start seeing the problems. However strong your cultural affinities are to your chosen version of global warming reality, if you come to believe someone's been lying to you, a crucial break occurs. That is why climate activists don't care much for skeptics who write on blogs. They can be shooed away, and kept out of the picture. The bigger fish and some of the think tanks, on the other hand, are a hassle because there's a chance what they say might 'sow doubt', breaking viewers out of their cocooned climate understanding. Most support for climate activists' cause consists of people who've passively accepted it by imbibing and you don't want to be breaking up the fragile, delicate state of its ecosystem.

August 16, 2013 | Unregistered CommenterShub Niggurath

Shub,

That explanation may work for some, but there are a lot of believers who continue to believe even after they've been shown the problems. The question is how and why.

An obvious theory is that they know what they're doing and are being deliberately deceptive. Another is that they are naive and have been deceived. You'll recognise those as the same theories they apply to sceptics. The truth appears to be far more interesting. It appears that both sides think that they are right and the other is wrong, and they each justify their own actions by appeals to the truth.

Nobody can spot their own biases and the gaps in their own reasoning, because the only means they have to check them is the same brain that came up with the reasoning in the first place. You can pick up some problems with extensive scientific training and experience, but nobody can see it all. However, we can see the flaws more easily when we look at other people, especially those who think differently to ourselves. In fact, we find it downright weird the way they can't see things that appear obvious to us. It's hard to believe, sometimes.

That's why science is not purely about method and technique, but also relies on critically challenging everything. You do the work and try to eliminate all the flaws you can, but then you publish it to a doubtful scientific community who will try to tear it apart. We cannot see our own biases, but because we each have different biases, we can often see and point out each other's. A result that a lot of people have checked is far safer than one that only a few (or one) have checked. That's why people argue for paying attention to a 'consensus'.

But they misunderstand the term - it's not referring to everyone simply holding the same opinion, it's referring to everyone having come to the same conclusion having each independently checked for themselves. People who hold the same opinion because they all trust the same small group of 'experts' does not provide such an assurance. They're following the herd. Their errors are not independent but correlated.
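
[NiV's point about independent versus correlated errors can be illustrated with a toy simulation. All the numbers below are invented; the point is only that averaging many independent checks shrinks the error, while a bias shared by the whole herd does not average away.]

```python
import numpy as np

rng = np.random.default_rng(1)
truth, n_checkers, trials = 10.0, 100, 2000

def consensus_error(shared_sd):
    # Each "checker" reports truth + a bias common to everyone + their own
    # independent error; the consensus is the average of the reports.
    shared = rng.normal(0, shared_sd, size=(trials, 1))    # herd bias
    own = rng.normal(0, 1.0, size=(trials, n_checkers))    # independent error
    estimates = truth + shared + own
    return np.abs(estimates.mean(axis=1) - truth).mean()

independent = consensus_error(0.0)  # independent errors largely cancel
herd = consensus_error(1.0)         # the shared bias survives averaging
print(independent, herd)
```

[With purely independent errors the consensus error shrinks roughly as 1/sqrt(N); with a shared bias it plateaus at the size of that bias, no matter how many people "agree".]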

Even such a consensus does not override direct argument and evidence. Some biases are common, some errors are subtle, and everyone can miss something one person spots. But seeing evidence of a debate is worth at least something. But at the same time, individual argument and evidence can be flawed too. We are each of us fallible. Our brains all work in much the same way.

Which when you look at some other people, is a scary thought.

August 16, 2013 | Unregistered CommenterNiV

I don't expect perfection. I don't expect a couple of weeks part-time unpaid work with no access (as Phil Jones reports) to the data or calculations to have a hope of picking up more than the most superficial errors.

And I have seen peer review pick up far more than superficial errors. You go from the fact that it didn't pick up errors that you consider significant to the claim that it can't pick up errors that aren't superficial.

And I'm not damning it. It serves it's actual purpose admirably. I'm just saying it shouldn't be oversold as something it's not.

I agree. It shouldn't be oversold, and neither should it be undersold. Like anything else in this debate, it becomes a tool of the "motivated."

But it's a low bar, and I'd not try to claim that a piece of science had been 'checked' on the basis that I'd run it through the spell-checker.

It isn't the equivalent of a spell-checker merely because you equate the two. If you want to argue that it picks up on errors no more significant than misspellings, then to convince me, at least, you'd have to bring some evidence to the table. I have seen peer review pick up significant errors. Maybe my experiences are anomalous. The only way for me to know that is to look at some data. Barring that, considering that I don't see any reasons to believe that my experiences are unique, I would place my bet on your being in error.

No, I'm damning the entire peer-review process on the basis of the scientific community's reaction to it.

You are categorizing "the scientific community" on the basis of a limited set of examples. In doing so, you are excluding any reactions in the scientific community, in these examples and other similar examples, that don't match the pattern you're confirming.

Their reaction is not to correct the errors and the processes that allowed them to pass multiple layers of review,

I have seen that happen many times.

it's to make excuses and deny that there's a problem.

And I have seen that happen also.

And to carry on using and citing the evidence that is now known to be flawed. It's to say in effect "the error's don't matter".

Because I don't have the technical capability or smarts to judge the different arguments myself, I can't accept that as a definitive description. I have to suspend judgement in this situation. So what I am left with is that here, you are making unsupported claims (in your categorical evaluations of peer review). Of course, on the other side, I have seen (at least sometimes) an over-reliance on the "authority" of peer review. And I have seen fairly comprehensive analyses that have pointed to the rather high rate of flaws in peer-reviewed literature. And I have seen, first hand, where peer review produces sub-optimal or even counterproductive outcomes. It all goes into the hopper, and what I come out with is that peer review likely produces better results than we'd have without it, and that those who are tribally-affiliated use peer review as just one more arrow in their quiver of motivated reasoning.

In an industrial process, if you discover a quality control failure has passed flawed products, you not only fix the quality control, you also go back and re-check all the products that passed the flawed process.

I see this as an idealized concept of quality control in industry. There is certainly a measure of truth, but it is a vast over-generalization. There are many exceptions to the process working as you described. IMO, the notion of "engineering quality" quality control has become another arrow.

The basic problem is that many climate scientists are sloppy. They're careless and lax. They lack rigour. And it's not by the use of such methods that science won its reputation for reliability.

Scientists, like engineers, and anyone else, have always been sloppy, careless, and lax. They have always lacked rigor. Science has "won its reputation" despite all of those flaws, presumably because in balance, it produces outcomes that outweigh those flaws.

What will damage the reputation of science is when disputants use science as an arrow. But we'll get over it. It isn't something new.

August 16, 2013 | Unregistered CommenterJoshua

...So: how do they explain why their view of what the best evidence on climate science is rejected by so many of their reasonable fellow citizens?...

My rejection of the status quo began early in the game, around 2002. I saw Steve McIntyre's claim that Michael Mann's hockey-stick graph - being touted as a critical proof that unprecedented human-caused global warming was happening - was wrong.

I would normally have believed the hockey-stick paper: it was published in Nature. But it seemed odd to me that it did not show a Medieval Warm Period (MWP) at all. So I took time to look at McIntyre's maths. And he was right.

Maths is not a subject that depends on the reader's views or attitudes. It is either right or not. McIntyre's criticisms of the off-centered PCA calculation were provably right. And yet Mann, and Nature, did not respond to that criticism. Instead, they were trying to suppress McIntyre's work and deny him any publication.
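
[The centering complaint is easy to state in code. Below is a schematic sketch with synthetic random-walk "proxies", not a reproduction of either side's actual calculation: conventional PCA centers each series on its full-record mean, while the disputed method centers on the calibration period only, which leaves every other row with a nonzero offset.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series, calib = 200, 50, 50  # invented dimensions

# Synthetic "proxies": persistent random walks with no climate signal.
data = np.cumsum(0.1 * rng.standard_normal((n_years, n_series)), axis=0)

# Conventional PCA centers each series on its full-record mean...
full_centered = data - data.mean(axis=0)

# ...whereas short-centering subtracts only the calibration-period mean.
short_centered = data - data[-calib:].mean(axis=0)

# Series whose calibration-period mean differs from their long-run mean
# keep a large offset everywhere else, so the leading component
# preferentially loads on series with big end-of-record excursions.
pc1_loadings = np.linalg.svd(short_centered, full_matrices=False)[2][0]
```

[Run on pure red noise, the short-centered PC1 tends to acquire a hockey-stick shape, which was the substance of the criticism.]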

At that point I became pretty sure who was doing science, and who was trying to pull a scam...

August 16, 2013 | Unregistered CommenterDodgy Geezer

Dan, I doubt you really mean best, as that term is normally used, which is for the highest member of a serial ranking. You would be excluding the second best evidence, and the third best, etc. Perhaps you mean all the good evidence, but that is already being done.

More deeply you seem to exclude the obvious explanation for the debate, which is that the good evidence is inconclusive, so different people make different judgements. What you call cultural factors are merely different demographic categories. Categories do not cause behavior, they merely describe it.

The idea that everyone looking at the same evidence should draw the same conclusions is what I call the Lockean fallacy. The relative weight of evidence is a matter of personal judgement, based on all one knows and believes.

August 16, 2013 | Unregistered CommenterDavid Wojick

He says, if I understand him correctly, that he trusts the experts, but is that a purely communitarian thing?

I don't really buy that taxonomy, but even if I did, trust in the experts (and I'm not saying that is an accurate characterization of what Dan described - in fact I think that it isn't) doesn't seem to me to be a behavior that is disproportionately characteristic of communitarians. I think that what differs are the experts they trust - but maybe Dan has data that show otherwise.

Does Dan trust climate scientists because they share the same cultural tendencies as him?

Obviously, I can't speak for why Dan does or doesn't trust anyone, and I don't know how he'd answer that question about the process for others, but IMO....there is little doubt that the identity of the experts people tend to trust is correlated to political identification. The causal mechanism is somewhat more complicated than what you describe, IMO.

August 16, 2013 | Unregistered CommenterJoshua

Joshua – peer review cannot possibly spot errors if data and code are neither asked for nor supplied.

Engineering practice may not always be ideal but then you have to ask yourself if CAGW isn’t worthy of the very best standards applied to all businesses, especially those impacting heavily on human safety like pharmaceuticals or the nuclear industry? Those important fields don’t rely on trust, they have very stringent procedures, monitoring, policing and countless other fail safes to ensure human error is as small as possible. Climate science is barely on the first rung of the ladder. You’d almost think CO2 wasn’t important.

August 16, 2013 | Unregistered CommenterTinyCO2

@ David Wojick:

I mean "best" as in "all valid" -- in a sort of loose or heuristic Bayesian sense of everything that one would aggregate to try to form a more accurate estimate of the probability of a hypothesis (indeed, any usage of "best" that contemplates one should use only the "most probative" rather than "all valid" evidence would to me reflect a defective understanding of how to engage in reasoned decision-making!).
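
[Dan's "loose or heuristic Bayesian sense" can be made concrete with a toy odds calculation. The likelihood ratios below are invented; the point is only that every valid piece of evidence, however weakly probative, belongs in the aggregation.]

```python
# Odds form of Bayes' rule: each piece of evidence multiplies the odds
# on the hypothesis by its likelihood ratio P(E | H) / P(E | not H).
prior_odds = 1.0  # 50/50 before seeing any evidence

# Invented ratios: values > 1 favour the hypothesis, < 1 cut against it.
likelihood_ratios = [3.0, 1.2, 0.8, 2.0]

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr  # even "weakly probative" evidence counts

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)
```

[Dropping the "less probative" items (the 1.2 and the 0.8) would change the answer, which is why "best" here has to mean "all valid" rather than "most probative only".]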

Using it in that sense, moreover, I certainly don't mean to be assuming that the "best evidence" is necessarily "settled" or "conclusive" on some proposition. Indeed, the idea that the state of any hypothesis is ever "settled conclusively" by empirical evidence is a sad if common misunderstanding of science. It is often the case that the "best evidence" supports the inference that there is a high degree of uncertainty surrounding some proposition -- like, say, whether permitting citizens to carry concealed weapons in public increases or decreases crime. People can have positions on policy issues that reflect that conclusion as the one supported by the "best available scientific evidence" -- why not?

Judith Curry posted an interesting response to my last post that seems to reflect a misunderstanding of my position in this regard. I plan to post something drawing people's attention to her post & offering some observations about it.

August 17, 2013 | Unregistered Commenterdmk38

@Tiny, @NiV @Joshua @Cheryl @ et al:

The only "peer review" that I myself view as particularly reliable is the one that occurs after a study has been published (and "published" as in made public in any particular way). That sort literally never ends; it only seems that way about the kind that happens before publication.

August 17, 2013 | Unregistered Commenterdmk38

Dan, your use of the term "best" is so nonstandard (best never means all) I have to wonder if you do this a lot. It may explain why I find your stuff very hard to understand.

As for the rest you have not responded to my points. You seem to be claiming that there is some general failure of reasoning on the skeptics side but I find no evidence of that and I study the logic and epistemology of the debate. The fact is that different completely reasonable people can look at the same body of evidence and come to opposite conclusions, especially if the evidence is complex as it certainly is in the case of climate change. That they do this is no evidence of any failure of reasoning.

August 17, 2013 | Unregistered CommenterDavid Wojick

Dan writes:

The only "peer review" that I myself view as particularly reliable is the one that occurs after a study has been published (and "published" as in made public in any particular way). That sort literally never ends;

I think few would disagree, Dan. But how does one square this with the noble tradition of the "gold standard" IPCC "assessment" process?

This "inclusive, transparent" process often results in the inclusion of supporting citations derived from material that is so new that this post-publication "review" has not even begun (as I had observed here and more recently here) Well, except via the blogs, which the IPCC's new, improved rules have specifically excluded from their list of "acceptable sources of information for IPCC reports".

In light of the above, would you say that the IPCC's reports are based on "best available evidence" (whatever your definition and/or criteria might - or might not - be)?

August 17, 2013 | Unregistered CommenterHilary Ostrov

@Hilary:

I don't know as much as I should about all the IPCC processes. Moreover, I have observed outstanding climate scientists -- including ones who have participated in the IPCC process & who agree largely w/ the positions it has generated -- complain about one or another aspect of it, or one or another conclusion that the assessments have reached.

But I tend to believe that the process it involves is geared toward the sort of aggregation that would promote discernment & use of the best available evidence on climate by many actors -- from nat'l govts to local ones to commercial entities & to scientists in various related fields trying to figure out how to extend knowledge by *challenging* existing positions & findings.

But in response to your earlier, quite reasonable question, I wrote an entire blog post on why the question just asked me isn't relevant to the question I asked you & others to help me try to sort through.

On why we not only can have a common discussion about "the science communication problem" -- persistent public conflict in the face of the 'best available evidence' -- despite disagreeing about climate change or the IPCC etc.; but on why we have a common interest in doing so precisely b/c of the intensity & persistence of disagreement among reasonable people about what the best available evidence is.

I've explained in connection w/ @David's post that I don't think "best available evidence" has to mean evidence that dispels uncertainty; it might show that a particular matter is deeply uncertain -- that no confident conclusions of the sort that could reasonably inform policy are even possible etc.

Having done that, I am confident that I will find few skeptics who would say, "My position isn't based on what I see as the 'best evidence'; I just have a position all my own!" etc They are "skeptical" based on their assessment of the best evidence they can find.

What's puzzling to me is why so many ordinary people who are doing that are reaching results so intensely at odds with one another. Normally that doesn't happen; the number of issues where people who disagree about climate agree about matters informed by scientific evidence (including issues on which the answer from science is "not clear") is orders of magnitude larger than the ones that divide people in the way climate change does.

I was curious whether skeptics think about *that* problem; it has to be the same problem for them as it is for a nonskeptic -- why there is a lack of convergence in the face of evidence that informs a confident position on their own part (even if the confidence is in "uncertainty" etc.).

But I am going to think about other things for a while. After about 500 requests that have resulted in nonresponsive answers I get it: I'm not going to get any enlightenment by continuing to try to explain why the answers I'm getting are nonresponsive.

Oh-- just so you don't think (or maybe just to make it take 15 seconds longer for you incorrectly to conclude) that I'm saying that my assessment of the run of the responses reflects a view that skeptics are stupid or some such rubbish, go back & *look* at the post you responded to in the first place, where I asked whether skeptics are as aggressively resistant to thinking about evidence on how people form their views here as so many nonskeptics are! There really is so much those on both sides have in common, as I said.

August 17, 2013 | Registered CommenterDan Kahan

@David:

I don't see where I claimed a failure of reasoning on the skeptics' side -- certainly not on climate ... It's odd b/c this whole discussion started in the last post w/ my pointing out that "failures of reason" are a weak explanation for the existence of persistent public conflict on climate -- whether offered by nonskeptics or skeptics. In any case, I've done multiple studies that examine the hypothesis that "defects in reason" account for the controversy & they all suggest the answer is no.

It's only a guess on my part, but I think the number of skeptical commentators in this thread & the one after the last post who have treated my questions as if I were asking "what is wrong w/ skeptics? Why don't they believe the 'best available science' -- i.e., the 97% consensus etc." shows just how strongly preconceptions can shape what people make of what they are reading. It's assumed that I'm attacking climate skeptics -- so I get a barrage of the usual mortar shells of factoids and criticisms of climate scientists dumped on my head. But since I didn't say anything that turned on whether skeptics are right or wrong, this response was just plain irrelevant.

As for "standard" terms, I'm not sure where the standard you are used to comes from. I've tried to explain what I meant; I might try again if the "best evidence" -- however anyone could choose to define it -- weren't so overwhelming that you are implacably motivated not to understand a word I'm saying b/c you are convinced I disagree w/ you about climate change.

August 17, 2013 | Registered CommenterDan Kahan
