
Saturday, January 9, 2016

Weekend update: the anti- "fact inventory conception of science literacy" movement is gaining ground on Tea Party & Trascism [Trump+Fascism]; to eclipse them, only thing it needs is a catchier name!

A friend pointed me toward this really interesting article:

The nerve of the piece is a critique of the "fact inventory" conception of science comprehension that informs the NSF's Science Indicators battery:

The bigger issue, however, is whether we ought to call someone who gets those questions right “scientifically literate.” Scientific literacy has little to do with memorizing information and a lot to do with a rational approach to problems....

[T]he interpretation of data requires critical thinking.... Our schools don’t train people to be vigilant about avoiding errors such as confounding correlation and causation, however, nor do they do a good job of rooting out confirmation bias or teaching the basics of statistics and probabilities. All of this leads to the propagation of a lot of nonsense in the press and internet, and it leaves people vulnerable to the flood of “facts.”

It’s not possible for everyone—or anyone—to be sufficiently well trained in science to analyze data from multiple fields and come up with sound, independent interpretations. I spent decades in medical research, but I will never understand particle physics, and I’ve forgotten almost everything I ever learned about inorganic chemistry. It is possible, however, to learn enough about the powers and limitations of the scientific method to intelligently determine which claims made by scientists are likely to be true and which deserve skepticism. . . . Most importantly, if we want future generations to be truly scientifically literate, we should teach our children that science is not a collection of immutable facts but a method for temporarily setting aside some of our ubiquitous human frailties, our biases and irrationality, our longing to confirm our most comforting beliefs, our mental laziness. Facts can be used in the way a drunk uses a lamppost, for support. Science illuminates the universe.

Wow.

For sure I couldn't have said this better.  Anyone can confirm this for him- or herself by reviewing the various posts I've written criticizing the "fact inventory" conception of science literacy and defending an "ordinary science intelligence" alternative that features the types of critical reasoning proficiencies essential to recognizing and making use of valid scientific evidence.

Maybe I'm jumping the gun, but I hope this thoughtful and reflective article is a harbinger of more of the same, and the beginning of a wider discussion of this problem.

If I have any quibble with Teller's argument, though, it is over what the nature of the problem actually is.

Teller starts with the premise that the U.S. public has a poor comprehension of science and attributes this to the "fact inventory" conception of science literacy.

She might be right-- but I'm not sure.

I'm not sure, that is, that the American public's science comprehension is as poor as she assumes it is. The reason I'm not sure is that I don't think we've been assessing the general public's science comprehension with a valid measure of that capacity -- one that features critical reasoning proficiencies rather than a "fact inventory"!

Developing a public science comprehension measure focused on the reasoning proficiencies that Teller convincingly emphasizes has been one focus of CCP research over the last few years.  The progress made so far in that effort is reflected in the current version, "2.0," of the "Ordinary Science Intelligence" assessment test (Kahan in press).

As discussed in previous posts, OSI_2.0 doesn't try to certify respondents' acquisition of any set of canonical "factual" beliefs. 

Instead, it uses quantitative and critical reasoning items that are intended to assess a latent or unobserved disposition suited for recognizing and making appropriate use of valid empirical evidence in one's "ordinary," everyday life as a consumer, a participant in today's economy, and a democratic citizen.
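To make the idea of scoring a "latent or unobserved disposition" concrete, here is a minimal sketch (my illustration, not the CCP scoring code) of the kind of item-response-theory scoring typically used for scales of this sort. The item parameters and the response pattern are invented for illustration; the actual OSI_2.0 procedures are described in Kahan (in press).

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical item parameters for four right/wrong test items:
    # discrimination (a) and difficulty (b). Made-up numbers, for illustration.
    a = np.array([1.2, 0.8, 1.5, 1.0])
    b = np.array([-0.5, 0.0, 0.7, 1.4])

    def neg_log_likelihood(theta, responses):
        """Negative log-likelihood of a response pattern under a
        two-parameter logistic (2PL) model, given latent ability theta."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(correct | theta)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    # A respondent who gets the two easier items right, the two harder wrong.
    responses = np.array([1, 1, 0, 0])
    fit = minimize_scalar(neg_log_likelihood, args=(responses,),
                          bounds=(-4, 4), method="bounded")
    print(f"estimated latent score (theta): {fit.x:.2f}")

The point of such a model is exactly the one in the text: no single item is treated as a certification of "knowing a fact"; the whole response pattern is evidence about an underlying reasoning disposition.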

Since at least 1910 (my memory is hazy for events earlier than that), when Dewey published his famous "Science as Subject-Matter and as Method," the idea that science pedagogy should be focused on cultivating the distinctive reasoning proficiencies associated with making valid inferences from reliable observations has exerted a powerful force on the imaginations and motivations of a good number of educators and scholars (today I think of Jon Baron (1993, 2008) as the foremost champion of this view).

One thing they've learned is that imparting this sort of capacity is easier said than done!

But in any event, they are right -- as is Teller -- that this kind of thinking disposition is the proper object of science education.

The much more pedestrian point I find myself making now & again is that we really don't have a good general public measure of this capacity -- and so aren't even in a good position to figure out how well or poorly we are doing in equipping citizens with it.

Necessarily, too, without such a good measure, we won't be as smart as we ought to be about what contribution defects in science comprehension are making, if any, to public controversies over climate change, nuclear power, the HPV vaccine, and other issues that turn on decision-relevant science.

Teller cites the 2012 CCP study that found that higher science literacy is associated with greater polarization, not less, on climate change risks (nuclear power ones too).

I think that study helps to show that this sort of conflict is not plausibly attributed to defects in science comprehension. Precisely b/c I and my collaborators agree with Teller that a "fact inventory" conception of "science literacy" is defective, we used a science comprehension measure-- OSI_1.0-- that combined certain NSF Indicator "basic fact" items with a Numeracy battery, which has been shown to be highly effective in measuring the capacity of ordinary members of the public & others to reason well with quantitative information. 

People who scored high on that critical reasoning measure still polarized on climate change.

And the same is true of people who score the highest on even the reasoning-proficiency-centered OSI_2.0:


Most people, sadly, don't know very much about the science of climate change.

But the few who actually can reliably identify its causes and consequences (as measured by version 1.0 of the "Ordinary Climate Science Intelligence" test, an assessment based on "climate science literacy" items drawn from NASA, NOAA, and the IPCC) are also the most politically polarized on the question of whether human activity is the principal cause of climate change -- or indeed on whether climate change is happening at all (Kahan 2015a).

That evidence has led me to conclude that the conflict over climate change (not to mention numerous other disputed issues of science) isn't about what people know.  It is about who they are: the "beliefs" people form on these issues are ones suited to helping them form affective orientations toward these issues that effectively signal their membership in & loyalty to groups embroiled in a nasty form of cultural status competition....

That problem isn't being caused by any deficiency in science education in this country.

On the contrary, that problem is preventing our democracy from getting the benefit of whatever scientific knowledge & reasoning capacity we have managed to impart in our citizens.

If we want enlightened democracy, we better figure out how to extricate science from these sorts of ugly, illiberal, reason-eviscerating forms of cultural conflict (Kahan 2015b).

Of course, these are provisional conclusions, informed by what I regard as the best available evidence.

But the best evidence available definitely isn't as good as it should be for exactly the reason that Teller describes so articulately: we don't possess as good a measure of public science comprehension as we ought to have.

This is how I put it at the end of “Ordinary Science Intelligence”: A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change:

The scale development exercise that generated OSI_2.0 is offered as an admittedly modest contribution to an objective of grand dimensions. How ordinary citizens come to know what is collectively known by science is simultaneously a mystery that excites deep scholarly curiosity and a practical problem that motivates urgent attention by those charged with assuring democratic societies make effective use of the collective knowledge at their disposal. An appropriately discerning and focused instrument for measuring individual differences in the cognitive capacities essential to recognizing what is known to science is essential to progress in these convergent inquiries.

The claim made on behalf of OSI_2.0 is not that it fully satisfies this need. It is presented instead to show the large degree of progress that can be made toward creating such an instrument, and the likely advances in insight that can be realized in the interim, if scholars studying risk perception and science communication make adapting and refining admittedly imperfect existing measures, rather than passively employing them as they are, a routine component of their ongoing explorations.

Not as articulate as Teller-- but the best I can do! 

And hey-- if my best motivates others who can do a better job still, then I figure I'm doing my part.

References

Baron, J. Why Teach Thinking? An Essay. Applied Psychology 42, 191-214 (1993).

Baron, J. Thinking and Deciding (Cambridge University Press, New York, 2008).

Dewey, J. Science as Subject-Matter and as Method. Science 31, 121-127 (1910).

Kahan, D.M. "Ordinary Science Intelligence": A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res. (in press).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Kahan, D.M. What Is the "Science of Science Communication"? J. Sci. Comm. 14(3), 1-12 (2015b).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

 


Reader Comments (9)


Dan, IMO there is another, possibly larger factor beyond lack of critical thinking skills, which is that almost none of the public, and probably fewer scientists than we would like, understand the social and institutional aspect of science, and hardly anyone understands that this is going on. Never mind that you don't understand relativity. Virtually all "knowledge" in the vast majority of the population of things like "water is H2O" or "atmospheric oxygen is O2" (atoms traveling in pairs) is taken on faith.

The public's understanding of science is massively social, but so is most of the understanding of science that scientists have. Scientists know their neck of the woods, but for other matters go to reference books, or tables, or find the right expert. Their judgement of who is expert is based on things like (1) prestige, (2) the quality of reasoning in the small parts that they understand of papers by people in vastly different specialties, and (3) judgement of character: is this person careful in making judgements, does he or she have integrity, or is he or she a first-class slime ball?

I highly recommend the 1985 paper "Epistemic Dependence" by John Hardwig. One of his most striking points is how a little thing like trying to measure the lifespan of charm particles may result in a paper with 50 or more authors, no one of whom knows from personal certainty how the whole argument goes. Somewhere in the discussions of such issues is a piece on how WWII U.S. battleships managed to aim their guns correctly to hit Japanese battleships. It goes something like this:
1) Someone on radar measures with great exactitude the distance to this ship, and this and other data are sent to a calculation room somewhere on the ship.
2) The calculators use, I believe, tables painstakingly generated by landbound computers, probably with some hand calculation and/or slide rule virtuosity, and generate another set of numbers.
3) The gunners use the numbers to adjust the angle of the gun, which generates the correct trajectory for hitting the target.

If you think about it, it's a bit like three different "Chinese Rooms" where people are following cookbook procedures. So where is the knowledge of how to hit the target?

If (analytical) epistemologists faced facts they'd have to admit that almost nobody "knows" anything according to their ideas of "knowing". And where does that leave us?

In climate science, where so many different kinds of information from so many different specialties have been applied to the problem, it is also a kind of knowledge that is only fully present in hundreds of minds and thousands of papers. Many of them no doubt contain mistakes. Unfortunately many people (including some high scorers on your tests, who pride themselves on their knowledge of logical fallacies) dismiss this as "groupthink".

The account, in Miriam Solomon's Social Empiricism, of the growing acceptance of plate tectonics by scientists in different fields, only as evidence appeared in their own field (summarized in "Global Warming and the Controversy: What is Scientific Consensus? Continental Drift as Example"), provides a very different picture, in which consensus is anything but groupthink.

January 11, 2016 | Unregistered CommenterHal Morris

"IMO there is another, possibly larger factor beyond lack of critical thinking skills, which is that almost none of the public, and maybe probably fewer scientists than we would like understand the social and institutional aspect of science, and hardly anyone understands that this is going on."

Agreed.

"Scientists know their neck of the woods, but for other matters go to reference books, or tables, or find the right expert. Their judgement of who is expert is based on things like (1)prestige, (2)the quality of reasoning in the small parts that they understand of papers by people in vastly different specialties, and (3) judgement of character is in this person is careful in making judgements, or has integrity or is or isn't a first class slime ball."

Scientists (those that understand how the scientific method works) don't rely on criteria like prestige or character judgements. They use criteria related to how often and how thoroughly a result has been checked - preferably by hostile critics motivated to find flaws in it. Science gains credibility by surviving criticism - it's like evolution by natural selection that way. If you know that a particular theory - like the second law of thermodynamics - has come under determined and ingenious attack thousands of times and survived every one of them, you can rely on it as pretty solid. You can rely on college textbook derivations because you know thousands of students and lecturers have gone through the arguments in detail. You can rely on theories like special relativity, because you know that everyone who has ever come across it has initially spent many hours trying to find reasons why it's got to be wrong - not just for the glory of beating Einstein, but because it just looks so crazy!

Science deals with our cognitive blindspots and human fallibility by means of systematic scepticism. If a possible plausible counter-argument to any theory is found, or even suspected, the theory is considered 'disputed' and has to be checked before proceeding to use it. Likewise if a theory is new and has not been thoroughly tested yet. (Most academic journal papers are in this position.) Only after it has been thoroughly tested and no surviving counterarguments are known can it be 'trusted', and used by scientists without them having to personally verify the chain of reasoning supporting it themselves.

There is a huge amount of science that *has* been so tested. Scientists rely on institutional and social measures to know what parts this applies to. Unfortunately, the teaching of science by authority used in schools has permeated many of the college-educated scientists too, who ought since to have been taught otherwise, and many professional scientists do extend their trust to other sources - investing their belief, as you say, in the prestige and authority of the source. Argument ad Verecundiam, as Locke called it. Some even go so far as to subscribe to consensus, as if science were subject to a popularity contest!

Epistemic dependence is a fact of life; but it has consequences. If each step in a chain of reasoning independently has a 95% chance of being correct, then you can chain together 13 steps into an argument before the overall probability of correctness falls below half. If each step has a 99% chance of being correct, then you can manage arguments of about 69 steps before the same thing happens. The higher your confidence in the individual steps, the more complex and extensive a chain of reasoning you can construct from them. Conclusions at the frontiers of modern science probably rely on hundreds or even thousands of individual steps. (Mathematics is even worse.) While the non-zero probability of error is not fatal - we can proceed quite a long way despite it - it does set firm limits to what we can achieve, and the standards of evidence that science requires for 100-step chains is far stricter than the much shorter chains of reasoning needed by people in everyday life. People are vaguely aware that scientists tend to be a bit more persnickety about precision and correctness than most people, but probably don't realise why, or know the extent of it.
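To make the arithmetic explicit (a minimal sketch in Python; mine, not part of the original comment): if each step is independently correct with probability p, an n-step chain is correct with probability p**n, which stays at or above 1/2 only while n <= ln(0.5)/ln(p).

    import math

    def max_steps(p, threshold=0.5):
        """Longest chain of reasoning whose overall probability of
        correctness (p**n, steps assumed independent) stays >= threshold."""
        return math.floor(math.log(threshold) / math.log(p))

    for p in (0.95, 0.99):
        n = max_steps(p)
        print(f"per-step reliability {p}: {n} steps (overall {p**n:.3f})")
    # Prints 13 for p=0.95 and 68 for p=0.99: the chain dips below even
    # odds at the 14th and 69th steps respectively, matching the figures above.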

Dan's survey question "There is solid evidence of recent global warming due mostly to human activity such as burning fossil fuels" is the right question to ask. It doesn't ask about experts and their prestige. It asks about evidence.

Do you know what the evidence is? Do you know if it has been checked? Do you know that any results found to be unsupported would have been eliminated from the canon? Do you know if anyone has proposed any counter-arguments or problems that haven't yet been answered? Is the theory 'disputed'? These are the sort of questions anyone should be able to ask. You might have to take some answers on trust, but you can go a long way even without any specialised scientific training just by asking this sort of question.

What is the solid evidence that recent global warming is due mostly to human activity such as burning fossil fuels? Is it:
A) The average global surface temperature has risen, and nobody knows of any other reason why it should.
B) The rising 'ordinary least-squares' trend line calculated through the temperature observations is 'significant' according to the standard textbook statistical test for trend lines.
C) Existing climate models don't match reality when CO2 rise is set to zero, but do when CO2 rise is set to that observed.
D) The uncertainties in all the lines of empirical evidence proposed so far cannot be fully quantified, and so expert judgement by the IPCC's climate scientists is ultimately relied upon.
E) Something else.

I doubt there are many people on either side of the political divide who could correctly answer this question. I think quite a few scientists would struggle, too. (Although actually you don't need any more than some simple critical thinking skills and a knowledge of logical fallacies and arguments to eliminate most of the options. Deep technical knowledge isn't required.)

People on both sides (many scientists included) are simply taking their position on this question on faith. It is therefore perfectly natural that they should believe only the experts they have faith in, and fall into line with their belief communities. What possible alternative do they have, if they don't know what the evidence actually is and they're not able to use the scientific method itself?

January 11, 2016 | Unregistered CommenterNiV

@NiV & @HalMorris--

I definitely agree that people should learn how institutions of science work. For one thing, even as proficient "science reasoners" (or just "reasoners"), they will still, as even NiV has now been brought around to concede(!), be obliged to "take it on the word" of those who know what science knows on a great many things if they are to get the benefit of what science knows!

So part of what we have to teach the "science literate" citizen -- part of her "ordinary science intelligence" -- will be a form of perception well calibrated to identifying who knows what science knows & who doesn't; who knows something that has the pedigree that makes it entitled to be treated as known (provisionally, of course) b/c it is based on observation & reason & not on authority (the best understanding of nullius in verba; if you want a good chuckle from someone who felt exasperated enough to have to explain why no one can do a "title search" for that pedigree -- but rather just has to be able to *see* it, based on conventions that guide such perception -- read Popper's wonderful lecture On the Sources of Knowledge and of Ignorance).

When I see people fretting about how to cultivate public "trust" in science, I am unimpressed.

First, people *do* trust science & scientists -- immensely. W/o a 2d thought, they put their lives in the hands of those who are using science literally hundreds of times a day.

But second & more importantly, the capacity they need to recognize valid science & give it proper effect is not *trust*! It is reliable, warranted discernment of the entitlement of a science communicator to be credited; entitled b/c he or she is transmitting information that has in fact been generated in the way science demands: by disciplined observation, reliable measurement & valid inference.

What we should be figuring out, as "scientists of science communication," isn't how to "make the public trust" scientists.

It is *how* the public generally *does* manage reliably to identify those communicators of science in whom it genuinely makes sense for them to repose their confidence. We can make sure to build into science education -- i.e., "thinking" education -- the proper instruction to sharpen and perfect that capacity.

Then we should be figuring out how to create a science communication environment in which those "ordinarily science intelligent" people can trust their own perceptions -- that is, one that isn't filled with the sort of science communication pollution that ought to make everyone worry about the reliability of his or her own perception that she knows what science knows.

Finally, we should then be insisting that those who communicate -- including those who produce what's communicated -- genuinely possess the qualities that make them worthy of being identified as worthy of trust by people who are exercising *that* critical form of rationality.

Thank you very much, @HalMorris, for the link to the Hardwig paper!

[BTW, @NiV, you know that I don't think the "belief in" global warming item is a valid indicator of any form of science comprehension; not b/c it isn't true or isn't something that has been communicated by scientists & others who know what science knows, but b/c it just doesn't measure anything having to do w/ science comprehension. It measures a latent form of identity -- & that's that.]

January 11, 2016 | Registered CommenterDan Kahan

@Dan can you please delete the first version of my comment, leaving only the corrected one? And if it's convenient, edit out the line explaining the correction.

@NiV - IMO you've illustrated many of the points I was making. Unfortunately, rather than feeling vindicated, I just feel very very tired. If you're curious enough to read http://therealtruthproject.blogspot.com/2014/11/global-warming-and-controversy-what-is.html and respond showing some understanding of the points I'm making, maybe we can have a real conversation.

@Dan - what do you think? Should I just shoot myself?

January 11, 2016 | Unregistered CommenterHal Morris

@HalMorris--no, don't shoot yourself.

Maybe spend the rest of the day thinking about David Bowie & how lucky we were to get to listen to his cool music.

January 11, 2016 | Registered CommenterDan Kahan

@Dan, I think I'm inclined to agree with Larry Laudan that science and other matters are much more on a continuum, which is to say he wasn't too interested in the "demarcation problem" (science vs. non-science) -- a position that really annoyed rationality proponent Massimo Pigliucci, who made Laudan's position something to counter in his book Philosophy of Pseudoscience. As you probably know, Laudan, who has collaborated with Alvin Goldman's Social Epistemology project, has been spending more and more time on legal reasoning and less on "philosophy of science".

As I've tried to say mostly elsewhere, if a community of people claiming to be putting together an understanding of X really are connecting and engaging with a tractable domain suitable for creating a scientific discipline, one sign that this is happening is the appearance of methods not seen in previous scientific discovery -- methods that reflect the structure of the family of phenomena they are encountering (i.e., paradigms or research programs). This is just the sort of thing that frustrates attempts to give a simple account of what the "scientific method" is. Ironically, psychology, as an example, at least up to the 80s, with its static methodology of target and control groups and "yes or no" hypotheses that anyone can understand (though nearly always in terms of probabilities and statistical tendencies, which means people don't really understand them), illustrated with this seeming rigor of method that it had not come very far in terms of really grappling with a tractable domain. In at least one case where I was debating someone who had this simple "scientific method" idea and was maintaining that climate scientists didn't follow it, he had gotten his idea of the "scientific method" from a psychology professor decades ago.

It seems to me there is a "smell test" -- i.e., something as distinct yet difficult to characterize as a smell (though the challenge is to get past that and characterize it) -- by which I distinguish someone genuinely grappling with phenomena and trying to get at the truth (even if it is clear that they are presently leaning in some direction) from someone trying to sound rigorous or scientific but who treats speaking as a means to produce a certain result in his/her audience -- i.e., someone who is in effect a salesman or propagandist. And one thing that keeps goading me on is frustration at people who gravitate to the propagandist and see him as quite sincere and genuine, perhaps impressed by his energy and consistency, rather than avoiding that person and gravitating toward those more ready to change their thinking as the known facts change. Two non-scientists who impress me in the latter way are Jon Ronson (hard to explain if you don't know who he is) and Ira Glass. A onetime guest on Glass's show ("This American Life") presented a great illustration of something Ira probably considers quite real and worth exposing -- extreme exploitation of workers by Chinese factories working for American companies -- but the guest made up poignant encounters with people based on things he'd only heard about, and Glass went so far as to bring that guest back on the program for a pretty intense grilling and exposure of his "creative journalism".

January 11, 2016 | Unregistered CommenterHal Morris

"@NiV - IMO you've illustrated many of the points I was making. Unfortunately, rather than feeling vindicated, I just feel very very tired."

Yes, both sides feel much the same way. I felt my own views on the debate had been vindicated by your championing of "prestige" and "judgement of character: is this person careful in making judgements, does he or she have integrity, or is he or she a first-class slime ball" as ways to determine which position is to be preferred in a scientific controversy. It's tempting to give up. Nevertheless, I cannot praise a fugitive and cloistered virtue, unexercised and unbreathed, that never sallies out and sees her adversary, but slinks out of the race where that immortal garland is to be run for, not without dust and heat. The race goes on.

"If you're curious enough to read [...] and respond showing some understanding of the points I'm making, maybe we can have a real conversation."

What points are you making?

You say: "The papers I've received from non-believers in global warming have generally looked flawed to me." What papers? What flaws? It's hard to comment on your reasons or judgement without knowing.

You say "The author, certain that he has made a great demonstration of something may claim his work singlehandedly disproved AGW, and might -- I think only if out of touch with the real science community, be convinced that scientists have become corrupt, and are no longer working according to "real" scientific principals, but I see no sign of that." What evidence did they present of that corruption? Again, I can't comment on whether your statement that you "see no sign of that" is reasonable if I don't know what evidence you've seen and rejected.

Your main point seems to be that just because science is seen as controversial, and just because scientists have come to an agreement over a period of time and called it a 'consensus', doesn't mean it is wrong, or unscientific groupthink. I agree. I would not come to the conclusion that there is a problem with climate science lightly, or without evidence.

So that I can get an idea of whether a conversation is worth pursuing, could I ask that you have a quick look at the file known as Harry_read_me and let me know your reaction to it? It's quite long, but you should get the idea quite quickly by skimming it for the commentary. (My favourite section is the bit about the "nuclear option", but there are plenty more gems.)

I've come across people who genuinely see nothing wrong with doing science this way. I was told once that academics having to maintain the rapid rate of progress at the cutting edge of science, they didn't have the time or training to mess about with things like 'software quality'. They had to move on to the next project, the next big paper, without the luxury of being able to tidy up old code.

Are you such a person? Do you think science should be done this way? Can you look at Harry's development notes and still see "no sign" of a problem?

Because if so, I've no idea what climate science would have to look like such that you could see a problem.

Science always has to be falsifiable - there has to be some specific set of observations that, if seen, would lead you to reject the hypothesis. If we can work out what those criteria are for you in the case of climate science, we might indeed be able to start a conversation.

January 12, 2016 | Unregistered CommenterNiV

NIV:


could I ask that you have a quick look at the file known as Harry_read_me and let me know your reaction to it? It's quite long, but you should get the idea quite quickly by skimming it for the commentary. (My favourite section is the bit about the "nuclear option", but there are plenty more gems.)

I've come across people who genuinely see nothing wrong with doing science this way.


I'm getting no impression from a "quick look", and can't tell what is meant by "nuclear option". Do you want to explain that, and indicate any other particular places to look?

My impression is that this has to do with the "hockey stick", as if that is the basis on which the consensus was formed. One thing I'd like -- and it is the sort of thing Google is not set up to find without great difficulty -- is any list of papers such as the one Oreskes used as a basis for citing the "97% consensus", and perhaps also lists that have been compiled to counter lists said to support the consensus. My curiosity is to get a sense of the variety of angles from which people were able (if we believe them) to demonstrate something weighing in the balance in support of AGW.

A project to try to produce one reliably sourced, clean and tidy piece of data that will seem to carry the whole argument for the benefit of the public might just be a bad idea. Assuming the worst case, which it seems you believe -- that that project was done in bad faith -- I wonder: do you think the "hockey stick" is part of scientists talking themselves into a consensus? Please answer if you can whether that would be typical of how members of the "skeptic" community think; I would seriously like to know. I imagine most researchers would be skeptical of such an attempt to roll many studies into one and produce one such tidy artifact, and would only be swayed by far more transparent (to those with the requisite knowledge) and cohesive studies.

I can't answer the question of what I think of "doing science this way", however, unless you clarify what "this way" is.

January 12, 2016 | Unregistered CommenterHal Morris

"I'm getting no impression from a "quick look", and can't tell what is meant by "nuclear option"."

It was an indirect suggestion to do a text search for the words "nuclear option"...

"My impression is that this has to do with the "hockey stick" as if that is the basis on which the consensus was formed."

No, this is to do with a database of climate data called CRU TS 2.1, which was published in the peer-reviewed literature as Mitchell and Jones 2005, and whose data were used in other papers and reports, including the IPCC summary reports. It therefore passed journal review, review by the IPCC, and several years' scrutiny by the climate science community.

Despite the fact that CRU themselves proved unable to replicate the results, even with direct access to the author's code and files, and that the researcher tasked with maintaining the database described data in it as "meaningless" and "Just awful!", it seems nobody outside CRU even noticed. And nobody inside was telling them.

In fact, officially, they're still not. The database is still published on the university's website, with no more than the vaguest caveats. The paper is still part of the scientific record in the journal. The numbers are still in the IPCC report. Nobody in the scientific mainstream acknowledges openly that there is any problem with this data, even to this day. And according to 'Harry', this is "what CRU usually do". That is to say, there are probably other cases of this sort of thing we still don't know about.

" Do you want to explain that, and indicate any other particular places to look."

Mmmm. How about...?

"But what are all those monthly files? DON'T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that's useless ..."

"It's botch after botch after botch."

"Oh, GOD, if I could start this project again and actually argue the case for junking the inherited program suite."

"Am I the first person to attempt to get the CRU databases in working order?!!"

"COBAR AIRPORT AWS cannot start in 1962, it didn't open until 1993!"

"What the hell is supposed to happen here? Oh yeah -- there is no 'supposed,' I can make it up. So I have : - )"

"You can't imagine what this has cost me -- to actually allow the operator to assign false WMO (World Meteorological Organization) codes!! But what else is there in such situations? Especially when dealing with a 'Master' database of dubious provenance"

"This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em, and it's the main reason the project is nearly a year late."

"OH F--- THIS. It's Sunday evening, I've worked all weekend, and just when I thought it was done, I'm hitting yet another problem that's based on the hopeless state of our databases."

"This whole project is SUCH A MESS"

"This is why Tim didn't save more of the intermediate products - which would have made my detective work easier. The ridiculous process he adopted - and which we have dutifully followed - creates hundreds of intermediate files at every stage, none of which are automatically zipped/unzipped. Crazy. I've filled a 100gb disk!"

"Before we get started, an important question: If you are merging an update - CLIMAT, MCDW, ian - do you want the quick and dirty approach? This will blindly match on WMO codes alone, ignoring data/metadata checks, and making any unmatched updates into new stations (metadata permitting)?"

"Aaaaarrrggghhhh!!!! And the reason this is so important is that the incoming updates will rely PRIMARILY on matching the WMO codes! In fact CLIMAT bulletins carry no other identification, of course. Clearly I am going to need a reference set of 'qenuine WMO codes'.. and wouldn't you know it, I've found four!"

"The trouble is, we won't be able to produce reliable station count files this way. Or can we use the same strategy,
producing station counts from the wet database route, and filling in 'gaps' with the precip station counts? Err."

"Not good. We're out by a factor of at least 10, though the extremes are few enough to just cap at DiM. So where has
this factor come from?"

"This leads to a show-stopper, I'm afraid. It looks as though the calculation I'm using for percentage anomalies is,
not to put too fine a point on it, cobblers."

"So, good news - but only in the sense that I've found the error. Bad news in that it's a further confirmation that my abilities are short of what's required here."

"So, not as good as the MCDW update.. lost 68.. but then of course we are talking about station data that
arrived with NO metadata AT ALL."

"Oh, GOD. What is going on? Are we data sparse and just looking at the climatology? How can a synthetic
dataset derived from tmp and dtr produce the same statistics as an 'real' dataset derived from observations?"

"Bear in mind that there is no working synthetic method for cloud, because Mark New lost the coefficients file and never found it again (despite searching on tape archives at UEA) and never recreated it. This hasn't mattered too much, because the synthetic cloud grids had not been discarded for 1901-95, and after 1995 sunshine data is used instead of cloud data anyway."

"Aaaand - another head-banging shocker! The program sh2cld_tdm.for, which describes itself thusly:
'program sunh2cld c converts sun hours monthly time series to cloud percent (n/N)'
Does NO SUCH THING!!! Instead it creates SUN percentages! This is clear from the variable names and user interactions."

"Back to the gridding. I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation - apparently linear as well. As far as I can see, this renders the station counts totally meaningless. It also means that we cannot say exactly how the gridded data is arrived at from a statistical perspective - since we're using an off-the-shelf product that isn't documented sufficiently to say that. Why this wasn't coded up in Fortran I don't know - time pressures perhaps? Was too much effort expended on homogenisation, that there wasn't enough time to write a gridding procedure? Of course, it's too late for me to fix it too. Meh."

Would you like some more? Just let me know if you do...

"One thing I'd like, and it is the sort of thing google is not set up to find without great difficulty is any lists of papers such as Oreskes' used as a basis for citing "97% consensus""

My recollection is that Oreskes did it by conducting a literature search on a database of papers, and never actually published the list, and that another more skeptical researcher repeated the search with the same terms and found the reported numbers to be wrong. But I've not checked myself, so I can't vouch for that.

Cook et al. did release an actual list. You might have better luck with that.

"and perhaps also lists that have been compiled to counter lists said to support the consensus"

Try here. Although frankly, I don't think counting papers on either side is a good way to do it either. When Einstein was told of a publication in which 100 authors opposed relativity, he reportedly replied: "If I were wrong, then one would have been enough!"

I'm just trying to answer your requests for data here. It's not an argument I would use.

"A project to try to produce one reliably sourced clean and tidy piece of data that will seem to carry the whole argument for the benefit of the public might just be a bad idea"

I think a project to collect in one place a complete, clean chain of argument and evidence for the claim, with all the irrelevancies, sidelines, dead ends, results since refuted, and missing or dubious data stripped out, would be an excellent idea. It would enable us to avoid getting distracted by all the complications, and help clarify what is known and what isn't. The IPCC reports were theoretically supposed to do that, but don't. (They refer to papers, but don't pull out the relevant and still valid parts, and those papers are themselves only the ends of long threads of other papers and datasets and software. For example, you wouldn't be able to guess how CRU TS 2.1 was compiled from its citation in the IPCC report.)

I think doing so would be difficult to impossible - the IPCC themselves said there isn't any fully quantifiable case from empirical evidence ("The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change.") - but I do still think it's what they should be aiming to do.

"Assuming the worst case which it seems you believe, that that project was done in bad faith, I wonder, do you think the "hockey stick" is part of scientists talking themselves into a consensus?"

The Hockeystick papers were probably not originally done in "bad faith" as such - they were just incompetent and careless. It was the subsequent defence of them after the flaws had been found where the "bad faith" lies.

January 13, 2016 | Unregistered CommenterNiV
