Friday, May 2, 2014

The fractal nature of the "knowledge deficit" hypothesis: Biases & heuristics, System 1 & 2, and cultural cognition

I often get asked—in correspondence, in Q&A after talks, in chance encounters with strangers while using one or another mode of public transportation—what the connection is between “cultural cognition” and “all that heuristics and biases stuff” or some equivalent characterization of the work, most prominently associated with Nobelist Daniel Kahneman, on the contribution that automatic, largely unconscious mechanisms of cognition make to risk perception.  

This excerpt, reproduced below, from Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P., Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition, Law & Human Behavior 34, 501-516 (2010), furnishes half the answer.

The basic idea is that cultural cognition is not an alternative to the “heuristics and biases” position but a supplement that helps explain how one and the same mechanism—“the availability effect,” “biased assimilation,” “probability neglect” etc.—can generate systematically opposing risk perceptions in identifiable groups of people. 

But as I said, this is only half the answer. At the time CCP researchers did this study, they were carrying out a research project examining how cultural cognition interacts with heuristic or “System 1” information processing, which, as I indicated, features automatic, unconscious mechanisms of cognition.

In a project that we started thereafter, we’ve been examining the connection between cultural cognition and “System 2” reasoning, which involves conscious, analytic forms of information processing. In particular, we’ve been empirically testing the popular conjecture that disputes over climate change and other politically contested risks reflect the public’s over-reliance on heuristic reasoning.

Not so. Cultural cognition captures and redirects conscious, analytical reasoning, too.

Tragically, people use their quantitative and critical-reasoning dispositions to fit empirical data and other technically complex forms of evidence to the positions that affirm their identities. As a result, those who are most disposed to use System 2 reasoning are the most polarized.

If you are wandering the internet preaching that the climate change controversy is a consequence of the public’s over-reliance on “emotion” or “fast, intuitive heuristics,” you are ignoring evidence. It was a very reasonable hypothesis, but you need to update your understanding of what’s going on as new evidence emerges—just as climate scientists do!

Sometimes I think this account—that the climate change controversy is a consequence of “public irrationality”—is a kind of pernicious story-telling virus that is impervious to treatment with evidence. 

It makes me realize, too, the irony that I am implicitly affirming my own adherence to the “knowledge deficit” hypothesis: I keep trying to overcome a version of it by bombarding propagators of the “System 1 vs. System 2” (or “bounded rationality,” “experiential reasoning,” “public irrationality,” etc.) explanation of conflict over climate change with more and more empirical evidence that their account is far too simple.

Life is weird. And interesting.

 

Theoretical Background: Heuristics, Culture, and Risk

The study of risk perception addresses a puzzle. How do people—particularly ordinary citizens who lack not only experience with myriad hazards but also the time and expertise necessary to make sense of complex technical data—form positions on the dangers they face and what they should do about them?

Social psychology has made well-known progress toward answering this question. People (not just lay persons, but quite often experts too) rely on heuristic reasoning to deal with risk and uncertainty generally. They thus employ a range of “mental shortcuts”: when gauging the danger of a putatively hazardous activity (the possession, say, of a handgun, or the use of nuclear power generation), they consult a mental inventory of recalled instances of misfortunes involving it, give special weight to perceived authorities, and steer clear of options that could improve their situation but that also involve the potential to make them worse off than they are at present (“better safe than sorry”) (Kahneman, Slovic, & Tversky, 1982; Slovic, 2000; Margolis, 1996). They also employ faculties and styles of reasoning—most conspicuously affective ones informed by feelings such as hope and dread, admiration and disgust—that make it possible for them to respond rapidly to perceived exigency (Slovic, Finucane, Peters & MacGregor, 2004).

To be sure, heuristics of this sort can lead to mistakes, particularly when they crowd out more considered, systematic forms of reasoning (Sunstein, 2005). But they are adaptive in the main (Slovic et al., 2004).

As much as this account has enlarged our knowledge, it remains incomplete. In particular, a theory that focuses only on heuristic reasoning fails to supply a cogent account of the nature of political conflict over risk (Kahan, Slovic, Braman & Gastil, 2006). Citizens disagree, intensely, over a wide range of personal and societal hazards. If the imprecision of heuristic reasoning accounted for such variance, we might expect such disagreements to be randomly distributed across the population or correlated with personal characteristics (education, income, community type, exposure to news of particular hazards, and the like) that are either plausibly related to one or another heuristic or that make heuristic reasoning less necessary altogether. By and large, however, this is not the case. Instead, a large portion of the variance in risk perception coheres with membership in groups integral to personal identity, such as race, gender, political party membership, and religious affiliation (e.g. Slovic, 2000, p. 390; Kahan & Braman, 2006). Whether the planet is overheating; whether nuclear wastes can be safely disposed of; whether genetically modified foods are bad for human health—these are cultural issues in American society every bit as much as whether women should be allowed to have abortions and men should be allowed to marry other men (Kahan, 2007). Indeed, as unmistakably cultural in nature as these latter disputes are, public debate over them often features competing claims about societal risks and benefits, and not merely competing values (e.g. Siegel, 2007; Pollock, 2005).

This is the part of the risk-perception puzzle that the cultural theory of risk is distinctively concerned with (Douglas & Wildavsky, 1982). According to that theory, individuals conform their perceptions of risk to their cultural evaluations of putatively dangerous activities and the policies for regulating them. Thus, persons who subscribe to an “individualist” worldview react dismissively to claims of environmental and technological risks, societal recognition of which would threaten markets and other forms of private ordering. Persons attracted to “egalitarian” and “communitarian” worldviews, in contrast, readily credit claims of environmental risk: they find it congenial to believe that commerce and industry, activities they associate with inequity and selfishness, cause societal harm. Precisely because the assertion that such activities cause harm impugns the authority of social elites, individuals of a “hierarchical” worldview are (in this case, like individualists) risk skeptical (Rayner, 1992).

Researchers have furnished a considerable body of empirical support for these patterns of risk perception (Dake, 1991; Jenkins-Smith, 2001; Ellis & Thompson, 1997; Peters & Slovic, 1996; Peters, Burriston & Mertz, 2004; Kahan, Braman, Gastil, Slovic & Mertz, 2007). Such studies have found that cultural worldviews explain variance more powerfully than myriad other characteristics, including socio-economic status, education, and political ideology, and can interact with and reinforce the effect of related sources of identity such as race and gender.

Although one could see a rivalry between culture theory and the heuristic model (Marris, Langford & O’Riordan, 1998; Douglas, 1997), it is unnecessary to view them as mutually exclusive. Indeed, one conception of the cultural theory—which we will call the cultural cognition thesis (Kahan, Braman, Monahan, Callahan & Peters, in press; Kahan, Slovic, Braman & Gastil, 2006)—seeks to integrate them. Culture theorists have had relatively little to say about exactly how culture shapes perceptions of risk.[i] Cultural cognition posits that the connection is supplied by conventional heuristic processes, or at least some subset of them (DiMaggio, 1997). On this account, heuristic mechanisms interact with cultural values: People notice, assign significance to, and recall the instances of misfortune that fit their values; they trust the experts whose cultural outlooks match their own; they define the contingencies that make them worse off, or count as losses, with reference to culturally valued states of affairs; they react affectively toward risk on the basis of emotions that are themselves conditioned by cultural appraisals—and so forth. By supplying this account of the mechanisms through which culture shapes risk perceptions, cultural cognition not only helps to fill a lacuna in the cultural theory of risk. It also helps to complete the heuristic model by showing how one and the same heuristic process (whether availability, credibility, loss aversion, or affect) can generate different perceptions of risk in people with opposing outlooks.
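To make that last point concrete, here is a minimal simulation sketch (mine, not the authors'; every parameter is hypothetical) of how a single heuristic, availability, can produce opposing risk perceptions when recall is conditioned on cultural congeniality:

```python
# Illustrative sketch only (not from the paper; all parameters hypothetical):
# one and the same heuristic (availability) yields opposing risk estimates
# when recall is conditioned on cultural values.
import random

random.seed(1)

# A shared pool of news items about a putative hazard: 1 = misfortune, 0 = reassurance.
news_pool = [1] * 50 + [0] * 50

def perceived_risk(congeniality, n_items=30):
    """Estimate risk as the share of *recalled* items that are misfortunes.

    congeniality: probability of retaining a misfortune item given one's
    worldview; reassurance items are retained with probability 1 - congeniality.
    """
    sample = random.sample(news_pool, n_items)
    recalled = [x for x in sample
                if random.random() < (congeniality if x else 1 - congeniality)]
    return sum(recalled) / len(recalled) if recalled else 0.0

# Egalitarians find claims of environmental harm congenial; individualists resist them.
egalitarian = sum(perceived_risk(0.8) for _ in range(1000)) / 1000
individualist = sum(perceived_risk(0.2) for _ in range(1000)) / 1000
print(f"mean perceived risk, egalitarian:   {egalitarian:.2f}")    # roughly 0.8
print(f"mean perceived risk, individualist: {individualist:.2f}")  # roughly 0.2
```

Both groups run the identical recall procedure over identical information; only the culturally conditioned retention probability differs, and that alone is enough to generate the systematic polarization the excerpt describes.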

The proposition that moral evaluations of conduct shape the perceived consequences of such conduct is not unique to the cultural cognition thesis. Experimental study, for example, shows that negative affective responses mediate between moral condemnation of “taboo” behaviors and perceptions that those behaviors are harmful (Gutierrez & Giner-Sorolla, 2007). The same conclusion is also supported by a number of correlational studies (Horvath & Giner-Sorolla, 2007; Haidt & Hersh, 2001). The point of contact that the cultural cognition thesis, if demonstrated, would establish between cultural theory and this other work on morally motivated cognition would also lend strength to the psychological foundation of the former’s account of the origins of risk perceptions.

 

 


[i] For functionalist accounts, in which individuals are seen as forming risk perceptions congenial to their ways of life precisely because holding those beliefs about risk coheres with and promotes their ways of life, see Douglas (1986) and Thompson, Ellis & Wildavsky (1990).


Reader Comments (25)

Dan,
Interesting discussion. Thanks.
Long ago, I learned thermodynamics. I resisted learning it because it consisted of all these heuristic equations that I had to memorize and that had no obvious basis. Later I learned statistical mechanics and quantum mechanics, which provided a strong basis for the equations of thermodynamics. I was happier, and I was now willing to memorize the thermodynamic equations.
It seems that a similar progression is occurring in cultural cognition and nearby fields. Heuristics are starting to be developed that help the researchers in the field but that may change as the field matures. Kahneman said, if I remember correctly, that System 1 and System 2 thinking are useful heuristics that may not survive increases in neuroscientific knowledge.
I am coming at both System 1 and 2 and cultural cognition from the point of view of the actions of individual cells, sort of like the statistical mechanical point of view in previous times. From my point of view Systems 1 and 2 overlap, sometimes substantially. Also, neither system has sole ownership of cultural cognition so the neurons that correspond to your cultural cognition work are distributed throughout both System 1 and System 2. The distribution is not random and is strongly culturally conditioned as well as being molded by a person's life history. The distribution of what we can call 'cultural cognition neurons' seems to predict some of the results that you have found such as stronger adherence to certain beliefs as a person's level of education rises.
Thanks for the insights. Ask questions about my cellular view of the world if it strikes you as interesting.

May 2, 2014 | Unregistered CommenterEric Fairfield

"It was a very reasonable hypothesis, but you need to update your understanding of what’s going on as new evidence emerges—just as climate scientists do!"

Do you mean, delete the parts of the evidence that disagree with your hypothesis and only mention your deletion in a separate paper referred to in a footnote? Or never publish the contrary evidence at all? More data is usually better, but as they say, “this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal.”

Of course, according to climate scientists (Esper et al 2003) “The ability to pick and choose which samples to use is an advantage unique to dendroclimatology,” so it might still not be allowed in psychological research!

--
I assume you were being humorous! But seriously, don't you think this is an excellent example of what you were just talking about?

May 2, 2014 | Unregistered CommenterNiV

@NiV:

I myself would find it boring & tedious to turn any discussion about the issues raised in this post (including whether it is simply wrong; I would welcome hearing from anyone who thinks so, provided they supply a cogent explanation) into one about whether climate scientists have "deleted" evidence etc. Others can join you in that, if they please, of course. It is indeed a free country.

I'd be delighted, however, if you would tell us what you think of Ch. 12 of Silver's Signal & Noise (entitled "A Climate of Healthy Skepticism"). It seems to me that he gets it right on the role of model updating in climate science. And it also seems to me, ironically, that both teams in the all-climate-all-the-time-worldwide-wrestling-association "debate" have agreed that one of the rules of their sport is for them both to get that wrong.

But that's what I think! Send me via e-mail a review of Chapter 12 -- the whole book if you like -- & I'll figure out an appropriate way to turn it into an exchange (something in the nature of this or this).

That would be quite interesting. Engaging w/ your comment on this post would not be. To me, at least.

May 2, 2014 | Registered CommenterDan Kahan

@Eric:

I think you should develop these ideas further. They do sound quite interesting.

I myself think there is something deeply flawed about the usual way "System 1/System 2" is presented -- as "heuristics, fine, if that's all you have time for or is the best you can do" vs "calculative double-checking of your intuition's homework."

I think Kahneman himself likely blanches at this maddeningly popular -- one might say "heuristic" -- popularization of his conception of dual process reasoning.

May 2, 2014 | Registered CommenterDan Kahan

"I myself would find it boring & tedious to turn any discussion about the issues raised in this post (including whether it is simply wrong; I would welcome hearing from anyone who thinks so, provided they supply a cogent explanation) into one about whether climate scientsits have "deleted" evidence etc."

The point I found interesting was not simply that some climate scientists deal with inconvenient evidence by deleting or excluding it rather than updating their beliefs, but that everybody does the same, including people trying to understand how and why the public believes what it does about climate change. I found it ironic that the behaviour you were criticising in contrast to how climate scientists do it was precisely the sort of behaviour that got climate scientists into trouble in the first place.

It's an excellent example of several phenomena related to motivated reasoning. Your research has indicated that the more scientifically literate one is, the more inclined to biased/polarised reasoning. Scientists are very scientifically literate - surely it is no surprise that they're subject to the same biases?

The same phenomenon appears to be at work when evidence of this behaviour is presented to climate change believers. Because it threatens their cultural identity, they reject, ignore, or re-interpret the evidence rather than change their beliefs. Or declare it to be "boring and tedious". ;-)

I agree, absolutely, that a discussion of whether they did would be boring and tedious - the argument has been gone through endlessly over the past few years, as if there was ever any doubt. But my question was rather about whether this is an example of what you were just talking about: that people ignore contrary evidence to maintain their beliefs and this applies symmetrically to everyone, including climate scientists and climate science communicators? And indeed climate science communication science researchers, who, as you note, keep on producing more and more empirical evidence in the implicit belief that it is the lack of evidence that is the problem.

These would all seem to be confirming instances and extensions of your thesis, which should surely interest you?

In any case, you ought to know by now that using climate scientists as exemplars of scientific virtue is going to raise some eyebrows! You've studied the climate debate long enough to have foreseen that much, surely!

--

Regarding Nate Silver's take on climate science, you are quite right that forecast errors and revision of the models are not evidence of a deficiency in the scientific method. It's quite right that climate modellers should revise them. But that's not what their critics are complaining about.

Scientific model building goes through a number of distinct stages. Exploratory - where you are trying to get a handle on how the physics works, what factors are important, and so on. Calibration - where you have a correct outline of the physics and only need to determine the adjustable parameters more precisely. And verification - where you use new and independent data to test and document the accuracy and limits of validity of the model, proving (statistically or otherwise) that it works within the documented margin of error. Then there is validation, which means confirming that the verified model is sufficiently accurate and reliable for the application that you are about to use it in.

People who work in industrial science, where there is often large amounts of Other People's Money riding on getting the science right, know that you can only use validated models in decision making, particularly when the stakes are high. The problem the critics keep highlighting is that the climate models are not validated!

They're not even verified. They're arguably somewhere on the border between 'exploratory' and 'calibration'. Sceptics would say it's not even confirmed they've got all the relevant physics right. Not even climate scientists would claim they're correctly calibrated.
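[To make the "verification" stage described above concrete, here is a minimal sketch with invented data and an invented error margin; nothing below is an actual climate model. The idea is to calibrate on one period, then test held-out predictions against a pre-declared bound.]

```python
# Minimal sketch of "verification" with invented data and an invented error
# margin; nothing here is an actual climate model.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2021)
obs = 0.015 * (years - 1960) + rng.normal(0, 0.08, years.size)  # synthetic "observations"

calib = years < 2000    # calibration period
holdout = ~calib        # verification period (held out, never used in fitting)

# "Calibrate": fit a linear trend on the calibration period only.
slope, intercept = np.polyfit(years[calib], obs[calib], 1)
pred = slope * years[holdout] + intercept

# "Verify": do held-out prediction errors stay within the documented margin?
margin = 0.25  # pre-declared error bound (hypothetical)
errors = np.abs(pred - obs[holdout])
print(f"max held-out error: {errors.max():.3f}")
print("verified within margin" if errors.max() <= margin else "verification FAILED")
```

[Validation, in the taxonomy above, would be the further step of showing that the documented margin is tight enough for the particular decision the model is being used to support.]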

And the reason the sceptics keep hitting the climate modellers over the head with the models' failures is that they're simultaneously trying to claim that the errors are excusable because they're a work in progress, and at the same time they're suitable for making multi-trillion dollar decisions about the global economy and the political freedoms of every nation in the world!

If you look at the lengths people go to in such minor applications as the software for hospital life support equipment, medical trials for new drugs, aeroplane avionics, and nuclear reactor control systems - where every line is scrutinised and audited and tested to the limit - or even just the average company accounts which still need to be audited - it seems utterly incredible that people would accept less for what many have described as "the end of the world". Some of the things climate scientists have done would fail a first year computer science homework assignment, as far as software quality goes! Or data integrity. Or experimental method.

And sure, if you're in exploratory mode working on an obscure academic topic that - in the grand scheme of things - doesn't matter, then this may well be perfectly acceptable. A scientific purist might object on principle, but nobody else is really going to care.

But this isn't such a topic. People will die because of the measures governments are taking against global warming. Maybe the cost is worth it. Maybe it's better that a few old people die of cold in the winters because they can't afford the heating bills than that in thirty years time the globe fries, and the crops and animals die, and humanity reverts to cannibalism (or whatever the latest dire prediction is). But ethically and morally you need much higher standards of evidence before you can take such decisions.

When you push believers in climate change hard on this point, what it comes down to, and the fundamental difference that explains the political divide, is that supporters don't regard the proposed measures needed to combat climate change as a cost. They don't see the need for high standards of evidence to justify what would be their preferred policy anyway. The emergency justifies authoritarian regulations to enforce what they see as an improvement to society, and so they have a much lower threshold for the evidence needed to justify this. Obviously, the people with opposing policy preferences who are about to get steam-rollered would prefer to set a higher bar.

Understandable behaviour from both sides, but the situation is clearly not going to be solved by simply providing more science communication. It's not really about the science.

May 2, 2014 | Unregistered CommenterNiV

Starting with (and agreeing with) the final sentence of NIV's comment above: "It's not really about the science." But changing the underlying argument.

The fossil fuel industry understands well that climate change is real. Shell Oil currently has its Arctic Challenger vessel sitting in Bellingham Bay's shipyard, waiting for the right (regulatory) moment to commence deep sea Arctic oil drilling. Such investment in Arctic drilling is predicated on the idea that the Arctic will be ice free for economically significant periods of the year. http://www.cbc.ca/news/world/polar-meltdown-top-challenge-for-arctic-council-1.1312992 Meanwhile Shell contributes heavily to ALEC, which supports policies and politicians that deny climate change. http://www.sourcewatch.org/index.php/Shell

IMHO, it is not that the hierarchy of Shell does not believe in climate change. They, in fact, plan to profit from it. What they want to prevent is the public from taking regulatory actions, the sort that currently keep the Arctic Challenger in port. Oligarchs of the past did not care if their cigarettes caused others to get cancer or if their steel mill pollution killed townspeople. They may not care about flooding in Bangladesh. They, themselves, may do just fine on an estate in the hills of Greenland. Or make even more money in geo-engineering attempts to rectify things. Or, at least, provide for their descendants the wherewithal to end up on some other planet entirely. So of course, they are going to use the best of what they understand regarding cultural cognition to maintain a political system in which regulatory action is stalled. There is money to be made.

May 3, 2014 | Unregistered CommenterGaythia Weis

Gaythia,
Corporations exist to make money for their shareholders and can be sued successfully if they choose policies that make less money. Corporations, like the rest of us, also work in their enlightened self interest, which may mean opposing regulations that would, whether well thought out or not, decrease the corporation's profits. A number of corporations, for instance, wandered off into corn based ethanol production to receive politically popular government subsidies even though the underlying public belief system was wrong and corn based ethanol production would not survive without the subsidies.
Thanks for the comment.

May 3, 2014 | Unregistered CommenterEric Fairfield

Gaythia,

I have long wondered at the popularity of the 'oil company funds sceptics' conspiracy theory. So far as I know, oil companies have had no significant input to or influence over climate scepticism since the 1990s, and the economics would predict they ought to be in favour of regulation and restriction, as it would provide them with a government enforced monopoly and an artificial scarcity that would cause the prices of what fossil fuels they did sell to "skyrocket". That's more profit for less work, which every business likes. That's why Shell (and a whole raft of other energy companies) fund climate science groups such as CRU.

I think it's entirely possible that businesses might well believe the Arctic is soon going to open up to exploration without necessarily believing it is due to climate change. According to the mainstream science, and as set out in the IPCC reports, global warming should currently be affecting both poles more or less equally but the drop in ice coverage due to it shouldn't be significant until around 2080. So if you ascribe the drop over the past few years to climate change (as opposed to an accumulation of short-term random weather variations imposed on top of it), then you are asserting that the climate models and scientific understanding of global warming must be seriously in error, and you are therefore standing in opposition to mainstream climate science.

That makes you a climate sceptic! Welcome! :-)

Oligarchs of the past did of course care if cigarettes caused others to get cancer or steel mill pollution killed townspeople. Those were their workers and customers, on whom they were reliant for their trade. However, oligarchs are more aware than most that life is risky and that we cannot entirely eliminate risk; we can only trade one risk for another. Shutting down an entire industry has a cost, too, and while the tobacco oligarchs like Al Gore can easily walk away, the poor people who work in that industry usually cannot. Likewise, the loss of the product being produced - whether tobacco that gives pleasure to millions, or steel that holds much of our civilisation together - is a cost to society too. The question is - who gets to decide? The people who are taking the risk? Or distant politicians and bureaucrats, looking for votes from the authoritarian middle-class "Concerned"?

That's why such people insist on solid evidence before taking such drastic and costly measures. Models need to be validated. Statistics need to be collected and confirmed. And then the costs on *both* sides of the equation need to be assessed before coming to a decision.

Oligarchs know this - that's often how they made their money. But most people only know what they read in the media, whose business is selling drama, and tales of good versus evil, and campaigners for change usually only present one side of that balance. The oligarchs are blamed for the costs on one side, the people are fired up, the politicians act, and then when the inevitable costs and consequences on the balancing side hit, the oligarchs get blamed for that too.

People generally only see half the picture, because they are plugged in to only some of the social networks of expertise. They build a particular world view, and come to believe it is the whole truth and the only right way to think. And people who think differently are naturally fitted into the good-vs-evil narrative the media have sold them. The argument here is that it's what drives the political polarisation that pollutes the science communication environment.

Incidentally, I'm not sure what you meant about oligarchs not caring about Bangladesh flooding. Bangladesh is a river delta, and floods all the time. It's formed by the silt washed down off the Himalayas by the river reaching sea level, so the gravity-driven river flow slows and the silt drops, building the land up. Regular flooding is how Bangladesh maintains pace with sea level - if it wasn't allowed to flood, it would soon sink below sea level, like the Netherlands has!

The flood silt is also what maintains the agricultural fertility of the delta, which is the primary reason why people (especially poor people reliant on agriculture) settle in such places. Their entire economy is reliant on it flooding!

You only hear about the bad side of flooding. Do you see what I mean about there being another side to the story, a balancing set of costs and benefits, that the campaigning media leave out?

May 3, 2014 | Unregistered CommenterNiV

NIV:

1. ALEC: Look into who funds it and what candidates and policies they fund.

2. http://www.nap.edu/catalog.php?record_id=13515. Also, Antarctica is continental. Ice coverage, and glacial creep, is related to snowfall. Some extremely cold "desert" areas may receive more snowfall if it warms up. All depends on what you mean by "more or less" equal, I suppose.

3. Don't be too quick to welcome me.

4. ??? See for example: When Smoke Flowed Like Water by Devra Davis. Or get a mining job in some poorly regulated third world area, like China or even West Virginia. Because of course the owners will care, so no worries!

5. Thomas Piketty's Capital in the Twenty-First Century and David Graeber's Debt: The First 5,000 Years. Yes, blame the oligarchs. Social justice is a perfectly good "good" narrative.

6. I'll grant you the Bangladesh example. Too many confounding variables to be a straightforward sea level rise case.

I do believe that there are those who are seeing real benefits to climate change. Russia (especially if they are assuming they can hold Siberia against China) and Canada both may have much to gain. As does Greenland (although not Denmark).

May 3, 2014 | Unregistered CommenterGaythia Weis

Gaythia,

1. Never heard of them. Nothing to do with climate scepticism, so far as I know.

2. I had a quick look at the report, but so far as I could see they appear to be agreeing with me. They say that the observations far outstrip the model predictions and that the models are effectively useless. It's true that they *do* say the decline can be 'linked' to climate change, but they appear to offer no evidence or argument to back the assertion, and it rather looks like they're just assuming it.

Since the IPCC approach to attribution compares models to observation under different scenarios to prove a connection, then I don't see how they can do it without valid models. But I've only skimmed it briefly and might have missed something. Perhaps you could summarise for me what their supporting argument actually is?

3. Why not? You may be a sceptic on the other side of the consensus, but the principle is the same.

4. You still seem to be missing the point. Those are different trade-off points, in some of which fewer resources are allocated to safety, but that trade-off may be justified by other factors. For example, the number of workers the industry can support may rise when the resources are stretched more thinly. If you offer people a choice between unemployment in a poor country without welfare, or a job working in a mine with a poor safety record, a lot of them choose the mine. People are willing to compromise their own safety to put food on the table. They prefer that. If you devote more resources to improving safety and cleaning up pollution, you'll be able to support fewer employees and some people will starve.

If you want to help the workers in such places, offer them a better alternative. The oligarchs will then automatically raise their standards to compete, to get the workers they need, or shut the industry down if there is no trade-space left.

5. Certainly 'social justice' is a perfectly good narrative, but it's only *one* of the narratives. If you limit yourself to information from only one set of experts, all conforming to the same world view, you'll only get half the picture, and political polarisation of things like science communication is inevitable. Try Hernando DeSoto's book 'The Mystery of Capital' for an alternative view.

6. There are too many confounding variables in all of it.

"I do believe that there are those who are seeing real benefits to climate change."

Yes. According to Richard Tol's work (see figure 1 here), as cited by the IPCC, there are net economic benefits to warming up to 2 C above present, beyond which it is a net negative. The best-off region appears to be Russia and Eastern Europe. But the economic models appear to be even less well validated than the climate ones, so take with a pinch of salt.

May 4, 2014 | Unregistered CommenterNiV

NiV -

"But the economic models appear to be even less well validated than the climate ones, so take with a pinch of salt."

And yet, we read this: "People will die because of the measures governments are taking against global warming."

You present that as an absolute assumption, upon which you paint a picture of "realists" who think those deaths are a worthwhile tradeoff against the potential of more deaths in the long term as the result of climate change (in other words, "realists" = "alarmists"), as counterbalanced against "skeptics" who argue that we just don't have enough information yet (due to unvalidated models) to know that there will be any such counterbalancing long-term deaths due to climate change (and who hence hold a prudent and non-"alarmist" belief).

A couple of problems:

(1) your reliance upon/criticism of unvalidated models is selective.
(2) your integration of the "it's not really about the science" influence upon reasoning is selective (as in: "That's why such people insist on solid evidence before taking such drastic and costly measures. Models need to be validated. Statistics need to be collected and confirmed. And then the costs on *both* sides of the equation need to be assessed before coming to a decision. Oligarchs know this - that's often how they made their money." ... notice how the influence of "it's not really about the science" interestingly disappears?).

Rather than replay our usual disagreement about generalizations w/r/t "realists" and "skeptics," maybe I'll just tell you that your characterization of me, as someone you'd likely consider to be in the "climate concerned" (the terms are so difficult), is wrong. I don't see the question as one of the tradeoff that you describe, the reason being that I think that to reach the conclusions that you have reached, you are relying on unvalidated and unverified economic modeling - modeling which leaves out the extremely relevant and important balance of positive and negative externalities from alternative energy policies (in other words, I think that you're being "alarmist"). You are also relying on unvalidated and unverified modeling of how "skeptics" and "realists" reason.

May 5, 2014 | Unregistered CommenterJoshua

Joshua,

I'm very happy for you to express scepticism about unvalidated economic models. I agree that we shouldn't rely on them to direct the global economy.

The "it's not really about the science" comment wasn't really talking about the one-sided views of economic trade-offs that I was discussing later - it referred to the reason believers in CAGW choose to ignore the evidence about climate scientists making stuff up, ignoring, deleting, or not publishing inconvenient data, and the rest of the scientific community's failure to police it. There's no doubt that they did - the only question is "does it matter?"

People differ in whether they think it does. And whether or not it is simply about whether they like the proposed policies, as I suggested, I do still think that, whatever their reasons, it's not about the science.

So you're saying that your own reasons for ignoring it are otherwise, and that I'm over-generalising from the few believers who have told me about their reasons to all believers. OK. So we could talk about what the *real* reasons are, and what the evidence for that is. I don't mind. If you've got a different viewpoint on the question, I'm always happy to learn about different perspectives.

But in the absence of any real scientific data, all we have is speculation, and I'm not trying to write a scientific paper here. Most of the things normal people say in ordinary conversations are not backed by references and scientific studies. If you've got them, great, we can all learn a thing or two, but it becomes tiresome not to be able to express any opinion or personal impression without having to justify every detail in advance. If you want to know why I think something, just ask. If you disagree, just disagree. If you've got data to prove me wrong, just show the data. I'm not claiming to be infallible, I'm just expressing an opinion.

It would be different if I was speaking in a professional capacity, with other people's well-being riding on my words, but I'm not.

I would agree that the right decision depends on the unknown balance of positive and negative externalities (and internalities) of the different energy policies. I would agree we don't know what those are, so we don't know what the right decision is. But that *is* my position. We've got no solid proof that we need to do anything, or of what the best thing to do would be. The same applies to any unknown risk. Should we spend resources now to defend against extraterrestrial invasion? If we don't and we're wrong, the consequences would be catastrophic. But we have no real reason to think we should, and I'd certainly object strenuously to people suspending democratic debate to ram such measures through at the public's expense. Other people are free to have a different view.

But anyway, that's all beside the point. The question I was initially asking was whether some climate scientists excluding data they didn't like and some climate communicators ignoring or rejecting this history were potentially further examples of what Dan was originally talking about - of the human tendency to motivated reasoning, similar to Dan's example of science communicators ignoring the science on science communication? What do you think? Agree? Disagree? Do you think this aspect/prediction of the 'motivated reasoning' theory is something that ought scientifically to be further explored?

To be honest, I think I can guess what you and Dan would think - based on the predictions of motivated reasoning theory. But you never know. People sometimes surprise me. :-)

May 5, 2014 | Unregistered CommenterNiV

NiV -

"The "it's not really about the science" comment wasn't really talking about the one-sided views of economic trade-offs that I was discussing later..."

Yes. That was my point. In fact, "it's not really about the science" is applicable much more broadly than how you focused it...

" - it referred to the reason believers in CAGW "

Kind of a meaningless term (CAGW), IMO, much more the product of an "it's not really about the science" attitude on the part of "skeptics" than it is the product of what "realists" believe... Am I a "believer in CAGW" if I think that models that show some "fat tail" of probability of damaging impact from ACO2 should not simply be dismissed with a hand-wave and an appeal to the truth that "all models are wrong" while ignoring the caveat that "some models are useful"?

"....choose to ignore the evidence about climate scientists making stuff up, ignoring, deleting, or not publishing inconvenient data, and the rest of the scientific community's failure to police it. There's no doubt that they did - the only question is "does it matter?""

Choose to ignore, or don't agree with you about the causes along with the significance? Perhaps they, like me, think that some measure of tribalism from some amount of climate scientists is not scientifically justified, but that perhaps more damaging are the ways that "skeptics" have used those typical and normal and entirely human behaviors to justify a holier-than-thou tribalism of their own?

"People differ in whether they think it does. And whether or not it is simply about whether they like the proposed policies or not as I suggested, I do still think that whatever their reasons it's not about the science."

So not only do I think that your description of the scope of the issues at play was inadequate, I also think that your application of "it's not really about the science" was inadequate. That was my point. Simply saying that you only meant to use that characterization in a limited capacity not only doesn't address my point, it actually just repeats the error I was pointing out. "It's not really about the science" certainly applies to much of what emanates from "skeptics" when they argue that concern about "fat tails" is invalidated, only that much more so when they ground such an argument in a practice of exploiting universal human behaviors (in this case, saying that an argument is about the science when it really isn't about the science) on one side as if they don't exist on both sides.

"So you're saying that your own reasons for ignoring it are otherwise,..."

Heh. No. I'm not saying that I've "ignored" anything. I'm pointing out what I think that you've ignored.

" and that I'm over-generalising from the few believers who have told me about their reasons to all believers."

That's pretty much a non-sequitur. I am saying that you are over-generalizing about something entirely different - meaning that you are over-generalizing about how "realists" view a putative trade-off between a certainty of near-term deaths against an (alarmist) belief in more deaths long term.

"OK. So we could talk about what the *real* reasons are, and what the evidence for that is. I don't mind. If you've got a different viewpoint on the question, I'm always happy to learn about different perspectives."

I'm not sure what you're describing here. Reasons for what? Viewpoint and perspective about what?

"But in the absence of any real scientific data, all we have is speculation, and I'm not trying to write a scientific paper here. Most of the things normal people say in ordinary conversations are not backed by references and scientific studies. If you've got them, great, we can all learn a thing or two, but it becomes tiresome not to be able to express any opinion or personal impression without having to justify every detail in advance. "

You're certainly entitled to express whatever opinion or impression that you want. And I am certainly entitled to show how your opinions, particularly when they are not expressed as opinions but as certainties (as was the case with what I excerpted from your comment), are not based in evidence - but are reflections of the very same biased analysis that you were criticizing in others. I'm sorry if you find that tiresome. My preference would be that you'd acknowledge how your comment reflected your own biases - so that we could share perspectives on how biases influence reasoning on both of our parts. Go back and read what I excerpted again. You will see that what you wrote was a statement of fact that failed to reflect how it was merely an opinion, and further, an opinion that was based on unvalidated modeling - whether that be the unvalidated but formal models that support a statement of "People will die because..." or the unvalidated and informal personal economic model that underlies your personal conceptualization of the economics related to climate change.

"If you've got data to prove me wrong, just show the data. I'm not claiming to be infallible, I'm just expressing an opinion."

My intent is not to "prove [you] wrong," but to point out the attributes of your reasoning that resemble, quite directly, those that you criticize in others. In this case, conclusions of certainty derived from uncertain modeling.

"We've got no solid proof that we need to do anything, or of what the best thing to do would be. "

Well, that's just fine. But I don't see that as being either particularly scientific or particularly relevant to the evidence-based discussion. Yes, there are some who claim that there is some "proof," but I see far more who say that there is evidence of probabilities that should be addressed. I see far more who argue that this is a matter of assessing risk in the face of uncertainty. And I see far more who mischaracterize fundamental arguments just as you have done, whether it be the argument about risks, or the argument about tradeoffs.

" Should we spend resources now to defend against extraterrestrial invasion? "

I see this as a basically useless analogy. It reduces legitimate questions about risk assessment to a caricature. There's nowhere to go in a discussion that diminishes the real discussion into such a useless caricature.

"But we have no real reason to think we should, and I'd certainly object strenuously to people suspending democratic debate to ram such measures through at the public's expense."

Once again, IMO, a useless caricature. I, too, object strenuously to people "suspending democratic debate to ram such measures through at the public's expense." So I'm glad that we're in agreement about that. So then is there nothing for us to discuss? I think there is something for us to discuss. I think that we can discuss the tradeoffs in risk assessment in the face of uncertainty. I think that we can talk about the ranges of probabilities to differentiate positions and interests, so as to establish policies that support the interests that we have in common. But I can't go there with you if you think that I am interested in "suspending democratic debate to ram such measures through at the public's expense," nor can I go there with you if you're going to establish my guilt by association with people who fit that description, nor can I go there if you're going to mischaracterize people who disagree with you, say, about the certainty of deaths associated with proposed mitigation policies.

"But anyway, that's all besides the point. The question I was initially asking was whether some climate scientists excluding data they didn't like and some climate communicators ignoring or rejecting this history were potentially further examples of what Dan was originally talking about - of the human tendency to motivated reasoning, similar to Dan's example of science communicators ignoring the science on science communication? What do you think? Agree? Disagree? Do you think this aspect/prediction of the 'motivated reasoning' theory is something that ought scientifically to be further explored?"

That is all too vague and general and stylized and formulaic for me to respond with much confidence. But sure, "some" climate scientists may indeed "exclude data they didn't like," as may, indeed, "some" climate "skeptics." Some "climate communicators" may well indeed "ignore or reject" the history of behavior of "some" climate scientists, as may, indeed, "some" "skeptics" ignore and reject the history of behavior of other climate "skeptics."

I see no reason why any such behaviors would not be examples of "motivated reasoning." Of course they would.

Should that aspect be further explored? Well, sure. But that question seems to suggest that somehow such behaviors have already in some way been excluded from previous study of "motivated reasoning." I don't see why it would be. Motivated reasoning is motivated reasoning, IMO. The exclusion of inconvenient data is a key "tell" for motivated reasoning - irrespective of any particular context. In other words, excluding the data which show that "motivated reasoning" is not proportionately more characteristic of one group of people, say climate scientists, than basically any other group of people, say climate "skeptics" would be a "tell" for motivated reasoning. As far as I'm concerned, there is abundant evidence that motivated reasoning is a product of fundamental cognitive and psychological attributes of how humans reason. Orientation in the climate wars is not explanatory, just as orientation along the lines of political identity (see Dan's take on Krugman) is not explanatory.

May 5, 2014 | Unregistered CommenterJoshua

Dan -

A link and a question. First, the link:

http://www.nytimes.com/2014/05/06/us/politics/in-justices-votes-free-speech-often-means-speech-i-agree-with.html?_r=1

A very interesting article, IMO, when viewed against the backdrop of "motivated reasoning."

As for the question. I was talking to someone today about motivated reasoning, and we started discussing how to view the societal changes that have taken place vis-à-vis public opinion on homosexuality and same-sex marriage, and the impact of "activism" related to those issues. I wonder if a rigid view of motivated reasoning might predict that no shift in public opinion, such as has occurred w/r/t those issues, could take place, and that the influence of "activism" would not be differentially discernible, as seems to have been the case with activism related to those issues. After all, aren't views on homosexuality and same-sex marriage dominated by orientations toward the "authority," or lack thereof, of religious doctrine? Wouldn't people from opposing camps be locked into reasoning that would be predicted by a confirmation bias in selecting "experts" to trust? Was the shift over time in public opinion possible with issues like homosexuality and same-sex marriage because those issues aren't necessarily deferred to "expert" opinion, as would necessarily happen with an issue such as climate change because of the technical/scientific requirements for deep understanding of the questions at play?

May 5, 2014 | Unregistered CommenterJoshua

Joshua,

"Kind of a meaningless term (CAGW)"

Quite right. It's an imperfect attempt to qualify the spectrum of opinions - my point is that I'm not talking simply about believers in global warming or anthropogenic global warming, where one could believe the world was warming without necessarily thinking we had to do anything about it. One has to believe in damage of some sort to think drastic action is necessary or desirable. CAGW is simply the most widely used conventional term I know of in the neighbourhood of such a distinction, although "the climate-concerned" probably comes close.

"Am I a "believer in CAGW" if I think that models that show some "fat tail" of probability of damaging impact from ACO2"

By the definition I was using, yes. But I agree the terminology could be further refined to a more precise gradation.

"should not simply be dismissed with a hand-wave and an appeal to the truth that "all models are wrong" while ignoring the caveat that "some models are useful"?"

Indeed. That's why I was going on at length about model validation up above. The relevant distinction, which it is true that a lot of sceptics fail to make, is that some models are indeed useful, but you don't know which ones until you have validated them. You have to first demonstrate that they are useful before you can treat them as such. All models are subject to error, but if you document the size of the error and demonstrate that the model stays within those bounds, then you can still use the model for making trustworthy predictions. If you don't know how big the error might be, then the model output tells you nothing useful.

Any unvalidated model always has a "fat tail" of risk, not because there's any actual evidence of such a risk, but because there's no good evidence that there isn't. One is constantly in danger of falling into the 'Pascal's Wager' fallacy.

"Choose to ignore, or don't agree with you about the causes along with the significance?"

Why not both? They seem to ignore it as an issue because they don't think it is significant - they answer the question "Does it matter?" with a "No".

I don't know how much disagreement about causes there is (assuming you mean the 'cause' of the climate scientists doing things like "reducing the number of series used [...] to enhance a desired signal"). I thought several of the proposals they offered me were very plausible. Our disagreement was, as I said, primarily about whether it "mattered".

I don't really know why climate scientists did it, and while I have a few hypotheses, I don't feel very strongly about them. So if people want to offer different ones, I don't usually argue much.

"Perhaps they, like me, think that some measure of tribalism from some amount of climate scientists is not scientifically justified, but that perhaps more damaging are the ways that "skeptics" have used those typical and normal and entirely human behaviors to justify a holier than though tribalism of their own?"

Hmm. Interesting. What do you mean by "damage", and damaging to what?

This reminds me of Ed Cook's email where he complained of a paper he was reviewing: "If published as is, this paper could really do some damage. [...] It won't be easy to dismiss out of hand as the math appears to be correct theoretically [...]".

A lot of people have interpreted that as damage to the climate "Cause", although I think it's just as likely to be damage to the credibility of dendroclimatology and the professional reputation of its practitioners. The thing is, if the science is wrong (or at least, unreliable), isn't damaging its credibility the right thing to do? So what does "damage" mean in this context?

By the way, this is just the sort of insight into others' opinions that I was looking for. When I point to the bits I have a different view on, I'm not trying to say "I'm right and you're wrong", I'm trying to encourage them to expand on their reasons and reasoning. Where do we differ? What's at the root of the disagreement?

"only that much more so when they ground such an argument in a practice of exploiting universal human behaviors [...] on one side as if they don't exist on both sides."

Of course, it's a universal human behaviour to do just that. As we've all noted with the discussion about Krugman's article, both sides are perfectly symmetrical in their belief in the asymmetry between the sides. It's no more surprising or blameworthy when sceptics do it as when climate scientists do it.

Climate sceptics have spent the last 20 years being ignored, excluded, derided, insulted, and psychoanalyzed. They've been compared to the worst conspiracy theorists, cranks, and holocaust deniers going, blown up in comedy videos, and had people calling for them to be fired, purged, and even subjected to a "climate Nuremberg". They've been subjected to an extreme contempt by important people in global governance - the 'Great and the Good' as we call them. They are, of course, annoyed and upset about that. And a little bit worried. It is of course a "universal human behaviour" that they'll tend to become somewhat more 'tribal' and aggressive about it than they should.

That's human too. But speaking just for myself I'm not out to blame anybody. I'm asking what science ought to do to fix it. And in particular, I'm trying to understand why so many other people don't think we should.

May 6, 2014 | Unregistered CommenterNiV

NiV -

--> "One has to believe in damage of some sort to think drastic action is necessary or desirable. "

A couple of issues there. The first relates to the subjectivity and uncertainty in how "drastic" is defined. I think that you are making assumptions there that are not based on validated modeling. The second relates to a larger assumption on your part. I don't think that "drastic [using that term with caveats] is necessary or desirable." I think that action (which could potentially, but not certainly, be what I would call "drastic") might be desirable ("necessary" is a moot point, IMO, because no one gets to determine what is "necessary"). It is a matter of probabilities. Just as I would reject someone who thinks with certainty that some unspecified action is necessary, so I would reject someone who thinks that some unspecified action is not justified (and who thinks that saying that action might be justified is "alarmism.")

At the risk of being tiresome yet again, this goes back to the larger points I'm trying to get across to you:

1: I find your certainty about the "costs" of action to be based on unvalidated modeling.
2: I find your characterization of those who disagree with you to be (at least sometimes) inaccurate. You say that I could be considered a "believer in CAGW," yet all that I am saying is that policies aimed at mitigating climate change might be justified as a matter of risk abatement in the face of uncertainty.
3: I find your characterization that people are advocating for "suspending democratic debate to ram such measures through at the public expense," to be too hyperbolic to be anything other than counterproductive - especially since you seem to want to apply that attribute disproportionately across the great climate divide (and I believe that to the extent that such a characterization actually exists in the climate wars, it should be meted out proportionately).

--> "is that some models are indeed useful, but you don't know which ones until you have validated them. "

I disagree. Here, I think that your standard is unrealistic (a reversal of what you say about my standards). I use unvalidated modeling all the time, with the understanding that my models are not validated, but that they represent the range of possibilities. They suggest probabilities. In the real world, when assessing risk in the face of uncertainty, we cannot always have models that are validated to the degree we'd like. (And of course, even there defining a tolerance point for validation is like saying that you have divided a line into the smallest possible segments. "Validation" is inherently subjective within an infinite realm of possibilities.) So I would say that unvalidated models are useful even though they are "wrong." They are useful because they help me to conceptualize the uncertainties. Any policy determinations I might make need to account for the fact that imperfect models show fat tails of outcomes that are troubling in nature. I like Kerry Emanuel's terminology of "high...outcome function."

In assessing the event risk component of climate change, we have, I would argue, a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability. This means talking not just about the most probable middle of the distribution, but also the lower probability high-end risk tail, because the outcome function is very high there.

Do we not have a professional obligation to talk about the whole probability distribution, given the tough consequences at the tail of the distribution? I think we do, in spite of the fact that we open ourselves to the accusation of alarmism and thereby risk reducing our credibility. A case could be made that we should keep quiet about tail risk and preserve our credibility as a hedge against the possibility that someday the ability to speak with credibility will be absolutely critical to avoid disaster.
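To make the tail-dominance point concrete, here is a toy numerical sketch in Python. Everything in it is invented for illustration (the distribution, the damage function, the sample size); it is not drawn from any actual climate or economic model.

```python
# Toy illustration with invented numbers: a skewed distribution of
# warming outcomes and a convex damage function. With damages rising
# steeply, the thin upper tail carries an outsized share of the
# expected outcome.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical warming distribution (deg C): most mass near 2-3 C.
warming = rng.lognormal(mean=1.0, sigma=0.4, size=1_000_000)

# Hypothetical convex damage function.
damage = warming ** 3

# Share of total expected damage contributed by the top 5% of outcomes.
cutoff = np.percentile(warming, 95)
tail_share = damage[warming > cutoff].sum() / damage.sum()
print(f"Top 5% of outcomes contribute {tail_share:.0%} of expected damage")
```

The exact share depends entirely on the invented inputs; the point is only that a "most probable middle" summary would miss where much of the expected damage lives.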

-->"Any unvalidated model always has a "fat tail" of risk, not because there's any actual evidence of such a risk, but because there's no good evidence that there isn't."

Yes, of course. But that is a fact of life that we can't just will away. The fact that the model isn't validated (to some subjective measure of tolerance) does not mean that the fat tail can be dismissed. But that same fat tail can be justification for action - because its possibility has not been disproven. This is what risk assessment in the face of uncertainty is all about.

-->"Why not both? They seem to ignore it as an issue because they don't think it is significant - they answer the question "Does it matter?" with a "No"."

I would again ask you to reconsider how you define "ignore." If you feel a momentary sharp pain but don't run to the doctor - does that mean that you've "ignored" the pain even though you determined that the pain wasn't so significant as to warrant a specific response? The person next to you might feel that pain would merit a trip to the doctor. Perhaps that person is a hypochondriac, or an "alarmist"? You can't reverse engineer from someone's disagreement with you about significance to determine that they have "ignored" something. In so doing, you are merely dismissing disagreement as being invalid.

-->"Hmm. Interesting. What do you mean by "damage", and damaging to what?"

Damage to the potential of constructive dialog. Damage to the possibility of risk assessment in the face of uncertainty. IMO, there is an interlocking mechanism of oppositional over-certainty that strengthens (or is the product of) tribalistic dynamics.

So no, I was not referring to damage to "the cause." Your pattern-recognition tendencies got in our way there. Temporarily. It wasn't insurmountable. And I would argue, that was because you were being what I call "meta-cognitive." (and I note here that Dan has been what I consider to be unjustifiably dismissive of the benefits of meta-cognition).

May 7, 2014 | Unregistered CommenterJoshua

NiV -

--> "Climate sceptics have spent the last 20 years being ignored, excluded, derided, insulted, and psychoanalyzed. They've been compared to the worst conspiracy theorists, cranks, and holocaust deniers going, blown up in comedy videos, and had people calling for them to be fired, purged, and even subjected to a "climate Nuremberg". They've been subjected to an extreme contempt by important people in global governance - the 'Great and the Good' as we call them. They are, of course, annoyed and upset about that. And a little bit worried. It is of course a "universal human behaviour" that they'll tend to become somewhat more 'tribal' and aggressive about it than they should."

From where I sit, there is a parallel there with how "realists" have been treated. So while I don't dismiss the behaviors you describe, and I'm not ignorant of the causal mechanism behind the tribal and aggressive response, at the same time I would say that both groups over-dramatize how they've been treated, and find false disproportionality in how the two sides are treated, respectively. IMO, that is all part of the self-victimization and castigation dynamic - which fits within an identity-protective and identity-aggressive mechanism that is a component of motivated reasoning.

May 7, 2014 | Unregistered CommenterJoshua

"It is a matter of probabilities. Just as I would reject someone who thinks with certainty that some unspecified action is necessary, so I would reject someone who thinks that some unspecified action is not justified (and who thinks that saying that action might be justified is "alarmism.")"

Quite so. Opinions differ, and our collective decision is a matter for public debate, in which all sides ought to get their say. I don't have a problem with anyone saying action might be justified. I agree, it might. The same goes for a lot of controversial policy proposals. That's what democratic debate is for.

"At the risk of being tiresome yet again, this goes back to the larger points I'm trying to get across to you: 1: I find your certainty about the "costs" of action to be based on unvalidated modeling."

OK. What's my view on the costs of action?

"You say that I could be considered a "believer in CAGW," yet all that I am saying is that policies aimed at mitigating climate change might be justified as a matter or risk abatement in the face of uncertainty."

I considered you a believer (tentatively, and accepting the criticisms of the terminology) on the basis of your belief in a "fat tail" of probabilities of damaging impact, which is something many people argue justifies action. But I guess it depends on how fat you believe the tail is - it's a matter of degree.


"3: I find your characterization that people are advocating for "suspending democratic debate to ram such measure's through at the public expense," to be too hyperbolic to be anything other than counterproductive"

It's not intended to be hyperbolic. What I'm referring to is the calls and attempts to exclude sceptics from public debate, and to dismiss their arguments without consideration. I'm referring to those who claim "the science is settled" to such a degree that presenting alternative views to the consensus is "false balance", and who write in to complain whenever any sceptic views evade the net.

On the minor TV channels I frequently see shows that take the ideas of Yetis, UFOs, Atlantis, and the Bermuda triangle seriously, and nobody says a thing. But in the past ten years I can recall only one single explicitly climate-sceptic show (The Great Global Warming Swindle, shown on a channel with a mandate for presenting alternative perspectives), and there was an explosion of protest and official complaints that such a thing should not be allowed. The complaints were rejected by the tribunal, but it's noticeable that nobody has ever shown it or anything like it again.

Not all supporters of climate action do. But some do. I'm only referring to that subset.

"I disagree. Here, I think that your standard is unrealistic (a reversal of what you say about my standards). I use unvalidated modeling all the time, with the understanding that my models are not validated, but that they represent the range of possibilities. They suggest probabilities. [...] So I would say that unvalidated models are useful even thought they are "wrong." They are useful because they help me to conceptualize the uncertainties."

I think possibly you misunderstand. It's the verification stage that tells you what the uncertainties and probabilities are. With an unverified model, you don't know.

Any model comes with a reported margin of error, a probability distribution, limits of applicability, a list of pre-conditions assumed, and so on. But if they've not been checked, who's to say they mean anything? I've got my own personal climate model that says the net global warming after 100 years will be 0.3 C plus or minus 0.05 C at 95%. Do you believe it, given that the model simply asserts it, and presents no evidence to demonstrate its accuracy? Any ordinary Joe on the street can propose their own model, and give it confidence bounds. But are any of them any use, without testing them to see if they actually work, within the bounds they state?

I would suggest not. You develop a model that makes a prediction. You either calculate an expected error bound and then do measurements to demonstrate it, or you simply measure what the error spread on the predictions is. That's verification. There's nothing to say that the spread has to be narrow. In fact, it's a lot easier to verify the model if it isn't. But if it's not verified, it's just hearsay.

Then you look at the verified model's error margin and you ask whether that's good enough for your application. If I say the global warming will be zero plus or minus 50 C, that's very likely true and an easily verified model, but it's useless for your policy decision purpose. Another one that has a verified accuracy of plus or minus 2 C might well be more useful, and plus or minus 0.5 C definitely would be sufficient. Looking to see if the model is accurate enough for your purposes is 'validation'.
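A minimal sketch of that two-step distinction, in Python, with invented numbers and hypothetical function names (`verify` and `validate` are my labels for the two steps described above, not any standard API): verification documents the model's empirical error bound; validation asks whether that bound is tight enough for the application.

```python
# Sketch under stated assumptions: invented out-of-sample data, and
# 'verify'/'validate' as illustrative names for the two steps described
# in the comment above.
import numpy as np

def verify(predictions, observations, coverage=0.95):
    """Verification: document the empirical error bound at the given coverage."""
    errors = np.abs(np.asarray(predictions) - np.asarray(observations))
    return np.quantile(errors, coverage)

def validate(verified_bound, required_tolerance):
    """Validation: is the verified bound good enough for this application?"""
    return verified_bound <= required_tolerance

preds = [0.21, 0.35, 0.18, 0.40, 0.27]   # hypothetical model predictions (deg C)
obs   = [0.25, 0.30, 0.22, 0.33, 0.31]   # hypothetical later observations (deg C)

bound = verify(preds, obs)
print(f"verified 95% error bound: {bound:.2f} C")
print("valid for a 0.5 C tolerance? ", validate(bound, 0.5))    # True
print("valid for a 0.01 C tolerance?", validate(bound, 0.01))   # False
```

Note that a wide but honestly measured bound still passes verification; it may simply fail validation for a demanding application, which is the "zero plus or minus 50 C" point below.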

So if the science was really bad and all we could say about it was that the warming would be zero plus or minus 50 C, the extremes of the spread are a very alarming "fat tail" of catastrophic outcomes. But the spread is not saying that's what's likely to happen - we have no evidence for that. It's saying "We don't know": we simply have no evidence that it's not.

If you make your decision on the basis of the upper bound of your uncertainty, you get the perverse result that the more ignorant and uncertain you are, the more confidently you make your decision to act. Indeed, if you are sufficiently uncertain, then you will always act, even if the model predicts a zero effect! The uncertainty margin alone is sufficient to push you over the threshold.


"I would again ask you to reconsider how you define "ignore." If you feel a momentary sharp pain but don't run to the doctor - does that mean that you've "ignored" the pain even though you determined that the pain wasn't so significant as to warrant a specific response?"

Yes.

Ignoring something may in general situations be justified. We ignore most things. I'm not arguing here that ignoring what climate scientists did is not justified. (Not because I don't have an opinion, but because that's not what this forum is about.) I'm simply observing that opinions differ, and this is the point on which we disagree.

"Damage to the potential of constructive dialog."

Ah, we're making progress! Dialog with who? What do you mean by 'constructive'?

The failure to respond properly (as sceptics see it) to the scientific 'issues' with the practice of climate science has blocked dialog with sceptics. We can't even find a common reference frame with such a view. So presumably you're not talking about dialog with sceptics. But then, with who? And could a dialog that resulted in a decision to do nothing count as 'constructive'?

"So no, I was not referring to damage to "the cause." Your pattern-recognition tendencies got in our way there. Temporarily."

I would argue they didn't get in the way at all. With radically different world views it's very easy for us to interpret the same words differently; miscommunication due to conflicting metacontexts is rife. I could have simply assumed that was what you meant, and said nothing. But by explaining how your words could be interpreted, I can guide the discussion towards a different way of expressing them that avoids the potential misunderstanding.

I'm sure that in teaching, you know that when you're trying to explain a difficult concept, it's useful to get the student to explain in their own words how they're interpreting your explanation of the concept, in order to be able to diagnose where they've gone wrong. If I tell you how it looks to me, you can maybe see clearer how to explain it in a way I can understand.

"From where I sit, there is a parallel there with how "realists" have been treated."

Indeed. Symmetrical in their belief in the asymmetry of the situation...

May 7, 2014 | Unregistered CommenterNiV

--> "OK. What's my view on the costs of action?"

All I know about that is what you said (and I re-posted previously as a specific reference):

-->"People will die because of the measures governments are taking against global warming."

Which, as I indicated, seems to be in contrast to this:

-->"But the economic models appear to be even less well validated than the climate ones, so take with a pinch of salt."

The first quote states an absolute, even though as far as I can tell it is based on the output of what you believe are poorly validated models (referred to in the second quote). It seems to me that your standards w/r/t whether unvalidated models are useful are being arbitrarily applied.

-->"But I guess it depends on how fat you believe the tail is - it's a matter of degree."

To some extent yes, but I would also say that it also depends simply on the existence of risk. Even very low risk of highly concerning events can justify action.

-->"What I'm referring to is the calls and attempts to exclude sceptics from public debate, and to dismiss their arguments without consideration. I'm referring to those who claim "the science is settled" to sucha a degree that presenting alternative views to the consensus is "false balance", and who write in to complain whenever any sceptic views evade the net."

Of course, there are outliers who I might characterize as fitting your description - but for the most part I still think that the description is hyperbolic. I see many "realists" object to what they consider to be a false balance. They think that news organizations should not seek to create a false balance. They complain about it. But that isn't the equivalent of: "suspending democratic debate to ram such measures through at the public expense." For example, while "realists" object vociferously to the content of the Congressional testimony of someone like Spencer or Christy or Curry or RPJr. - I don't see them arguing that they have no right to, or that the government should prevent them from, testifying. "Skeptics" object all the time to what they think is an ill-considered "balance" in news coverage. Does that then mean that they are arguing in favor of "suspending democratic debate?" And I often read "skeptics" arguing (selectively) that "activism" among scientists is destroying science itself.

Personally, I find that all the hand-wringing about loss of "freedom of speech" holds civil rights (and real concerns about freedom of speech) hostage to a partisan climate war.

--> "I think possibly you misunderstand. It's the verification stage that tells you what the uncertainties and probabilities are. With an unverified model, you don't know."

It's possible. I assume that you're speaking to a technical distinction between validation and verification, and I am using those terms unscientifically (due to my technical and intellectual limitations)...

I am sensitive to a similar question in another environment - education, where people focus on whether testing is reliable (produces the same results if taken multiple times by the same person) while neglecting something else that's very important: its validity (whether it really is measuring what it purports to measure: something meaningful about a student's knowledge and/or intellectual abilities).

But seeing as how I'm not really capable of discussing validation and verification of modeling in a technical sense, I'll speak to how I view "poorly validated" and poorly verified climate models.

I'm saying that models are useful for conceptualizing the range of possibilities, even w/o confidence intervals. Inherently, we can't perfectly validate a model that projects a phenomenon as complicated as the effect of ACO2 on future climate. We have no experimental test conditions. Further, even to the extent that a model is validated, we can't perfectly verify that it produces reliable results. In the end, poorly validated/verified models could be counterproductive because they create disproportionate concern about events that have little chance of occurring. Or they could be productive because they inform me of unlikely but extremely dangerous events. Even if I don't know which is occurring in a particular instance (unsupported concern or beneficial alarm), such is the human predicament. Such is evaluating risk in the face of uncertainty.

--> "Any model comes with a reported margin of error, a probability distribution, limits of applicability, a list of pre-conditions assumed, and so on. But if they've not been checked, whose to say they mean anything? "

A model gives results based on inputs and parameters. Those inputs and parameters could be wrong. Or a model could predict events that don't occur, because what actually happens falls outside the confidence interval. It doesn't predict the future. It provides outcomes given specific input. The model stands apart from a prediction based on its output.

--> "Do you believe it, given that the model simply asserts it, and presents no evidence to demonstrate its accuracy?"

I don't believe or disbelieve it...just as I don't believe or disbelieve mainstream climate models. I accept the models for what they are - mechanisms that use an algorithm to take an input to create an output. I don't see it as something to believe in or not believe in. It provides me with information. Information that may or may not be useful. W/r/t climate models, I see smart and knowledgeable people taking as much input as they can, using their expertise to inform an algorithm, and then processing the data. I don't take the outcomes as a matter of faith or belief. I'm well aware that there could be errors in their parameters, errors in the input, errors in the algorithm, or other methodological problems. Thus, I look at the models as tools to use to help conceptualize an uncertain set of conditions. The more and the better they are verified and validated, the more confidence I have in their output - but as I said, standards of what comprises adequate verification and validation are subjective, and even when the models are what I consider substandard, they are useful even though they are "wrong" (in the sense of not being "right").

-->"But if it's not verified, it's just hearsay."

I own a rental property. The other day, one tenant told me something troubling that she heard from a second tenant about what a third tenant did. Now I well know that there is a high probability of inaccuracy, but I didn't ignore what I heard. I took action with the recognition that the information was unverified and unvalidated. I took steps to account for my uncertainty. But the information was useful. And in the end, I was better off for having heard the information and responding. Now if I hadn't allowed for my uncertainty, and just run off half-cocked on the basis of hearsay, I might well have wound up taking counterproductive action. I could have taken the wrong action even after accounting for my uncertainty. Such is life.

--> "If you make your decision on the basis of the upper bound of your uncertainty, you get the perverse result that the more ignorant and uncertain you are, the more confidently you make your decision to act."

You need to explain that more. I can make a decision on the upper bound of my uncertainty, with an understanding that I'm doing so. As a result, I am acting with less confidence (than I would be with greater certainty) that my action will be merited as a response to the situation, but with an understanding that I've acted so as to account for an unlikely event that would have significant consequences. Accordingly, I will adjust based on the "costs" of my action.

--> "Indeed, if you are sufficiently uncertain, then you will always act, even if the model predicts a zero effect! "

Again, this I don't understand. You'll need to explain more. I can be very uncertain about an outcome, and either choose to act with an acceptance of that uncertainty, or not act. For example, sometimes, in a moment of crisis, I have prayed for divine intervention. I did so even though I thought that there is virtually no chance of there being some divine entity that will hear my prayer and act as a response in a way that will address my concerns. I'm an agnostic, and ordinarily I don't pray because I think it is pointless. My degree of uncertainty is not what changes my decision as to whether or not to act (pray) in a specific scenario. My level of uncertainty remains the same. It is my judgement of the magnitude of the consequences that drives my change in decision-making about action. It's a matter of probabilities. More potentially severe outcomes change my perception of the meaning of the probabilities. (Of course, the example is not precisely on point because there is no correlated change in the "costs" of action).

--> "Yes.

--> Ignoring something may in general situations be justified. We ignore most things."

We're going to have to agree to disagree on this. When I feel a pain but decide not to do anything about it, I have not ignored the pain.

--> "Ah, we're making progress! Dialog with who? What do you mean by 'constructive'?"

Dialog across society as opposed to tribal bickering. "Constructive" would be working together to distinguish between positions and interests, and to identify synergies (in shared interests).

--> "The failure to respond properly (as sceptics see it) to the scientific 'issues' with the practice of climate science has blocked dialog with sceptics.

Realists have a similar perspective.

--> "We can't even find a common reference frame with such a view. So presumably you're not talking about dialog with sceptics. But then, with who? And could a dialog that resulted in a decision to do nothing count as 'constructive'?""

Indeed, the problem is in finding a common frame of reference. The obstacles to establishing that common frame of reference are created bilaterally. I have encountered that over and over at "skeptical" websites - resistance when I am trying to establish a common frame of reference. I am called all kinds of names for trying to establish a common frame of reference. Don't get me wrong - it doesn't bother me, but I don't think that the failure to find a common frame of reference has an origin in the ideological stance of either "skeptics" or "realists," but in the mechanisms of motivated reasoning that affect both groups.

--> "So presumably you're not talking about dialog with sceptics. "

I'm not talking about self-identified "skeptics" (or at least the majority that I've encountered), just as I'm not talking about self-identified "realists." For people so identified (usually), the constructive convo is buried beneath identity aggressive and identity protective behaviors. IMO, you can't make progress in such a context. You have to be working with people who are engaged in good faith to make progress.

--> "But then, with who?"

People who are non-identified, or people who are willing to loosen their grip on their identification. But more important, IMO, than identifying the "who" is in creating a context for constructive dialog. Roughly speaking, I think that the principles of "participatory planning" lay out such a context.

-->"And could a dialog that resulted in a decision to do nothing count as 'constructive'?"

Most definitely.

May 9, 2014 | Unregistered CommenterJoshua

NIV's ramblings strongly reminded me of a series of exchanges I had with a local climate change skeptic, one with a Ph.D. in analytical chemistry, who worked for many years in a prominent consumer products firm, most recently assisting in the conduct of controlled clinical trials for new medications. For him, controlled trials are the gold standard for assessing effectiveness. And I agree.

He holds that he will not accept the findings on climate change until such trials can be held. And when I asked how he would do that, he of course has no answer. He is a perfect case of a motivated reasoner who uses his advanced education to find fault with all the evidence presented, and fails to propose a better solution.

And how is that different from NIV's statement?:

"Scientific model building goes through a number of distinct stages. Exploratory - where you are trying to get a handle on how the physics works, what factors are important, and so on. Calibration - where you have a correct outline of the physics and only need to determine the adjustable parameters more precisely. And verification - where you use new and independent data to test and document the accuracy and limits of validity of the model, proving (statistically or otherwise) that it works within the documented margin of error. Then there is validation, which means confirming that the verified model is sufficiently accurate and reliable for the application that you are about to use it in."

I wondered if NIV is in fact this person.

What he and NIV ignore, quite intentionally I think, is that we are living this experiment. So we cannot step outside of it to verify and validate it as one can a lab experiment or a controlled trial. There are consequences of taking action, but most think that there are greater consequences of inaction.

This chemist has a friend, a highly published MD here in the med school, who when I suggested that most folks are risk averse and that perhaps we should take out some insurance in the form of policies to reduce carbon emissions, asserted "I never buy insurance!" - which confirms Dan's framework of cultural cognition.

Dan quite properly has been extending the concept of motivated reasoning to the cultural context. But I think we need to speak more plainly about people such as my chemist acquaintance. We go to school to understand how the world works. And when we encounter a problem, we are supposed to have learned to say: "oh, this is a problem and I need to think about it". Next we should have learned to say "now, how should I think about it?" And this is the stuff of our course work and research.

But if the most educated among us do not do this, then we should simply conclude that they are intellectually dishonest, and say so publicly.

May 9, 2014 | Unregistered CommenterHCG

Joshua,

"All I know about that is what you said (and I re-posted previously as a specific reference):"

Ah! I see! I apologise, I hadn't meant to imply that one followed from the other. The economic models describe what might happen 50-100 years down the line. The reference to people dying referred to the phenomena of 'excess winter deaths' and 'fuel poverty', which happen now and are measurable and modellable with greater confidence. We have a lot of elderly people on the poverty line. There are widespread reports of such people having to make choices between eating and heating. There are cases of people who died of hypothermia in houses with the heating off, reportedly because they could not afford to heat them. If you raise energy prices with respect to incomes far enough to have a real effect on energy use (which is the mechanism of current government policies) then it will have a perfectly predictable effect on the poor.

I will grant you that it's a conclusion that could certainly be argued with. But it's got nothing to do with the long range economic models that are used to assess the benefit or otherwise of global warming.

"To some extent yes, but I would also say that it also depends simply on the existence of risk. Even very low risk of highly concerning events can justify action."

Yes, this is what I think of as the 'Pascal's Wager' fallacy. The argument goes that risk is probability times impact. If the probability can be argued to be non-zero, then you can always get the decision you want by magnifying the impact sufficiently. Thus Pascal argued that no matter how small you thought the probability of heaven and hell, because the impact of your decision to believe or not was infinite, belief was always justified on rational risk-avoidance grounds. (It's similar to the St Petersburg paradox, where there is a game with an infinite payoff which you will nevertheless almost certainly lose.)
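A quick simulation of the St Petersburg game makes the point; the entry price and trial count below are arbitrary choices for illustration.

```python
# Toy St Petersburg simulation: the pot starts at $2 and doubles on each
# heads until the first tails. The expected payoff is infinite, yet at any
# substantial entry price the player almost always loses.
import random

def st_petersburg():
    pot = 2
    while random.random() < 0.5:   # heads: double the pot and flip again
        pot *= 2
    return pot

random.seed(1)
entry_price = 100                  # arbitrary stake
trials = 100_000
wins = sum(st_petersburg() > entry_price for _ in range(trials))
print(f"Beat the ${entry_price} entry price in {wins / trials:.2%} of games")
# Expect roughly 1.6%: the pot exceeds $100 only after six heads in a row.
```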

It's a useful political argument because if you're advocating for action based on predictions of events for which you have no evidence, you can always make the case stronger by predicting a more scary event. Thus, people with no evidence tend to predict more dramatic catastrophes that we have to act fast to avoid.

It can be more easily seen to be a fallacy by noting that the technique can be applied to any argument, even for diametrically opposed arguments. Thus Pascal's Wager is equally an argument for sacrificing to pagan Gods instead, on the basis that they all have their infinite heavens and hells too. It can, as I noted earlier, be applied to far more realistic disasters like extraterrestrial invasion. (The Drake equation is an unreliable and unvalidated model, but there are a lot of very sensible astronomers who do take the possibility of intelligent life elsewhere in the universe as a serious possibility, so there is clearly a non-zero probability...) However, a very low risk of even highly concerning events does not necessarily justify action. The only time it does, I would say, is when the costs of acting are trivial, as when an atheist might pray when prayer costs nothing.

In practice, you often see impact arguments being used selectively - the (uncertain) risks of one course are highlighted while the (no less uncertain) risks of all the other courses are ignored. It's fertile ground for motivated reasoning. However, clearly opinions differ on the point. I wouldn't necessarily expect you to agree with my point of view, but do you at least understand it?

" I took action with the recognition that the information was unverified and unvalidated. I took steps to account for my uncertainty."

OK, so you recognise there is a difference between running off half-cocked and taking steps to account for the uncertainty.

So would you agree if I argued that the climate model outputs were sufficiently concerning for us to obtain more reliable information, and that we should therefore institute a research programme to collect and publicly archive the data, assess and improve the quality of the sensors, use professional software engineers to write the software, professional data archivists to manage the data, have all the statistics checked over by professional statisticians, and put *all* the calculations, data, quality checks, auditors reports, code reviews, source code, model outputs, and so on into a public archive where anyone can read, comment, and criticise? And then for scientists to make such criticism easy, and use climate sceptics as an alternative viewpoint with which to test it? Fund them, even?

Because that's the argument Steve McIntyre has been making since the beginning, and that Anthony Watts made with his SurfaceStations project, and lots of others. If this is such a serious business, and it is, then fix the science! Make it as solid and rigorous as you can. And absolutely do not continue to accept known errors in the record, or dismiss poor scientific practice with a wave of the hand, or tolerate weak arguments and opinions masquerading as solid scientific evidence.

Because that one is *my* position. I'm not saying we should ignore or dismiss the issue. I'm saying we should take it more seriously! But what we should be doing first is to fix the science, and get the best possible information we can, *before* we decide what to do about it. Because the evidence so far is enough to justify looking for more evidence, but is *not* good enough to safely take further action on.

The thing I've never understood is why scientist-believers would accept (unnecessarily) bad science on a topic that they themselves regarded as of overwhelming importance. If anyone took such liberties with something I regarded as safety-critical, let alone on a global scale, I'd be incandescent! Why aren't you?

It's something I don't understand, but would like to know more about.

"You need to explain that more. I can make a decision on the upper bound of my uncertainty, with an understanding that I'm doing so."

OK, let's say our threshold for action is that the upper 95% bound exceeds 2 C. If I have a highly accurate model that says zero plus or minus 0.1 C, I won't act and I'll feel happily confident about it. If I have a less accurate model that says zero plus or minus 2 C, I'm on the edge. I'm not certain if it meets the criterion or not, and so I'll be nervous and tentative about advocating action. If I have a really bad model, based on guesswork and a string of unlikely assumptions, and the output is zero plus or minus 10 C, then the upper bound is far, far above the threshold, and I am in no doubt about whether the criterion has been met.

The worse and more unreliable my model, the more confident I am about my decision to act.
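The same example as a few lines of Python (the threshold and margins come from the paragraph above; the decision rule is the one under discussion, not anyone's actual policy procedure):

```python
# Decision rule from the example above: act when the upper 95% bound
# exceeds a 2 C threshold. Every model predicts zero warming; only the
# hypothetical error margins differ.
THRESHOLD = 2.0  # deg C

models = {
    "accurate   (0 +/- 0.1 C)": (0.0, 0.1),
    "borderline (0 +/- 2 C)  ": (0.0, 2.0),
    "guesswork  (0 +/- 10 C) ": (0.0, 10.0),
}

for name, (prediction, margin) in models.items():
    upper = prediction + margin
    decision = "ACT" if upper > THRESHOLD else "don't act"
    print(f"{name}: upper bound {upper:+5.1f} C -> {decision}")

# Only the worst model clears the threshold decisively: under this rule,
# the less reliable the model, the more confident the decision to act.
```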

"My degree of uncertainty is not what changes my decision as to whether or not to act (pray) in a specific scenario. My level of uncertainty remains the same. It is my judgement of the magnitude of the consequences that drives my change in decision-making about action."

Precisely. Evidence (or its lack) informs probabilities, but people are responding to the impacts. People differ in their response to this trade-off.

"Indeed, the problem is in finding a common frame of reference. The obstacles to establishing that common frame of reference are created bilaterally. [...] but I don't think that the failure to find a common frame of reference has an origin in the ideological stance of either "skeptics" or "realists," but in the mechanisms of motivated reasoning that affect both groups."

Yes, that was what I was trying to argue above. "The question I was initially asking was whether some climate scientists excluding data they didn't like and some climate communicators ignoring or rejecting this history were potentially further examples of what Dan was originally talking about - of the human tendency to motivated reasoning, similar to Dan's example of science communicators ignoring the science on science communication?" I'm not saying that the problems are all on one side. I was asking whether the particular behaviours Dan was describing were examples of what Dan is always talking about, too. And I think we have indeed managed to discuss that, for which I thank you.

May 10, 2014 | Unregistered CommenterNiV

HCG,

"For him, controlled trials are the gold standard for assessing effectiveness. And I agree. He holds that he will not accept the findings on climate change until such trials can be held. And when I asked how he would do that, he of course has no answer."

In the case of climate, there are potentially answers (that it would take a long technical digression to go into), but I think this makes an interesting point besides that. In physics, there are many things we cannot know, because we don't yet have the experimental techniques or technology to do the experiment that would tell us. In science, the answer to such questions is "We don't know", and one of the differences between science and other philosophies that purport to give answers to life's questions is that "We don't know" is always an acceptable answer.

We'll try to find out. We're not giving up. But at the moment we don't know, and we're not going to tell you that we do know just for the sake of having an answer, and we're going to carry on pointing out all the flaws in the arguments of those other philosophies that claim they do know. I don't have to know the secret to eternal youth myself to be able to point out that the snake oil doesn't work.

"I wondered if NIV is in fact this person."

No. There are a lot of us! A number of surveys report that about 15-20% of scientists are climate sceptics to some degree (with heavy caveats). In some particular sciences, like meteorology, the percentage is even higher.

"What he and NIV ignore, quite intentionally I think, is that we are living this experiement. So we cannot step outside of it to verify and validate it as one can a lab experiment or a controlled trial."

On the contrary. The fact that we are living in the experiment is why I care. We'll be the ones who pay the high energy prices, and the carbon taxes and tariffs and subsidies, and the caps and limits on emissions. We'll be the ones who have to deal with the regulations and legal bans, who won't be allowed to do what we want to do, who will have limits placed on our freedoms. We'll be the ones who have to pay the economic and political consequences of your experiment. There are likely to be great consequences to action, too.

Are you ignoring those consequences intentionally, do you think? :-)

"who when I suggested that most folks are risk averse and that perhaps we should take out some insurance in the form of policies to reduce carbon emissions, asserted "I never buy insurance"!"

Very sensible! Insurance companies are businesses that need to make a profit. They do that by setting the premiums such that paying the premium is on average more expensive than taking the risk. Always! So if you can survive the impact, it's always economically a better deal to go uninsured.

The time when it is sensible to take insurance is when the impact is catastrophic and beyond your means to survive without help, when the probability is known to be high enough that there's a significant probability you might face it, and when the price is affordable and does not pose a significant problem in itself. It's worth $500 a year to insure your house against earthquake, flood, fire, and theft. It's not worth $50,000 a year, unless it's a *very* expensive house in a very bad neighbourhood.
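The arithmetic behind that, as a back-of-envelope sketch; all figures are invented for illustration.

```python
# Invented figures: compare the premium to the expected uninsured loss.
# The premium is priced above the expected loss (the insurer's margin),
# so on expected value alone the insured loses money every year.
p_loss  = 0.001      # hypothetical annual probability of total loss
loss    = 300_000    # hypothetical value at risk ($)
premium = 500        # hypothetical annual premium ($)

expected_loss = p_loss * loss    # $300 per year on average
print(f"expected uninsured loss: ${expected_loss:,.0f}/yr vs premium ${premium}/yr")
# Going uninsured "wins" by $200/yr on expected value - unless a $300,000
# hit is beyond your means to survive, which is exactly the
# catastrophic-impact condition in the comment above.
```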

"But I think we need to speak more plainly about people such as my chemist acquaitance . We go to school to undertand how the world works. And when we encounter a problem, are supposed to have learned to say: "oh, this is a problem and I need to think about it". Next we should have learned to say "now, how should I think about it?" And this is the stuff of our course work and research."

Yes, and we have learnt that the way the world works is that different people think differently, set different standards of evidence, draw different conclusions, and that none of us is necessarily being dishonest in doing so. What we need to do is ask "Why does the other person think differently? Might they know something I don't?" This is how and why the scientific method works - conclusions are subjected to systematic scepticism; all results, no matter how well established or by whom, are open to being questioned; we always show our working; we try to check everything we can and document what we've done, so others can help us fill in the gaps; and we take criticism seriously. Scientific scepticism is the immune system of the body of scientific knowledge, constantly searching for and eliminating flaws. Science has taught us that we are all of us fallible, have many cognitive biases, and are inclined to fool ourselves. Science is a toolbox of methods to try to reduce that tendency.

"But if the most educated among us do not do this, then we should simply conclude that they are intellectually dishonest, and say publicly."

People are intellectually dishonest if they publicly espouse high scientific standards or processes that they don't themselves adhere to - when they know what the rules of science are and ignore them.

So you must know as a scientist that you don't make data up. You don't hide inconsistencies and cases where your theory conflicts with observation. You don't only report the results you like. You don't truncate data when it contradicts your assumptions or conclusions. You don't describe data from sensors as "high quality" without going and checking. You don't tell people you checked when you didn't (and couldn't, because the data doesn't exist). You don't perform validation tests on your results, and still publish the results without mentioning that the tests failed. You don't continue to publish data you know is "meaningless" or wrong, or at least unreliable, without telling the people using it. You don't use ad hoc and undocumented statistical tests you just invented to test critical results, with no background understanding of the test's properties.

And you don't, as a scientist, publicly endorse scientific results, claiming the mantle of scientific authority, when you haven't yourself checked the results out personally. If you haven't checked the evidence and arguments yourself as a scientist would, you've got no more justification for belief than any non-scientist. Less, arguably, since you ought to know better.

But a lot of scientists do, since - as Dan here has pointed out in the past - nobody has the time to check every scientific result they rely on. We claim to be using science, but 'Argumentum ad Verecundiam' is fundamentally unscientific. Is that really 'intellectual dishonesty'? Or just being human?

May 10, 2014 | Unregistered CommenterNiV

NiV -

--> "Ah! I see! I apologise, I hadn't meant to imply that one followed from the other. The economic models describe what might happen 50-100 years down the line. The reference to people dying referred to the phenomena of 'excess winter deaths' and 'fuel poverty', which happen now and are measurable and modellable with greater confidence. We have a lot of elderly people on the poverty line. There are widespread reports of such people having to make choices between eating and heating. There are cases of people who died of hypothermia in houses with the heating off, reportedly because they could not afford to heat them. If you raise energy prices with respect to incomes far enough to have a real effect on energy use (which is the mechanism of current government policies) then it will have a perfectly predictable effect on the poor."

Remember, NiV -

"you must do the best you can — if you know anything at all wrong, or possibly wrong — to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it."


Actually, I have followed discussions of "excess winter deaths due to 'fuel poverty,'" and on the basis of what I've read, I find your statement to be overconfident, and based on unvalidated modeling. Validated modeling would account for all variety of factors to distinguish correlation from causation. "Widespread reports" does not equal validated modeling. The causes of price increases are many and varied, as are the factors that affect access to energy. The factors behind "excess winter deaths" are complicated and varied. The differential health impacts of reduced ACO2 emissions are complicated. To make a statement such as yours, positive and negative externalities need to be controlled. I have asked quite a number of "skeptics" to verify and validate their certain claims about deaths from "fuel poverty" as the result of your sort of unspecified "measures governments are taking against global warming," and thus far, IMO, none have done so. The official documents that I have looked at (primarily from governmental organizations in the UK) are more circumspect than what can be seen in your statement.

--> "If you raise energy prices with respect to incomes far enough to have a real effect on energy use (which is the mechanism of current government policies) then it will have a perfectly predictable effect on the poor."

This is, quite precisely, the sort of statement I was referring to - and it has nothing to do with what economic models project 50-100 years out. It has to do with your unvalidated and unverified economic modeling. This reminds me of the oft-found claim from "skeptics" about deaths due to "bans" of DDT. Counterfactual claims about what might have happened had things been different require a very high bar of proof. Raising energy prices can mean many things. It can mean raising the prices for some while providing subsidies for others. It can result in fewer negative health impacts from particulates. It can mean lower geopolitical and economic costs of keeping fuel flowing - and saving resources that can be spent on things such as increasing access to healthcare or education (which would result in fewer deaths).

--> "I will grant you that it's a conclusion that could certainly be argued with."

Well, that was my point. Now you are acknowledging the uncertainty.

--> "But it's got nothing to do with the long range economic models that are used to assess the benefit or otherwise of global warming."

I was careful to refer to different kinds of modeling in my discussion - not just the sort employed by Tol - where you spoke of the caveats. I also spoke to your own personal economic modeling that finds a directly linear relationship between those unspecified governmental policies and deaths. You took selected input data with ambiguous parameters, ran them through some unverified algorithm, and derived an output of completely certain conclusions. That's modeling. My point was that your reference to the caveats above was selective.

--> "It's similar to the St Petersburg paradox, where there is a game with an infinite payoff which you will nevertheless almost certainly lose."

One of my favorite cartoons is where "Joe" comes up on "Jim" who is lighting dollar bills on fire. When asked why he was burning up dollars, Jim said he hadn't had the chance to buy lottery tickets that day.

I don't see that form of thinking as a "fallacy" within a certain range of bounds. I see it as "normal" human behavior, that makes rational sense within the larger framework of risk analysis in the face of uncertainty. The question of when it becomes fallacious thinking, or irrational if you will, is subjective. There is no firm line that differentiates that methodology as transgressing the bounds of rationality.

--> "It's a useful political argument because if you're advocating for action based on predictions of events for which you have no evidence, you can always make the case stronger by predicting a more scary event. Thus, people with no evidence tend to predict more dramatic catastrophes that we have to act fast to avoid."

Of course. Reagan's "evil empire" and Bush's "WMDs" come to mind. But it is fallaciously binary to argue that simply because such reasoning can be manipulated for political ends, it is therefore invalid. We employ such reasoning on a daily basis. Sometimes it works out and sometimes it doesn't.

--> "Thus Pascal's Wager is equally an argument for sacrificing to pagan Gods instead, on the basis that they all have their infinite heavens and hells too."

Sure. And we all have our subjective ways of determining which sacrifices to make, or even what comprises a sacrifice. I might believe that being killed on an altar will bring me everlasting joy.

--> "However, a very low risk of even highly concerning actions does not necessarily justify action. "

Of course not. Just as ruling out actions to prevent a low risk of highly concerning future events does not necessarily justify inaction.

--> "However, a very low risk of even highly concerning actions does not necessarily justify action. The only time, I would say, was when the costs of acting are trivial, as when an atheist might pray when prayer costs nothing."

But your assessments of "cost" and "trivial" are subjective. They are necessarily so (because one man's "cost" can easily be another man's "benefit"), and it is certainly easy to be overconfident about one's assessment. IMO, the problem is when people don't recognize their own subjectivity and become selective w/r/t validation and verification - and when assessments regarding "costs" and "triviality" are a product of identity aggression and identity protection.

--> " I wouldn't necessarily expect you to agree with my point of view, but do you at least understand it?"

Do I understand that you might define "costs" and "triviality" differently than others? Of course. That is my basic argument. Do I understand your specific definitions of "cost" and "triviality?" Yes, for the most part. And I can't simply will your definitions away by labeling them as unvalidated or unverified. But I can discuss with you the differing definitions with an understanding that there are basically two courses of action: (1) one set of definitions is aligned with people who have social power, and thus becomes operational. There is no shared ownership over policy outcomes. It is win/lose. Or, (2) with dialog to distinguish "positions" (in this case, subjective definitions) and "interests," we can work from a point of trying to establish common definitions - but even if we can't, we can begin to identify common interests. In that way, we can share ownership over policies developed. It is win/win (along the lines of "Getting to Yes").

--> "So would you agree if I argued that the climate model outputs were sufficiently concerning for us to obtain more reliable information, and that we should therefore institute a research programme to collect and publicly archive the data, assess and improve the quality of the sensors, use professional software engineers to write the software, professional data archivists to manage the data, have all the statistics checked over by professional statisticians, and put *all* the calculations, data, quality checks, auditors reports, code reviews, source code, model outputs, and so on into a public archive where anyone can read, comment, and criticise? "

That's all fine. What that doesn't solve is the question of weighing probabilities in the meantime. And, of course, each of the actions you propose is fraught with the biasing influences of motivated reasoning. They won't make the polarization disappear, because the polarization is not rooted in the scientific analysis. You and I often come back to this point. I think that you're trying to have your cake and eat it too. If you accept the mechanism of motivated reasoning, as a mechanism that is rooted in fundamental cognitive and psychological characteristics of human nature, then you have to accept (IMO) that to address the problem you have to go to the root. You can't eradicate a disease merely by treating the symptoms.

--> "Because that's the argument Steve McIntyre has been making since the beginning, and that Anthony Watts made with his SurfaceStations project, and lots of others. If this is such a serious business, and it is, then fix the science! Make it as solid and rigorous as you can. And absolutely do not continue to accept known errors in the record, or dismiss poor scientific practice with a wave of the hand, or tolerate weak arguments and opinions masquerading as solid scientific evidence."

That is complicated, and your description is, IMO, unrealistically binary. McIntyre and Watts are selective in their approach to the science and the tribalism, IMO. Their focus on "rigor" is highly selective. They are not unique in that regard - but the binary vision that sees them as some life raft of skepticism (w/o quotations) rowing against a tide of "bad science" seems to me to only perpetuate existing patterns.

--> "I'm saying we should take it more seriously! But that what we should be doing first is to fix the science, and get the best possible information we can, *before* we decide what to do about it. "

I think that this notion of sequence is fallacious. There is no distinct "first." There is only an imperfect sequence that flows in various directions at various times. I think that the notion of "first fix the science" is destructive. Who decides when it has been "fixed?" Who determines what steps to take? Just as you can construct an "irrational and manipulative" vision of low risk/high damage function reasoning, so I can construct an "irrational" and "manipulative" notion of some sequence of "fixing" the science and then "acting" upon "verified and validated" model output.

--> "Because the evidence so far is enough to justify looking for more evidence, but is *not* good enough to safely take further action on."

This seems to imply that we need to have a definitive answer there - without even specifying what action you're talking about. There is no perfect justification. There are only probabilities.

Gonna have to stop there. Too many things I need to do that are not getting done.

May 10, 2014 | Unregistered CommenterJoshua

In response to my mention of the MD ideologue who wrote me "I never buy insurance!",

NiV writes:

"Very sensible! Insurance companies are a business, who need to make a profit. They do that by setting the premiums such that paying the premium is on average more expensive than taking the risk. Always! So if you can survive the impact, it's always economically a better deal to go uninsured."

This of course omits the small detail that virtually everyone buys or wants insurance - thus the proliferation of insurance of all types, for centuries.

Easy to see motivated reasoning in full display for the physician and NiV. I guess we owe him or her some gratitude for the public demonstration. I learn from this, and when I encounter this I continue the interaction to see how much I learn, and how the motivated reasoner fails to learn.

System 1 in full display.

May 10, 2014 | Unregistered CommenterHCG

"This of course omits a small detail that virtually everyone buys or wants insurance, and thus the proiferation of insurance of all types, and for centuries."

Why is this particular small detail relevant? What point are you making?

That something has been desired by many and proliferated for centuries does not mean it is logical or sensible. Insurance is just a form of gambling - what gamblers refer to as the 'sucker bet'. And suckers have been losing their shirts to con men for centuries, too.

May 10, 2014 | Unregistered CommenterNiV