Tuesday, November 19, 2013

Don't select on the dependent variable in studying the science communication problem

I’ve talked about this before (in fact, there isn’t anything that I ever talk about that I haven’t talked about before, including having talked before about everything that I ever say), but it's impossible to overemphasize the point that one will never understand the “science communication problem”—the failure of valid, widely accessible decision-relevant science to dispel controversy over risk and other facts to which that evidence directly speaks—if one confines one’s attention to instances of the problem.

If one does this—confines one’s attention to big, colossal, pitiful spectacles like the conflict over issues like climate change, or nuclear power, or the HPV vaccine, or gun control—one's methods will be marred by a form of the defect known as “selecting on the dependent variable.” 

"Selecting on the dependent variable" refers to the practice of restricting one’s set of observations to cases in which some phenomenon of interest has been observed and excluding from the set cases in which the phenomenon was not observed. Necessarily, any inferences one draws about the causes of such a phenomenon will then be invalid because in ignoring cases in which the phenomenon didn’t occur one has omitted from one’s sample instances in which the putative cause might have been present but didn’t generate the phenomenon of interest—an outcome that would falsify the conclusion.  Happens all the time, actually, and is a testament to the power of ingrained non-scientific patterns of reasoning in our everyday thought.

So to protect myself and the 14 billion regular readers of this blog from this trap, I feel obliged at regular intervals to call attention to instances of the absence of the sort of conflict that marks the science communication problem—instances involving applications of decision-relevant science that certainly could generate such dispute and, indeed, in some societies at some times actually have.

To start, consider a picture of what the science communication problem looks like.

There is conflict among groups of citizens based on their group identities—a fact reflected in the bimodal distribution of risk perceptions. 

In addition, the psychological stake that individuals have in persisting in beliefs that reflect and reinforce their group commitments is bending their reason. They are using their intelligence not to discern the best available evidence but to fit whatever information they encounter to the position that is dominant in their group. That’s why polarization actually increases as science comprehension (measured by “science literacy,” “numeracy,” “cognitive reflection,” or any other relevant measure) goes up.

This sort of division is pathological, both in the sense of being bad for the well-being of a democratic society and in the sense of being unusual.

What’s bad is that where there is this sort of persistent group-based conflict, members of a pluralistic democratic society are less likely to converge on the best available evidence—no matter what it is. Those who “believe” in climate change get this—we ought to have a carbon tax or cap and trade or some other set of effective mitigation policies by now, they say, and would but for this pathology. 

But if you happen to be a climate skeptic and don’t see why the pathology of cultural polarization over decision-relevant science is a problem, then you must work to enhance the power of your imagination.

Let me help you: do you think it is a good idea for the EPA to be imposing regulations on carbon emissions? For California to have its own cap & trade policy? If you don’t, then you should also be trying to figure out why so many citizens disagree with you (and should be appalled, just as believers should be, when you see those of your own number engaging in just-so stories to try to explain this state of affairs).

You should also be worried that maybe your own assessments of what the best evidence is, on this issue or any other that reflects this pathology, might not be entitled to the same confidence you usually accord them (if you aren’t, then you lack the humility that alerts a critically reasoning person to the ever-present possibility of error on his or her part and the need to correct it), since clearly the normal forces that tend to reliably guide reflective citizens to apprehension of the best available evidence have been scrambled and disrupted here.

It doesn’t matter what position you take on any particular issue subject to this dynamic. It is bad for the members of a democratic society to be invested in positions on policy-relevant science on account of the stake those individuals have in the enactment of policies that reflect their group’s position rather than ones that reflect the best available evidence.

What’s unusual is that this sort of conflict is exceedingly rare. There are orders of magnitude more issues informed by decision-relevant science in which citizens with different identities don’t polarize.

On those issues, moreover, increased science comprehension doesn’t drive groups apart; on the contrary, it is clearly one of the driving forces of their convergence. Individuals reasonably look for guidance to those who share their commitments and who are knowledgeable about what’s known to science. Individuals with different group commitments are looking to different people—for the most part—but because there are plenty of highly science-comprehending individuals in all the groups in which individuals exercise their rational faculty to discern who knows what about what, members of all these groups tend to converge.

That’s the normal situation. Here’s what it looks like:

[Figure: distributions of perceived risk of medical x-rays (“industrial strength” risk-perception measure) by cultural group, all skewed toward “low risk”]

What’s normal here, of course, isn’t the shape of the distribution of views across groups. For all groups, positions on the risks posed by medical x-rays are skewed to the left—toward “low risk” (on the “industrial strength” risk-perception measure).

But these distributions are socially normal. There isn’t the bimodal distribution characteristic of group conflict. What’s more, the effect of increased science comprehension runs in the same direction for all groups, and reflects convergence among the members of these groups who can be expected to play the most significant role in the distribution of knowledge.
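Because the figures themselves can’t be reproduced here, a toy sketch of the two contrasting patterns (all parameters invented) may help:

```python
import random

random.seed(2)

SCALE = range(8)  # 0 ("no risk") to 7 ("extreme risk")

def clamp(x):
    return max(0, min(7, round(x)))

def histogram(xs):
    return {k: sum(1 for x in xs if x == k) for k in SCALE}

# Pathological issue: two cultural groups cluster at opposite ends of the
# scale, so the pooled distribution is bimodal.
group_a = [clamp(random.gauss(1.5, 1.0)) for _ in range(1000)]
group_b = [clamp(random.gauss(5.5, 1.0)) for _ in range(1000)]
print(histogram(group_a + group_b))   # two humps: group-based conflict

# Normal issue (e.g., medical x-rays): both groups pile up near "low
# risk," so the pooled distribution has a single hump skewed toward zero.
group_a = [clamp(random.expovariate(0.7)) for _ in range(1000)]
group_b = [clamp(random.expovariate(0.7)) for _ in range(1000)]
print(histogram(group_a + group_b))   # one hump near zero: consensus
```

The point of the contrast is the pooled shape: two humps signal identity-based conflict; one hump signals social consensus, whatever the issue’s technical difficulty.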

Do these sorts of “pictures” tell us what to do to address the science communication problem? Of course not. Only empirical testing of hypothesized causes and of corresponding strategies for dispelling the problem—or, better still, avoiding it altogether—can.

My point is simply that one can’t do valid research of that sort if one “selects on the dependent variable” by examining only cases in which persistent conflict in the face of compelling scientific evidence exists.

Such conflict is rare.  It is not the norm.  Moreover, any explanation for why we see it in the pathological cases that doesn’t also explain why we don’t in the nonpathological or normal ones is necessarily unsound.

Are you able to see why this is important? Here’s a hint: it’s true that the “ordinary citizen” (whatever his or her views on climate change, actually) doesn’t have a good grasp of climate science; but his or her grasp of the physical science involved in assessing the dangers of x-ray radiation—not to mention the health science involved in assessing the risks of fluoridation of water, the biological science that informs pasteurization of milk, the toxicology that informs restrictions on formaldehyde in pressed-wood products, the epidemiology used to assess the cancer risks of cell phones and high-voltage power lines, and a host of additional issues that fit the “normal” picture—is no better.

We need to be testing hypotheses, then, on why the social and cognitive influences that normally enable individuals to orient themselves correctly (as individuals and as citizens) with respect to the best available evidence on these matters are not operating properly with regard to the pathological ones.


Reader Comments (10)

OK, two questions; I'm sure I've asked them before. First, how do we know that when the partisans agree, they're actually basing their decision on the best available scientific evidence? Maybe they're just both wrong in the same direction? Or at least only got it right by chance. There are a lot of common scientific misconceptions. Do they only occur in politically polarised debates?

Second, what does this imply about the opinions of scientists? Presumably they're just over at the right-hand end of the science comprehension curve? What is the distribution of political views of scientists? And should we compensate for that? Given this phenomenon, should we really be carrying out polls of scientists' views in order to figure out what to believe?

"If you don’t, then you should also be trying to figure out why so many citizens disagree with you"

I'm afraid the most popular suggestion seems to be that Egalitarians/Democrats say they're concerned about global warming because that justifies a whole bunch of policies that Egalitarians/Democrats happen to like. It flatters them that their policies are right and that they hold the moral and scientific high ground; it enables them to obtain funds from the taxpayer to engage in advocacy for their favoured policies; and it is an opportunity for them to be part of something bigger and more meaningful than their everyday lives: they're "saving the world". Because of their motivation and preconceptions, they suffer from confirmation bias, in which they accept arguments from authority, anecdotes and circumstantial evidence, and partial arguments with unnoticed major gaps in them, and dismiss any apparently convincing contrary arguments as merely a better-quality deception.

Is this a 'just-so' story? Yes, probably. Most people cannot do psychology experiments on AGW believers very easily, and the psychology research community has mostly seemed to study only the other question - why do people *dis*-believe in AGW. (And more to the point, how can we persuade them to change their minds.) The average sceptic is just another guy (or gal) on the street, trying to make sense of it all, and they don't have the *resources* to carry out research into SoSC. They don't have any funding.

So while I don't necessarily *believe* the theories, I wouldn't say that I was 'appalled' that people come up with them. :-)

November 19, 2013 | Unregistered CommenterNiV

@NiV:

0. It is okay to repeat yourself in this forum.

1. If one makes the reasonable assumption that ordinary people ordinarily are able to recognize the best available evidence on issues of consequence to their decisionmaking, then the probability that they will recognize it is higher in an environment free of the influences that generate the identity-protective forms of reasoning associated with polarized risk issues. That's good enough for me.

2. I tend to believe (and would like to explore through empirical study) that in the domains in which individuals exercise expert or professional judgment, their reasoning is less influenced by cultural cognition. Professional judgment is characterized by a stock of specialized prototypes that enable members of the profession reliably to recognize patterns of relevance to their decisionmaking. Ordinary people lack those prototypes but make use of a different sort of expertise -- one enabled by prototypes that allow them to recognize who knows what about what. It is the latter type of reasoning that is being disrupted when we see states of polarization over risk. I'm not saying that scientists or experts are never biased on political grounds; I'm saying it is less likely they will be. And only when they are making judgments in their domain & not outside of it.

3. You say that the explanation "that Egalitarians/Democrats say they're concerned about global warming because that justifies a whole bunch of policies that Egalitarians/Democrats happen to like" is "popular." My experience is different: like believers, skeptics credit just-so stories involving "heuristic reasoning" and manipulation by interest groups and the media. Because you regularly read this blog, I don't know why you would characterize your own belief in the account you describe -- if you hold it -- as a "just so" story, for as you know, the conjecture that this species of motivated reasoning distorts judgment on climate change has been tested plenty. The tests, however, show that the effect is symmetric -- skeptics also fit the facts to their worldviews. When I have asked whether skeptics have ideas on whether there are ways to reduce the frequency of this sort of impediment to science-informed democratic decisionmaking generally, the answers have been unenlightening (indeed, mainly incomprehending and hostile).

4. It is okay to repeat yourself in this forum.

November 19, 2013 | Unregistered Commenterdmk38

Great post, Dan.


A bit off topic (something that I have repeatedly said here - but that's OK, right?)

Here's an interesting blog.

http://rocketscientistsjournal.com/

A rocket scientist. Seemingly someone who would have the expertise to be familiar with specialized prototypes that enable recognition of patterns relevant to decision-making.

Dude seems to me to be wicked smart (I've seen him offer very sophisticated arguments in domains where I can understand what he says - and while I have disagreed with his arguments, I can recognize that they are the product of a sharp mind).

Now here is a guy who delves into the science and figures out stuff for himself. And he comes up with an answer that people who are "experts," "skeptics" and "realists" alike, say is nonsense (that CO2 has no measurable impact on climate) and that flies in the face of proven fundamentals of physics.

So how am I, someone of limited knowledge and intellect, supposed to assess his opinions in relation to those of wicked smart people from the "skeptical" and "realist" camps who say that he is full of it?

He, and "skeptic" and "realist" experts alike, tell me that they, respectively, are the ones who recognize the "best available evidence."

November 20, 2013 | Unregistered CommenterJoshua

@Joshua:

I take it you don't see him as an expert. How about you tell me what you think is going on when you make that judgment.

If you ask me why you should *trust* that judgment, I'll say do you imagine you'd be better off not doing so across the run of situations in which you must aggregate the relevant impressions of who knows what? What would you do instead?

But if you say to me that you are less confident in your judgment that your judgment is reliable in a state in which we are all surrounded by the sorts of influences that generate identity-protective cognition, I'd agree. The answer is to remove those influences, though, & not to try to find some alternative to the reasoning faculty most suited to this sort of judgment.

November 20, 2013 | Unregistered Commenterdmk38

Dan -

I think he is an expert, basically. He certainly seems to understand the science in-depth - certainly far more than I do or ever could. He seems to be able to argue about esoteric details of the science with other people who seem to understand the science at very deep levels. What's interesting is when people like him wind up, in arguments with other people like him, claiming that their interlocutor is an idiot/doesn't understand basic science/has poor reading comprehension, etc., despite it being obvious to me that none of that is true.

Murry Salby would be another, similar, example.

http://www.desmogblog.com/2013/07/12/murry-salby-galileo-bozo-or-p-t-barnum

Or Oliver ("Iron Sun") Manuel who frequents "skeptical" blogs.

http://rarereaders.seablogger.com/2012/03/the-exceptional-views-of-oliver-k-manuel-professor-of-nuclear-chemistry/

So what can I do? One thing that I do is look for patterns in their reasoning in contexts where I feel more confident in my ability to assess their logic. Specifically, I look for whether they are diligent in discerning fact from opinion, since it seems to me that distinction is key to differentiating between "motivated" and "un-motivated" reasoning. It is likely, although it can't be assumed, that someone who is blind to how their reasoning is biased by "motivation" in one area would be inclined not to be very good at controlling for how "motivation" biases their reasoning in other areas.

And I consider the prevalence of other "experts" who are in agreement. Not as decisive evidence, but as relevant evidence.

And then, perhaps most importantly, I look for how I am "motivated" to react to said "expert" and work to control for my own "motivations."

"But if you say to me that you are less confident in your judgment that your judgment is reliable in a state in which we are all surrounded by the sorts of influences that generate identity-protective cognition, I'd agree. "</i.

Yes, that is what I say.

As for removing influences... as a matter of semantics, I would word it differently: I would describe it as engaging them in an organized and systematic way that accepts their inevitability but seeks to control for them. Yes, it is a kind of "removing," in the sense that you seek to "remove" the influence of a confounding variable, by controlling for it, in an empirical context. But my point of semantics is that the influences themselves can't be removed, in any of us.

November 20, 2013 | Unregistered CommenterJoshua

"If one makes the reasonable assumption that ordinary people ordinarily are able to recognize the best available evidence on issues of consequence to their decisionmaking,"

But that's the question I'm asking - have we shown that the assumption is valid?

We agree, I think, that tSoSC has to be empirical all the way down, yes?

"I tend to believe (and would like to explore through empirical study) that in the domains in which individuals exercise expert or professional judgment, their reasoning is less influenced by cultural cognition."

So does that mean we don't have empirical support for it, yet?

"Professional judgment is characterized by a stock of specialized prototypes that enable members of the profession reliably to recognize patterns of relevance to their decisionmaking."

I'm guessing you mean something like the scientific method, or generally accepted methods or rules for achieving the necessary quality standards.

So let's try a specific example - here's a climate scientist discussing his feelings on discovering that a climate database is irretrievably corrupted and that he has to make up false numbers to get the process to work:

"You can't imagine what this has cost me - to actually allow the operator to assign false WMO codes!! But what else is there in such situations? Especially when dealing with a 'Master' database of dubious provenance (which, er, they all are and always will be).

False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database. THIS IS NOT PERFECT but as there is no central repository for WMO codes - especially made-up ones - we'll have to chance duplicating one that's present in one of the other databases. In any case, anyone comparing WMO codes between databases - something I've studiously avoided doing except for tmin/tmax where I had to - will be treating the false codes with suspicion anyway. Hopefully.

Of course, option 3 cannot be offered for CLIMAT bulletins, there being no metadata with which to form a new station.

This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em, and it's the main reason the project is nearly a year late."
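(For concreteness, the false-code scheme the passage describes amounts to something like this sketch, where `existing_codes` is a hypothetical stand-in for the database lookup:)

```python
def make_false_code(legit_code, existing_codes):
    # Sketch of the scheme quoted above: multiply the legitimate 5-digit
    # WMO code by 100, then add 1 at a time until the candidate matches
    # nothing already in the database.
    candidate = legit_code * 100
    while candidate in existing_codes:
        candidate += 1
    existing_codes.add(candidate)
    return candidate

# e.g. make_false_code(12345, {1234500, 1234501}) returns 1234502 -- and,
# as the passage concedes, nothing prevents a collision with a code used
# in some *other* database.
```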

Is that sort of thing ("what CRU usually do") what you mean by a "specialized prototype" that professionals would recognise and be able to come to a professional judgement on?

Because if so, I'd say that it was a prototype that many ordinary people do share, and can understand quite easily. The curious thing is that whether people recognise it or not is correlated to their cultural/political inclination, and that the same appears to be true of professionals. I know plenty of professionals in, say, software engineering, who tend to go a bit purple in the face on reading about such practices. Other professionals, climate scientists among them, see nothing wrong with it and regard it as standard practice. Perhaps a bit less than ideal, but you have to be practical in the face of the many difficulties of the job.

So I'm not convinced that professionals are any less affected. And indeed, your results would suggest that the more scientifically literate they are the *more* polarised they become. At what point does that tendency stop?

"The tests, however, show that the effect is symmetric -- skeptics also fit the facts to their worldviews."

Indeed. Everybody has their cognitive blind spots.

"When I have asked whether skeptics have ideas on whether there are ways to reduce the frquency of this sort of impediment to science-informed democratic decisionmaking generally, the answers have been unenlightening (indeed, mainly incomprehending and hostile)."

Well, some suggestions have been made - open debate, open data availability, audit and replication, education, enforcement of the highest scientific quality and integrity standards - but ultimately the sceptics don't have any better ideas than you do. I'm not convinced that it *can* be prevented, entirely. And given the potential costs that a lack of diversity of viewpoint might have, I'm not sure that it *should* be prevented. Science thrives on challenging established ideas.

I don't know about 'enlightening', but there is instead 'The Enlightenment'. I've quoted from Mill and Milton before - but as you say, there's nothing wrong with doing things we've done before again. So here we go...

"So essential is this discipline to a real understanding of moral and human subjects, that if opponents of all important truths do not exist, it is indispensable to imagine them, and supply them with the strongest arguments which the most skilful devil's advocate can conjure up."

...and similarly...

"Assuredly we bring not innocence into the world, we bring impurity much rather; that which purifies us is trial, and trial is by what is contrary."

November 20, 2013 | Unregistered CommenterNiV

@Joshua:

If you do perceive him to be an expert, then you should go with that. Indeed, if you were willing to see him as an expert only if he agreed with you (or with the position that is predominant in your group), you'd be displaying confirmation bias (or culturally biased assimilation).

By the same token, you can recognize him as an "expert" w/o changing your position or w/o agreeing with him. You should update your priors in line with the weight his views deserve. But after doing that (or in other words, on the basis of your synthesis of all the evidence you find to be valid), you might still believe that the position he espouses is wrong.
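(In odds form, that updating is just Bayes' rule. A toy sketch, with invented numbers, of how you can give an expert's view weight and still end up disagreeing:)

```python
def update_odds(prior_odds, likelihood_ratio):
    # Odds-form Bayes: posterior odds = prior odds x likelihood ratio.
    return prior_odds * likelihood_ratio

# You start at 4:1 against a claim (odds 0.25). Suppose an expert's
# endorsement is twice as likely if the claim is true as if it is false
# (LR = 2). You update -- but remain 2:1 against.
print(update_odds(0.25, 2.0))  # 0.5
```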

No problem!

November 20, 2013 | Registered CommenterDan Kahan

I still think that a closer examination of how the chemical industry has handled formaldehyde would be instructive here. While I do not approve of their results, I do believe that their efforts at keeping this issue out of the public eye -- using delay, not denial, and re-framing the issue as "over-regulation by the EPA hurts business" -- have been very effective in avoiding effective formaldehyde regulation for well over a decade. See my comment on your post here: http://www.culturalcognition.net/blog/2013/5/30/polarization-on-policy-relevant-science-is-not-the-norm-the.html
I do not think that the "lack of controversy" here is an example of where science is being well served at all. I do think that it is an example of how carefully avoiding direct head bashing with your most extreme opponents can work to promote your cause.

Another example would be fluoride in drinking water, in cities like Portland, Oregon, which has never enacted it, or Albuquerque, which recently stopped its use. There is a problem with communicating about fluoride to the general public because it really does have a much narrower therapeutic range than does, for example, chloride (as in table salt). And there are communities whose drinking-water source is too high in fluoride, so that the water needs to be treated to reduce fluoride concentrations; publicity about those efforts can drive concerns elsewhere. These facts can leave the public confused. I actually do think that some dental uses of fluoride (toothpaste with small children, some of the paint-your-teeth-with-it dental-office fluoride treatments) are over my personal line as to what is reasonable. So, IMHO, even dental professionals can be confused. But I do strongly agree that low doses of fluoride in drinking water are a safe and effective way of delivering fluoride.

But fluoridating drinking water is not the only way of doing this. Perfectly civilized countries do utilize alternative delivery mechanisms. And I personally raised toddlers in the metro Portland area who grew up to have teeth. There is fluoridated toothpaste, chewable tablets, swish-it-in-your-mouth solutions (I switched toothpastes when we returned to an area that had fluoride in the water). The issue in Portland is that they are failing those children who do not have access to good dental care. By battling this issue as one against "the forces of anti-science," all that is accomplished is that some serious nutcases are given a media platform. This is a result of the media's general desire to present "both sides." And that may in fact lead to some uninformed members of the public getting misinformed and being more against fluoride than they would be otherwise.

But the average Portlander laughs in your face and votes against fluoride anyway. Because it is true that they know the science. And they also know perfectly well that their family doesn't actually need to have fluoride in their drinking water to still get the needed fluoride. I believe that a better approach would be to engage with their generally liberal and egalitarian values and point out that they are not providing for the children of the poor. And if they choose not to use the method that other communities have found to be safe and effective, then they still need to do something. And if they want to stand on their heads and do something much more expensive and complicated, that is perfectly fine. As long as they do something to provide all children with the necessary dental care.

On the subject of tribal dogmatism and climate change, I think that an interesting expansion of the discussion of problems regarding ocean acidification is now taking place in Washington State. The farmed oysters, Pacific Oysters, are not the same species as the native Olympia Oysters. The Olympia Oysters are smaller, better adjusted to colder waters, and also slower growing. Oyster farms are also monocultures. As such they create problems in the mudflat bays in which they are farmed. They have created expanded areas for the native ghost shrimp to thrive. Prolific diggers, the ghost shrimp apparently dredge up sediments to the detriment of the oysters. Also, the oyster tidelands have become areas where invasive plants take hold, notably spartina, or cordgrass. So, yikes: they are using pesticides and herbicides on these tidelands, and my delicious oysters are filter feeders! I agree here that over-promotion of climate change fears breeds skepticism:

"Both global warming and ocean acidification are very serious issues and by the end of the century their impacts will be substantial. But exaggerating and hyping the effects today are unacceptable. Citizens and policy makers deserve the facts, not exaggerations designed to elicit the proper response. Crying wolf in the end is counterproductive and undermines the credibility of science to promote the proper actions is unacceptable."
http://cliffmass.blogspot.com/2013/11/coastal-ocean-acidification-answering.html

The local Lummi nation uses the native oysters and more naturalistic marsh-based cultivation methods. Sea grasses and other plant life absorb CO2, which is quite helpful in reducing the impacts of ocean acidification. A more balanced ecosystem may aid in keeping the ghost shrimp in check. They are teaching their methods to others: http://www.edibleeastend.com/online_magazine/fall_2010/native_aquaculture/

In active attempts to battle the forces of anti-science, sometimes some scientists give themselves a form of locked-in syndrome.

November 21, 2013 | Unregistered CommenterGaythia Weis

I really should have said "science communicators" rather than "scientists" themselves. Scientists generally deal in narrowly defined steps in published research and carefully nuanced statements. Science communicators, as in the media, want to capture an audience and to convey a big-picture view. Or maybe to have their post "go viral." Policy makers also need to be able to generalize. And often the public would like not to engage in the details; they'd like to boil things down to yes/no lines. I just spent an election campaigning for local candidates for whom the demanded answer was: "So is he going to vote against the coal terminal, yes or no?" The correct answer was "this is a quasi-judicial decision for which an unbiased look at the data is needed, and the candidate is going to approach it with science and reason, with knowledge of such things as climate change and ocean acidification." This was a very hard concept to convey to voters, who tended to distrust such an approach as "all wishy-washy." "All I want to know is: yes or no?"

November 21, 2013 | Unregistered CommenterGaythia Weis

What the author has failed to recognize is that the sword cuts both ways. Every sin you might attribute to one side of the question applies equally to the other.

Science is not determined by popularity. It makes no difference how many experts believe ulcers are caused by stress, or that heart disease is caused by saturated fat. If you take the wrong prescription it can do more harm than good.

Science is full of bogus studies -- faulty because of "selecting on the dependent variable" -- that are driving public policy. Health and climate head the list. The motivation is simple: government funding to deliver a politically popular result.

April 3, 2014 | Unregistered Commenterferd berple
