Thursday, May 19, 2016

Serious problems with "the strongest evidence to date" on consensus messaging ... 

So . . .

van der Linden, Leiserowitz, Feinberg & Maibach (2015) posted the data from their study purporting to show that subjects exposed to a scientific-consensus message “increased” their “key beliefs about climate change” and “in turn” their “support for public action” to mitigate it.

Christening this dynamic the "gateway belief" model, VLFM touted their results as  “the strongest evidence to date” that “consensus messaging”— social-marketing campaigns that communicate scientific consensus on human-caused global warming—“is consequential.”

At the time they published the paper, I was critical because of the opacity of the paper’s discussion of its methods and the sparseness of the reporting of its results, which in any case seemed underwhelming—not nearly strong enough to support the strength of the inferences the authors were drawing.

But it turns out the paper has problems much more fundamental than that.

I reanalyzed the data, which VLFM posted in March, a little over a year after publication,  in conformity with the “open data” policy of PLOS ONE, the journal in which the article appeared.

As I describe in my reanalysis, VLFM fail to report key study data necessary to evaluate their study hypotheses and announced conclusions. 

Their experiment involved measuring the "before-and-after" responses of subjects who received a “consensus message”—one that advised them that “97% of climate scientists have concluded that human-caused climate change is happening”—and those who read only “distractor” news stories on things like a forthcoming animated Star Wars cartoon series. 

In such a design, one compares the “before-after” response of the “treated” group to the “control,” to determine whether the "treatment"—here the consensus message—had an effect that differed significantly from the control placebo. Indeed, VLFM explicitly state that their analyses “compared” the responses of the consensus-message and control-group subjects.
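To make that concrete, here is a minimal sketch (in Python, on simulated data with hypothetical variable names -- not the actual VLFM dataset or analysis code) of the comparison such a design calls for:

```python
# Minimal sketch of the comparison a message/control, before/after design
# calls for. All data and variable names here are simulated/hypothetical,
# not the actual VLFM dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 400

# Simulated responses on a 101-point (0-100) scale, for illustration only
pre_msg = rng.integers(0, 101, n).astype(float)    # consensus-message group, "before"
post_msg = pre_msg + rng.normal(0, 3, n)           # consensus-message group, "after"
pre_ctrl = rng.integers(0, 101, n).astype(float)   # control group, "before"
post_ctrl = pre_ctrl + rng.normal(0, 3, n)         # control group, "after"

# The quantity of interest: did the message group's *change* exceed the control's?
change_msg = post_msg - pre_msg
change_ctrl = post_ctrl - pre_ctrl
t, p = stats.ttest_ind(change_msg, change_ctrl)
print(f"difference in mean change: {change_msg.mean() - change_ctrl.mean():+.2f} (p = {p:.2f})")
```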

But it turns out that the only comparison VLFM made was between the groups' respective estimates of the percentage of climate scientists who subscribe to the consensus position. Subjects who read a statement that "97% of climate scientists have concluded that climate change is happening" increased their estimates more than did subjects who viewed only a distractor news story.

But remarkably, VLFM nowhere report comparisons of the two groups' post-message responses on the items measuring any of the beliefs and attitudes for which they conclude perceived scientific consensus is a critical "gateway."

Readers, myself included, initially thought that such comparisons were being reported in the article's table of “differences” in “Pre-” and “Post-test Means.”

These aren't experimental effects after all...

But when I analyzed the VLFM data, I realized that, with the exception of the difference in "estimated scientific consensus," all the "pre-" and "post-test" means in the table had combined the responses of consensus-message and control-group subjects.

There was no comparison of the pre- and post-message responses of the two groups of subjects; no analysis of whether their responses differed--the key information necessary to assess the impact of being exposed to a consensus message.

Part of what made this even harder to discern is that VLFM presented a complicated “path diagram” that can be read to imply that exposure to a consensus message initiated a "cascade" (their words) of differences in before-and-after responses, ultimately leading to “increased support for public action”—their announced conclusion.

The misspecified "gateway belief" SEM...

But this model also doesn't compare the responses of consensus-message and control-group subjects on any study measure except the one soliciting their estimates of the "percentage of scientists [who] have concluded that human-caused climate change is happening."

That variable is the only one connected by an arrow to the "treatment"--exposure to a consensus message.

As I explain in the paper, none of the other paths in the model distinguishes between the responses of subjects “treated” with a consensus message and those who got the "placebo" distractor news story. Accordingly, the "significant" coefficients in the path diagram reflect nothing more than correlations between variables one would expect to be highly correlated given the coherence of people’s beliefs and attitudes on climate change generally.

In the paper, I report the data necessary to genuinely compare the responses of the consensus-message and control-group subjects.

It turns out that subjects exposed to a consensus message didn’t change their “belief in climate change” or their “support for public action to mitigate it” to an extent that significantly differed, statistically or practically, from the extent to which control subjects changed theirs.

Indeed, the modal and median effects of being exposed to the consensus message on the 101-point scales used by VLFM to measure "belief in climate change" and "support for action" to mitigate it were both zero--i.e., no difference between "before" and "after" responses to these study measures.
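For concreteness, here is how one could check that from the posted data -- a sketch with hypothetical column and condition names, since I'm not reproducing the actual VLFM variable names here:

```python
# Sketch: per-subject change scores on a 101-point item, by condition.
# 'df', its column names, and the condition labels are hypothetical
# placeholders, not the actual VLFM variable names.
import numpy as np

def change_summary(pre, post):
    change = np.asarray(post) - np.asarray(pre)
    vals, counts = np.unique(change, return_counts=True)
    return {"median": float(np.median(change)),
            "mode": float(vals[np.argmax(counts)])}   # most frequent change score

# e.g., for the consensus-message condition:
# change_summary(df.loc[df.cond == "message", "belief_pre"],
#                df.loc[df.cond == "message", "belief_post"])
```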

No one could have discerned that from the paper either, because VLFM didn't furnish any information on what the raw data looked like. In fact, both the consensus-message and placebo news-story subjects' "before-message" responses were highly skewed in the direction of belief in climate change and support for action, suggesting something was seriously amiss with the sample, the measures, or both--all the more reason to give little weight to the study results.

But if we do take the results at face value, the VLFM data turn out to be highly inconsistent with their announced conclusion that "belief in the scientific consensus functions as an initial ‘gateway’ to changes in key beliefs about climate change, which in turn, influence support for public action.”

The authors “experimentally manipulated” the expressed estimates of the percentage of scientists who subscribe to the consensus position on climate change. 

Yet the subjects whose perceptions of scientific consensus were increased in this way did not change their level of "belief" in climate change, or their support for public action to mitigate it, to an extent that differed significantly, in practical or statistical terms, from subjects who read a "placebo" story about a Star Wars cartoon series.

That information, critical to weighing the strength of the evidence in the data, was simply not reported.

VLFM have since conducted an N = 6000 "replication." As I point out in the paper, "increasing sample" to "generate more statistically significant results" is recognized to be a bad research practice born of a bad convention--namely, null-hypothesis testing. When researchers resort to massive samples to invest minute effect sizes with statistical significance, "P values are not and should not be used to define moderators and mediators of treatment" (Kraemer, Wilson, & Fairburn 2002, p. 881). Bayes Factors or comparable statistics that measure the inferential weight of the data in relation to competing study hypotheses should be used instead (Kim & Je 2015; Raftery 1995). Reviewers will hopefully appreciate that.
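For readers unfamiliar with the idea, here is a rough sketch -- my own illustration on simulated data, not any of these papers' analyses -- of the BIC-based approximation to the Bayes Factor associated with Raftery (1995), and of why a "significant" p-value in an N = 6000 sample can carry almost no inferential weight:

```python
# Rough sketch of the BIC approximation to the Bayes Factor (Raftery 1995):
# BF_10 ~ exp((BIC_null - BIC_alt) / 2). Simulated data, illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 6000                                   # a massive sample
treat = rng.integers(0, 2, n).astype(float)
y = rng.normal(50, 20, n) + 1.5 * treat    # minute true effect on a 0-100 scale

m0 = sm.OLS(y, np.ones(n)).fit()              # null model: intercept only
m1 = sm.OLS(y, sm.add_constant(treat)).fit()  # alternative: adds the treatment term

# With n this large, even a tiny effect can cross the p < .05 line...
print(f"p-value for treatment: {m1.pvalues[1]:.4f}")
# ...while the Bayes Factor shows how weakly the data favor the alternative
bf_10 = np.exp((m0.bic - m1.bic) / 2)
print(f"approximate BF (alternative vs. null): {bf_10:.2f}")
```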

But needless to say, doing another study to try to address lack of statistical power doesn't justify claiming to have found significant results in data in which they don't exist. VLFM claim that their data show that being exposed to a consensus message generated "a significant increase" in "key beliefs about climate change" and in "support for public action" when "experimental consensus-message interventions were collapsed into a single 'treatment' category and subsequently compared to [a] 'control' group" (VLFM p. 4). The data--which anyone can now inspect--say otherwise.

Hopefully reviewers will also pay more attention to how a misspecified SEM can conceal the absence of an experimental effect in a study design like the one used here (and in other "gateway belief" papers, it turns out...).

As any textbook will tell you, “it is the random assignment of the independent variable that validates the causal inferences such that X causes Y, not the simple drawing of an arrow going from X towards Y in the path diagram” (Wu & Zumbo 2007, p. 373). In order to infer that an experimental treatment affects an outcome variable, “there must be an overall treatment effect on the outcome variable”; likewise, in order to infer that an experimental treatment affects an outcome variable through its effect on a “mediator” variable, “there must be a treatment effect on the mediator” (Muller, Judd & Yzerbyt 2005, p. 853). Typically, such effects are modeled with predictors that reflect the “main effect of treatment, main effect of M [the mediator], [and] the interactive effect of M and treatment” on the outcome variable (Kraemer, Wilson, & Fairburn 2002, p. 878).

Because the VLFM structural equation model lacks such variables, there is nothing in it that measures the impact of being “treated” with a consensus message on any of the study’s key climate change belief and attitude measures. The model is thus misspecified, pure and simple.
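In regression form, a minimal sketch of the kind of specification Kraemer, Wilson & Fairburn (2002) describe looks like this (data and variable names are hypothetical, supplied only to show the structure of the model):

```python
# Sketch of the Kraemer, Wilson & Fairburn (2002) specification: model the
# outcome on the main effect of treatment, the main effect of the mediator,
# and their interaction. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),            # consensus message (1) vs. placebo (0)
    "consensus_est": rng.normal(70, 15, n),    # perceived % consensus (the mediator)
    "support": rng.normal(60, 20, n),          # support for public action (the outcome)
})

m = smf.ols("support ~ treat + consensus_est + treat:consensus_est", data=df).fit()
print(m.summary().tables[1])
# Without the 'treat' terms -- which the VLFM path model lacks -- the
# "mediation" coefficients can reflect nothing but correlations among beliefs.
```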

To illustrate this point and underscore the reporting defects in this aspect of VLFM, I'll post "tomorrow" the results of a fun statistical simulation that helps to show how the misspecified VLFM model--despite its fusillade of triple-asterisk-tipped arrows--is simply not capable of distinguishing the results of a failed experiment from those of one that actually does support something like the “gateway model” they proposed.
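In the meantime, here is the basic flavor of such a simulation -- a quick sketch of my own, not the one I'll be posting:

```python
# Simulate a *failed* experiment: the message shifts perceived consensus but
# has ZERO effect on belief or support. Beliefs still cohere with a latent
# climate-change outlook, so a path regression that omits the treatment term
# produces "significant" mediator -> outcome coefficients anyway.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
treat = rng.integers(0, 2, n).astype(float)
outlook = rng.normal(0, 1, n)                              # latent coherence of climate views
consensus_est = 70 + 10 * treat + 5 * outlook + rng.normal(0, 5, n)
belief = 60 + 15 * outlook + rng.normal(0, 5, n)           # note: no treatment effect
support = 55 + 12 * outlook + rng.normal(0, 5, n)          # note: no treatment effect

# The misspecified "path": mediator -> outcome, with no treatment term
path = sm.OLS(support, sm.add_constant(belief)).fit()
print(f"belief -> support: b = {path.params[1]:.2f}, p = {path.pvalues[1]:.4f}")   # '***'

# The comparison that actually matters: did the treatment move the outcome?
fx = sm.OLS(support, sm.add_constant(treat)).fit()
print(f"treatment -> support: b = {fx.params[1]:.2f}, p = {fx.pvalues[1]:.4f}")    # ~null
```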

BTW, I initially brought all of these points to the attention of the PLOS ONE editorial office. On their advice, I posted a link to my analyses in the comment section, after first soliciting a response from VLFM.

A lot of people are critical of PLOS ONE.

I think they are being unduly critical, frankly.

The mission of the journal--to create an outlet for all valid work-- is a valuable and admirable one.

Does PLOS ONE publish bad studies? Sure. But all journals do! If they want to make a convincing case, the PLOS ONE critics should present some genuine evidence on the relative incidence of invalid studies in PLOS ONE and other journals.  I at least have no idea what such evidence would show.

But in any case, everyone knows that bad studies get published all the time-- including in the "premier" journals. 

What happens next-- after a study that isn't good is published --actually matters a lot more. 

In this regard, PLOS ONE is doing more than most social science journals, premier ones included, to assure the quality of the stock of knowledge that researchers draw on.

The journal's "open data" policy and its online fora for scholarly criticism and discussion supply scholars with extremely valuable resources for figuring out that a bad study is bad and for helping other scholars see that too.

If what's "bad" about a study is that the inferences its data support are just much weaker than the author or authors claim, other scholars will know to give the article less weight.

If the study suffers from a serious flaw (like unreported material data or demonstrably incorrect forms of analysis), then the study is much more likely to get corrected or retracted than it would be if it had managed to worm its way into a "premier" journal that lacked an open-data policy and a forum for online comments and criticism.

Peer review doesn't end when a paper is published.  If anything, that's when it starts. PLOS ONE gets that. 

I do have the impression that in the social sciences, at least, a lot of authors think they can dump low-quality studies on PLOS ONE. But that's a reason to be mad at them, not the journal, which, if treated appropriately by scholars, can for sure help enlarge what we know about how the world works.

So don't complain about PLOS ONE. Use the procedures it has set up for post-publication peer review to make authors think twice before denigrating the journal's mission by polluting its pages with bullshit studies.

References

Kraemer, H.C., Wilson, G.T., Fairburn, C.G. & Agras, W.S. Mediators and moderators of treatment effects in randomized clinical trials. Archives of General Psychiatry 59, 877-883 (2002).

Muller, D., Judd, C.M. & Yzerbyt, V.Y. When moderation is mediated and mediation is moderated. Journal of Personality and Social Psychology 89, 852-863 (2005).

Raftery, A.E. Bayesian model selection in social research. Sociological Methodology 25, 111-163 (1995).

van der Linden, S.L., Leiserowitz, A.A., Feinberg, G.D. & Maibach, E.W. The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence. PLOS ONE 10(2), e0118489 (2015). doi:10.1371/journal.pone.0118489.

Wu, A.D. & Zumbo, B.D. Understanding and Using Mediators and Moderators. Social Indicators Research 87, 367-392 (2007).



Reader Comments (16)

the cynical would suggest - they believed if only the public knew there was a consensus amongst scientists (Lewandowsky put this idea into John Cook's head years ago), the public would do something.. People like Dan show that doesn't work..

we then get the papers - 'proving' that "consensus messaging" does work... LOL

write a dodgy paper, get a headline and some messaging, who cares about the quality?
borderline fraud (of the deluding themselves kind)

May 19, 2016 | Unregistered CommenterBarry Woods

Well spotted.

The curious thing is why they are so determined to defend the consensus meme, to the point that they'd mangle their conclusions. To me it indicates that it’s the climate change argument they understand best themselves. I doubt they could even define what the consensus was, let alone argue the complexities of the issues covered by it.

When the MMR jab was questioned the scientists didn’t win back public support by talking about a consensus, they did more and better science to prove the MMR jab was the best option. They didn’t do repeated studies to prove that the scientists were satisfied with the results they’d already published. Ironically, it was the weakness of the peer review system that spawned the distrust in the first place.

My wider concern is that those wedded to consensus messaging are drastically underestimating how painful serious action on CO2 will be if they think a head count exercise would significantly sway the public. If you’re struggling to get people to tick a box on a form to say they theoretically agree to some unspecified action, you’ve got nothing in the real world, where people actually have to pay for their decisions. And you've proved they really don't even have a transient effect by using 'treatment'.

There is no 'gateway belief' to get people to accept real hardship now for a predicted one. The closest model is religion, which is increasingly wobbly in the western world, if it ever worked properly. People were continuously motivated to believe in god because the alternative was painful death. Acceptance would be through better science, better solutions and knowledge rather than belief.

If the public don’t already know that there is a 97% consensus on climate change it’s because a) they’ve stopped listening altogether, b) they assume the figure is fiction or c) they don’t think that those who use it know what the consensus represents so adjust the figure downwards – eg I don’t know what percentage of scientists think that AGW will be definitely catastrophic but it’s lower than 97%.

If I was worried about CO2, I’d want to stop faffing about with the consensus and ask how other fields, including industry, generate trust.

May 19, 2016 | Unregistered CommenterTinyCO2

When Cook started Skeptical Science, he was all about the science, then:

"Stephan Lewandowsky from the School of Psychology at the University of Western Australia contacted John. He was worried that the work John was doing to rid the skeptics could actually be strengthening their skepticism. Stephan gave John a lesson in how to debunk myths and the two of them wrote a fantastic little guide ‘the debunking handbook’, which is available on SkepticalScience.com.

One of the things we found rather interesting was when John spoke about the need to ‘replace myths’. If you remove a myth that somebody holds this leaves a gap in his or her knowledge or understanding of a certain thing. If this gap isn’t filled with an alternative narrative then the old myth will quite easily fall back into its old resting ground."

http://mathsofplanetearth.org.au/john-cook/

tackling scepticism with science to the general public (not sceptics) was the wrong approach according to Lewandowsky, risking 'backfire'; thus consensus messaging was born - not to try to get the public to understand the science, John's original intention (laudable), but to persuade the public, basically: trust us because we are scientists - 97% of scientists say......... [say: something that is misleadingly defined, ALL warming]

Given that the public has been bombarded with messages like this from marketing departments for decades for any product under the sun, I have no idea why they think this would work, beyond showing somebody the message - and then asking them about it later, and seeing how many people remembered it...

(TV ad from a decade ago - Cat Food - Whiskas brand)

ask the control group about how many cats prefer Whiskas

ask the other group exposed to this message - 8 out of 10 cats prefer whiskas -
then ask that group about increased knowledge of how many cats prefer Whiskas - compared to the control group

write a science paper about it.

psychologists have discovered basic marketing soundbites and advertising techniques to manipulate the public - Nobel prizes all around!

this would equally work with women's hair products and makeup adverts - scientists say, surveys say, etc, etc

just for fun. the Whiskas TV ad
(8 out of ten owners who expressed a preference, said their cat preferred it)
(what of those that ate whiskas, and did not express a preference!)

http://www.bing.com/videos/search?q=whiskas+8+out+of+ten+cats&&view=detail&mid=18767228E74FE3EE3F8918767228E74FE3EE3F89&FORM=VRDGAR

https://youtu.be/jC1D_a1S2xs

The public just tune out this marketing/PR/advertising nonsense now -

May 19, 2016 | Unregistered CommenterBarry Woods

a few typos - sorry -

& this LOL
https://en.wikipedia.org/wiki/Whiskas

"The well-known advertising slogan for Whiskas was "eight out of ten owners said their cat prefers it". After a complaint to the Advertising Standards Authority, this had to be changed to "eight out of ten owners who expressed a preference said their cat prefers it".

Cat food deniers strike back, deniers, hassling and intimidating cat food manufacturers!?

(who knows, an early example of Cook methodology: say 100 cats were fed a sample (like his 10,000+ papers, 60+% of which said nothing), yet only 10 owners expressed a preference on behalf of their cat - let's use that group, and 8 out of 10 'preferred it')


Cook's 97% papers and the rest - Oreskes, Doran etc - are just a marketing/advertising attempt to persuade the public to go along with policies. Will it work?

And if the public start to pay even the slightest bit of attention to this message and ask what you mean by '97% of scientists say humans cause global warming / climate change' - do you mean ALL climate change, some climate change, most of it, the major cause? - please be specific about what they actually agree on.

Cook admitted that it was just the weakest consensus - CO2 is a GHG, with no quantification - at a Bristol Uni Cabot Institute Q&A (I was there). Bristol didn't put the Q&A online!
Kevin Marshall wrote about this:

https://manicbeancounter.com/2014/09/23/notes-on-john-cooks-presentation-at-bristol-university/

"In the Q&A John Cook admitted to two things. First, the paper only dealt with declared belief in the broadest, most banal, form of the global warming hypothesis. That is greenhouse gas levels are increasing and there is some warming as a consequence. Second is that the included papers that were outside the realm of climate science1, and quite possibly written by people without a science degree."

(remember, Cook wasn't at all bothered when Obama tweeted '97% of scientists say dangerous' because of Cook's paper; John still boasts about it in his interviews and bio, and never corrected it publicly (Richard Betts tweeted at Obama saying the paper does not actually say that))

all Cook/Oreskes and the messaging crowd have in response is: you are denying science, you're using tobacco tactics, spreading uncertainty etc, because 97% of scientists say; science and peer review has spoken.

May 19, 2016 | Unregistered CommenterBarry Woods

Tiny -

==> "There is no 'gateway belief' to get people to accept real hardship now for a predicted one. "

Is there a gateway belief to get people to think that they know with complete confidence, as a condition of fact, what will or won't cause "real hardship now" - even when they can't accurately quantify the related positive and negative externalities?

May 19, 2016 | Unregistered CommenterJoshua

tiny -

==> "If I was worried about CO2, I’d want to stop faffing about with the consensus and ask how other fields, including industry, generate trust."

The issue of climate change is inherently polarized and politicized. I would say that referencing how "other fields" and "industry" "generate trust" would only be useful to the extent that those fields and/or industries are targeting public opinion in areas that are similarly polarized and politicized.

Do you have an example in mind?

May 19, 2016 | Unregistered CommenterJoshua

==> "Given that the public has been bomrabed with messages like this from marketing departments for decades for any product under the sun, I have no idea whythey think this would work, beyong showing somebody the messae - and then asking them about it later, and see how many people remembered it..."


It is always interesting to me why so many people spend so much time speculating (admittedly, from within a bubble, the numbers are small on an absolute scale) about the effects of "consensus messaging" one way or the other without having measured carefully what %'s of the public (among various demographics, in various regions, in various countries ) say that they've heard "consensus messaging."

"Realists" do such fact-less speculation (in concluding that it works), as do "skeptics" (as you show here) as do social scientists like Dan (who has often argued about the effect of "consensus-messaging" based on measuring outcomes public opinion on climate change - without deconstructing the actual effect - e.g., whether it has an effect of revering the effect of anti-consensus messaging, or whether or to what extent the measure of public opinion includes a % of people who have never heard "consensus-messaging" or who have only heard "anti-consensus" messaging or even who have predominately heard anti-consensus messaging, all effects that would be hidden if you were trying to work backwards to conclude about the effect on the basis of examining existing public opinion as a binary state of "are/aren't concerned about climate change" or "believe/don't believe" there is a 97% "consensus).

May 19, 2016 | Unregistered CommenterJoshua

Dan -

==> "On their advice, I posted a linke to my analyses in the comment section, after first soliciting a response from VLFM."

Did VLFM respond? If so, are you/can you divulge their response?

May 19, 2016 | Unregistered CommenterJoshua

==> "borderline fraud (of the deluding themselves kind)"

Is that like being "borderline pregnant?"

In what way is deluding oneself "borderline fraud?"

Doesn't any kind of fraud, whether borderline or not, require intentionally misleading others?

If you want to make an accusation of fraud, just do it. Plausible deniability is just a rhetorical tactic that doesn't advance the conversation.

May 19, 2016 | Unregistered CommenterJoshua

Joshua “what will or won't cause "real hardship now"”

Experience of existing renewables, which have had very little overall effect on CO2 consumption per person but have raised energy costs considerably. To substantially reduce CO2 would require several major developments in technology (wishing is not a plan) or significant lifestyle changes. The latter would be judged by most as hardship. A lot of people talk about cutting CO2 but they want it to be somebody else and not cost them much. Governments are beginning to balk at the scale of the problem and several have had to renege on promises of subsidies. Grids are reaching peak renewable, where any more unpredictable supply risks brownouts or, worse, surges. At the moment, countries are using their neighbours as extensions to their own networks. That can’t be duplicated much further.

It doesn’t matter that climate change is politicised. In one way or another, everything is. We bring our personal politics to what we do and say and it has very little to do with who we vote for. You have to recognise that people think differently and stop hoping that one size will fit all. For instance, I probably feel the same way about Greenpeace that you do about Exxon. Think how you’d feel to hear that Exxon was allowed to speak at and heavily lobby a climate conference. How would you feel if they were tight with all the attendees and press and the conclusion from the conference was that there wasn’t a problem? You’d smell a rat.

Because of real and imagined corruption or mistakes in business, we no longer trust them. We demand that independent people inspect them and if they find problems, punish them. OK regulation doesn’t always work but it’s the least you should be doing to improve trust in climate science. Peer review just won’t mobilise nations for exactly the same reason this paper shouldn’t have been published.

May 19, 2016 | Unregistered CommenterTinyCO2

Joshua. You must have read Dan's demolition of the methodology of that paper. Either they knew what they were doing and deliberately misled the public, or they seriously believe they were doing good science. It seems so bad that one might assume intelligent people deliberately misleading their peers/public.. which is perhaps too kind. And the reality is that they believe they are doing good science.. and are a bit dumb

Or some other explanation.

Do the public.. beyond superficial responses to opinion surveys, actually buy into a consensus message long term.. perhaps some more long-term research is required. My example of 8 out of 10 cat owners is just showing how it comes across to me as marketing.
I'm 100% sure that 100% of climate scientists know CO2 is a GHG, the earth has warmed and man contributes, which is what Cook acknowledges if you question him carefully. So carefully phrased '97% of scientists believe humans cause climate change' messaging is just misleading PR

The implied message being ALL. (With the careful phrasing to allow plausible deniability? 'Oh, we didn't mean that.') And implied it is 'dangerous' or something to care about.. whereas no surveys are promoted with the figures for how much, or how likely, or how serious climate scientists believe it to be.

They are just going around in circles. The Doran survey was intended to show a better consensus than previous consensus papers. It's all just being rehashed again.

May 19, 2016 | Unregistered CommenterBarry Woods

Barry -

I doubt they're "a bit dumb," and I am not in a position to judge their motivations. Dan's analysis stands on its own merits. I see no particular reason, and certainly no advantage, to further speculation about conclusions which are, ultimately, not provable and more than likely biased on the part of the "speculator."

==> "Do the public.. beyond superficial responses to opinion surveys actually buy into a consensus message long term..."

I suppose there might be some question about that in some generic context, as there is a certain likelihood of the "bandwagon effect" being real in some contexts. But my sense is that in a politically divisive and polarized context, such as climate change, it would be very hard to achieve such an effect because any "messenger" is overwhelmingly going to be viewed from within an already polarized context. Those already convinced one way or the other will find "consensus-messaging" or "anti-consensus messaging" useful for validating their already preexisting orientation. Those not already oriented will pretty much go "meh." And both sides will make biased arguments about the effect that confirm their biases - just as we would expect based on the effects of "motivated reasoning."

==> "perhaps some more long term research is required."

IMO... longitudinal research would be required to say one way or the other, but it would need to be "real world based," not based on an experimental paradigm. The results of an experimental paradigm, and non-longitudinal data, IMO, are of very limited usefulness in this context.


==> "They are just going around in circles. "

My sense is that "they" feel compelled to compete against a "skeptical" narrative that attempts to undermine public trust in climate scientists who think that concern about aCO2 emissions is justified. To that extent, I can understand their proximal "motivation." But in the end, I have seen no empirical data that (IMO) actually provide valid evidence one way or the other as to whether their efforts have any effect, and if so, how big that effect actually is (or isn't). What I note is that people on both sides, and some in the middle, similarly feel compelled to promote conclusions that aren't evidence-based by generalizing from inconclusive evidence.

As such, IMO, it isn't only they that are going around in circles.

May 19, 2016 | Unregistered CommenterJoshua

tiny -

Your comment requires a long response and I don't have time right now... and I've monopolized this comment thread enough already for now... I'll post a response later.

May 19, 2016 | Unregistered CommenterJoshua

@Joshua-- they indicated that they prefer to speak for themselves in making any response

May 19, 2016 | Registered CommenterDan Kahan

Joshua “speculating about the effects of "consensus messaging" one way or the other without having measured carefully what %'s of the public (among various demographics, in various regions, in various countries ) say that they've heard "consensus messaging."”

No, you only have to look at the ‘best’ countries that have taken the lead, like the UK. Climate change messaging (because few can identify the consensus) has been across the board. It’s been in schools for at least a decade, tv (including our national broadcaster the BBC that inserts AGW wherever it can), newspapers, government (national and local), utilities, etc. Nobody could have missed the issue or what the scientists think about it. That’s why there isn’t much difference between the before and after treatment responses in the paper. WE KNOW! We just have a problem with it or (the majority) just don’t care. Even those who accept it have a problem with it because it hasn’t triggered significant behavioural changes. In Germany, where Green (political) voting has been highest, the support for action is waning. The realities of reducing CO2 are biting hard. If they weren’t a wealthy country and relying on their neighbours to manage their electricity system, things would be very different. Clearly their fear of nuclear has trumped their fear of CO2.

The amount of sceptic messaging (we call it debate) is tiny. It’s the preserve of a few newspapers that randomly run sceptic or consensus articles, and the internet. Until I was already sceptical and went looking for information, I didn’t even know that there were groups of sceptics online. In fact the biggest generator of ‘denier’ messaging is the consensus side when it regularly mischaracterizes sceptic opinion. They polarise the issue into a true/false position. The ‘denier’ position is painted as illogical. ‘It’s all a hoax, there is no warming, it’s a left wing plot, blah, blah, blah’. While some people do believe those things and many of us may get frustrated enough to vent something similar, it’s not what the main body of climate scepticism is trying to convey. Probably the main theme would be ‘it’s complicated.’

Pre ‘consensus’, the message I got was that the debate was over. That was probably the very thing that triggered me into investigating climate. I might not have known enough to argue the scientific points but I was fairly sure that there’d been no debate. The only possibility was that the debate had been held in secret, and what kind of SOBs think that something that big can be decided on by a few self-appointed people behind closed doors? Better acquaintance with those people has not improved my opinion.

May 20, 2016 | Unregistered CommenterTinyCO2

1. You've done the work. A lot of work.
2. You've told all your friends and colleagues, who are interested and excited. Progress reports keep all of you interested and the others ask more and more how it's going.
3. The results don't quite equal your expectation and earlier reports.
4. You intuitively know what should come about. Natural conversations you've had in life teach you this. You are perplexed and become worried you ran the test wrong, well, somewhat wrong.
5. You dig deep in the study to find the hidden "truth".
6. You find the truth and breathe a sigh of relief.
7. You create the graphs etc. that reflect this truth and publish.
8. Not everyone, including your friends and colleagues, like the report, but, hey, at least YOU did the work.
9. Friends and colleagues agree: not all of their work was stellar.
10. You move on, telling yourself the next effort on this subject will be better, or, best of all, the subject has been covered well enough and it is time to do something else.

The 10 steps of producing something that, in hindsight, you wish you'd done differently or not at all, but did cost someone serious money and time and can't be ignored. No different than in the business world.

May 21, 2016 | Unregistered CommenterDoug Proctor
