Friday, July 24, 2015

On "best practices," "how to" manuals, and *genuinely* evidence-based science communication

From correspondence with a reflective person on whether there is utility in compiling “guide books” of “best practices” for climate-science and like-situated communicators . . . .

I think our descriptions of what we each have in mind are likely farther apart than what each of us actually has in mind.  My fault, I'm sure, b/c I haven't articulated clearly what it is that I think is "good" & what "not good" in the sorts of manuals that synthesizers of social science research compile and distribute.

I think the best thing would be for me to try to show you examples of each.

This is very very very good:

The concept of "best practices as best guesses" that is featured in the intro & at various points throughout is very helpful. It reminds users that advice is a provisional assessment of the best current evidence -- and indeed can't even be meaningfully understood by a potential user who lacks a working comprehension of the observations, and the inferences from them, that inform the "guess."

Also, as developed, the "best practices as best guesses" concept makes readers conscious that a recommendation is necessarily a hypothesis, to be applied in a manner that enables empirical assessment both in the course of implementation & at the conclusion of the intervention.  They are not mechanical, do-this directives.  The essays are written, too, in a manner that reflects an interpretive synthesis of bodies of literature, including the issues on which there are disagreements or competing understandings.  

This is bad-- very very very very bad.

It is a compilation of general banalities. No one can get any genuine guidance from information presented in this goldilocks form: e.g., "don't use numbers, engage emotions to get attention ... but be careful not to rely too much on emotions b/c that will numb people..."

If they think they are getting that, they are just projecting their own preconceptions onto the cartoons -- literally -- that the manual comprises.  


The manual  ignores complexity and issues of external validity that reflective real-world communicators should be conscious of.  

Worst of all, there is zero engagement with what it means to have an evidence-based orientation and mode of operation.  As a result, this facile type of work reinforces rather than revises & reforms the understandings of real-world communicators who mistakenly expect lab researchers to hand them a set of "how to" directives, as opposed to a set of tools for testing their own best judgments about how to proceed.

I know you have concerns about whether I have unrealistic expectations about the motivation and ability of individuals associated with climate-science communication groups to make effective use of materials of the sort I think are "good."  Maybe you won't have that reaction after you look at the FDA manual.  

But if you do, then I'd say that part of the practice that has to change here involves evaluation of which sorts of groups ought to be funded by NGOs eager to promote better public engagement with climate science. Those NGOs should adopt award standards that reliably weed out of the pool of support recipients the groups that, by disposition & mindset, can't conduct themselves in a genuinely evidence-based way, and replace them with ones that can and will structure themselves in a manner that enables them to do so.

There's too much at stake here to rely on people who just won't use the available financial resources in a manner that one could reasonably expect to generate success in the world.

In particular, such resources shouldn't go to any group that thinks the success of a “science communication strategy” should be measured by how much it boosts contributions to the group’s own fundraising efforts. It doesn’t surprise me to know that this happens, but it does shock me to constantly observe members of these groups talking so unself-consciously about it, in a manner that betrays that perpetuating their own existence counts as success in their minds, independently of whether they are achieving the results that they presumably exist to bring about.



Reader Comments (6)

Good post. Super relevant. As a co-author on a recent update to one such guide, I share many of the concerns and caveats you've laid out. Having seen the FDA guide now for the first time, there are many aspects of it that would have been great to replicate in our work.

I approach these kinds of projects as a designer and as someone concerned *primarily* with how individuals and groups begin to adopt best practices, how they transition their organizational cultures, and how they conduct effective engagement.

As such, I also "have concerns about unrealistic expectations about the motivation and ability of individuals associated with climate-science communicators".

As I scan the FDA example, I can see the obvious depth and usefulness of the information. I can also see that I need to block off an afternoon and three cups of coffee to *begin* to process, interpret, and apply its insights to my own practice or my organization's culture. I'm pretty motivated, and so I'll probably do that, but what if I were the Exec. Director of a watershed NGO? Climate communications might be rising on my list of priorities, but at the end of the day, keeping the lights on and making time for my 3 kids might make changing old habits more difficult. I also really just need 3-7 things that I can easily remember to help me shift how I discuss and frame regional impacts and risks. The FDA-type guide doesn't make that an easy proposition. Also, if we use the FDA guide's test for communications adequacy, any guide only needs to: 1) contain the information needed for effective decision making, 2) make sure that users can access that information, and 3) ensure that users can comprehend what they access.

So what I'm wondering is whether guides like the CRED example are good demonstrations of external validity "in practice", rather than providers of "empirical accounts" of external validity. I mean, that's a testable hypothesis, right?

I think you rightly point to fundraising as a core factor driving decision making. It's an aspect of organizational culture that also includes assumptions about what is true and not true, valid and not valid, what counts as research, and what counts as science. Organizations have different standards of evidence, and all too often they are based on personal experience rather than empirical work. In practice, this can make it extremely difficult, for example, to conduct high-quality evidence-based research with the kind of context the FDA guide offers. Marketers have different expectations than scientists about what counts. A LOT of this is based on personal affect about what has worked for them in the past. It can be tough to update those biases, but then, isn't that what cultural cognition is all about? The problem is that in those kinds of orgs, people have supervisors and bills to pay. I'm always curious how researchers navigate the trade-offs with bosses who want "impact" yet are unwilling to actually do the work of empirical research needed to reliably understand what that impact is. I also wonder how they navigate supervisors who have little practical experience with doing actual science.

What I am personally interested in are tools that *focus* the activities of individual communicators and groups around the complexity of their audiences and contexts. The communications process is highly messy, and there is rarely a one-size-fits-all approach. But what I think we need are tools that make "better practices" easier to try on for size and implement.

Here's an example that repurposed a tool for health comms from the CDC and reframed it for climate comms users: http://coclimate.github.io/Gusto/#

For better or worse, we also designed a map for organizations and groups to "project" their specific situations, culture, and needs upon—so that a group can actually begin the difficult process of developing a strategy. There are a lot of things to consider for good sci comms, including the capacity and expertise of both the implementing org and their audience(s). It seems to me that giving groups options and letting them determine which are a good fit for their circumstances is both more flexible and potentially more "externally relevant" than idealized processes. That said, I'm totally in agreement with the kind of recommendations the FDA guide makes about strategic planning. But getting the strategic planning to happen *in the first place* is the bigger hurdle.

For what it's worth, I'll offer a bounty of free design services to Cultural Cognition if that's ever of interest. I'd love to develop a simple guide that captures the complexity and makes it practically appealing for a broad-ish range of audiences. It would be a challenge to distill the major insights into practical takeaways. But "The Disentanglement Principle" seems like a reasonable start, eh? I mean, could it be worse than ecoAmerica's (IMHO: ridiculous) 13 Steps Guide?

July 24, 2015 | Unregistered CommenterGabriel Harp

@Gabriel--

thanks for these thoughtful reflections.

I think I'm inclined to stand firm on the point that on an issue like climate change -- where groups are making decisions about how to invest resources to try to create programs that mitigate the sort of political polarization we now see -- science communication groups that don't have the capacity or motivation to engage in evidence-based practice shouldn't be supported. They shouldn't even get involved, given the risk that they will rely on deeply ingrained mistakes, ones that make matters worse.

This sort of science communication is too important not to be carried out by its own class of professionals. And if those professionals don't get that they need evidence-based practices here, then they have the wrong conception of the profession.

This might sound unrealistic or worse -- elitist or something like that -- to you. But I think it would be less likely to if you *saw* what I have in mind at work. CCP is engaged in promoting evidence-based science communication in SE Florida and now in other places. We don't *tell* people what to do, much less tell them to do X in just the right amount, neither too much nor too little, the way the CRED Manual does.

If we did that, we'd not only not be genuinely helping local communicators. We'd be falsely presenting ourselves as having knowledge we don't; we -- the researchers who work on the project -- have lots of *general* knowledge about *general* mechanisms of consequence. But how those general mechanisms can usefully be brought to bear in particular places on particular issues is something that requires empirical study, just as figuring out the general mechanisms of consequence did. It's really disturbing that those who distribute "how to" manuals based on (airy, banal) syntheses of lab studies don't *tell* the people they are purporting to advise this.

After *telling* them all of this, a social science researcher who really truly wants to help them, rather than just enjoy playing the role of "expert," should use her skills to help those local communicators to engage in evidence-based practice.

The local communicators -- especially if they truly are local and truly are involved in interacting with the very people who they are trying to help understand the relevant science -- will be *filled* with sensible ideas about what might work. When you explain the work that's been done in general lab studies, they'll see that some of those ideas, which were perfectly plausible, are actually missing the point.

But they'll also know -- much better than anyone who only does lab studies -- what sorts of local strategies can be implemented that *fit* the general mechanisms established in the lab studies. *At that point* the researcher enables evidence-based practice by conferring with the communicator and figuring out a way to do tests that will help that communicator make a judgment about which of her conjectures are sound and which not (there are always more plausible hypotheses than are true in this line of work) and how the strategies that do make sense can be refined and adjusted to take account of what the test results show.

*Then* the researcher can help by collaborating with the communicator to be sure that as the strategy is executed, evidence can be collected that helps that communicator figure out whether the strategy had the effect she expected.

Then they can do the whole thing again, being even smarter than they were before.

All the while, too, the researcher can be meticulously recording all the activities that were being undertaken, in addition to maintaining all the study data and reports that were generated. When the whole project is done, that researcher can make those materials available -- so that the *next* team of evidence-based science communicators (ones who include researchers with general knowledge & skill & actual communicators with pertinent local experience and judgment) can use what was learned before as they go through the same process themselves.

Groups that don't want to do things this way -- who just want to go w/ their gut -- shouldn't get support from those who want to promote constructive public engagement w/ climate science. There's just too much at stake and too few resources available to let people "wing" it after reading cartoon versions of social science lab studies (or even after reading social science lab studies; again, those studies *don't tell you what to do* -- even when they do help you make much better judgments about the things you would explore doing in an evidence-based way).

*Researchers* who don't acknowledge that this is the right process, moreover, are doing the project of promoting more constructive public engagement with climate science a huge disservice.

I like your "map"! I'll bring it with me to SE Fla. I won't spend 30 mins w/ someone, stick it in their hands & leave, though. I'll talk with him or her about what the basis of those principles is & help that person try to connect them to what she tells me are the major issues she confronts, so she can get a feel for what valid studies actually have established that can help her. I'll make it clear that *she has to tell me* what she thinks might work to reproduce in her real world the good results that have been achieved in valid lab studies.

Then the next goddam day I'll come back to her w/ an experiment design & say, "What do you think? If what you came up with yesterday is likely to work, then wouldn't you expect the experiment to generate a result like this? Whereas if *that* happens, would you treat that as ground for inferring that the contrary effect, which you recognized *could* occur, now seems a bit more likely than before?"

If she says "no," & helps me see why, I'll come back the next day with a better experiment, one she *will* agree has the qualities I just described. Then I'll spend a good long time doing the study and preparing a report on the result for her; and talking w/ her and her colleagues about it.

BTW, I'll be keeping notes that I and others can use to assess whether your guide was a useful device to help focus & enable the sort of interaction I'm discussing. B/c part of what is needed are protocols that people can be confident will help them to structure the sort of thinking and evidence-gathering that I'm describing.

The CDC tool is nice. Note that unlike the cartoonish "how to" manuals being generated by researchers in this area, this protocol for preparing and reviewing health-consumer information was tested -- by seeing whether materials produced with it in fact generated greater comprehension than ones produced without it. Good. People who get funding to develop communication tools that they don't test are being "anti-science" in their approach to communication.

One thing, though: I couldn't tell from the paper how the tool was used here. I gather *existing* CDC materials were reviewed by *someone* using the tool & then revised. Thereafter, members of the public were assigned either the original materials or the revised materials & their comprehension tested.

But who revised the materials? Was it the same person who produced the original? I can't say for sure, but it doesn't sound like it; it sounds like the researchers were revising existing materials. Maybe the materials they started with sucked & the people who were assigned to revise them were more expert or more dedicated to producing clear materials.

They also should have had comparable revisers revise the material *without the tool* & tested those revised materials. If the materials revised by those not using the tool scored just as well as those revised w/ the tool, then we'd know the improvement in comprehension was due simply to good editing. We don't know that otherwise!
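Here's a minimal sketch of that three-condition comparison -- purely illustrative, w/ hypothetical condition labels & made-up comprehension scores, not data from the CDC study (assumes SciPy is available):

    from scipy import stats

    # Hypothetical comprehension scores (e.g., % of questions answered correctly)
    # under each condition of the design described above.
    scores = {
        "original":             [52, 48, 61, 55, 50, 58, 47, 53],
        "revised_with_tool":    [68, 72, 65, 70, 74, 66, 69, 71],
        "revised_without_tool": [64, 69, 66, 71, 63, 68, 70, 65],
    }

    # If the tool itself adds value, the tool-revised materials should outperform
    # the materials revised *without* it, not just the original ones.
    t_orig, p_orig = stats.ttest_ind(scores["revised_with_tool"], scores["original"])
    t_edit, p_edit = stats.ttest_ind(scores["revised_with_tool"], scores["revised_without_tool"])

    print(f"tool-revised vs. original:         t={t_orig:.2f}, p={p_orig:.3f}")
    print(f"tool-revised vs. revised w/o tool: t={t_edit:.2f}, p={p_edit:.3f}")

If only the first comparison comes out positive, the gain could be nothing more than good editing; it's the second comparison that isolates the tool's contribution.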

So yes, there should be testing. But it has to be valid -- not window dressing done just to satisfy a formal "evaluation" requirement.

Finally I accept your offer to produce visual materials for our research group!!!

Knowing what information needs to be communicated, btw, is also a matter that has to be investigated empirically; most people figure out what's known to science not by comprehending the content of data but by interpreting reliable cues about the validity of it.

But for sure there *are* situations in which science communication is exactly that -- communicating the content of science to someone who is going to use it to make a decision. And when that's so, for sure graphic presentation is essential.

So the next time we are in that situation, I want *your* help. You are a professional w/ skill & judgment. Together we'll help our communicator understand the state of existing knowledge & figure out what he or she thinks is responsive to the audience's needs.

Then after you design graphic materials that we think reflect the right general principles and our communicator agrees are well crafted for the audience he or she has to deal with, we'll do a test to see if the graphic materials are achieving the result that we've hypothesized, and if not we'll either make adjustments or try something else...

Then when we get done with that, we'll do the whole goddam thing again. And again.

Email me.

July 25, 2015 | Registered CommenterDan Kahan

Sounds like a plan.

A few follow-up comments:

@Dan wrote
“It's really disturbing that those who distribute "how to" manuals based on (airy, banal) syntheses of lab studies don't *tell* the people they are purporting to advise this.”

I like how the FDA guide approached this and how it clearly highlights the QUALITY of its conclusions. But again, it’s a lot of information to process. I wonder if rubrics like the pedigree matrix associated with NUSAP (http://www.nusap.net/sections.php?op=viewarticle&artid=14) would be an example of a simple, yet communicative, shortcut to making descriptions of the *quality* of the evidence or recommendations easier to digest. Yeah, that’s an empirical question too :-)

Glad to hear your reaction re: the map. What you describe is an accurate description of how it’s intended to be used. All of the icons, categories, their sequence, and arrangements are indeed hypotheses. They have some basis in academic literature or cultural experience, BUT each organization or implementing group needs to see which combination fits for them, plan their implementation, act to implement, evaluate, refine their implementation, and communicate….and rinse, and repeat.

Good points about the expertise of the person doing the revising for the CDC tool. I’m working on putting together test group(s) for the version I shared, so expertise and types of expertise are useful treatments to include. I revised my implementation of the CDC’s tool from their original (orig is here: http://www.cdc.gov/ccindex/widget.html) to better apply the recommendations they outlined in the original. It’s worth testing if my revisions also make any difference.

I’m super interested in how visual communication can help foster better science communications—as well as organizational cultures that can effectively engage. So yes, always excited to test what works. It’s just often a challenge of finding the right groups whose culture and capacity allow for those investigations to happen.

July 27, 2015 | Unregistered CommenterGabriel Harp

I had a look at your FDA example - the first few chapters, anyway.

I did very much like the paragraph in the introduction:

Risk communication is the term of art used for situations when people need good information to make sound choices. It is distinguished from public affairs (or public relations) communication by its commitment to accuracy and its avoidance of spin. Having been spun adds insult to injury for people who have been hurt because they were inadequately informed.

That commitment to accuracy and avoidance of spin is one point that a lot of these guides very often miss out - in fact, they usually read as if 'spin' was the entire point. Very good!

However, it then goes downhill for a bit. In chapter 2, their three suggested aims are to get a message 'out there' (which they apparently provide as an example of what *not* to do), to change belief, and to change behaviour. These last two are rather illiberal aims - as you've noted before yourself when it comes to the evolution debate. The aim isn't to make them believe, it's to give them the information and mental tools they need to make an informed choice. Likewise with making them do something - 'success' in a liberal society is not about how many you get to do what you want, but how many of those who choose not to do it anyway do so in full knowledge of the costs and benefits.

Whether or not the techniques work, this dive into authoritarian ethics is worrisome. And unfortunately it seems to be getting more common in preventative medicine.

Things get a bit better a couple of chapters later, in chapter 4, where 'adequacy' is defined. The rest of that chapter is so-so, but not actually bad.

Chapter 5 looks at it from an information user's perspective and is very good. It emphasises the need for material to help people understand the deeply technical information they need to make good decisions (not just take the expert's word for it), and also tells cautionary tales of the entire community of experts haring off down the wrong path without any solid evidence, as a result of profoundly unscientific influences, and hence the importance of presenting the evidence.

When the randomized trials were finally completed at the end of the decade — the definitive studies that had languished for years, accruing at a snail’s pace because of the prevailing belief that the transplants worked — no benefit over standard chemotherapy could be found. By then, the treatment itself had killed thousands of women, many more than it helped.

The author tells us how this all-too-common carelessness with scientific standards can have a horrific cost: "I’d already had two close friends die horrible deaths from this treatment, a loss made doubly horrible when I learned — too late! — how scanty the evidence was. What happened to Pat and Mary, the way that they died, will probably always haunt me."

The following is also truly excellent:

Misperceptions of benefits and risks are not only the result of human foibles. They are purposefully cultivated by forces in society. Media alarmism, exaggeration, and oversimplification of health care issues is pervasive. Although often justified as educational, marketing and advertizing of drugs and other products to physicians and patients is carefully crafted to enhance perception of benefits and minimize perception of risk. Marketing works, as our massive consumption of these products clearly demonstrates. The lack of comparative, quantifiable data in direct-to-consumer drug marketing makes any kind of deliberative process almost impossible. All of this is compounded by an almost total lack of education in how to be an informed consumer of health care. How do we know what we know in medicine? Where does the evidence come from, and how believable is it? Most people have no idea.

Shouldn't it be the communicator's role to teach them? And how much healthier a sentiment is that than the goal of "changing behaviour" to whatever the experts decree?

Interesting document! Thanks!

July 27, 2015 | Unregistered CommenterNiV

The FDA book is terrific in many ways. I've been recommending it since it came out. (I actually like some of what the CRED piece has to offer as well, although it is a bit simplistic in many cases.) But like all advice about risk communication, the FDA work dances around a central question: what's the goal of risk communication? Is it simply to inform, not to actually get people to change their views or behavior? In many places that is what the book suggests. (This position is often posed as the moral high ground by those offended by the idea of purposefully manipulating people's beliefs and actions, and using the insights of social science to do so.) Or is the goal to get people to change their beliefs and behaviors, yes, to manipulate, and encourage what the communicator believes is the 'right' belief/behavior (don't smoke, get vaccinated, care about climate change)? Ch. 3 has it about right...it's all of the above, depending on the issue and circumstances.

This issue of goals is a critical part of thinking through 'best practices' and 'how to' recommendations for risk communication, and separately, for science communication. The two overlap but also stand apart.

The FDA guide fails to fully consider the difference between risk and science communication. Understandably, given that it was produced by the administration's Risk Advisory Committee and deals only with risk communication. SCIENCE communication is a broader field. Risk perception is a unique set of lenses through which we interpret information. Risk issues evoke unique triggers for motivated reasoning. The relationship between communicator and audience in risk communication has unique components. A whole lot of science communication is NOT also risk communication.

Finally, the FDA guide makes little mention of the modern view of risk communication, that what you say matters less than what you do. "Communication" makes the whole thing sound like it's about explicit messages. But given the fundamental importance of trust, the actions of the communicator (be that a person, a government agency, a company, or whoever) matter a lot more than just the messages. The phrase I use is 'risk relationship management', and the definition is "actions, messages, and other interactions that demonstrate an understanding of and respect for the feelings of the other party, intended to build trust, build more constructive/less contentious relationships, in order to maximize impact."

August 1, 2015 | Unregistered CommenterDavid Ropeik

"But like all advice about risk communication, the FDA work dances around a central question; what's the goal of risk communication? Is it simply to inform, not to actually get people to change their views or behavior."

The aim is to enable people to make better decisions, based on accurate information rather than misconceptions or ignorance. Certainly, the aim/expectation is for them to change their views and behaviour as a result, but not in one specific direction. Because facts don't map one-to-one onto views or policy - they will be interpreted in different ways by people with different values.

What is being objected to is the covert attempt to impose a particular set of values on others, by pushing them towards the beliefs or behaviours implied by those values, without giving them the supporting information to which they could apply their own values and thereby arrive at a different outcome.

"In many places that is what the book suggests. (This position is often posed as the moral high ground by those offended by the idea of purposefully manipulating people's beliefs and actions, and using the insights of social science to do so.)"

It's not simply a matter of causing 'offence'. (Although in other contexts, that's widely regarded as sufficient reason to ban a behaviour. :-( )

The dispute is really between people who have authoritarian values (that society has the right and duty to impose rules on others to make them conform to the "right" way of living, "for their own good") versus liberal or libertarian values (that society has the duty to protect the freedom of others to live as they choose, with only the minimal constraints needed to stop them impinging on one another's freedom).

It's regarded as the moral high ground primarily by liberals, but not by authoritarians. We have both sorts of people living in the country, and because liberal values won the argument with authoritarian values on policy about controlling communications in our particular countries, they both get to have their say.

The authoritarians are still trying to overturn that, of course, but so far unsuccessfully.

It's a matter of "risk communication" again. You can educate people about the history and the current conditions in countries where authoritarian values got the upper hand - Maoist China, Stalinist Russia, North Korea, etc. - but it's down to people's own values whether they see that as good or bad. Authoritarians only see it as bad if it's being done to them, not if they're doing it to others. From their point of view, it's not a battle between freedom and authoritarianism, but between the 'right' policies and the 'wrong' ones. What went wrong in history's authoritarian dictatorships was not the authoritarianism itself, but only that the people supporting the wrong policies won.

Most authoritarians don't understand that while as individuals you can pick between liberty and authoritarianism, once authoritarianism gets the upper hand there is a vanishingly small probability that as an ordinary individual you will have any say over what policies get enforced. And since views on policy differ, most of the policies enforced on you will be the 'wrong' ones. There are of course those who do understand this, but see themselves as influential thought leaders, and so are ready to ride any wave that moves society in the right direction knowing they'll be able to steer it in the way they see fit later. (As Trotsky found out, they're often wrong too. But whatever...)

Dictatorships are a rhetorical extreme deployed by liberals in their arguments, of course, and there is a valid point that the example of dictatorships doesn't imply that a little bit of authoritarianism couldn't be an improvement. The "Because North Korea" argument is an example of the slippery slope fallacy. Nevertheless, it's a good reason why some people can reasonably consider freedom of belief to constitute "moral high ground". I agree, it's not to everyone's taste.

"Or is the goal to get people to change their beliefs and behaviors, yes, to manipulate, and encourage what the communicator believes is the 'right' belief/behavior (don't smoke, get vaccinated, care about climate change)?"

And does the same policy apply to their opponents?

Is it OK to use science communication to try to get people to smoke more?

Is it OK to use science communication to try to get people not to vaccinate?

Is it OK to use science communication to try to get people to not care about climate change?

Never build any tool of governance that you would be seriously unhappy to see your political opponents wield. Because tides inevitably turn and one day they will, and then all those tools you created while in power to impose your view on others will be used by them to impose their views on you.

"Finally, the FDA guide makes little mention of the modern view of risk communication, that what you say matters less than what you do."

Ah! Is that why #greensgobyair? ;-)

I'd agree it's probably a true statement, but I've not noticed it being the modern view among climate change communicators, at least. "Do as I say, not as I do" has been more the rule. Perhaps because the true costs to the modern technological lifestyle of implementing their policies are far higher than they would claim?

August 2, 2015 | Unregistered CommenterNiV
