Saturday, October 12, 2013

A fragment: The concept of the science communication environment

Here is a piece of something. . . .


I. An introductory concept: the “science communication environment”

In order to live well (really, just to live), all individuals (all of them—even scientists!) must accept as known by science vastly more information than they could ever hope to attain or corroborate on their own.  Do antibiotics cure strep throat (“did mine”)? Does vitamin C (“did mine”)? Does smoking cause cancer (“. . . happened to my uncle”)? Do childhood vaccinations cause autism (“. . . my niece”)? Does climate change put us at risk (“Yes! Hurricane Sandy destroyed my house!”)? How about legalizing gay marriage (“Yes! Hurricane Sandy destroyed my house!”)?

The expertise individuals need to make effective use of decision-relevant science consists less in understanding particular bodies of specialized knowledge than in recognizing what has been validly established by other people—countless numbers of them—using methods that no one person can hope to master in their entirety or verify have been applied properly in all particular instances. A foundational element of human rationality thus necessarily consists in the capacity to reliably identify who knows what about what, so that we can orient our lives to exploit genuine empirical insight and, just as importantly, steer clear of specious claims being passed off by counterfeiters or by those trading in the valueless currency of one or another bankrupt alternative to science’s way of knowing (Keil 2010).

Individuals naturally tend to make use of this collective-knowledge recognition capacity within particular affinity groups whose members hold the same basic values (Watson, Kumar & Michaelsen 1993). People get along better with those who share their cultural outlooks, and can thus avoid the distraction of squabbling.  They can also better “read” those who “think like them”—and thus more accurately figure out who really knows what they are talking about, and who is simply BS’ing. Because all such groups are amply stocked with intelligent people whose knowledge derives from science, and possess well-functioning processes for transmitting what their members know about what’s collectively known, culturally diverse individuals tend to converge on the best available evidence despite the admitted insularity of this style of information seeking.

The science communication environment comprises the sum total of the everyday cues and processes that these plural communities of certification supply their members to enable them to reliably orient themselves with regard to valid collective knowledge.  Damage to this science communication environment—any influence that disconnects these cues and processes from the collective knowledge that science creates—poses a threat to individual and collective well-being every bit as significant as damage to the natural environment.

Persistent public conflict over climate change is a consequence of one particular form of damage to the science communication environment: the entanglement of societal risks with antagonistic cultural meanings that transform positions on them into badges of membership in and loyalty to opposing cultural groups (Kahan 2012).  When that happens, the stake individuals have in maintaining their standing within their group will often dominate whatever stake they have in forming accurate beliefs. Because nothing an ordinary member of the public does—as consumer, voter, or public advocate—will have a material impact on climate change, any mistake that person makes about the sources or consequences of it will not actually increase the risk that climate change poses to that person or anyone he or she cares about. But given what people now understand positions on climate change to signify about others’ character and reliability, forming a view out of line with those in one’s group can have devastating consequences, emotional as well as material. In these circumstances individuals will face strong pressure to adopt forms of engaging information—whether it relates to what most scientists believe (Kahan, Jenkins-Smith & Braman 2011) or even whether the temperature in their locale has been higher or lower than usual in recent years (Goebbert, Jenkins-Smith, et al. 2012)—that more reliably connect them to their group than to the position that is most supported by scientific evidence.
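
To make this incentive asymmetry concrete, here is a minimal illustrative sketch (in Python; the payoff values are stipulated for illustration only and are not drawn from any cited study):

```python
def belief_payoff(accurate: bool, group_congruent: bool) -> float:
    """Payoff to an ordinary individual for holding a given climate belief."""
    # Accuracy carries no material payoff either way (identical by
    # construction): nothing the individual does as consumer, voter, or
    # advocate changes the risk climate change actually poses.
    accuracy_benefit = 0.0 if accurate else 0.0
    # Dissenting from one's cultural group, by contrast, is costly,
    # emotionally as well as materially.
    social_cost = 0.0 if group_congruent else -10.0
    return accuracy_benefit + social_cost

# Holding the accurate-but-dissenting belief is strictly worse for the
# individual than the mistaken-but-congruent one:
print(belief_payoff(accurate=True, group_congruent=False))   # -10.0
print(belief_payoff(accurate=False, group_congruent=True))   #   0.0
```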

Indeed, those members of the public who possess the most scientific knowledge and the most developed capacities for making sense of empirical information are the ones in whom this “myside bias” is likely to be the strongest (Kahan, Peters, et al. 2012; Stanovich & West 2007). Under these pathological circumstances, such individuals can be expected to use their knowledge and abilities to search out forms of identity-supportive evidence that would likely evade the attention of others in their group, and to rationalize away identity-threatening forms that others would be saddled with accepting.  Confirmed experimentally (Kahan 2013a; Kahan, Peters, Dawson & Slovic 2013), the power of critical reasoning dispositions to magnify culturally biased assessments of evidence explains why those members of the public who are highest in science literacy and quantitative reasoning ability are in fact the most culturally polarized on climate change risks. Because these individuals play a critical role in certifying what is known to science within their cultural groups, their errors propagate and percolate through their communities, creating a state of persistent collective confusion.
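
A toy simulation may help fix ideas about why greater reasoning proficiency amplifies rather than dampens polarization. The acceptance rule below is a stipulated simplification of the mechanism described above, not the design of the cited experiments:

```python
import random

random.seed(1)

def accepted_share(ability: float, n: int = 10_000) -> float:
    """Share of an agent's accepted evidence that favors the agent's group.

    Stipulated mechanism: identity-congenial items are always accepted,
    while identity-threatening items are rationalized away with probability
    `ability` -- i.e., reasoning skill is spent on counter-arguing.
    """
    congenial = n // 2
    uncongenial = n - congenial
    uncongenial_accepted = sum(
        1 for _ in range(uncongenial) if random.random() > ability
    )
    return congenial / (congenial + uncongenial_accepted)

# Two opposing groups see the same balanced evidence stream, yet each
# group's accepted evidence tilts further toward its own side as ability
# rises -- roughly 0.50, 0.67, 0.91:
for ability in (0.0, 0.5, 0.9):
    print(ability, round(accepted_share(ability), 2))
```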

The entanglement of risks and like facts with culturally antagonistic meanings is thus a form of pollution in the science communication environment.  It literally disables the faculties of reasoning that ordinary members of the public rely on—ordinarily to good effect—in discerning what is known to science and frustrates the common stake they have in recognizing how decision-relevant science bears on their individual and collective interests. It thus deprives them, and their society, of the value of what is collectively known and the investment they have made in their own ability to generate, recognize, and use that knowledge.

Protecting the science communication environment from such antagonistic meanings is thus an essential element of effective science communication—indeed of enlightened self-government (Kahan 2013b). Because the entanglement of positions on risk with cultural identity impels ordinary members of the public to use their knowledge and reason to resist evidence at odds with their groups’ views, nothing one does to make scientific information more accessible or widely distributed can be expected to counteract the forms of group polarization that this toxin generates.

References

Goebbert, K., Jenkins-Smith, H.C., Klockow, K., Nowlin, M.C. & Silva, C.L. Weather, Climate and Worldviews: The Sources and Consequences of Public Perceptions of Changes in Local Weather Patterns. Weather, Climate, and Society (2012).

Kahan, D. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013a).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013b).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

Keil, F.C. The Feasibility of Folk Science. Cognitive Science 34, 826-862 (2010).

Stanovich, K.E. & West, R.F. Natural Myside Bias Is Independent of Cognitive Ability. Thinking & Reasoning 13, 225-247 (2007).

Watson, W.E., Kumar, K. & Michaelsen, L.K. Cultural Diversity's Impact on Interaction Process and Performance: Comparing Homogeneous and Diverse Task Groups. The Academy of Management Journal 36, 590-602 (1993).

 


Reader Comments

It's an interesting essay, but you still don't address the fundamental difference between Science and Authority - you keep on speaking of *who* to trust; you're not talking about *what* sort of arguments to trust.

The basic problem with Authority is that reliably working out *who* to trust actually takes more expertise than looking at the issue for yourself. You have to know what to look for, you have to know what methods they're supposed to be using, you have to find out whether they are actually using those methods, and not cheating or bluffing, and there are those people who can be trusted on one subject but not on another, so you have to do it for every combination of expert and topic.

It's far easier to judge the *arguments* people use, by means of a variety of heuristics, and then judge the person by their arguments. I'd like to say that I think most people do do it that way, except that I've come across a lot of people who do genuinely seem to have a blind faith in the Authority of Science - as contradictory as that sounds - so I'll have to settle for saying a lot of people do. It does seem to be a noticeable and persistent difference.

But even supposing we do accept a role for personal Authority in public scientific understanding, as a pragmatic shortcut, I am adamant that when we have both forms of evidence available, an understanding of the arguments and information ***always*** takes precedence over trust in Authorities. If ten thousand scientists were to assure me with absolute confidence that 2 + 2 = 5, I still wouldn't believe them.

And moreover, I would say that anyone who speaks with the Authority of Science cannot use this method of trusting experts too, or it becomes incestuous. The scientist, when they are wearing their 'science hat', has to say "I don't know", unless they have looked into the matter themselves. Or what's the point? Why should we invest any extra trust in scientists if they are only doing what we could do for ourselves? There are many ways to know things, and I'm not saying all of the other ways are bad, but anything that involves trusting authority is not scientific, and should not claim to be.

The Authority we can safely invest in people is very limited, because people are fallible - especially when they seem to think they're not. We're far safer trusting in arguments. Do they use the right sort of argument? Is it clear? Is it quantified? Do they make the evidence and arguments available for examination? Has anyone examined them? Tested them? What are the best counter-arguments, and are they any good? Arguments, not people - at least, not in genuine science.

October 12, 2013 | Unregistered Commenter NiV

Dan, from your work:

1.) Confirmed experimentally (Kahan 2013a; Kahan, Peters, Dawson & Slovic 2013), the power of critical reasoning dispositions to magnify culturally biased assessments of evidence explains why those members of the public who are highest in science literacy and quantitative reasoning ability are in fact the most culturally polarized on climate change risks.

2.) The cultural cognition thesis posits that individuals make extensive reliance on cultural meanings in forming perceptions of risk.

From the Knutti, R. & Hegerl, G.C. Nature Geoscience paper that Dana Nuccitelli and Michael E. Mann used: there are two distinct most likely temperatures: 1.) 1.9C - 2.5C and 2.) 2.6C - 3.2C. The first has now gone down to 1.7C - 2.3C with the recent pause. The problem is that, for the first range, the world should be making the changeover from adaptation to mitigation about 2100 AD. For the second, the changeover should be 2050. With the new estimate, for 1.) the changeover would be about 2125 - 2150 AD.

My point is that the science supports both starting mitigation now at high economic cost, and promoting fossil fuel for energy for the next 75 years before starting mitigation at high economic cost. Your posit of the damage to science communication ("Damage to this science communication environment—influences that disconnect these cues and processes from the collective knowledge that science creates—poses a threat to individual and collective well-being every bit as significant as damage to the natural environment") would appear to be biased wrt climate change (CC).

Since the science supports both, or neither, risk perception along the lines of environment or economics is where the actual differences can be allocated.

If 1.) is true even when the science is non-supportive of a side, then assuming 3.) ("They can better “read” those who “think like them”—and thus more accurately figure out who really knows what they are talking about, and who is simply BS’ing"), then when these persons receive good science wrt CC, is it correct to posit that those who take either position are "wrong"? In this case, the polarization occurs, the rhetoric, the angst, etc. However, in this case neither side is right or wrong. Those who start with the posit that one side is correct have a flawed argument. Though I would note that recognizing 3.) has more explanatory power than Mooney. I would think that Mooney on CC is fatally flawed by starting with an erroneous assumption.

October 13, 2013 | Unregistered Commenter John F. Pittman

"...except that I've come across a lot of people who do genuinely seem to have a blind faith in the Authority of Science "

Interesting, because I haven't come across a lot of people who are like that. I have, on the other hand, come across a lot of people who selectively have faith in the authority of science. Sometimes they consider science authoritative, and sometimes not - but rarely do they seem to have "blind faith" and often their faith in the authority of science (or lack thereof) seems to be rather predictable by their cultural, political, and experiential backgrounds.

I also come across quite a few people who look at someone who says something on the order of:

"While not dispositive, a strong prevalence of view among those who have closely studied issues and who have spent years developing expertise is instructive for evaluating probabilities related to issues where understanding the technical evidence and its implications is very complex"

and falsely label them as having a blind faith in authority.

Have you come across many folks like that last group, NiV?

October 13, 2013 | Unregistered Commenter Joshua

Joshua,

Many have faith in the Authority of Science, but assign such authority selectively to some people and not others. For everyone else, there are always reasons why the usual rules don't apply.

But what I was talking about were those people who felt that the Authority of Science was sufficient, whose argument was not based on physics or measurement but on "Scientists say...". Or "The peer-reviewed literature says...". They are the people who use the term "anti-science" as an insult, who are obsessed with counting opinions for and against, who require that any counter-arguments be peer-reviewed and published before they can be taken seriously. And yet, these same people often know very little of the science themselves, are usually unable to sustain a technical argument, and frequently don't even know what the argument is about.

In short, they're the people Dan is describing - the people who don't have the time or inclination to learn the science themselves, but instead get their beliefs pre-packaged from their social networks. But they don't recognise it as simply a shared community belief because it comes with this label: "Scientists say...", which makes it unarguable. You can't reason them out of it because they were never reasoned into it - anything they can't answer they assume is a trick. It's by no means confined to only one of the sides, but I've tended to notice it more (for obvious reasons) on the mainstream side. I don't regard that in itself as a problem (humans will be human), except that it often comes with an absolute intolerance for dissent, which is.

And I consider the root cause of it to be precisely the system of personal Authority that Dan is trying to fix. People get these beliefs from social networks, which are founded on the sets of *people* whose opinions and expertise you trust. Teaching science via argument and evidence is difficult and time-consuming, so many educators (not all) revert to using Authority. "It is because science says so." And people learn to think that's how science works. But Authority is essentially a political technique, and is therefore easily hijacked by political interests. The poor public are armed with none of the tools to tell them apart.

The alternative is what they sometimes teach as "critical thinking". Look for evidence and argument, look for and evaluate counter-arguments, look to see how thoroughly it has been checked, and with what results. Be aware of your own biases, and how easily you can be fooled. Know your limits, and don't be afraid to say "I don't know". And be wary of people who try to avoid debate, and showing their working.

The point about real Science is that politics can't fake it. A trusted Authority can say anything and get away with it, but if they have to produce a rational argument to support it, and face cross-examination, it's very difficult to sustain it without revealing what's going on.

But the Authority-followers can only see the problem as being that people no longer have faith in the Authorities, or at least, in the right Authorities, and they see the solutions to the problem all built around restoring that faith. "How can we persuade the public? How can we get our message out? How can we communicate the science?"

They think it is because of an information deficit, but it doesn't matter how often they tell people what scientists say, they still don't believe. They think it is because of cultural context, but it doesn't matter how carefully they frame their message in the political clothes of their opponents, in a local, more immediate context; people still see through it.

I say there is no substitute for Science except Science itself, and if you want its benefits, there is no alternative but to teach the general public how to do it. Every alternative you come up with will continue to have these problems - political methods are inherently subvertible by politics, and Science's reputation is an attractive prize for any political movement.

And we'll keep on going round and round in the same routine.

October 13, 2013 | Unregistered Commenter NiV

I think we can agree on: "The entanglement of risks and like facts with culturally antagonistic meanings is thus a form of pollution in the science communication environment."

But I have some concerns with what it might mean to employ the "protections" described here: "Protecting the science communication environment from such antagonistic meanings is thus an essential element of effective science communication—indeed of enlightened self-government."

I do not believe that we can evaluate the distortions within our science communication environment without taking into account the distortions upon our communications media and mechanisms as a whole. These start with an extreme concentration of wealth, power, and thus media access with a very few individuals and corporate entities. We also have geographic segregation which puts many of us (frequently by choice) in residential locations very far from those of differing views. At the very least, we can adjust our online and other media communication mechanisms to avoid those annoying alternative cultures and their attempts at counter-arguments. Or listen to them only through the filter of media favorable to our own viewpoints.

I think that those interested in pushing a message, such as marketers and political consultants, have gotten increasingly adept at pushing the culturally cognitive tribal buttons in order to reach and mobilize a "base" audience.

Science communicators, in my opinion, are frequently making two mistakes that aggravate their difficulties in dealing with the public. One is the format of many institutional research press releases, which tend to try to rise above the noise by touting their work as the latest earthshaking breakthrough. This causes cynicism with the public, who see yesterday's breakthrough as refuted by today's and thus extrapolate that tomorrow's announcements will prove today's "wrong" as well. This tendency to lack a bit of humility in making announcements, and a dearth of announcements on the many blind alleys in which science tends to find itself at times, leads to a public lack of understanding as to how science is an evidence-based work in progress.

Faced with dedicated and frequently well-funded science denialists, scientists and science communicators themselves tend to go into a circle-the-wagons defensiveness. Again, it is hard to convey how science builds strong theories based on long-term evaluation of evidence when communicating an absolutist-sounding "Scientists say" message.

So, my concern with the idea of "protecting the science communication environment" is that it sounds as if it might lead off in an authoritarian direction. What we lack, but sorely need in both the scientific and political communication environments, is a free marketplace of ideas, one that is not dominated by the overarching agendas of special interest groups and their targeted outreach strategies.

October 13, 2013 | Unregistered Commenter Gaythia Weis


Saw this at J Curry's site:

""The ironic impact of activists: Negative stereotypes reduce social change influence

Nadia Bashir, Penelope Lockwood, Alison Chasteen, Daniel Nadolny, Indra Noyes

Abstract. Despite recognizing the need for social change in areas such as social equality and environmental protection, individuals often avoid supporting such change. Researchers have previously attempted to understand this resistance to social change by examining individuals’ perceptions of social issues and social change. We instead examined the possibility that individuals resist social change because they have negative stereotypes of activists, the agents of social change..."

Though perhaps it would be a better post for the previous thread.

October 14, 2013 | Unregistered Commenter John F. Pittman

Hey NiV -


Many have faith in the Authority of Science, but assign such authority selectively to some people and not others.

But my point is that if it is selective, then it is not really "faith in the authority of science," it is actually just confirmation bias.

But what I was talking about were those people who felt that the Authority of Science was sufficient, whose argument was not based on physics or measurement but on "Scientists say...". Or "The peer-reviewed literature says...".

First, for the most part (i.e., except for extreme outliers), I think that is a misrepresentation. I think that essentially what most people mean is something on the order of "When the issue is complex, more complex than I can handle, and when I distrust the motivations or intentions of those I disagree with, I believe that the prevalence of view among "experts" is relevant information. I think there is significant value in screening information through a system such as peer review."

I don't think that those folks truly feel that the "authority" assigned by screening through experts is absolute, although they will use that criterion as a means to confirm biases. If you asked those folks something like: "I want to clarify something here. Do you think that (1) the "authority" of peer review is absolute and that anything that passes peer review must be accurate and anything that doesn't pass peer review must be wrong, or do you think that (2) the weight of peer review is a meaningful metric for evaluating probabilities?" they would likely pick option #2.
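
To put option #2 in concrete terms, treating a strong prevalence of expert view as evidence that shifts the odds, rather than as a verdict that settles them, is just a Bayesian update. A minimal sketch, with made-up numbers:

```python
def posterior_prob(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Made-up numbers: suppose a strong expert consensus is 4x more likely to
# form around a true claim than a false one (LR = 4). That is option #2:
# consensus shifts the odds without settling the question.
print(posterior_prob(prior_odds=1.0, likelihood_ratio=4.0))  # 0.8, not 1.0
```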

That doesn't mean that I don't think that people place too much confidence, at least sometimes, in the objectivity of peer review. And again, I note, I think that the question of peer review (and on the flip side, appeal to authority), is often used by both sides in these wars to, essentially, confirm biases.

They are the people who use the term "anti-science" as an insult...

There is a flip-side to the "anti-science" rhetoric, such as "alarmist," or "true-believer," or "cultist," etc. These are rhetorical devices on both sides, ways of confirming biases. Someone accusing someone else of being "anti-science" is not any more reflective of a "blind faith in the authority of science" than someone accusing someone else of being a "true-believer" or a member of the AGW cult with a "religious" belief about climate change.

who are obsessed with counting opinions for and against, who require that any counter-arguments be peer-reviewed and published before they can be taken seriously.

The facile and self-contradictory attitudes about peer review exist on both sides of the debate. Consider how vehemently some "skeptics" argue that the arguments that "realists" make about the prevalence of view among peer-reviewed literature are factually incorrect. Are they not "counting"? Consider how often you see "skeptics" argue about "gatekeeping" and the false "authority" of peer review, only to turn around and put great weight on any peer-reviewed science that they ("skeptics") see as supporting their perspective.

And yet, these same people often know very little of the science themselves, are usually unable to sustain a technical argument, and frequently don't even know what the argument is about.

So here we see that many heavily-engaged "skeptics" are relatively familiar with the science. But they are a tiny, tiny outlier. More commonly, what we see are "skeptics" in the general public who are absolutely convinced about the invalidity of peer review, and who use climate science as an example of the corruption and fraudulence of peer review - without themselves understanding the science well enough to actually make a scientific argument to back their perspective.

In short, they're the people Dan is describing - the people who don't have the time or inclination to learn the science themselves, but instead get their beliefs pre-packaged from their social networks.

Well, it happens on both sides. And further, I would say that there is a certain logic to giving weight to peer review if you aren't capable of evaluating the science yourself - with the caveat that it is fallacious to see peer review as being dispositive, and that unjustifiable arguments about the value of peer review are confirmation bias. But that is precisely what I would expect as the result of motivated reasoning. And, of course, motivated reasoning takes many forms. One form that it takes is where people accuse those who argue for the validity of peer review of being members of a religious cult.

But they don't recognise it as simply a shared community belief because it comes with this label: "Scientists say...", which makes it unarguable.

Sure. Peer review is used as a vehicle for motivated reasoning. As is disdain for peer review.

You can't reason them out of it because they were never reasoned into it - anything they can't answer they assume is a trick. It's by no means confined to only one of the sides, but I've tended to notice it more (for obvious reasons) on the mainstream side. I don't regard that in itself as a problem (humans will be human), except that it often comes with an absolute intolerance for dissent, which is.

I largely agree - except for your comment about the imbalance (and with the caveat that I'm not exactly sure what "it" refers to). My argument is that motivated reasoning is a product of the human condition, of human cognition, of the pattern-finding way that we think, of our tribal nature. As such, I reject, as a pre-condition, ANY arguments of imbalance... although I like to think that I am open to examining the evidence in that regard.

And I consider the root cause of it to be precisely the system of personal Authority that Dan is trying to fix. People get these beliefs from social networks, which are founded on the sets of *people* whose opinions and expertise you trust.

Maybe this is a semantic difference, but I don't see it as "fixable," but as something that can be identified and worked with.

Teaching science via argument and evidence is difficult and time-consuming, so many educators (not all) revert to using Authority. "It is because science says so." And people learn to think that's how science works.

As an educator, I see the problem as far deeper than that. It is not merely an argument that "it is so because science says so," but it is more deeply rooted in the top-down and hierarchical nature of our educational paradigms. People lack the educational methodology and commitment to a different educational philosophy that would be needed to make significant change in that regard.

But w/r/t that problem and the context of our debate, I feel that using that problem as a tool in the debate about climate change is, essentially, using a broader problem as a tribal weapon. This is not a problem of the climate change debate or a problem about science education - but a more fundamental problem with how people view pedagogy and epistemology.

But Authority is essentially a political technique, and is therefore easily hijacked by political interests.

As is the demonizing of authority. This I see over and over when "skeptics" (I must say, laughably) conclude with absolute certainty that I am a statist authoritarian who wants to cede all power to the all-knowing "authority" of government.

The poor public are armed with none of the tools to tell them apart.

I'm not sure what "them" is here.

The alternative is what they sometimes teach as "critical thinking". Look for evidence and argument, look for and evaluate counter-arguments, look to see how thoroughly it has been checked, and with what results. Be aware of your own biases, and how easily you can be fooled. Know your limits, and don't be afraid to say "I don't know". And be wary of people who try to avoid debate, and showing their working.

I think that is only half of what is needed, and I say that as someone who has dedicated much of his life to helping students develop critical thinking skills (I don't think it is something that you "teach").

OK... this is already way too long. Now that I'm this far into it, I realize that this piece-by-piece form of response is too complex. If you tried to respond to me using a similar format, we'd be stuck in an endless and recursive loop. At some point soon I will try to read the rest as a block and comment with a kind of summary response.

October 14, 2013 | Unregistered Commenter Joshua
