In recognition of the impact that the Macao internet outage has had on the posting of entries in the ongoing "MAPKIA!" contest, we are extending the time for posting entries.
Besides, literally 10^3s (figuratively speaking) of entries have been delivered offline by emails, FedEx deliveries, telegraphs, mental telepathy & other alternative channels during the outage (thank goodness they found the squirrel who was gnawing on the internet tubes and relocated him to one of the nation's 10^3s "wildlife" preserves (figuratively speaking)). It's going to take me a while to process all of them!
So just go to the "comments" section for the post & make your own predictions (supported by a "cogent" theory)--right now while there is still time to compete for the fame & notoriety--not to mention cool prizes!--that winning a MAPKIA confers!
Join the SBST Team: Neither nudge nor shove will stop us from improving your life (whether you are aware of it or not) in 2016 & beyond!
The "Team's" mission is to use behavioral economics--primarily of the "nudge" variety--to steer people into making decisions that mesh better with one or another government program aimed at improving a variety of social and economic outcomes, from the proportion of people obtaining higher education to the proportion of small businesses that stay afloat; from living a healthier life to availing oneself of myriad govt benefits, etc.
But what struck me is the casual assumption that SBST is going to happily outlive the Obama Administration.
Obama is a classic "University of Chicago Democrat"--someone who substitutes for the old-style passion of New Deal liberalism a cool confidence in technocratic management strategies, many of which tweak but don't fundamentally question "private orderings" as a means of promoting collective well-being (distributional justice, an aim of old-style New Deal liberalism just as fundamental as collective well-being, has shrunk in importance to near invisibility in the U of C Democratic program).
This is Cass Sunstein's liberalism, not John Kenneth Galbraith's, much less Ted Kennedy's!
But the vision of U of C Democrats is if anything even more obnoxious to the "Chicago School" neo-liberals and the dyed-in-the-wool social conservatives that cohabit, albeit often uneasily, in the Republican Party.
U of C Democrats say, "hey, we are not only going to take back some share of the profits you've made by exploiting public goods ('you didn't build that!') but we're going to do so with 'strategies' that bypass your reason, so you don't really notice & fail, as a result of 'bounded rationality', to contribute your fair share."
It's hard to think of a program more likely to make the descendants of Hayek & Ayn Rand (what a weird marriage! & what a weird brood of offspring!) see red(s)!
That's one of things that makes the "Fellowship" so damn interesting!
"One year, beginning in October 2016," you say...
The basis for the "SBST" is an Obama Executive Order that directs all executive agencies to "identify policies, programs, and operations where applying behavioral science insights may yield substantial improvements in public welfare, program outcomes, and program cost effectiveness" and " develop strategies for applying behavioral science insights to programs and, where possible, rigorously test and evaluate the impact of these insights."
To implement this directive, the Nudge Order directs the SBST (also created by the Obama White House) to issue "agencies ... advice and policy guidance to help them execute policy objectives."
This "Nudge Order" (let's call it that; snappier than "Executive Order--Using Behavioral Science Insights to Better Serve the American People") seems to be patterned on the Reagan Executive Order that mandated all executive agencies (only a fraction, actually, of the agencies that have been authorized by Congress to engage in significant regulatory activity) submit their proposed regulations to the Office of Management & Budget for "cost benefit analysis."
Decried at the time by traditional New Deal liberal Democrats, that Reagan order has actually grown on U of C Democrats, who now like it a lot & have even proposed extending it!
But I have a feeling that the next President, if he or she is a Republican, isn't going to reciprocate the love when it comes to Obama's "Nudge Order."
Pretty clear, I think, that neither a President Trump nor a President Cruz--both of whom seem to look to a very different source for their "strategies" for "managing" public opinion-- would have much use for the Nudge Order or the apparatus that carries it out.
But I doubt that a President Fiorina, a President Rubio, a President Bush, a President Christie, a President Carson, or a President Paul would either. (I'm sure I'm forgetting somebody-- but who has the memory capacity to keep track of all of them?)
I don't know what a President H.R. Clinton would think--but I would note that President W.J. Clinton was the first & remains the model U of C Democrat President.
I know for sure what President Sanders would do w/ the Nudge Order and SBST--and well before Oct. 2017.
So, this is a cool position -- not only b/c the normal job description is interesting but b/c it's certain to be interesting to be "on hand" to witness the Nudge Order "in transition."
Oh, but I've decided not to apply. I like what I'm doing just fine!
It's time for the first "MAPKIA!"! ["Make a prediction, know it all!"] episode of 2016!
Yup--this wildly popular feature of the CCP Blog—the #1 most popular game show in Macao for two years running—has been renewed for another season!
It’s of course inconceivable that anyone doesn’t know the rules, and I don’t mean to insult anyone’s intelligence, but legal niceties do require me to post them before every contest. So here they are:
I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or this or some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)
Actually, though, the rules are being significantly modified for this particular episode! The question I’m going to pose has to be answered with data from Pew’s big hit “Public vs. the ‘Scientists’ ” Report from last yr.
As you likely all realize, I’ve been going on & on since last yr about the fun that can be had poking around in the “public” portion of Pew’s report.
In previous posts, I showed that the data in Pew’s study (for the public respondents; the data for the AAAS members who formed the “scientist” sample hasn’t been released, at least not yet. . .) corroborates the usual story about politically disputed risks: namely, that as science literacy goes up, cultural polarization (measured by one or another proxy for cultural identity) intensifies in magnitude.
Well, the study also has some interesting “science attitude” items, one of which is this:
I’m going to call this the Pew “Malthusian worldview” item.
“What do you think,” the question effectively asks,
are we in fact just like all the other stupid animals who keep multiplying in number and engorging themselves on all their foodstuffs and other necessary resources until they crash, calamitously, over the top of the Malthusian curve in some massive die off? Or are human beings special precisely because their reason allows them to keep shifting the curve through technological innovation?
Consider climate change to be history’s “biggest ‘I told you so’ ” confirmation of what “Marx wrote about capitalism’s ‘irreparable rift’ with ‘the natural laws of life itself’ ” and what “indigenous peoples" have been "warning about the dangers of disrespecting ‘Mother Earth’ [since] long before that”?
Then answer “2” is for (or just is) you!
Right! These are the same fools who told us that we couldn’t have a city more populous than 200,000 people or we’d be choking to death on our own excrement! Well, thanks to the advent of modern sanitation systems, reinforced with related advances in public health, we can safely inhabit cities orders of magnitude larger and more dense than the ones whose residents regularly succumbed to devastating outbreaks of cholera in the 19th century.
Sure, we'll face some new challenges but we’ll just blast our shit into outer space & everything will be fine-- just you watch & see!
Hey—did you hear about those cool mirror-coated nanotechnology flying saucer drone things that automatically levitate up to just the right altitude to reflect the sunlight necessary to neutralize climate change & keep temperatures here on earth a comfortable 72 degrees everywhere yr ‘round?
This changes ... nothing!
That's answer number "1" talking!
So the question is, should we expect the Pew item to tap into those two opposing mindsets?
How powerfully (if at all) will responses to the Pew Malthusian Worldview item predict beliefs and attitudes toward technological and environmental risks like climate change, fracking, nuclear power, and GM foods? Will it be a stronger predictor than political partisanship? Will responses interact with—or essentially amplify—the explanatory power of political ideology and party identification?
What will the relationship be between the Malthusian Worldview item and science literacy? Will responses be correlated with it—and if so in which direction? Will higher science literacy magnify the correlation between responses to the Malthusian Worldview item and opposing perceptions of environmental and technological risks--just as higher science comprehension magnifies cultural polarization on climate change, nuclear power, fracking, and the like?
Perhaps my framing of the question implies an answer. But if you think I have one, then obviously mine could be wrong!
“Make a prediction, know it all”—and explain cogently the reasoning for it and how one might test your conjecture with Pew dataset items, which have been featured in previous posts and are set forth in their entirety at the Pew site.
Here’s your chance not only to win a great prize but also to demonstrate to all the schoolchildren in Macao and to billions of other curious and reflective people everywhere that you, unlike everybody else, really know what the hell you are talking about when it comes to making sense of public perceptions of risk.
Just post your prediction, & take a stab at specifying a testing strategy, in a comment below. I'll do the analyses & we'll see what you got!
It's that friggin' simple!
Ready ... set ... MAPKIA!
The motivation for this post is to respond to commentators--@Joshua & @HalMorris—who wonder, reasonably, whether there’s really much point in continuing to examine the relationship between cultural cognition & like mechanisms, on the one hand, and one or another element of science comprehension (cognitive reflection, numeracy, “knowledge of basic facts,” etc).
They acknowledge that evidence that cultural polarization grows in step with proficiency in critical reasoning is useful for, say, discrediting positions like the “knowledge deficit” theory (the view that public conflicts over policy-relevant science are a consequence of public unfamiliarity with the relevant evidence) and the “asymmetry thesis” (the position that attributes such conflicts to forms of dogmatic thinking distinctive of “right wing” ideology).
But haven’t all those who are amenable to being persuaded by evidence on these points gotten the message by now, they ask?
I agree that the persistence of the “knowledge deficit” view and, to a lesser extent, the “asymmetry thesis” (which I do think is weakly supported, but not nearly so unworthy of being entertained as “knowledge deficit” arguments) likely doesn’t justify sustained efforts at this point to probe the relationship between cultural cognition and critical reasoning.
But I disagree that those are the only reasons for continuing with—indeed, intensifying—such research.
On the contrary, I think focusing on science comprehension is critical to understanding cultural cognition; to forming an accurate moral assessment of it; and to identifying appropriate responses for managing its potential to interfere with free and reasoning citizens’ attainment of their ends, both individual and collective (Kahan 2015a, 2015b).
I should work out more systematically how to convey the basis of this conviction.
But for now, consider these “two conceptions” of cultural cognition and rationality. Maybe doing so will foreshadow the more complete account—or better still, provoke you into helping me to work this issue out in a way that satisfies us both.
1. Cultural cognition as bounded rationality. Persistent public conflict over societal risks (e.g., climate change, nuclear waste disposal, private gun possession, HPV immunization of schoolgirls, etc.) is frequently attributed to overreliance on heuristic, “System 1” as opposed to conscious, effortful “System 2” information processing (e.g., Weber 2006; Sunstein 2005). But in fact, the dynamics that make up the standard “bounded rationality” menagerie—from the “availability effect” to “base rate neglect,” from the “affect heuristic” to the “conjunction fallacy”—apply to people of all manner of political predispositions, and thus don’t on their own cogently explain the most salient feature of public conflicts over societal risks: that people are not simply “confused” about the facts on these issues but systematically divided on them on political grounds.
One account of cultural cognition views it as the dynamic that transforms the mechanisms of “bounded rationality” into fonts of political polarization (Kahan, Slovic, Braman & Gastil 2006; Kahan 2012). Cultural predispositions thus determine the valence of the sensibilities that govern information processing in the manner contemplated by the “affect heuristic” (Peters, Burraston & Mertz 2004; Slovic & Peters 1998). The same goes for the “availability effect”: the stake individuals have in forming “beliefs” that express and reinforce their connection to cultural groups determines what sorts of risk-relevant facts they notice, what significance they attach to them, and how readily they recall them (Kahan, Jenkins-Smith & Braman 2011). The motivation to form identity-congruent beliefs drives biased search and biased assimilation of information (Kahan, Braman, Cohen, Gastil & Slovic 2010)--not only on existing contested issues but on novel ones (Kahan, Braman, Slovic, Gastil & Cohen 2009).
2. Cultural cognition as expressive rationality. Recent scholarship on cultural cognition, however, seems to complicate if not in fact contradict this account!
By treating politically motivated reasoning—of which “cultural cognition” is one operationalization (Kahan in pressb)—as in effect a “moderator” of other more familiar cognitive biases, the “bounded rationality” conception implies that cultural cognition is a consequence of over-reliance on heuristic information processing (e.g., Lodge & Taber 2013; Sunstein 2006). If this understanding is correct, then we should expect cultural cognition to be mitigated by proficiency in the sorts of reasoning dispositions essential to conscious, effortful “System 2” information processing.
But in fact, a growing body of evidence suggests that System 2 reasoning dispositions magnify rather than reduce cultural cognition! Experiments show that individuals high in cognitive reflection and numeracy use their distinctive proficiencies to discern what the significance of crediting complex information is for positions associated with their cultural or political identities (Kahan 2013; Kahan, Peters, Dawson & Slovic 2013).
As a result, they more consistently credit information that is in fact identity-affirming and discount information that is identity-threatening. If this is how individuals reason outside of lab conditions, then we should expect to see that individuals highest in the capacities and dispositions necessary to make sense of quantitative information should be the most politically polarized on facts that have become invested with identity-defining significance. And we do see that—on climate change, nuclear power, gun control, and other issues (Kahan 2015; Kahan, Peters, et al., 2012).
This work supports an alternative “expressive” conception of cultural cognition. On this account, cultural cognition is not a consequence of “bounded rationality.” It is a rational way of engaging information, one suited to forming affective dispositions that reliably express individuals’ group allegiances (cf. Lessig 1995; Akerlof & Kranton 2000).
“Expressing group allegiances” is not just one thing ordinary people do with information on societally contested risks. It is pretty much the only thing they do. The personal “beliefs” ordinary people form on issues like climate change or gun control or nuclear power etc. don’t otherwise have any impact on them. Ordinary individuals just don’t matter enough, as individuals, for anything they do based on their view of the facts on these issues to affect the level of risk they are exposed to or the policies that get adopted to abate them (Kahan 2013, in press). In contrast, it is in fact critical to ordinary people’s well-being—psychic, emotional, and material—to evince attitudes that convey their commitment to their identity-defining groups in the myriad everyday settings in which they can be confident those around them will be assessing their character in this way (Kahan in pressb).
* * * * *
At one point I thought the first conception of cultural cognition was right. Indeed, it didn’t even occur to me, early on, that the second conception existed!
But now I believe the second view is almost certainly right--and that no account that fails to recognize that cultural cognition is integral to individual rationality can possibly make sense of it, or successfully manage the influences that create the conflict between expressive rationality and collective rationality that gives rise to cultural polarization over policy-relevant facts.
If that’s right, then in fact the continued focus on the interaction of cultural cognition and critical reasoning proficiencies will remain essential.
So is it right? Maybe not; but the only way to figure that out also is to keep probing this interaction.
Akerlof, G. A., & Kranton, R. E. (2000). Economics and identity. Quarterly Journal of Economics, 115(3), 715-753.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.
Kahan, D. M., Braman, D., Cohen, G., Gastil, J., & Slovic, P. (2010). Who fears the HPV vaccine, who doesn’t, and why? An experimental study of the mechanisms of cultural cognition. Law and Human Behavior, 34, 501-516.
Kahan, D. M. (2012). Cultural cognition as a conception of the cultural theory of risk. In R. Hillerbrand, P. Sandin, S. Roeser, & M. Peterson (Eds.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (pp. 725-760). Springer.
Lessig, L. (1995). The regulation of social meaning. University of Chicago Law Review, 62, 943-1045.
Lodge, M., & Taber, C. S. (2013). The Rationalizing Voter. Cambridge University Press.
Peters, E. M., Burraston, B., & Mertz, C. K. (2004). An emotion-based model of risk perception and stigma susceptibility: Cognitive appraisals of emotion, affective reactivity, worldviews, and risk perceptions in the generation of technological stigma. Risk Analysis, 24, 1349-1367.
Slovic, P., & Peters, E. (1998). The importance of worldviews in risk perception. Risk Decision and Policy, 3, 165-170.
Sunstein, C. R. (2006). Misfearing: A reply. Harvard Law Review, 119(4), 1110-1125.
Weber, E. (2006). Experience-based and description-based perceptions of long-term risk: Why global warming does not scare us (yet). Climatic Change, 77, 103-120.
"Don't jump"--weekend reading: Do judges, loan officers, and baseball umpires suffer from the "gambler's fallacy"?
I know how desperately bored the 14 billion regular subscribers to this blog can get on weekends, and the resulting toll this can exact on the mental health of many times that number of people due to the contagious nature of affective funks. So one of my New Year's resolutions is to try to supply subscribers with things to read that can distract them from the frustration of being momentarily shielded from the relentless onslaught of real-world obligation they happily confront during the workweek.
So how about this:
We were all so entertained last year by Miller & Sanjurjo’s “Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers,” which taught us something profound about the peculiar vulnerabilities to error that super smart people can acquire as a result of teaching themselves to avoid common errors associated with interpreting random events.
So I thought, hey, maybe it would be fun for us to take a look at other efforts that try to "expose" non-randomness of events that smart people might be inclined to think are random.
Actually, I'm not sure this is really a paper about the randomness-detection blindspots of people who are really good at detecting probability blindspots in ordinary folks.
It's more in the nature of how expert judgment can be subverted by a run-of-the-mill (of the "-mine"?) cognitive bias involving randomness--here the "gambler's fallacy": the expectation that the occurrence of independent random events will behave interdependently in a manner consistent with their relative frequency; or more plainly, that an outcome like "heads" in the flipping of a coin can become "due" as a string of alternative outcomes in independent events--"tails" in previous tosses--increases in length.
CMS present data suggesting that behavior of immigration judges, loan officers, and baseball umpires all display this pattern. That is, all of these professional decisionmakers become more likely than one would expect by chance to make a particular determination--grant an asylum petition; disapprove a loan application; call a "strike"--after a series of previous opposing determinations ("deny," "approve," "ball" etc.).
If you liked puzzling over the M&S paper, I predict you'll like puzzling through this one.
In figuring out the null, CMS recognize that it is a mistake, actually, to model the outcomes in question as reflecting a binomial distribution if one is sampling from a finite sequence of past events. Binary outcomes that occur independently across an indefinite series of trials (i.e., outcomes generated by a Bernoulli process) are not independent when one samples from a finite sequence of past trials.
In other words, CMS avoid the error that M&S showed the authors of the "hot hand fallacy" studies made.
But figuring out how to do the analysis in a way that avoids this mistake is damn tricky.
If one samples from a finite sequence of events generated by a Bernoulli process, what should the null be for determining whether the probability of a particular outcome following a string of opposing outcomes was "higher" than what could have been expected to occur by chance?
One could figure that out mathematically.... But it's a hell of a lot easier to do it by simulation.
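To see why, here is a minimal simulation sketch (hypothetical code, not CMS's or M&S's actual analysis) of the simplest case: in short sequences of fair-coin flips, the average within-sequence proportion of "heads" immediately following a "tail" sits well above one half, so 0.5 is the wrong null for a gambler's-fallacy test that samples from finite sequences.

```python
import random
from statistics import mean

def prop_heads_after_tails(seq):
    """Within one sequence, the share of flips that immediately
    follow a tail (0) and come up heads (1); None if no flip
    in the sequence follows a tail."""
    follows = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 0]
    return mean(follows) if follows else None

random.seed(1)
n, trials = 4, 100_000          # very short sequences make the bias vivid
props = []
for _ in range(trials):
    seq = [random.randint(0, 1) for _ in range(n)]
    p = prop_heads_after_tails(seq)
    if p is not None:           # skip sequences where no flip follows a tail
        props.append(p)

# A naive binomial null says 0.5; the simulated finite-sequence
# null is well above it (analytically 1 - 17/42, roughly 0.595, for n=4).
print(round(mean(props), 3))
```

The bias shrinks as sequences get longer, but it doesn't vanish, which is why the null has to be computed (or simulated) rather than assumed.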
Another tricky thing here is whether the types of events decisionmakers are evaluating here--the merit of immigration petitions, the creditworthiness of loan applicants, and the location of baseball pitches--really are i.i.d. ("independent and identically distributed").
Actually, no one could plausibly think "balls" and "strikes" in baseball are.
A pitcher's decision to throw a "strike" (or attempt to throw one) will be influenced by myriad factors, including the pitch count--i.e., the running tally of "balls" and "strikes" for the current batter, a figure that determines how likely the batter is to "walk" (be allowed to advance to "first base"; shit, do I really need to try to define this stuff? Who the hell doesn't understand baseball?!) or "strike out" on the next pitch.
CMS diligently try to "take account" of the "non-independence" of "balls" and "strikes" in baseball, and like potential influences in the context of judicial decisionmaking and loan applications, in their statistical models.
But whether they have done so correctly--or done so with the degree of precision necessary to disentangle the impact of those influences from the hypothesized tendency of these decisionmakers to impose on outcomes the sort of constrained variance that would be the signature of the "gambler's fallacy"--is definitely open to reasonable debate.
Maybe in trying to sort all this out, CMS are also making some errors about randomness that we could expect to see only in super smart people who have trained themselves not to make simple errors?
But b/c I love all 14 billion of you regular CCP subscribers so much, and am so concerned about your mental wellbeing, I'm calling your attention to this paper & asking you-- what do you think?
I’m going to resist summarizing Miller & Sanjurjo’s “Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers,” not only because I’ve already tried to do that multiple times –
- Holy smokes! The " 'hot-hand fallacy' fallacy"!
- Still fooled by non-randomness? Some gadgets to help you *see* the " 'hot hand fallacy' fallacy"
but also because any attempt to do so results in a mental misadventure of staggering proportions.
Actually, that is what’s so cool about the article. At least in my view.
Like lots of other people—including, to his credit, the scholar most prominently identified with the classic “hot hand fallacy” study—I think it is really neat that M&S have re-opened the question whether the performance of athletes really does vary in patterns that defy the fluctuations one would expect to see by chance (i.e., whether NBA basketball players and others really do go on “hot streaks” etc).
I also am filled with admiration for their mathematical dexterity in exposing the error in the original “hot hand fallacy” research (viz., the assumption that the shooting consistency of basketball players over a finite set of observations should be measured in relation to the variance associated with a binomial distribution).
But what really intrigues me is what M&S's accomplishment tells us about cognition. Or really what it tells us about what we don’t know but should about how intuition and conscious reflection operate in expert judgment.
How could researchers so familiar with probability theory, and so accomplished in exposing the errors people routinely make when attempting to detect patterns in random events, fail to detect the mistaken assumption that they themselves were making about how to detect such a pattern in this particular setting?
How could the error have evaded the notice of those who reviewed their work—and much more fundamentally the notice of thousands of scholars who for decades have held up the original “hot hand fallacy” study (along with its many progeny) as the paradigmatic demonstration of a particular cognitive bias (one that no one disputes really exists) and of a method for detecting defects in human rationality generally?
Why when they are shown incontrovertible (really!) proof of the error that the “hot hand” researchers made (and re-made over the course of numerous successor studies) do so many highly intelligent, reflective people—ones who unquestionably possess the knowledge and reasoning proficiency that it takes to understand the logic of the M&S argument—so strongly and stubbornly resist accepting it before (in the vast majority of cases, at least) finally acknowledging (often with a gratifying display of appreciative surprise) that M&S are right?
What is the cognitive process, in short, that makes individuals who have cultivated the habits of mind necessary to resist commonplace but mistaken intuitions about randomness vulnerable to being misled by mistaken intuitions about randomness that only those highly proficient in reasoning about randomness could have developed in the first place?
The project to answer this question started before 2015.
But the vividness imparted to this puzzle by the astonishing M&S paper, and the resulting amplification and dissemination of the motivation to solve it, will, I predict, energize researchers for years to come.
Mining even more insight from the Pew "public science knowledge/attitudes" data--but hoping for even better extraction equipment (fracking technology, maybe?) in 2016...
Futzing around "yesterday" with the "public" portion of the "public vs. scientists" study (Pew 2015), I presented some data consistent with previous findings (Kahan 2014, 2015) that "beliefs" in human evolution and human-caused climate change measure cultural identity, not any aspect of science comprehension.
Well, there's actually still more fun things one can do with the Pew data (a way to pass the time, actually, as I wait for some new data on climate-science literacy... stay tuned!).
"Today" I'll share with you some interesting correlations between the Pew "science literacy" battery (also discussed yesterday; but actually, a bit more about it at the end of the post) & various "science-informed" policy issues. I'll also show how those relationships interact with (vary in relation to) right-left political outlooks.
Okay -- consider this one!
See? It's scary to eat GM foods, but people of all political outlooks & levels of science literacy agree that it makes sense to put blue GM tomatoes (or even a single "potatoe") in the gas tank of their SUVs.
But you know my view here: "what do you think of GM ..." in a survey administered to the general public measures non-opinion. Fun for laughs, and for creating fodder for professional "anti-science" commentators, but not particularly helpful in trying to make genuine sense of public risk perceptions.
Just my opinion....
When polarization on a "societal risk" doesn't abate but increases conditional on science comprehension, that's a super strong indicator of a polluted science communication environment. It is a sign that positions on an issue have become entangled in antagonistic social meanings that transform them into badges of identity in and loyalty to groups (Kahan 2012). When that happens, people will predictably use their reasoning proficiencies to fit their understanding of evidence to the view that predominates in their group.
Here one can reasonably question the inference I'm drawing, since Pew's items aren't about "risk perceptions" but rather "policy preferences."
But if one is familiar with the "affect heuristic"--which refers to the tendency of people to conform their understanding of all aspects of a putative risk source to a generic pro- or con- attitude (Slovic, Finucane & MacGregor 2005; Loewenstein, Weber, Hsee & Welch 2001)--then one would be inclined to treat the Pew question as just another indicator of that risk-perception-generating sensibility.
The "affect heuristic" is what makes the "Industrial Strength Risk Perception Measure" so powerful. Using ISRPM, CCP data has found that the perceived risks of both fracking and nuclear power (not to mention climate change, of course) display the signature "polluted science communication environment" characteristic of increased cultural polarization conditional on greater reasoning proficiency.
I, anyway, am inclined to view the Pew data as more corroboration of this relationship, just as in "yesterday's" post I explained how the Pew data corroborated the findings that greater science comprehension generally and greater comprehension of climate science in particular magnify polarization.
But before signing off here, let me observe one thing about the Pew science literacy battery.
You likely noticed that the values on the y-axes of the figures start to get more bunched together at the high end.
That's because the six-item, basic-facts science literacy battery used in the Pew 2015 report is highly skewed in the direction of a high score.
The distribution is a bit less skewed when one scores the responses to the battery using Item Response Theory, which takes account of the relative difficulty and measurement precision (or discrimination) of the individual items. But only a bit less. (You can't tell from the # of bins in the histogram, but there are actually over 5-dozen "science literacy" levels under the IRT model, as opposed to the 7 that result when one simply adds the number of correct responses; pretty cool illustration of how much more "information," as it were, one can get using IRT rather than "classic test theory" scoring.)
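The gain in granularity from IRT scoring is easy to see in a toy simulation. The sketch below uses a hypothetical 2PL model with made-up item parameters (skewed "easy," to mimic the shape of the Pew battery; these are not Pew's actual estimates): summing correct answers to six items yields at most 7 score levels, while EAP scoring distinguishes essentially every distinct response pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2PL item parameters for a six-item battery (a = discrimination,
# b = difficulty). Values are made up, skewed "easy" -- NOT Pew's estimates.
a = np.array([0.9, 1.1, 1.3, 0.8, 1.0, 1.2])
b = np.array([-1.5, -1.2, -0.8, -1.0, -0.5, 0.2])

n = 2000
theta = rng.normal(size=n)                                # latent proficiency
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))           # P(correct), person x item
resp = (rng.random((n, 6)) < p).astype(int)

# "Classic test theory" scoring: count correct answers -> at most 7 levels
sum_scores = resp.sum(axis=1)

# EAP (expected a posteriori) IRT scoring: posterior mean of theta on a grid
grid = np.linspace(-4, 4, 81)
prior = np.exp(-grid ** 2 / 2)                            # standard normal prior
pg = 1 / (1 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
like = np.prod(np.where(resp[:, None, :] == 1, pg[None], 1 - pg[None]), axis=2)
post = like * prior
eap = (post * grid).sum(axis=1) / post.sum(axis=1)

print("sum-score levels:", len(np.unique(sum_scores)))
print("EAP levels:", len(np.unique(np.round(eap, 6))))
```

Each distinct response pattern gets its own EAP score, which is where the extra "information" comes from.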
To put it plainly, the Pew battery is just too darn easy.
The practical consequence-- a serious one-- is that the test won't do a very good job in helping us to determine whether differences in science comprehension affect perceptions of risk or other science-related attitudes among individuals whose scores are above the population average.
Actually, the best way to see that is to look at the Item Response Theory test information and reliability characteristics for the Pew battery:
But what they are telling us is that the power of the Pew battery to discern differences in science comprehension is concentrated at about 1 SD below the estimated population mean. Even there, the measurement precision is modest -- a reliability coefficient of under 0.6 (0.7 is the conventional minimum).
More importantly, it quickly tails off to zero by +0.5 SD.
In other words, above the 60th percentile in the population the test can furnish us with no guidance on differences in science literacy levels. And even what it can tell us at the population mean ("0" on the y-axis) is pretty noisy (reliability = 0.40).
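For readers who want to reproduce curves like these: under a 2PL model, test information is the sum of the item informations, I(θ) = Σ aᵢ²Pᵢ(θ)(1 − Pᵢ(θ)), and conditional reliability can be approximated as I(θ)/(I(θ) + 1). A sketch with made-up "easy" item parameters (illustrative only, not the actual Pew estimates) shows the same qualitative pattern -- information peaked below the mean, modest reliability at θ = 0:

```python
import numpy as np

# Hypothetical "easy" 2PL item parameters, chosen to mimic the shape of the
# Pew battery; they are NOT the actual Pew estimates.
a = np.array([0.9, 1.1, 1.3, 0.8, 1.0, 1.2])       # discriminations
b = np.array([-1.5, -1.2, -0.8, -1.0, -0.5, 0.2])  # difficulties (mostly easy)

theta = np.linspace(-3, 3, 121)
p = 1 / (1 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
info = (a[None, :] ** 2 * p * (1 - p)).sum(axis=1)  # 2PL test information
rel = info / (info + 1)                             # conditional reliability

print(f"information peaks at theta = {theta[np.argmax(info)]:.2f}")  # below 0
print(f"reliability at theta = 0:  {rel[theta == 0][0]:.2f}")        # modest
```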
As I've explained in previous posts, the NSF Indicators have exactly the same problem. The Pew battery is an admirable effort to try to improve on the familiar NSF science literacy test, but with these items, at least, it hasn't made a lot of progress.
As the last two posts have shown, you can in fact still learn a fair amount from a science literacy scale whose measurement precision is skewed this far toward the lower end of the distribution of this sort of proficiency.
But if we really want to learn more, we desperately need a better public science comprehension instrument.
That conviction has informed the research that generated the "Ordinary Science Intelligence" assessment. An 18-item test, OSI combines a modest number of "basic fact" items (ones derived from the Indicators and from a previous Pew battery) with critical reasoning measures that examine cognitive reflection and numeracy, dispositions essential to being able to recognize and give proper effect to valid science.
OSI was deliberately constructed to possess a high degree of measurement precision across the entire range of the underlying latent (or unobserved) disposition that it's measuring.
That's a necessary quality, I'd argue, for an instrument suited to advance scholarly investigation of how variance in public science comprehension affects perceptions of risk and related facts relevant to individual and collective decisionmaking.
Is OSI (actually "OSI_2.0") perfect?
Of course not. While better for now than the NSF Indicators battery (on which it in fact builds) for the study of risk perception and science communication, OSI_2.0 is primarily intended to stimulate other scholars to try to do even better, either by building on and refining OSI or by coming up with instruments that they can show (by conducting appropriate assessments of the instruments' psychometric characteristics and their external validity) are even better.
I hope that there are a bunch of smart researchers out there who have made contributing to the creation of a better public science comprehension instrument one of their New Year's resolutions.
If the researchers at Pew Research Center are among them, then I bet we'll all be a lot smarter by 2017!
Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).
Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N. Risk as feelings. Psychological Bulletin 127, 267-287 (2001).
Slovic, P., Peters, E., Finucane, M.L. & MacGregor, D.G. Affect, Risk, and Decision Making. Health Psychology 24, S35-S40 (2005).
Having posted the IRT test information & reliability data, figured might as well share the item response profiles too for the 6 items in the Pew "general knowledge" battery.
Replicate "Climate-Science Communication Measurement Problem"? No sweat (despite hottest yr on record), thanks to Pew Research Center!
One of the great things about Pew Research Center is that it posts all (or nearly all!) the data from its public opinion studies. That makes it possible for curious & reflective people to do their own analyses and augment the insight contained in Pew's own research reports.
I've been playing around with the "public" portion of the "public vs. scientists" study, which was issued last January (Pew 2015). Actually Pew hasn't released the "scientist" (or more accurately, AAAS membership) portion of the data. I hope they do!
But one thing I thought it would be interesting to do for now would be to see if I could replicate the essential finding from "The Climate Science Communication Measurement Problem" (2015).
In that paper, I presented data suggesting, first, that neither "belief" in evolution nor "belief" in human-caused climate change is a measure of general science literacy. Rather both are better understood as measures of forms of "cultural identity" indicated, respectively, by items relating to religiosity and items relating to left-right political outlooks.
Second, and more importantly, I presented data suggesting that there is no relationship between "belief" in human-caused climate change & climate science comprehension in particular. On the contrary, the higher individuals scored on a valid climate science comprehension measure (one specifically designed to avoid the entanglement of identity and knowledge that confounds most "climate science literacy" measures), the more polarized the respondents were on "belief" in AGW--which, again, is best understood as simply an indicator of "who one is," culturally speaking.
Well, it turns out one can see the same patterns, very clearly, in the Pew data.
Patterned on the NSF Indicators "basic facts" science literacy test (indeed, "lasers" is an NSF item), the Pew battery consists of six items:
As I've explained before, I'm not a huge fan of the "basic facts" approach to measuring public science comprehension. In my view, items like these aren't well-suited for measuring what a public science comprehension assessment ought to be measuring: a basic capacity to recognize and give proper effect to valid scientific evidence relevant to the things that ordinary people do in their ordinary lives as consumers, workforce members, and citizens.
One would expect a person with that capacity to have become familiar with certain basic scientific insights (earth goes round sun, etc.) certainly. But certifying that she has stocked her "basic fact" inventory with any particular set of such propositions doesn't give us much reason to believe that she possesses the reasoning proficiencies & dispositions needed to augment her store of knowledge and to appropriately use what she learns in her everyday life.
For that, I believe, a public science comprehension battery needs at least a modest complement of scientific-thinking measures, ones that attest to a respondent's ability to tell the difference between valid and invalid forms of evidence and to draw sound inferences from the former. The "Ordinary Science Intelligence" battery, used in the Measurement Problem paper, includes "cognitive reflection" and "numeracy" modules for this purpose.
Indeed, Pew has presented a research report on a fuller science comprehension battery that might be better in this regard, but it hasn't released the underlying data for that one.
But anyway, the new items that Pew included in its battery are more current & subtle than the familiar Indicator items, & the six Pew items form a reasonably reliable (α = 0.67), one-dimensional scale-- suggesting they are indeed measuring some sort of science-related aptitude.
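Cronbach's α is simple to compute if you want to check figures like that against the posted data yourself. A minimal sketch (run here on toy simulated responses, not the Pew file):

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of 0/1 (or Likert) scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Toy check: six binary items driven by one latent trait (NOT the Pew data)
rng = np.random.default_rng(1)
latent = rng.normal(size=1000)
resp = ((latent[:, None] + rng.normal(size=(1000, 6))) > 0).astype(float)
alpha = cronbach_alpha(resp)
print(round(alpha, 2))
```

With real survey data you'd feed in the respondent-by-item matrix of scored responses in place of `resp`.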
But the fun stuff starts when one examines how the resulting Pew science literacy scale relates to items on evolution, climate change, political outlooks, and religiosity.
For evolution, Pew used its two-part question, which first asks whether the respondent believes (1) "Humans and other living things have evolved over time" or (2) "Humans and other living things have existed in their present form since the beginning of time."
Subjects who pick (1) then are asked whether (3) "Humans and other living things have evolved due to natural processes such as natural selection" or (4) "A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today."
Basically, subjects who select (2) are "new earth creationists." Subjects who select (4) are generally regarded as believing in "theistic evolution." Intelligent design isn't the only variant of "theistic evolution," but it is certainly one of the accounts that fits this description.
Only subjects who select (3)-- "humans and other living things have evolved due to natural processes such as natural selection"--furnish the response that reflects science's account of the natural history of humans.
So I created a variable, "evolution_c," that reflects this answer, which was in fact selected by only 35% of the subjects in Pew's U.S. general public sample.
On climate change, Pew assessed (using two items that tested for item order/structure effects that turned out not to matter) whether subjects believed (1) "the earth is getting warmer mostly because of natural patterns in the earth’s environment," (2) "the earth is getting warmer mostly because of human activity such as burning fossil fuels," or (3) "there is no solid evidence that the earth is getting warmer."
About 50% of the respondents selected (2). I created a variable, gw_c, to reflect whether respondents selected that response or one of the other two.
For political orientations, I combined subjects' responses to a 5-point liberal-conservative ideology item and their responses to a 5-point partisan self-identification item (1 "Democrat"; 2 "Independent leans Democrat"; 3 "Independent"; 4 "Independent leans Republican"; and 5 "Republican"). The composite scale had modest reliability (α = 0.61).
For religiosity, I combined two items. One was a standard Pew item on church attendance. The other was a dummy variable, "nonrelig," scored "1" for subjects who said they were either "atheists," "agnostics" or "nothing in particular" in response to a religious-denomination item (α = 0.66).
But the very first thing I did was toss all of these items -- the 6 "science literacy" ones, belief in evolution (evolution_c), belief in human-caused climate change (gw_c), ideology, partisan self-identification, church attendance, and nonreligiosity--into a factor analysis (one based on a polychoric covariance matrix, which is appropriate for mixed dichotomous and multi-response Likert items).
Not surprisingly, the covariance structure was best accounted for by three latent factors: one for science literacy, one for political orientations, and one for religiosity.
But the most important result was that neither belief in evolution nor belief in human-caused climate change loaded on the "science literacy" factor. Instead they loaded on the religiosity and right-left political orientation factors, respectively.
This analysis, which replicated results from a paper dedicated solely to examining the properties of the Ordinary Science Intelligence test, supports the inference that belief in evolution and belief in climate change are not indicators of "science comprehension" but rather indicators of cultural identity, as manifested respectively by political outlooks and religiosity.
To test this inference further, I used "differential item function" or "DIF" analysis (Osterlind & Everson, 2009).
Based on item response theory, DIF examines whether a test item is "culturally biased"--not in an animus sense but a measurement one: the question is whether the responses to the item measure the "same" latent proficiency (here, science literacy) in diverse groups. If it doesn't-- if there is a difference in the probability that members of the two groups who have equivalent science literacy scores will answer it "correctly"--then administering that question to members of both will result in a biased measurement of their respective levels of that proficiency.
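In logistic-regression form (one common way to operationalize the DIF framework), the question is whether group membership or a group-by-proficiency interaction predicts the item response after conditioning on proficiency. A hedged sketch on simulated data-- the groups, effect sizes, and item are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

# Hypothetical setup: the "group" labels and effect sizes are invented.
theta = rng.normal(size=n)               # science literacy (latent proficiency)
group = rng.integers(0, 2, size=n)       # 1 = high-religiosity (hypothetical)

# Simulate uniform DIF: equally proficient group-1 members are 1.5 logits
# less likely to give the "correct" answer.
logit = 0.5 + 1.2 * theta - 1.5 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression of item response on proficiency, group, and their
# interaction, fit by Newton-Raphson so the sketch needs only NumPy.
X = np.column_stack([np.ones(n), theta, group, theta * group])
beta = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))

# A large group coefficient (with a near-zero interaction) signals uniform DIF:
# the item is "harder" for one group at every level of the latent proficiency.
print("group coefficient:", round(beta[2], 2))
print("interaction coefficient:", round(beta[3], 2))
```

A significant interaction coefficient instead would indicate nonuniform DIF, where the size (or direction) of the bias changes across proficiency levels.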
In Measurement Problem, I used DIF analysis to show that belief in evolution is "biased" against individuals who are high in religiosity.
Using the Pew data (regression models here), one can see the same bias:
The latter but not the former are likely to indicate acceptance of science's account of the natural history of humans as their science literacy scores increase. This isn't so for other items in the Pew science literacy battery (which here is scored using an item response theory model; the mean is 0, and units are standard deviations).
The obvious conclusion is that the evolution item isn't measuring the same thing in subjects who are relatively religious and nonreligious as are the other items in the Pew science literacy battery.
In Measurement Problem, I also used DIF to show that belief in climate change is a biased (and hence invalid) measure of climate science literacy. That analysis, though, assessed responses to a "belief in climate change" item (one identical to Pew's) in relation to scores on a general climate-science literacy assessment, the "Ordinary Climate Science Intelligence" (OCSI) assessment. Pew's scientist-AAAS study didn't have a climate-science literacy battery.
Its general science literacy battery, however, did have one climate-science item, a question of theirs that in fact I had included in OCSI: "What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it Carbon dioxide, Hydrogen, Helium, or Radon?" (CO2).
Below are the DIF item profiles for CO2 and gw_c (regression models here). Regardless of their political outlooks, subjects become more likely to answer CO2 correctly as their science literacy score increases--that makes perfect sense!
But as their science literacy score increases, individuals of diverse political outlooks don't converge on "belief in human caused climate change"; they become more polarized. That question is measuring who the subjects are, not what they know about climate science.
I probably will tinker a bit more with these data and will tell you if I find anything else of note.
But in the meantime, I recommend you do the same! The data are out there & free, thanks to Pew. So reciprocate Pew's contribution to knowledge by analyzing them & reporting what you find out!
Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112 (2014).
Osterlind, S. J., & Everson, H. T. (2009). Differential item functioning. Thousand Oaks, CA: Sage.
This is a bit of correspondence with a thoughtful scholar & friend who was commenting on The Politically Motivated Reasoning Paradigm.
“Biggest question [for me] is what is the relationship between values and identities. You make clear that people can be acting protect any type prior but those two seem distinct in some ways and may benefit from more discussion. . . . .
[I am interested in the] larger question about whether you would call cultural cognition orientations an identity. The question arose because [I have a colleague] who is writing . . . on cases of identity-value conflict such as when a minority holds distinct values from the modal member of his/her identity group.
I’m eager to offer a response or acknowledge I don’t have a very good one to the sort of “value-identity” conflict you are envisioning.
But I think we need to "iterate" a bit more in order to converge on a common conception of the issue here.
So I'm not going to try to address the "identity-value" conflict right off. Instead, I am going to discuss different understandings of how "values" & "identity" relate to one another in a research program that looks at the sort of "fact polarization" of interest to cultural cognition & other conceptions of PMR.
I'll start w/ two theories of why one might measure "values" to operationalize the source of "motivation" in PMR: dissonance avoidance & status protection.
As a preliminary point, neither theory understands the sorts of "values" being measured as what motivates information processing. For both, the theoretically posited "motivator" is some unobserved (latent) disposition that causes the observable expression of "values," which are then treated simply as "indicators" or imperfect measures of that latent disposition.
For that reason, both theories are agnostic on whether the relevant values are "truly, really" "political," "cultural" or something else. All "value" frameworks are just alternative measures of the same unobserved latent dispositions. The only issue is what measurement strategy works best for explanation, prediction, & prescription -- a criterion that will itself be specific to the goal of the research (e.g., I myself use much more fine-grained indicators, corresponding to much narrower specifications of the underlying dispositions, when I'm doing "field based" science communication in a region like S.E. Florida than I do when I'm participating in a scholarly conversation about mass opinion formation in "American society": the constructs & measurement instruments in the former context wouldn't have the same traction in the latter, and the ones w/ the most traction in the latter furnish less in the former, where the consumers of the information are trying to do something that is advanced by a framework fitted more to their conditions).
Okay, the 2 theories:
1. Dissonance avoidance (DA). We might imagine that as "political beings" individuals are like diners at a restaurant that serves a "prix fixe" menu of "ideologies" or "worldviews" or whathaveyou. After making their selections, it would be psychologically painful for these individuals to have to acknowledge that the world is configured in a way that forecloses achieving states of affairs associated with their preferred "worldview" or "ideology" or whatever: e.g., that unconstrained private orderings of the sort prized by individualists will burden the natural environment with toxic byproducts that make such a way of life unsustainable. They are therefore motivated to construe information in a manner that "fits" the evidence on risk and like facts to positions ("beliefs") supportive of policies congenial to their worldviews & unsupportive of policies uncongenial to the same.
2. Status protection (SP). DA is a relatively individualistic conception of PMR; SP is more "social." On this account, individual well-being is understood to be decisively linked to membership in important "affinity groups," whose members are bound together by their shared adherence to ways of life. Cultivating affective styles that evince commitment to the positions conventionally associated with these groups will be essential to signaling membership in and loyalty to one or another of them. "Policy" positions will routinely bear such associations. But sometimes risks and like policy-relevant facts will come to bear social meanings (necessarily antagonistic ones in relation to the opposing groups) that express group membership & loyalty too. In those cases, PMR will be a mode of information processing rationally suited to forming the affective styles that reliably & convincingly express an individual's "group identity."
Avoiding the psychic disappointment of assenting to facts uncongenial to an individual's personal "policy preferences" is not the truth-extrinsic goal that "motivates" cognition on this view. Status protection--i.e., the maintenance of the sort of standing in one's group essential to enjoying access to the benefits, material and emotional, that membership imparts--is.
Okay, those are the two theories.
But let me be clear: neither of these theories is "true"!
Not because some other one is -- but because no theories are. All theories are simplified, imperfect "models"-- or pictures or metaphors, even! -- that warrant our acceptance to the extent that they enable us to do what we want to do w/ an empirical research program: enlarge our capacity to explain, predict & prescribe.
For now at least.
But in any case, my question is whether your & your colleague's question --whether "cultural cognition orientations" are "an identity" -- can be connected to this particular account of how "values," "identities," & PMR are connected? If so, then, I might have something more helpful to say! If not, then maybe what you have to say about why not will help me engage this issue more concretely.
But that's because it's a difficult question. Or at least it is if one treats it as one of "measurement" & "weight of the evidence." I remain convinced that it is not of great practical significance--that is, even if "motivated reasoning" and like dynamics are "asymmetric" across the ideological spectrum (or cultural spectra) that define the groups polarized on policy-consequential facts, the evidence is overwhelming and undeniable that members of all such groups are subject to this dynamic, & to an extent that makes addressing its general impact -- rather than singling out one or another group as "anti-science" etc. -- the proper normative aim for those dedicated to advancing enlightened self-govt.
But issues of "measurement" & "weight of the evidence" etc. are still, in my view, perfectly legitimate matters of scholarly inquiry. Indeed, pursuit of them in this case will, I'm sure, enlarge knowledge, theoretical and practical.
"Asymmetry" is an open question--& not just in the sense that nothing in science is ever resolved but in the sense that those on both "sides" (i.e., those who believe politically motivated reasoning is symmetric and those who believe it is asymmetric) ought to wonder enough about the correctness of their own position to wish that they had more evidence.
Here's an excerpt from my The Politically Motivated Reasoning Paradigm survey/synthesis essay addressing the state of the "debate":
4. Asymmetry thesis
The “factual polarization” associated with politically motivated reasoning is pervasive in U.S. political life. But whether politically motivated reasoning is uniform across opposing cultural groups is a matter of considerable debate (Mooney 2012).
In the spirit of the classic “authoritarian personality” thesis (Adorno 1950), one group of scholars has forcefully advanced the claim that it is not. Known as the “asymmetry thesis,” their position links biased processing of political information with characteristics associated with right-wing political orientations. Their studies emphasize correlations in observational studies between conventional ideological measures and scores on self-report reasoning-style scales such as “need for closure” and “need for cognition” and on personality-trait scales such as “openness to experience” (Jost, Glaser, Kruglanski & Sulloway 2003; Jost, Hennes & Lavine 2013).
But the research that the “neo-authoritarian personality” school features supplies weak evidence for the asymmetry thesis. First, the reasoning style measures that they feature are of questionable validity. It is a staple of cognitive psychology that defects in information processing are not open to introspective observation or control (Pronin 2007)–a conclusion that applies to individuals high in cognitive proficiency as well as to those of more modest proficiency (West, Meserve & Stanovich 2012). There is thus little reason to believe a person’s own perception of the quality of his reasoning is a valid measure of the same.
Indeed, tests that seek to validate such self-report reasoning style scales consistently find them to be inferior to performance-based measures such as the Cognitive Reflection Test and Numeracy in predicting the disposition to resort to conscious, effortful information processing (Toplak, West & Stanovich 2011; Liberali, Reyna, Furlan & Pardo 2011). Those measures, when applied to valid general population samples, show no meaningful correlation with party affiliation or liberal-conservative ideology (Kahan 2013; Baron 2015).
More importantly, there is no evidence that individual differences in reasoning style predict vulnerability to politically motivated reasoning. On the contrary, as will be discussed in the next part, evidence suggests that proficiency in dispositions such as cognitive reflection, numeracy, and science comprehension magnifies politically motivated reasoning (Fig. 6).
Ultimately, the only way to determine if politically motivated reasoning is asymmetric with respect to ideology or other diverse systems of identity-defining commitments is through valid experiments. There are a collection of intriguing experiments that variously purport to show that one or another form of judgment—e.g., moral evolution, willingness to espouse counter-attitudinal positions, the political valence of positions formed while intoxicated, individual differences in activation of “brain regions” etc.—is ideologically asymmetric or symmetric (Thórisdóttir & Jost 2011; Nam, Jost & Van Bavel 2013; Eidelman et al. 2012; Crawford & Brandt 2013; Schreiber, Fonzo et al. 2013). These studies vary dramatically in validity and insight. But even the very best and genuinely informative ones (e.g., Conway, Gideon, et al. 2015; Liu & Ditto 2013; Crawford 2012) are in fact examining a form of information processing distinct from PMRP and with methods other than the PMRP design or its equivalent.
One study that did use the PMRP design found no support for the “asymmetry thesis” (Kahan 2013). In it, individuals of left- and right-wing political outlooks displayed perfectly symmetric forms of politically motivated reasoning in evaluating evidence that people who reject their group’s position on climate change have been found to engage in open-minded evaluation of evidence (Figure 5).
But that’s a single study, one that like any other is open to reasonable alternative explanations that themselves can inform future studies. In sum, it is certainly reasonable to view the “asymmetry thesis” issue as unresolved. The only important point is that progress in resolving it is unlikely to occur unless studied with designs that reflect PMRP design or ones equivalently suited to support inferences consistent with the PMRP model.
Adorno, T.W. The Authoritarian personality (Harper, New York, 1950).
Baron, J. Supplement to Deppe et al. (2015). Judgment and Decision Making 10, 2 (2015).
Conway, L.G., Gornick, L.J., Houck, S.C., Anderson, C., Stockert, J., Sessoms, D. & McCue, K. Are Conservatives Really More Simple‐Minded than Liberals? The Domain Specificity of Complex Thinking. Political Psychology (2015), advance on-line, DOI: 10.1111/pops.12304.
Crawford, J.T. The ideologically objectionable premise model: Predicting biased political judgments on the left and right. Journal of Experimental Social Psychology 48, 138-151 (2012).
Eidelman, S., Crandall, C.S., Goodman, J.A. & Blanchar, J.C. Low-Effort Thought Promotes Political Conservatism. Pers. Soc. Psychol. B. (2012).
Jost, J.T., Glaser, J., Kruglanski, A.W. & Sulloway, F.J. Political Conservatism as Motivated Social Cognition. Psychological Bulletin 129, 339-375 (2003).
Jost, J.T., Hennes, E.P. & Lavine, H. “Hot” political cognition: Its self-, group-, and system-serving purposes. in Oxford handbook of social cognition (ed. D.E. Carlson) 851-875 (Oxford University Press, New York, 2013).
Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M. & Pardo, S.T. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making 25, 361-381 (2012).
Nam, H.H., Jost, J.T. & Van Bavel, J.J. “Not for All the Tea in China!” Political Ideology and the Avoidance of Dissonance. PLoS ONE 8, e59837, doi:10.1371/journal.pone.0059837 (2013).
Pronin, E. Perception and misperception of bias in human judgment. Trends in cognitive sciences 11, 37-43 (2007).
Thórisdóttir, H. & Jost, J.T. Motivated Closed-Mindedness Mediates the Effect of Threat on Political Conservatism. Political Psychology 32, 785-811 (2011).
Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).
West, R.F., Meserve, R.J. & Stanovich, K.E. Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology 103, 506 (2012).
I've posted a revised "preprint" version of Kahan, D.M., Hoffman, D.A., Evans, D., Devins, N., Lucci, E.A. & Cheng, K. "Ideology" or "Situation Sense"? An Experimental Investigation of Motivated Reasoning and Professional Judgment, U. Pa. L. Rev. 164 (in press).
It is prettttttttttttttttttty darn close to final.
Main difference is that it has color rather than B&W graphics. I have a feeling, w/ all the advances in information technology associated with "our internet," & w/ humans now having walked on the moon & all, that I might still live to see the day when all scholarly journals use color graphics (at least for their on-line versions; I think I've already lived long enough to see the day when no one reads the "hardcopy"/"print" versions of journals!).... Call me a dreamer!
I'm sure, too, you all remember but in case not: This is the study that examines a sample of judges, lawyers, law students & ordinary people to test competing theories about how identity-protective cognition relates to critical reasoning & professional judgment.
We find that judges & lawyers--who are as culturally polarized on societal risks like climate change & marijuana legalization as members of the general population--converge in their readings of manifestly ambiguous statutes, despite experimental manipulations that were intended to and did polarize culturally diverse members of the public (and, to a modest extent, culturally diverse law students).
We view this result as most consistent with the theory that professional judgment furnishes experts with a degree of immunity from "identity-protective reasoning" when they perform "in-domain" but not "out-of-domain" decisionmaking tasks.
But as I emphasized in another recent post (one that presents an excerpt from another "in press" paper, The Politically Motivated Reasoning Paradigm), the "weight" of the evidence the study furnishes in this regard-- particularly as it relates to other types of experts like scientists who study contested societal risks--is indeed modest. More study is called for!
I'm sure I'll live long enough to see this & every other interesting question about cognition definitively resolved too. At which point, life will be so damn boring that people will stop fretting about its finite duration.
Anyway, happy clicking on graphics!
2. Multivariate regression model estimates
3. "Weight of the evidence" likelihood ratios
4. Data-collection process
Okay, so “yesterday,” I discussed the significance of two “confounds” in studies of “politically motivated reasoning.”
“Politically motivated reasoning” is the tendency of individuals to conform their assessment of the significance of evidence on contested societal risks and like facts to positions that are congenial to their political or cultural outlooks.
The “confounds” were heterogeneous priors and pretreatment effects. “Today” I want to address how to avoid the nasty effects of these confounds.
The inference-defeating consequences of heterogeneous priors and pretreatment effects are associated with a particular kind of study design.
In it, the researcher exposes individuals of opposing political or cultural identities to counter-attitudinal information on a hotly contested topic such as gun control or climate change. Typically, the information is in the form of empirical studies or advocacy materials, real or fictional. If the information exposure fails to narrow, or even widens, the gap in the positions of subjects of opposing identities, this outcome is treated as evidence of politically motivated reasoning.
But as I explained in the last post, this inference is unsound.
Imagine, e.g., that members of one politically identifiable group might be more uniformly committed to “their side’s” position than those of another, some of whose members might be weakly supportive of the former’s position. If so, we would expect members of the latter group to be overrepresented among the subjects who “change their minds” when members of both groups are exposed to evidence more supportive of the other group’s position. This is the “heterogeneous priors” confound.
Alternatively, a greater proportion of one group might already have been exposed to evidence equivalent to that featured in the study design. In that case, fewer members of that group would be expected to change their mind—not because they were biased but because they would have already adjusted their beliefs to take account of it. This is the “pretreatment effect” confound.
Put these two confounds together, and it’s clear that, under the design I described, no outcome is genuinely inconsistent with subjects having assessed the information in the “politically unbiased” manner associated with Bayesian information processing (Druckman, Fein & Leeper 2012; Druckman 2012; Bullock 2009; Gerber & Green 1999).
The solution, then, is to change the design.
That’s one of the central points of The Politically Motivated Reasoning Paradigm (in press). In that paper, I describe studies (e.g., Uhlmann, Pizarro, Tannenbaum & Ditto 2009; Bolsen, Druckman & Cook 2014; Scurich & Shniderman 2014) that use a common strategy to avoid the confounding effects of heterogeneous priors and pretreatment effects. I refer to it as the “PMRP” (for “Politically Motivated Reasoning Paradigm”) “design.”
Under the PMRP design, the researcher manipulates the subjects’ perception of the consequences of crediting one and the same piece of evidence. What’s compared is not individual subjects’ reported beliefs before and after being exposed to information but rather the weight or significance subjects of opposing predispositions attach to the evidence conditional on the experimental manipulation (cf. Koehler 1993). If subjects credit the evidence when they perceive it is consistent with their political predispositions but dismiss it when it’s not, then we can be confident that it is their politically biased weighing of evidence and not any discrepancy in priors or pre-study exposure to evidence that is driving subjects of opposing cultural or political identities apart.
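Schematically, the comparison the PMRP design makes can be put this way (a hypothetical sketch with invented numbers, not any actual study's analysis):

```python
# Outcome measure under the PMRP design: the weight (likelihood ratio)
# subjects of opposing identities assign one and the same piece of
# evidence, conditional on the manipulated perception of whom it helps.
# All numbers are invented for illustration.
mean_lr = {
    ("left",  "congenial"):   3.0,  # evidence credited when it helps "us"
    ("left",  "uncongenial"): 1.0,  # same evidence dismissed otherwise
    ("right", "congenial"):   3.0,
    ("right", "uncongenial"): 1.0,
}

def shows_motivated_reasoning(lrs):
    # Politically motivated reasoning: both groups give the evidence more
    # weight when crediting it is congenial. An unbiased (Bayesian) sample
    # would assign the same LR in both conditions.
    return all(lrs[(g, "congenial")] > lrs[(g, "uncongenial")]
               for g in ("left", "right"))

print(shows_motivated_reasoning(mean_lr))  # True for these invented numbers
```

Note that nothing here depends on what the subjects believed before the study: the comparison is across conditions, not across time, which is what defuses the priors and pretreatment confounds.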
One CCP study used the PMRP design to examine how study subjects of opposing cultural identities would assess the behavior of political protestors (Kahan, Hoffman, Evans, Braman & Rachlinski 2012). Instructed to adopt the perspective of juries in a civil case, the subjects examined a digital recording of demonstrators alleged to have assaulted passersby. The cause and identity of the demonstrators were manipulated: in one condition, they were described as “anti-abortion protestors” assembled outside the entrance to an abortion clinic; in the other, they were described as “gay-rights advocates” protesting the military’s “Don’t ask, don’t tell” policy outside a military-recruitment center.
Subjects of opposing “cultural worldviews” who were assigned to the same experimental condition—and who thus believed they were watching the same type of protest—reported forming opposing perceptions of whether the protestors “blocked” and “screamed in the face” of pedestrians trying to access the facility. At the same time, subjects who were assigned to different conditions—and who thus believed they were watching different types of protests—formed perceptions comparably different from subjects who shared their cultural worldviews.
In line with these opposing perceptions, the results in the two conditions produced mirror-image states of polarization on whether the behavior of the protestors met the factual preconditions for liability.
But that outcome—an increased state of political polarization, in effect, in “beliefs”—is not, in my view, an essential one under the PMRP design. Indeed, if the issue featured in a study is familiar (like whether human beings are causing climate change, or whether permitting individuals to carry concealed firearms in public increases or decreases crime), we shouldn’t expect a one-shot exposure to evidence in the lab to change subjects' “positions.”
The only thing that matters is whether subjects of opposing outlooks opportunistically shifted the weight (or in Bayesian terms, the likelihood ratio) they assigned to one and the same piece of evidence based on its congruence with their political predispositions. If that’s how individuals of opposing cultural identities behave outside the lab, then contrary to what would occur under a Bayesian model of information processing they will not converge on politically contested facts no matter how much valid evidence they are furnished with.
Or they won’t, unless & until something is done in the world that changes the stake individuals with such outlooks have in conforming their assessment of evidence to the positions then associated with their cultural identities (Kahan 2015).
The PMRP design is definitely not the only one that validly measures politically motivated reasoning. Indeed, the consistency of findings of studies that reflect the PMRP design and those based on other designs (e.g., Binning, Brick, Cameron, Cohen, & Sherman 2015; Nyhan, Reifler & Ubel 2015; Druckman & Bolsen 2011; Bullock 2007; Cohen 2003) furnishes more reason for confidence that the results of both are valid. Nevertheless, the test that the PMRP design is self-consciously constructed to pass—demonstration that individuals are opportunistically adjusting the weight they assign evidence to conform it to their political identities—supplies the proper standard for assessing whether the design of any particular study supports an inference of politically motivated reasoning.
Binning, K.R., Brick, C., Cohen, G.L. & Sherman, D.K. Going Along Versus Getting it Right: The Role of Self-Integrity in Political Conformity. Journal of Experimental Social Psychology 56, 73-88 (2015).
Bolsen, T., Druckman, J.N. & Cook, F.L. The influence of partisan motivated reasoning on public opinion. Polit. Behav. 36, 235-262 (2014).
Bullock, J. The enduring importance of false political beliefs. Unpublished Manuscript, Stanford University (2007).
Bullock, J.G. Partisan Bias and the Bayesian Ideal in the Study of Public Opinion. The Journal of Politics 71, 1109-1124 (2009).
Cohen, G.L. Party over Policy: The Dominating Impact of Group Influence on Political Beliefs. J. Personality & Soc. Psych. 85, 808-822 (2003).
Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011).
Druckman, J.N., Fein, J. & Leeper, T.J. A source of bias in public opinion stability. American Political Science Review 106, 430-454 (2012).
Druckman, J.N. The Politics of Motivation. Critical Review 24, 199-216 (2012).
Gerber, A. & Green, D. Misperceptions about Perceptual Bias. Annual Review of Political Science 2, 189-210 (1999).
Kahan, D. M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press).
Kahan, D. M. What is the “science of science communication”? J. Sci. Comm., 14(3), 1-12 (2015).
Kahan, D. M., Hoffman, D. A., Braman, D., Evans, D., & Rachlinski, J. J. They Saw a Protest : Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev., 64, 851-906 (2012).
Nyhan, B. & Reifler, J. The roles of information deficits and identity threat in the prevalence of misperceptions. (2015), http://www.dartmouth.edu/~nyhan/opening-political-mind.pdf
Scurich, N. & Shniderman, A.B. The Selective Allure of Neuroscientific Explanations. PLoS One 9 (2014).
Uhlmann, E.L., Pizarro, D.A., Tannenbaum, D. & Ditto, P.H. The motivated use of moral principles. Judgment and Decision Making 4 (2009).
The paper I posted “yesterday”—“The Politically Motivated Reasoning Paradigm”—is mainly about what “politically motivated reasoning” is and how to design studies to test whether it is affecting citizens’ assessment of evidence and by how much.
The paper is concerned, in particular, with two confounds—alternative explanations, essentially—that typically constrain the inferences that can be drawn from such studies. The problems are heterogeneous priors and pretreatment effects (Druckman, Fein & Leeper 2012; Druckman 2012; Bullock 2009; Gerber & Green 1999).
Rather than describe these constraints abstractly, let me try to illustrate the problem they present.
Imagine a researcher is doing an experiment on “politically motivated reasoning”—the asserted tendency of individuals to conform their assessment of evidence on disputed risks or other policy-relevant facts to the positions that are associated with their political outlooks.
She collects information on the subjects' “beliefs” in, say, “human caused global warming” and the strength of those beliefs (reflected in their reported probability that humans are the principal cause of it). She then presents the subjects with evidence—in the form of a study that suggests human activity is the principal cause of global warming--and measures their beliefs and their confidence in those beliefs again.
This is what she observes:
Obviously, the subjects have become even more sharply divided. The difference in the proportion of Democrats and Republicans who accept AGW widened, as did the difference in their respective estimates of the probability of AGW.
Does the result support an inference that the subjects selectively credited or discredited the evidence consistent with their political predispositions?
The claim that individuals are engaged in “politically motivated reasoning” implies they aren’t assessing the information in an unbiased manner, i.e., one uninfluenced by the relationship between that information and outcomes congenial to their political views.
We can represent this kind of “unbiased” information processing in a barebones Bayesian model, in which individuals revise their existing belief in the probability of a hypothesis, expressed in odds, by a factor equivalent to how much more consistent the new information is with that hypothesis than with a rival one. That factor is known as the “likelihood ratio,” and conceptually speaking reflects the “weight” of the new information with respect to the competing hypotheses.
The distinctive feature of “politically motivated reasoning” is the endogeneity of the likelihood ratio and individuals’ political predispositions. The political congeniality of crediting the evidence determines the weight they assign it. Because “whose side does this evidence support—yours or mine?” is a criterion unrelated to its validity, individuals who reason this way will fail to converge on the best understanding of the best available evidence.
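This barebones model is easy to sketch in code. The following Python snippet (my own illustration; the function names and numbers are hypothetical, not drawn from any study) contrasts unbiased updating with updating in which the likelihood ratio is endogenous to political congeniality:

```python
def bayes_update(prior_odds, likelihood_ratio):
    # Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio.
    return prior_odds * likelihood_ratio

def motivated_lr(true_lr, congenial):
    # Politically motivated reasoning: the weight given the evidence depends
    # on whose side it supports, not on its validity. Here an uncongenial
    # piece of evidence is simply dismissed (LR = 1).
    return true_lr if congenial else 1.0

# Unbiased processing: evidence with LR = 3 moves everyone toward the
# supported hypothesis, whatever their priors.
weak_skeptic   = bayes_update(0.5, 3)    # 0.5:1 -> 1.5:1, flips past even odds
strong_skeptic = bayes_update(0.001, 3)  # 1000:1 against -> ~333:1 against

# Motivated processing: the same evidence moves only those it favors.
unmoved = bayes_update(0.001, motivated_lr(3, congenial=False))  # stays at 0.001
```

Notice that under unbiased updating the strong skeptic still rejects the hypothesis after revising; failing to "change one's mind" is not, by itself, evidence of bias.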
But in the hypothetical study I described, we really don’t know if that’s happening. Certainly, we would expect to see a result like the one reported—partisans becoming even more “polarized” as they examine the “same” evidence--if they were engaged in politically motivated reasoning.
But we could in fact see exactly this dynamic consistent with the unbiased, Bayesian information-processing model.
As a simplification, imagine the members of a group of deliberating citizens, Rita, Ron, and Rose—all of whom are Republicans—and Donny, Dave, Daphne—all Democrats. Each has a “belief” about the contribution of human beings to “human caused climate change,” and each has a sense of how confident they are about their beliefs—a sensibility we can represent in terms of how probable they think it is (expressed in odds) that human beings are the principal cause of climate change.
Now imagine that they are shown a study. The study presents evidence supporting the conclusion that humans are the principal cause of climate change.
Critically, all of the individuals in this group agree about the weight properly afforded the evidence in the study!
They all agree, let’s posit, that the study has modest weight—a likelihood ratio of 3, let’s say, which means that it is three times more consistent with the hypothesis that human beings are responsible for climate change than with the contrary hypothesis (don’t confuse likelihood ratios with “p-values” please; the latter have nothing to do with the inferential weight evidence bears).
In other words, none of them adjusts the likelihood ratio or weight afforded to the evidence to fit their predispositions.
Nevertheless, the results of the hypothetical study I described could still display the polarization the researcher found!
This table shows how:
First, the individuals in this "sample" started with different priors. Daphne, e.g., put the odds that human beings were causing climate change at 2:1 against (0.5:1 in favor) before she got the information. Rita’s prior odds were 1000:1 against (0.001:1 in favor).
When they both afforded the new information a likelihood ratio of 3, Daphne flipped from the view that human beings “probably” weren’t responsible for climate change to the view that they probably were (1.5:1 or 3:2 in favor). But because Rita was more strongly convinced that human beings weren’t causing climate change, she persisted in her belief that humans probably weren’t responsible for climate change even after appropriately adjusting her odds (from 1000:1 against to about 333:1 against) (Bullock 2009).
Second, the individuals in our sample started with differing amounts of knowledge about the existing evidence on climate change.
In particular, Ron and Rose, it turns out, already knew about the evidence that the researcher showed them in the experiment! That's hardly implausible: members of the public are constantly being bombarded with information on climate change and similarly contentious topics. Their priors—10:1 against human-caused climate change, and 2:1 in favor, respectively--already reflected their unbiased (I’m positing) assessment of that information (or its practical equivalent).
They thus assigned the evidence a likelihood ratio of “1” in reporting their "after evidence" beliefs in the study not because they were conforming the likelihood ratio to their predispositions—indeed, they agree that the evidence is 3x more consistent with the hypothesis that humans are causing climate change than that they are not—but because their priors already reflected having given the information that weight when they previously encountered it in the real world.
If the “outcome variable” of the study is “what percentage of Republicans and Democrats think human activity is a principal cause of climate change,” then we will see polarization even with Bayesian information processing—i.e., without the sort of selective crediting of information that is the signature of politically motivated reasoning--because of the heterogeneity of the group members' priors.
Likewise, if we examine the “mean” probabilities assigned to AGW by the Democrats and Republicans, we find the differential grew in the information-exposed condition. The reason, however, wasn't differences in how much weight they gave the information, but pre-treatment (pre-study) differences in their exposure to information equivalent to that conveyed to them in the experiment (Druckman, Fein & Leeper 2012).
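The "mock data" in this example can be reproduced with a short Python script. The prior odds for Rita, Ron, Rose, and Daphne are the ones posited above; priors for Donny and Dave aren't specified in the text, so the values below are my own illustrative assumptions:

```python
def posterior(prior_odds, lr):
    # Odds form of Bayes' rule.
    return prior_odds * lr

# (party, prior odds in favor of AGW, LR applied in the study).
# Ron and Rose had already seen the evidence before the study, so they
# apply LR = 1 (the "pretreatment" confound); everyone else applies the
# commonly agreed-on LR of 3.
subjects = {
    "Rita":   ("R", 0.001, 3),  # 1000:1 against
    "Ron":    ("R", 0.1,   1),  # 10:1 against; pretreated
    "Rose":   ("R", 2.0,   1),  # 2:1 in favor; pretreated
    "Donny":  ("D", 5.0,   3),  # illustrative assumption
    "Dave":   ("D", 10.0,  3),  # illustrative assumption
    "Daphne": ("D", 0.5,   3),  # 2:1 against
}

for party in ("D", "R"):
    before = sum(p == party and prior > 1
                 for name, (p, prior, lr) in subjects.items())
    after = sum(p == party and posterior(prior, lr) > 1
                for name, (p, prior, lr) in subjects.items())
    print(f"{party}: accepts AGW before = {before}/3, after = {after}/3")

# The partisan gap widens (D: 2/3 -> 3/3; R: stuck at 1/3) even though
# every subject gave the evidence an unbiased weight.
```

Only Daphne crosses even odds, so the gap between the parties' "percentage accepting AGW" grows with no politically biased weighing of the evidence anywhere in the sample.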
In sum, given the study design, we can’t draw confident inferences that the subjects engaged in politically motivated reasoning. They could have. But because of the confounds of heterogeneous priors and pretreatment exposure to information, we could have ended up with exactly these results even if they were engaged in unbiased, Bayesian information processing.
To draw confident inferences, then, we need a better study design for politically motivated reasoning—one that avoids these confounds.
I describe that design in the “Politically Motivated Reasoning Paradigm” paper. I call it the “Politically Motivated Reasoning Paradigm” (PMRP) design.
I’ll say more about it . . . “tomorrow”!
Bullock, J.G. Partisan Bias and the Bayesian Ideal in the Study of Public Opinion. The Journal of Politics 71, 1109-1124 (2009).
Druckman, J.N. The Politics of Motivation. Critical Review 24, 199-216 (2012).
Druckman, J.N., Fein, J. & Leeper, T.J. A source of bias in public opinion stability. American Political Science Review 106, 430-454 (2012).
Gerber, A. & Green, D. Misperceptions about Perceptual Bias. Annual Review of Political Science 2, 189-210 (1999).
A one-table version of the "mock data" illustrating how "heterogeneous priors" & "pretreatment effects" can defeat an inference of politically motivated reasoning in a study that treats "change in belief" as the outcome measure. Just one of the improvements in Politically Motivated Reasoning Paradigm originating in the helpful suggestions of generous commentators who've downloaded & read the draft.
What is it, how do you measure it, is it ideologically symmetric, do any of the herbal supplements advertised as counteracting it really work, etc. Take a look & find out.
Still time for revisions, so comments welcome!
"*Scientists* & identity-protective cognition? Well, on the one hand ... on the other hand ... on the *other* other hand ..." A fragment
From something I'm working on. I'll post the rest of it "tomorrow," in fact. But likely this section will end up on the cutting room floor (that's okay; there's lots of stuff down there & eventually I expect to find a use for most of it someplace; it's a bit of a fire hazard, though . . . .)
6. Professional judgment
Ordinary members of the public predictably fail to get the benefit of the best available scientific evidence when their collective deliberations are pervaded by politically motivated reasoning. But even more disturbingly, politically motivated reasoning might be thought to diminish the quality of the best scientific evidence available to citizens in a democratic society (Curry 2013).
Not only do scientists—like everyone else—have cultural identities. They are also highly proficient in the forms of System 2 information processing known to magnify politically motivated reasoning. Logically, then, it might seem to follow that scientists’ factual beliefs about contested societal risks are likely skewed by the stake they have in conforming information to the positions associated with their cultural groups.
But a contrary inference would be just as “logical.” The studies linking politically motivated reasoning with the disposition to use System 2 information processing have been conducted on general public samples, none of which would have had enough scientists in them to detect whether being one matters. Unlike nonscientists with high CRT or Numeracy scores, scientists use professional judgment when they evaluate evidence relevant to disputed policy-relevant facts. Professional judgment consists in habits of mind, acquired through training and experience and distinctively suited to specialized forms of decisionmaking. For risk experts, those habits of mind confer resistance to many cognitive biases that can distort the public’s perceptions (Margolis 1996). It is perfectly plausible to believe that one of the biases that professional judgment can protect risk experts from is “politically motivated reasoning.”
Here, too, neither values nor positions on disputed policies can help decide between these competing empirical claims. Only evidence can. To date, however, there are few studies of how scientists might be affected by politically motivated reasoning, and the inferences they support are equivocal.
Some observational studies find correlations between the positions of scientists on contested risk issues and their cultural or political orientations (Bolsen, Druckman, & Cook 2015; Carlton, Perry-Hill, Huber & Prokopy 2015). The correlations, however, are much less dramatic than ones observed in general-population samples. In addition, with one exception (Slovic, Malmfors et al. 1995), these studies have not examined scientists’ perceptions of facts in their own domains of expertise.
This is an important point. Professional judgment inevitably comprises not just conscious analytical reasoning proficiencies but perceptive sensibilities that activate those proficiencies when they are needed (Bedard & Biggs 1991; Marcum 2012). Necessarily preconscious (Margolis 1996), these sensibilities reflect the assimilation of the problem at hand to an amply stocked inventory of prototypes. But because these prototypes reflect the salient features of problems distinctive of the expert’s field, the immunity from bias that professional judgment confers can’t be expected to operate reliably outside the domain of her expertise (Dane & Pratt 2007).
A study that illustrates this point examined legal professionals. In it, lawyers and judges, as well as a sample of law students and members of the public, were instructed to perform a set of statutory interpretation problems. Consistent with the PMRP design, the facts of the problems—involving behavior that benefited either illegal aliens or “border fence” construction workers; either a pro-choice or pro-life family counseling clinic—were manipulated in a manner designed to provoke responses consistent with identity-protective cognition in competing cultural groups. The manipulation had exactly that effect on members of the public and on law students. But it had no such effect on either judges or lawyers: despite the ambiguity of the statutes and the differences in their own cultural values, those study subjects converged in their responses, just as one would predict if one expected their judgments to be synchronized by the common influence of professional judgment. Nevertheless, this relative degree of resistance to identity-protective reasoning was confined to legal-reasoning tasks: the judges' and lawyers’ respective perceptions of disputed societal risks—from climate change to marijuana legalization—reflected the same identity-protective patterns observed in the general public and student samples (Kahan, Hoffman, Evans, Lucci, Devins & Cheng in press). Extrapolating, then, we might expect to see the same effect in risk experts: politically motivated divisions on policy-relevant facts outside the boundaries of their specific field of expertise; but convergence guided by professional judgment inside of them.
Or alternatively we might expect convergence not on positions that are true necessarily but that are so intimately bound up with a field’s own sense of identity that acceptance of them has become a marker of basic competence (and hence a precondition of recognition and status) within it. In Koehler (1993), scientists active in either defending or discrediting scientific proof of “parapsychology” were instructed to review the methods of a fictional ESP study. The result of the study was experimentally manipulated: half the scientists got a version that purported to find evidence supporting ESP; the other half, one that purported to find evidence not supporting it. The scientists’ assessments of the quality of the study’s methods turned out to be strongly correlated with the fit between the represented result and the scientists’ existing positions on the scientific validity of parapsychology—although Koehler found that this effect was in fact substantially more dramatic among the “skeptic” than the “non-skeptic” scientists.
Koehler’s study reflects the core element of the PMRP design: the outcome measure was the weight that members of opposing groups gave to one and the same piece of evidence conditional on the significance of crediting it. Because the significance was varied in relation to the subjects’ prior beliefs and not their stake in some goal independent of forming an accurate assessment, the study can be and normally is understood to be a demonstration of confirmation bias. But obviously, the “prior beliefs” in this case were ones integral to membership in opposing groups, the identity-defining significance of which for the subjects was attested to by how much time and energy they had devoted to promoting public acceptance of their respective groups’ core tenets. Extrapolating, then, one might infer that professional judgment might indeed fail to insulate from the biasing effects of identity-protective cognition scientists whose professional status has become strongly linked with particular factual claims.
So we are left with only competing plausible conjectures. There’s nothing at all unusual about that. Indeed, it is the occasion for empirical inquiry—which here would take the form of the use of the PMRP design or one of equivalent validity to assess the vulnerability of scientists to politically motivated reasoning—both in and outside of the domains of their expertise, and with and without the pressure to affirm “professional-identity-defining” beliefs.
Bedard, J.C. & Biggs, S.F. Pattern recognition, hypotheses generation, and auditor performance in an analytical task. Accounting Review, 622-642 (1991).
Bolsen, T., Druckman, J.N. & Cook, F.L. Citizens’, scientists’, and policy advisors’ beliefs about global warming. The ANNALS of the American Academy of Political and Social Science 658, 271-295 (2015).
Carlton, J.S., Perry-Hill, R., Huber, M. & Prokopy, L.S. The climate change consensus extends beyond climate scientists. Environmental Research Letters 10, 094025 (2015).
Dane, E. & Pratt, M.G. Exploring Intuition and its Role in Managerial Decision Making. Academy of Management Review 32, 33-54 (2007).
Kahan, D.M., Hoffman, D.A., Evans, D., Devins, N., Lucci, E.A. & Cheng, K. 'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment. U. Pa. L. Rev. 164 (in press).
Marcum, J.A. An integrated model of clinical reasoning: dual-process theory of cognition and metacognition. Journal of Evaluation in Clinical Practice 18, 954-961 (2012).
Margolis, H. Dealing with risk: why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, IL, 1996).
Margolis, H. Patterns, thinking, and cognition: a theory of judgment (University of Chicago Press, Chicago, 1987).
Slovic, P., Malmfors, T., Krewski, D., Mertz, C.K., Neil, N. & Bartlett, S. Intuitive toxicology .2. Expert and lay judgments of chemical risks in Canada. Risk Analysis 15, 661-675 (1995).
What is "PMRP," you say? Well, here's your answer.
Disentanglement principle corollary no. 16a: "You don't have to choose ... between being a reality tv star & being excited to learn what science knows (including what it knows about how people come to know what's known by science)"
Sometimes 1 or 2 of the 14 billion regular followers of this blog ask, "are there really 14 billion regular followers of this blog?..."
Yeah. There really are!
Sorry for lack of context here, but my guess is that it will become clear enough after a few sentences.
I apologize for disparaging your work at the Society for Risk Analysis session yesterday. You perceived my remarks that way, and on reflection I can see why you did, & why others likely formed the same impression. I truly regret that.
In fact, it wasn’t your work that I meant to be criticizing.
My intention was to respond to the argument you presented (with the admirable degree of clarity I wish I had been able to summon in response) in favor of “practical scholarship.” Because you see, I don’t think the sort of work you defended is either practical or scholarly.
You proposed to those in the room that the empirical study of climate science communication should be evaluated in light of its contribution to a “goal” of promoting a “world war II scale mobilization” of public opinion (I encourage you to post your slides; they were very well done).
Research aimed at identifying the significance of values & science comprehension for public conflict on climate change (the subject of the panel we were both on; great new research unveiled by the Shi, Visschers, Siegrist team!) doesn’t meet this criterion, you made clear. Indeed, it detracts from it, because, in your opinion, it implies change will take a “long time” (I disagree it implies any such thing but that’s another matter).
As an example of research that is “practical,” you offered your own, which you characterized as aimed at convincing democratic representatives that their prospects for re-election depend on honoring the sorts of “public preferences” revealed by the structured preference-elicitation methods you described.
You also stated that your work, along with that of others, is intended to “create cover” for officials to take positions supportive of climate change policies (a common refrain among researchers who generate endless streams of public opinion polls purporting to find that there is in fact widespread public consensus for one or another climate change mitigation initiative).
We should all pitch in to help achieve this result, you exhorted.
Again, to be clear, my point is that this vision of empirical work on science communication is neither “scholarly” nor “practical.”
Scholarship—of the empirical variety, in any event—tries to help people figure out what’s true, particularly under conditions in which there are multiple plausible understandings of phenomena of consequence. That’s what the scholarship on the relationship between “values” and “science literacy” that you disparaged is about. The occasion for that scholarly inquiry is a practical one: to figure out what sorts of dynamics are blocking public engagement with the best available evidence on climate change.
What’s definitely not practical (as Theda Skocpol has noted) is to think that public opinion researchers can be mobilized into a project to “show” elected officials what the public “really” wants.
Elected officials are in the profession of satisfying the expectations of their constituents. They invest plenty of money, most of the time wisely, to figure out how to do that.
They know that surveys purporting to show that a “majority” of Republicans support “the EPA's greenhouse gas emission standards” are measuring non-opinion. They know too that the sort of preference-elicitation methods you demonstrated—however truly valuable they might be for learning about cognition—are not modeling the decisionmaking dynamics that determine election outcomes.
Most importantly, they know—because those who agree with your conception of “practical scholarship” are constantly proclaiming this-- that your goal is to create an impression in these actors for your own purposes: to help “shove” them into supporting a particular set of policies (enough with these “nudges” already, you inspiringly proclaimed: we are facing the moral equivalent of Hitler invading Europe!), not help them get re-elected.
They know, in short, that “non-opinion” survey methods are actually intended to message them! And I would have sort of thought this was obvious, but it’s not a very good “messaging strategy” to incessantly go on & on within earshot of Republicans about “strategies” for “overcoming” the “Republicans' cognitive resistance to climate mitigation.”
The targeted politicians (Democrat and Republican) therefore sensibly discount (ignore, really) everything produced by researchers who are following this "message the politicians" strategy. They listen instead to the professionals, who tell them something very different from what these "practical scholars" are saying (over & over & over; "keep repeating--that it hasn't worked yet is proof that we just need to do it for longer!"--another refrain inside this bubble). Politicians who take what these researchers say at face value, they've observed, get knocked out of office.
I believe there is plenty that science communication researchers can do to help actual people, including elected officials, promote science-informed decisionmaking relating to climate change by collaborating with them to adapt and test lab insights to their real-world problems.
The form of research that I think is best for that aims to help those decisionmakers change the meaning of climate change in their communities, so that discussions of it no longer are perceived as being about “whose side are you on” but instead about “what do we know, what more do we need to know, and what should we do.”
That research doesn't try to conjure a new world into existence by disseminating "studies" that constantly purport to find it already exists.
It tries to supply people who actually are acting to make such a world with empirical information that they can use to exercise their judgment as best as they can.
Indeed, what motivated my rebuke of you yesterday was frustration at how closely aligned the program you defended (very clearly, very articulately) is with divisive forms of partisan advocacy that actually perpetuate the social meanings that make climate change a "struggle for the soul of America" rather than a practical problem that all Americans, regardless of their cultural identities, have a common interest in fighting.
Frustration too at how much the sort of "practical" "scholarship" you called for is distracting and diverting and confusing people who are looking to empirical researchers for help.
At how self-defeating it obviously is ever to propose that a criterion other than “figuring out & sharing one’s best understanding of the truth on contested empirical issues” could possibly be practical.
How twisted it is to call that singularly unscientific orientation “science communication” research!
It's pretty simple really: Tell people what they need to know, not what they want to hear.
That’s both ethical and practical.
Again, sorry I disparaged your scholarly work, which I think can teach people a lot about how people think.
The intended target was your conception of "practical scholarship." And I did very much intend to be critical of that view and of those who are propagating the mindset you very much evinced in your talk.
p.s. My slides from my talk on the challenge of "unconfounding" knowledge & identity in measuring "climate change science comprehension."
What to do when stuck in Ft. Lauderdale airport b/c missing connecting flight to Keys?....
See what happens when the "Rules of Evidence Are Impossible CBR Simulator" is expanded from "8 items of proof"-size cases to "10 items of proof"-size ones!
Lots of people, no doubt thinking of the wildly popular "Miller-Sanjurjo Turing Machine" (MSTM), have been writing asking if a version of the CBR simulator will be made available for home use by CCPB subscribers... Stay tuned!