
Tuesday
Aug 29, 2017

How to see replication (protective eyegear required)

This is the key finding from “Rumors of ‘non-replication’. . . Greatly Exaggerated” (Kahan & Peters 2017).

 

Basically the idea was to display the essential comparative information from the studies in commensurable terms and in a form as economical as possible.

What is the essential information?

Well, remember, in both studies we have separate conditions in which the covariance-detection problem is being solved (click on the inset to refresh your memory of how the problem is set up).

First, there’s the politically neutral skin rash condition, in which, not surprisingly, high-numeracy subjects perform much better than low-numeracy ones.  (Panels (A) and (D)).

Second, there’s the “identity affirmed” condition.  That means that from the point of view of “one side”—either left-leaning subjects or right-leaning ones—the result in the covariance-detection problem, properly interpreted, generates an ideologically congenial answer on the effect of a ban on carrying concealed firearms.

For left-leaning subjects, that result would be that crime increases, whereas for the right-leaning ones, the identity-affirming result would be that crime actually decreases. By aggregating the responses of both right- and left-leaning subjects for whom the experiment produced this result, we can graph the impact of it in one panel for each study—(B) and (E).

Note that in those two panels, the high-numeracy subjects continue to outperform the low-numeracy ones. In short, high-numeracy subjects are better at ferreting out information that supports their “side” than are low-numeracy ones.

Of course, where one side (left- or right-leaning) is in a position to see the result as identity affirming, the other necessarily is in a position to  see the result as identity threatening. That information, too, can be plotted on one graph per study ((C) & (F)) if the responses of ideologically diverse subjects who face that situation are aggregated.

Note that, in contrast with the preceding conditions, high-numeracy subjects no longer do significantly better than low-numeracy ones, either statistically or practically.  Either they have been lulled into the characteristic “heuristic” mode of information processing or (more likely) they are using their cognitive-proficiency advantage to “rationalize” selecting the “wrong” answer.

Whichever it is, we now have a model of how partisans exposed to the same information assign opposing significance to it and thus end up even more polarized. In Bayesian terms, the reason isn’t that they have different priors; it’s that they are assigning different likelihood ratios—i.e., different weights—to one and the same piece of evidence (Kahan, Peters, Dawson & Slovic 2017; Kahan 2016).
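To see the Bayesian point in miniature, here is a toy calculation (the numbers are invented for illustration, not taken from the study): two subjects start from identical priors but, through identity-protective reasoning, assign opposing likelihood ratios to the same piece of evidence, and so end up further apart than they started.

```python
# Toy illustration: identical priors, opposing likelihood ratios (numbers are made up).
def posterior_prob(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior probability after updating prior odds by a likelihood ratio."""
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior_odds = 1.0     # both subjects start at 1:1 on "the gun ban increases crime"
lr_right = 3.0       # right-leaning subject treats the evidence as supporting that claim
lr_left = 1 / 3      # left-leaning subject treats the *same* evidence as cutting against it

print(posterior_prob(prior_odds, lr_right))   # 0.75
print(posterior_prob(prior_odds, lr_left))    # 0.25 -- polarization despite shared priors
```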

That’s the essential information. What made the presentation relatively economical was the aggregation of responses of right- and left-leaning subjects. The effect could be shown for each “side” separately, but that would require either doubling the number of graphs or creating a single super-duper busy one.

Note, too, that the amenability of the data to this sort of reporting was facilitated by running Monte Carlo simulations, which, by generating 5,000 or so sets of results for each model, made it possible to represent the results in each condition as a probability density distribution for subjects whose political outlooks and numeracy varied in the manner most pertinent to the study hypotheses (King, Tomz & Wittenberg 2000).
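For readers who want the mechanics, here is a minimal sketch of the King–Tomz–Wittenberg simulation approach. The coefficients, covariance matrix, and variable names below are stand-ins of my own, not values from the paper; the point is only how 5,000 simulated parameter draws become a probability density for a predicted response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logit estimates for Pr(correct response) on an intercept, numeracy
# (z-scored), and an identity-threat condition dummy -- illustrative numbers only.
beta_hat = np.array([-0.25, 0.55, -0.40])
V_hat = np.diag([0.02, 0.01, 0.03])        # stand-in for the estimated covariance matrix

# King, Tomz & Wittenberg (2000): represent estimation uncertainty by drawing parameter
# vectors from their (approximate) sampling distribution.
sims = rng.multivariate_normal(beta_hat, V_hat, size=5000)

def pr_correct(sim_betas, numeracy, threat):
    """Predicted probability of a correct response at a given covariate profile."""
    x = np.array([1.0, numeracy, threat])
    return 1 / (1 + np.exp(-sim_betas @ x))

# 5,000 simulated predicted probabilities for a high-numeracy (+1 SD) subject in the
# identity-threatened condition; the spread of these values is what gets plotted as a
# probability density distribution in the figure.
dens = pr_correct(sims, numeracy=1.0, threat=1.0)
print(dens.mean(), np.percentile(dens, [2.5, 97.5]))
```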

Pretty fun, don’t you think?

References

Kahan, D.M. & Peters, E. Rumors of the Non-replication of the “Motivated Numeracy Effect” Are Greatly Exaggerated. CCP Working paper No. 324 (2017) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3026941

Kahan, D.M. The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. in Emerging Trends in the Social and Behavioral Sciences (John Wiley & Sons, Inc., 2016).

Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017).

King, G., Tomz, M. & Wittenberg, J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000).

Monday
Aug 28, 2017

"Non-replicated"? The "motivated numeracy effect"?! Forgeddaboutit! 

Limited edition--hurry up & get yours now for free!


Wednesday
Aug 23, 2017

The earth is (still) round, even at p < 0.005

In a paper forthcoming in Nature Human Behaviour (I think it is still “in press”), a large & distinguished group of social scientists propose nudging (shoving?) the traditional NHST threshold from p ≤ 0.05 to p ≤ 0.005. A response to the so-called “replication crisis,” this “simple step would immediately improve the reproducibility of scientific research in many fields,” the authors (all 72 of them!) write.

To disagree with a panel of experts this distinguished & this large is a daunting task.  Nevertheless, I do disagree.  Here’s why:

1. There is no reason to think a p-value of 0.005 would reduce the ratio of valid to invalid studies; it would just make all studies—good as well as bad—cost a hell of a lot more.

The only difference between a bad study at p ≤ 0.05 and a bad study at p ≤ 0.005 is sample size.  The same for a good study in which p ≤ 0.005 rather than p ≤ 0.05.

What makes an empirical study “good” or “bad” is the quality of the inference strategy—i.e., the practical logic that connects measured observables to the not-directly observables of interest.

If a researcher can persuade reviewers to accept a goofy theory for a bad study (say, one on the impact of “himmicanes” on storm-evacuation advisories, the effect of ovulation on women’s voting behavior, or the influence of egalitarian sensibilities on the rate of altercations between economy-class and business-class airline passengers) at p ≤ 0.05, then the only thing that researcher has to do to get the study published at p ≤ 0.005 is collect more observations.

Of course, because sample recruitment is costly, forcing researchers to recruit massive samples will make it harder for researchers to generate bad studies.

But for the same reason, a p ≤ 0.005 standard will make it much harder for researchers doing good studies—ones that rest on plausible mechanisms—to generate publishable papers, too.
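To put a rough number on that cost, here is a back-of-the-envelope power calculation (a sketch assuming a conventional two-group design and a smallish standardized effect; the figures are mine, not the article's):

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for alpha in (0.05, 0.005):
    # Per-group sample size needed to detect d = 0.3 with 80% power, two-sided test.
    n = solver.solve_power(effect_size=0.3, alpha=alpha, power=0.8, alternative="two-sided")
    print(f"alpha = {alpha}: n per group ~ {n:.0f}")

# Moving the threshold from .05 to .005 inflates the required sample by roughly 70% --
# for good and bad studies alike.
```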

Accordingly, to believe that p ≤ 0.005 will improve the ratio of good studies to bad, one has to believe that scholars doing good studies will be more likely to get their hands on the necessary research funding than will scholars doing bad studies.

That’s not particularly plausible: if it were, then funders would be favoring good over bad research already—at p ≤ 0.05.

At the end of the day, a p ≤ 0.005 standard will simply reduce the stock of papers deemed publishable—period—with no meaningful impact on the overall quality of research.

2. It’s not the case that a p ≤ 0.005 standard will “dramatically reduce the reporting of false-positive results—studies that claim to find an effect when there is none—and so make more studies reproducible.”

The mistake here is to think that there will be fewer borderline studies at p ≤ 0.005 than at p ≤ 0.05.

P is a random variable. Thus, if one starts with a p ≤ 0.05 standard for publication, there is roughly a 50% chance that a study finding just “significant” at p = 0.05 will come back “nonsignificant” (p > 0.05) on the next trial, even assuming both studies were conducted identically & flawlessly. (That so many replicators don’t seem to get this boggles one’s mind.)

If the industry norm is adjusted to p ≤ 0.005, we’ll simply see another random distribution of p-values, now centered on p = 0.005. So again, if a paper reports a finding at p = 0.005, there will be a 50% chance that the next trial—the replication—will produce a result that’s not significant at p ≤ 0.005. . . .
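The 50% figure is easy to check by simulation. Here is a sketch under the most charitable assumptions: the original study observed exactly p = .05, the true effect is exactly the one observed, and the replication is run identically (the sample size and test are arbitrary choices of mine):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n, reps = 100, 20_000
# An original two-sample study that landed exactly on p = .05 implies z = 1.96; assume
# (charitably) that the true standardized mean difference is exactly the one observed.
d_true = stats.norm.ppf(0.975) * np.sqrt(2 / n)

x = rng.normal(d_true, 1, size=(reps, n))   # "treatment" samples in each replication
y = rng.normal(0.0, 1, size=(reps, n))      # "control" samples
p_vals = stats.ttest_ind(x, y, axis=1).pvalue

print((p_vals > 0.05).mean())   # ~0.5: half the flawless replications miss the threshold
```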

Certifying reproducibility won’t be any “easier” or any more certain. And for the reasons stated above, there will be no more reason to assume that studies that either clear or just fall short of clearing the bar at p ≤ 0.005 are any more valid  than ones that occupy the same position in relation to p < 0.05.

3. The problem of NHST cannot be fixed with more NHST.

Finally and most importantly, the p ≤ 0.005 standard misdiagnoses the problem behind the replication crisis: the malignant craft norm of NHST.

Part of the malignancy is that mechanical rules like p ≤ 0.005 create a thought-free, “which button do I push” mentality: researchers expect publication for findings that meet the standard whether or not the study is internally valid (i.e., even if it is goofy). They don’t think about how much more probable a particular hypothesis is than the null—or even whether the null is uniquely associated with some competing theory of the observed effect.

A practice that would tell us exactly those things is better not only substantively but also culturally, because it forces the researcher to think about exactly those things.

Ironically, it is clear that a substantial fraction of the “Gang of 72” believes that p-value-driven NHST should be abandoned in favor of some type of “weight of the evidence” measure, such as the Bayes Factor.  They signed on to the article, apparently, because they believed, in effect, that ratcheting up (down?) the  p-value norm would generate even more evidence of the defects of any sort of threshold for NHST, and thus contribute to more widespread appreciation of the advantages of a “weight of the evidence” alternative.

All I can say about that is that researchers have for decades understood the inferential barrenness of p-values and advocated for one or another Bayesian alternative instead.

Their advocacy has gotten nowhere: we’ve lived through decades of defective null hypothesis testing, and the response has always been “more of the same.”

What is the theory of disciplinary history that predicts a sudden radicalization of the “which button do I push” proletariat of social science?

As intriguing and well-intentioned as the p ≤ 0.005 proposal is, arguments about standards aren’t going to break the NHST norm.

“It must get worse in order to get better” is no longer the right attitude.

Only demonstrating the superiority of a “weight of the evidence” alternative by doing it—and even more importantly teaching it to the next generation of social science researchers—can really be expected to initiate the revolution that the social sciences need.   
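As a small illustration of what “doing it” might look like, here is one widely used weight-of-the-evidence measure that can be computed from nothing more than a reported t statistic and sample size: Wagenmakers’ BIC approximation to the Bayes factor. (This is my example, not a method endorsed in the Nature Human Behaviour piece.)

```python
import numpy as np

def bic_bayes_factor_01(t: float, n_total: int) -> float:
    """BIC approximation (Wagenmakers 2007) to the Bayes factor for a two-sample t-test.
    Values > 1 mean the data favor the null over the alternative."""
    df = n_total - 2
    r2 = t**2 / (t**2 + df)                                  # variance explained by the effect
    delta_bic = n_total * np.log(1 - r2) + np.log(n_total)   # BIC(H1) - BIC(H0)
    return np.exp(delta_bic / 2)

# A "just significant" result: t = 1.97 with 100 subjects per group.
print(round(bic_bayes_factor_01(1.97, 200), 2))   # ~2: the evidence actually leans toward the null
```

On this measure, a bare p ≈ .05 result is weak evidence at best—exactly the sort of thing a reviewer has to think about rather than read off a threshold.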

 

 

Tuesday
Aug 22, 2017

Science literacy & polarization--what replication crisis?

Sunday
Aug 20, 2017

Weekend update: Where I'll be this Fall

Saturday
Aug 12, 2017

Weekend update: "Show one, show all"--policy preference & political knowledge dataset posted

A reader asked if he could have access to the underlying data in  "yesterday's"™ post on age, political knowledge, and policy preferences.  I figured there might be others who could derive utility from them as well.  So if you are one of those persons, you can get access to the data here.

Enjoy! Feel free to use for any purpose, but please credit CCP if you do. 

And for sure let me know if you detect any glitches etc. in any of the files.

Monday
Aug 7, 2017

WSMD? JA!: Curiosity, age, and political polarization

This is approximately the 3,602nd episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

“Loyal listener” @Joshua wonders if the correlation between age and conservatism might be explained by a decline in science curiosity. He was moved to pose this question in part by the interesting constraint that science curiosity imposes on politically motivated reasoning.

It was in the course of trying to construct some helpful models on this question that I came across the data featured in the last couple of posts on age, political knowledge, & partisanship. 

But now let's consider curiosity:

1. The zero-order correlation between science curiosity & age is trivial.  A linear model (again) appears to fit the relationship here pretty well. The correlation between science curiosity (as we measure it) and age is negligible—r = 0.03, p = 0.20.  For purposes of illustration, consider the probability density distributions of science curiosity for three age cohorts:

It therefore seems unlikely that some age-related deficit in science curiosity is contributing much to the oft-observed relationship between age and conservatism.

2. The additive effect of (a) science curiosity and (b) age on the intensification of partisanship appears to be very modest and is driven by the latter. In a multivariate regression, science curiosity and age both make independent, additive contributions to conservatism (they don’t interact—a finding featured in Science Curiosity and Political Information Processing, Tbl. S3). But it is reasonably clear that (b) is responsible for most of the age-conservatism effect.

Consider:

It can be seen here that both science curiosity and age are having an effect.  The impact of the former, however, is uniform across the age continuum; it doesn’t seem to be adding to the conservatism of older citizens in a distinct way.

3. We won’t really be able to make more sense of all this until the effect of science curiosity can be assessed in relation to political knowledge and to the personality traits that inform PT (the “personality theory”). As the last post showed, there was a massive missing-variable bias in my analyses of age, resulting from the omission of political knowledge. Accordingly, I am reluctant to form a strong opinion on the importance of age-related curiosity without taking political knowledge into account. Unfortunately, I don’t have a dataset with both political knowledge and curiosity.

It would also be interesting, I’m sure, to add measures of the “Big 5” personality traits, especially since one of the measures—openness to experience—is sometimes assumed to evince intellectual curiosity generally.

Thursday
Aug 3, 2017

Mystery solved? Age, political knowledge, and political polarization

[Figure: What's going on here? Older partisans are more extreme than younger ones.]

1. Age and polarization. “Yesterday,”™ I reported an intriguing finding on the interaction between age and political polarization. The finding was that partisan polarization increases with age.

This is not the same thing as saying that older citizens are “more conservative.”   That effect is already familiar to political scientists, who debate whether it is a consequence of the personality-shaping effects of aging (the “personality theory” or “PT”) or instead a lasting effect of exposure to the political climate that prevailed at some earlier, more impressionable point in older people’s lives (the “cohort theory” or “CT”).

The effect I reported had to do with the increased intensity of partisanship conditional on age. There may be (or may not be) “more” conservatives in absolute terms among the oldest cohort of citizens. But the data posted suggested that as people age they become more intensely (or reliably?) partisan compared with younger citizens who have the same partisan identity (“conservative Republican,” “liberal Democrat,” etc.).

I was surprised by this result and not sure what to make of it, so I invited feedback.

2. Political knowledge, age, polarization. One explanation, advanced on Twitter and seconded by others off-line, was that “political knowledge” might be correlated with age. 

It’s a well established finding that citizens with greater political knowledge (or sophistication) express political preferences that are more in line with their self-identified political ideology. Older citizens have (necessarily) been around longer and thus had more time to work through the relationship between their political outlooks in general and their stances on particular issues such as climate change and gun control.  This age-dependent coherence between self-reported political outlooks and policy preferences can be expected to manifest itself in the pattern I reported between age and intensity of partisan policy stances.

This explanation—let’s call it the “Kalmoe conjecture”—struck me as interesting but not particularly plausible. To test it, I rummaged around in old CCP datasets until I found one that had both a policy preference battery and a “political knowledge” one.

The latter is conventionally measured with a set of basic civic literacy items, which are well known to predict ideology/policy preference coherence.

The analysis revealed, first, that there is indeed a correlation between age and political knowledge.

Second, like the relationship of age to policy preferences, the relationship between age and political knowledge (measured with a 9-item scale) features more intense political preferences among older than among younger citizens.

Third, when one regresses policy positions on age and political knowledge (as well as the interaction between these two), the relationship between age and intensity of policy positions disappears.  So if one considers, e.g., a young “conservative Republican” and an older one, there is no meaningful difference in the strength and coherence of their opposition to gun control and to mitigation of climate change by restricting carbon emissions. Of course, consistent with zillions of studies, study subjects "high" in political knowledge are more polarized than subjects who are "low"--but that is true irrespective of the subjects' ages.

This is exactly what one would expect under the Kalmoe conjecture.  In effect, once one partials out the covariance of age and political knowledge, age no longer is associated with higher degrees of polarization.  This finding supports the inference that the relationship between age and policy-preference intensity was just a statistical echo of the impact of the greater acquisition of political knowledge as people age.
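For concreteness, the regression described above might be specified like this (a sketch only: the variable names and data file are hypothetical, and the actual CCP models were more elaborate):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names and file name; the posted dataset's variables may differ.
df = pd.read_csv("policy_prefs.csv")

# 'polknow * age' expands to both main effects plus their product, so the test of the
# Kalmoe conjecture is whether the age terms lose significance once political knowledge
# (and its interaction with age) is in the model.
model = smf.ols("policy_intensity ~ polknow * age", data=df).fit()
print(model.summary())
```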

3. Huh?

Wow.

I’m no longer as skeptical of the claim that greater political knowledge accounts for the relationship between age and intensity of policy-preferences.

But I still can’t shake the feeling that there is something wrong with that position.

I think my hesitation is grounded in the highly linear relationship between age and political knowledge. 

I have a hard enough time believing that a 75-year-old person, as a result of greater life experience and reflection thereupon, is more politically sophisticated than a 35-year-old one. But the idea that the former, for exactly the same reasons, is more sophisticated than a 65-year-old seems absurd to me.

It also seems absurd to think that the advantage the 75-year-old has over the 65-year-old is identical to the advantage that the 65-year-old has over the 55-year-old, the 55 over the 45, the 45 over the 35, etc. If political knowledge relentlessly expands over the course of a person’s life, it still must be the case that the marginal growth diminishes over time.

Right?

4. Back to PT vs. CT.

Another thing has happened to me over the course of this foray into age and political polarization: I’m now definitely less skeptical about PT (the “personality theory”) in relation to CT (the “cohort theory”).

This shift also is rooted in the seeming linearity of the effect of age on partisanship.  (This linearity, it is important to point out, is observable in the raw data, not simply in the regression models, which constrain the effect of age to be linear.)

Again, the cohort theory attributes the greater conservatism of older citizens not to the effect of aging on their preferences but rather to the imprinting of the political spirit of the time in which those citizens came of age (presumably in their 20s).

If that’s right, the impact of age shouldn’t be so damn linear.  The relative strength of conservative and liberal sensibilities in the general population presumably ebbs and flows.  If CT is right, then, any trends toward conservatism should be punctuated with trends toward liberalism.  We should see a ragged line, not a straight one, when we plot conservatism in relation to age.

The linearity of the march toward conservatism with age is much more consistent with PT, which before now struck me as more of a just-so story than a plausible account of how political sensibilities change as people grow older.

Or at least that is what I think for now.

What do you make of all this? 

Wednesday
Aug 2, 2017

Culture vs. Cognition: a false dichotomy? Amen!

The fusion of cultural outlooks and cognition of risk has been a basic premise of our research since its inception; it is the defining characteristic of the cultural cognition thesis.

From Kahan, D.M., Slovic, P., Braman, D. & Gastil, J. Fear of Democracy: A Cultural Evaluation of Sunstein on Risk. Harvard Law Review 119, 1071-1109 (2006), pp. 1083-85:

The claim behind cultural cognition is that culture is prior to facts in societal disputes over risk. Normatively, culture might be prior to facts in the sense that cultural values determine what significance individuals attach to the consequences of environmental regulation, gun control, drug criminalization, and the like. But more importantly, culture is cognitively prior to facts in the sense that cultural values shape what individuals perceive the consequences of such policies to be. Individuals selectively credit and dismiss factual claims in a manner that supports their preferred vision of the good society. . . .

Although one can imagine alternative explanations for cultural variation in risk perceptions, cultural cognition offers a distinctively psychometric one. On this view, the impact of cultural worldviews is not an alternative to, but rather a vital component of, the various psychological and social mechanisms that determine perceptions of risk. These mechanisms, cultural cognition asserts, are endogenous to culture. That is, the direction in which they point risk perceptions depends on individuals’ cultural values.

Consider the affect heuristic. Emotional responses to putatively dangerous activities strongly determine risk perceptions, but what determines whether those responses are positive or negative? The answer, according to cultural cognition, is culture: persons’ worldviews infuse various activities — firearm possession, nuclear power generation, red-meat consumption — with despised or valued social meanings, which in turn determine whether individuals react with anxiety or calmness, dread or admiration, toward those activities. This account recognizes, in line with the best psychological accounts, that emotions are not thoughtless surges of affect, but rather value-laden judgments shaped by social norms.

A similar account can be given of probability neglect. Individuals display less sensitivity to the improbability of a bad outcome when that outcome is attended by intensely negative affect. But insofar as the valence and strength of individuals’ affective responses are influenced by their cultural appraisals of putatively dangerous activities (guns, nuclear power plants, drug use, casual sex, etc.), probability neglect will again be culture dependent.

Availability, too, is likely to be endogenous to culture. The magnitude of a perceived risk depends on how readily an individual can recall instances of misfortune associated with that risk. But how likely someone is to take note of such misfortunes and to recall them almost certainly depends on her values: to avoid cognitive dissonance, individuals are likely to attend selectively to information in a way that reinforces rather than undermines their commitment to the view that certain activities (say, gun possession, or economic commerce) are either noble or base.

Culture will also condition the impact of social influences on risk perceptions. Most individuals are not in a position to determine for themselves whether childhood vaccines induce autism, silicone breast implants cause immune system dysfunction, private firearm possession reduces or increases crime, and so on. Accordingly, they must trust others to tell them which risk claims, supported by which forms of highly technical empirical evidence, to believe. And the people they trust, not surprisingly, are the ones who share their cultural worldviews — and who are likely to be disposed to particular positions by virtue of affect, probability neglect, availability, and similar mechanisms. Risk perceptions are thus likely to be uniform within cultural groups and diverse across them. Accordingly, group polarization and cascades are endogenous to culture, too.

Tuesday
Aug 1, 2017

Going to be in Oxford area week of Nov. 20? If so, come to Clarendon lectures!

One of my summer projects--preparing these lectures:


Wednesday
Jul 26, 2017

How age and political outlooks *interact* in formation of policy positions

So what’s going on here?

The answer isn’t that older people are more conservative than younger ones. Graphically, that would look something like this:

This is a well-established pattern. Scholars have advanced two explanations for it. The “personality theory” (PT) holds that various psychological influences cause people to become more conservative as they age (e.g., Cornelis et al. 2009).

The “cohort theory” (CT), in contrast, holds that people tend to form political outlooks that reflect the ideological climate that existed when they were coming of age (late teens to early 20s, basically), & stick with those outlooks over the remainder of their lives (e.g., Ghitza & Gelman 2014; Desilver 2014).   

On the CT account, today’s older conservatives, many of whom became “political grownups” in the Reagan years, are no less conservative than they were when they were younger. At some point, too, we should expect to see an association between age and liberalism as a result of the maturing  of today’s younger liberals, many of whom formed their political outlooks during the Clinton era.

I generally find CT more convincing.

Get your raw data here!

But in any event, the patterns featured in the first graphic above don’t convey information about how political outlooks differ in relation to age. Rather, they reflect how much more likely older people are than younger ones to form a political-outlook-consistent position on various policies, conditional on a shared political outlook.

Thus, a 65 yr. old “conservative Republican” (a “4” and a “6”, respectively, on the five-point ideology measure and seven-point party-identification measure that were combined to form the political-outlook scale) is 13 percentage points (± 8 pct points, LC = 0.95) more likely to oppose “universal healthcare” than a 25 yr. old “conservative Republican.”  The former is also 15 percentage points more likely than the latter (± 9 pct points) to be against use of carbon-emission limits to combat global warming.

What’s more, the same sort of intensification of outlook-consistent preferences shows up for liberals on at least some policies. E.g., a 65 yr. old “liberal Democrat” (“2” and “2” on the outlook scale’s component items) is 11 percentage points (± 6) more likely to support stricter gun control laws.

So the question is, Why are older citizens either more conservative or more liberal in the intensity of their outlook-consistent policy positions than are younger ones?

Maybe someone has already observed this pattern and presented evidence to support his or her answer to the question I’m asking.  Please let me know if you are familiar with such work!

Meanwhile, here are a couple of conjectures:

1.  Cultural identity vs.  policy. Normally we think that labels like “conservative” and “liberal,” as well as identification with one or the other of the two major political parties, imply a set of policy positions. But maybe that assumption is less supportable for recent generations. Maybe younger people view these sorts of designations as the ones that cohere best with their cultural style, even if their policy positions aren’t completely orthodox in relation to them. 

2.  Measurement drift.  Scales like the one I constructed are supposed to use observable indicators—here, how people characterize themselves in political terms—to indirectly measure an unobservable, latent characteristic—here, their political predispositions.  Such a strategy, however, assumes that the indicators have the same relationship to the unobserved characteristic across the entire population whose dispositions one is trying to measure.  Maybe the labels “conservative” and “liberal,” “Republican” and “Democrat,” don’t mean what they used to and thus supply less reliable guidance on what younger people’s policy positions are.
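By way of illustration, here is one common way to build that kind of scale from the two self-report items described above—standardize each indicator and average them. (A sketch only; I don't know that this is the exact aggregation rule used for the CCP outlook scale.)

```python
import pandas as pd

# Toy data: 'ideology' is the 5-point self-placement item, 'partyid' the 7-point
# party-identification item (both scored liberal/Democrat -> conservative/Republican).
df = pd.DataFrame({"ideology": [2, 4, 5, 3, 1], "partyid": [2, 6, 7, 4, 1]})

# Standardize each indicator, then average: higher scores = more right-leaning outlook.
z = (df - df.mean()) / df.std()
df["outlook"] = z.mean(axis=1)
print(df)
```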

 Frankly, I don’t find either of these explanations very convincing.

So I’m again asking the 14 billion readers of this blog to share the benefit of their insight and intelligence, in this case by weighing in with their own explanations—and also with ways to carry out empirical tests that would give us reason to view one hypothesis as more likely to be true than some alternative one.

Well? What do you think?

References

Cornelis, I., Van Hiel, A., Roets, A. & Kossowska, M. Age Differences in Conservatism: Evidence on the Mediating Effects of Personality and Cognitive Style. Journal of Personality 77, 51-88 (2009).

Desilver, D., The politics of American generations: How age affects attitudes and voting behavior. Pew Research Center (2014), available at http://www.pewresearch.org/fact-tank/2014/07/09/the-politics-of-american-generations-how-age-affects-attitudes-and-voting-behavior/. 

 Ghitza, Y. & Gelman, A. The great society, Reagan’s revolution, and generations of presidential voting. Working paper  (2014), available at http://graphics8.nytimes.com/newsgraphics/2014/07/06/generations2/assets/cohort_voting_20140707.pdf. 

 

Tuesday
Jul 25, 2017

Forecasting risk perceptions—are we there yet? (Lecture summary plus slides)

No one is afraid of this GM mosquito--because he has been engineered to have a comforting blue color!

So as the Twitter-ban saga played out, I made my way to Washington to address the National Academy of Sciences & the International Life Sciences Institute. The topic was public perceptions of novel- or emerging-technology risks (slides here).

I told the audience that I had seven points to make: three about why the science of science communication isn’t “there yet” on risk-perception forecasting, and four more about how it might get closer by empirically investigating public reactions to gene drive and related forms of biotech.

I told them I was going to defer the seven points, though, and simply describe three studies first. This format has really grown on me in the last year or so, despite its violation of the principle that one should start with one’s core thesis to orient the audience & motivate exposition.

The first study was the one reported in “They Saw a Protest.” In that study, we investigated how subjects, playing the role of jury members, would evaluate disputed facts in a case in which demonstrators were suing the police for breaking up a political protest. We found that the subjects conformed their perceptions of the demonstrators’ behavior to the subjects’ positive or negative cultural predispositions toward the demonstrators’ cause (anti-abortion in one condition; anti–don’t-ask-don’t-tell policy for military service in the other). This effect was consistent with the dynamic of identity-protective cognition, which refers to the tendency of people to selectively credit and discredit evidence in patterns that reflect and reinforce their group commitments.

The second study related to a novel form of applied science: nanotechnology. In “Cultural Cognition of Nanotechnology Risks and Benefits,” the subjects were randomly assigned to either a “no information” condition, in which they simply assessed the risks and benefits of nanotechnology; or a “balanced information” condition, in which they made their assessments after first reading a balanced set of statements on nanotechnology risks and benefits.

The responses of subjects in the “no information” condition were pure noise. That shouldn’t surprise anyone: 80% of the subjects said they either knew nothing or very little about nanotechnology.

In the “balanced information” condition, in contrast, the subjects split into factions consistent with their cultural predispositions toward environmental and technological risks generally. As in “They Saw a Protest,” this effect suggested that the biased assimilation of factual information functions as a form of identity protection.

The third study concerned the HPV vaccine: “Who’s Afraid of the HPV Vaccine, and Why.” As in both “They Saw a Protest” and “Cultural Cognition of Nanotechnology Risks and Benefits,” subjects of opposing cultural outlooks polarized over the significance of balanced information. However, an even more potent influence was the position of “culturally identifiable” public health experts—pictured individuals whom pretest subjects perceived as subscribing to opposing cultural outlooks. We had predicted that identity-protective cognition would produce this effect: subjects’ tacit identification of the cultural outlooks of the experts, we had surmised, would cue the subjects on which position was consistent, and which inconsistent, with the one that predominates in their cultural group.

Against the background of these studies, I started in on the 7 points:

1. “Not there yet”: Culture conflict affects science but is not about science. The same mechanism—identity-protective reasoning—explains the result in all three studies. Yet the first of them—“They Saw a Protest”—had nothing to do with science. This ought to make us skeptical of science-specific explanations—low science literacy, distrust of scientists, misinformation about science, etc.—of public conflicts over decision-relevant science. What’s needed are studies that examine how science becomes entangled in more general forms of cultural status competition.

2. “Not there yet”: Beware survey artifact. Everyone remembers last year’s wild celebration of three decades of studies predicting how the public will react to nanotechnology as a “novel,” “emerging” source of risk. As the number of consumer goods that incorporate nanotechnology approaches 2000, 80% of the public still admit they have heard nothing or little about this form of applied science. The results we see in lab studies and surveys are classic public-opinion-study artifacts: they tell us only how subjects will react when polled or exposed to experimental manipulations, not how they are assessing information in the real world. Rather than continuing to do studies that lack external validity, we should be using our experience with nanotechnology to figure out why nothing of interest happened to public opinion over this period of time.

3. “Not there yet”: Beware exogenous politicization. Here is one thing we have arguably learned: that no public conflict over any application of science is “inherently” contentious; something external and contingent has to happen to invest a scientific issue with the sort of antagonistic social meanings that transform positions on the issue into badges of membership in and loyalty to one or another cultural group. The opposing public reactions to the HPV vaccine and the HBV vaccine powerfully illustrate this point. When we study (genuinely) novel and emerging sources of technological risk, then, we should be trying to identify the sorts of influences that could have this kind of impact on them.

4. “Making it the rest of the way”: Avoid “hyping” (or contributing to same in media). Ringing the societal alarm bell on the basis of survey-artifactual findings of “public concern” (studies of GM food risk perceptions are great examples of this) is not only misleading but potentially dangerous: because how others like them react is an important cue members of the public use to gauge risk, studies that overstate public concerns can create fear, which then feeds on itself—a central lesson of the “social amplification of risk.”

5. “Making it the rest of the way”: Furnish social proof, not just facts. Again, individual members of the public, lacking the time and expertise to make sense of scientific evidence on matters of consequence to their lives, use the behavior and attitudes of socially competent actors to draw inferences about what is known to science. Images of such actors evincing confidence in decision-relevant science through their words or actions—not more scientific information—are thus the best way to help citizens align their own behavior with the best possible evidence.

6. “Making it the rest of the way”: Investigate locally, with field-study methods. Field work should now be the main focus of research on public risk perceptions and science communication. Studying real decisionmaking in action minimizes the risk of survey artifact. Moreover, because lab studies are usually low in operational validity, field studies are necessary to determine how the interventions supported by externally valid lab studies can be reproduced in the real world.

 7.  “Making it the rest of the way”: Prefer administrative to political risk-perception management authorities. Administrative proceedings are easier to protect from exogenous politicization than are democratic law-making ones. Again, compare the relative public acceptability of the HPV vaccine to the HBV vaccine.  Conducted correctly, administrative proceedings can still be made responsive, in a confidence-enhancing way, to local stakeholders, whose confidence in decision-relevant science is a prerequisite to its public legitimacy.

* * *

Talk done in less than 30 mins! 

Sunday
Jul 23, 2017

Sunday reading -- & listening: the "backfire effect"

There's been some interesting discussion in the "comments" field (here, e.g.) on whether factual corrections "backfire," thereby entrenching mistaken beliefs that are ideologically congenial.

So here are a couple more logs for the fire on that.

1st, an interview of Brendan Nyhan, who speculates on whether his own studies suggesting a backfire effect might be wrong ("Walking back the backfire effect"). His willingness to question his own work displays admirable scholarly character.

2d, a new empirical study:

I haven't yet read it closely -- hope to later today -- but am curious to see what others think of it.

Saturday
Jul 22, 2017

"Uninhabitable Earth" ... Good #scicomm? 

This article in New York magazine--

 

--has attracted a lot of critical attention, particularly from climate scientists, who have attacked many of its claims as unsupported by the best available evidence. But at least some commentators think the "alarmism" the story conveys is needed to galvanize public opinion.

So what do the 14 billion regular readers of this blog think? Is the article good science communication?

Also, let's try this: in addition to stating your view, identify what you think is the best argument on the other side.   

 

Friday
Jul 21, 2017

Thanks to 14 billion speaking as 1, Culturalcognition.net is free once more!

Besieged by 14 billion frustrated, angry posts, Twitter relented & lifted the ban on tweets linking to www.culturalcognition.net (an episode that now ranks 3d on the list of strangest things ever to happen to the site on the internet; see one & two).

On net, this was a super positive experience: the aggravation associated with the loss of 2 days of access for site-related tweets was more than offset by the gratification I experienced upon witnessing this widespread, public-spirited & generous support.

I’m not sure yet whether I owe any particular person $1000 (if I do, he or she should speak up!), but I do owe a much larger quantity of gratitude to many  people.  The way to repay them, I think, is to be sure I follow their example the next time I happen upon someone who is being treated in an arbitrary & capricious way.

Oh, last thing: if a twitter robot was the agent of the temporary ban on culturalcognition.net links, I want him or her to know that I haven’t changed my position on robots. All of us—naturally & artificially intelligent—learn from our mistakes!

Small sample of supportive tweets

Thursday
Jul 20, 2017

#freeculturalcognition ... imprisoned in twitter wonderland

For some ill-specified reason, Twitter has decided that this site is too "dangerous" for Twitter users to visit (maybe they are concerned NiV will box their ears).

As a result, it has barred msgs w/ links to culturalcognition.net.  If the unmodified URL is linked in a tweet, the tweet isn't posted:

 

If a TinyURL is used, the msg makes it through, but clicking on the link directs readers, first, to a scary warning page & then, if one persists, to a TinyURL page that identifies the site as a spam source & refuses to resolve the address:

As you can see, the TinyURL page declares that the site has been "blacklisted" by SURBL. But SURBL in fact gives culturalcognition.net a clean bill of health:

SURBL has a procedure for getting off its blacklist, but since my site isn't actually on the company's blacklist, it won't disclose to me what those procedures are.... Also, I've checked dozens of other "blacklist" compilers, all of which give the same healthy diagnosis for this site (see below).
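(For anyone who wants to run the check themselves: SURBL publishes its list as a DNS zone, so a domain's status can be queried with a one-line lookup. A minimal sketch, assuming the standard multi.surbl.org interface:)

```python
import socket

def surbl_listed(domain: str) -> bool:
    """Query SURBL's public DNS zone: a name that resolves means 'listed';
    an NXDOMAIN error means the domain gets a clean bill of health."""
    try:
        socket.gethostbyname(f"{domain}.multi.surbl.org")
        return True
    except socket.gaierror:
        return False

print(surbl_listed("culturalcognition.net"))   # False -- consistent with SURBL's own lookup page
```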

Needless to say, I've tried to contact twitter by filling out a "form" that exists for erroneous blacklisting. But when I do that, I get an email that tells me the company can't respond to individual requests & has "closed" the case.

I've tried various "workarounds," only one of which seems effective: linking to the cultural cognition site's squarespace address: www.culturalcognition.squarespace.com. I suspect twitter will catch on to this. But in any case, it is a manifestly inferior substitute for links to the culturalcognition.net URL, which users of the site--unaware of this weird situation--will continue to try to use when they are moved to link to site content in their own tweets.

So ... what to do?

Obviously, any advice anyone has about additional ways to free the site will be much appreciated.

But if any of the 14 billion regular subscribers to this site wants to do even more, I'm willing to supply a reward of an "I ❤ Popper/citizen of the Liberal Republic of Science t-shirt" and also $1,000 (seriously!) to anyone who can actually manage to get through to twitter & get them to remove the ban (documentation of steps taken & their causal effect in getting the ban lifted must be supplied so I can verify that the reward-seeking intervenor's efforts are the ones that genuinely liberated culturalcognition.net).

Also, twitter users might choose to post protest messages by using the trademarked #freeculturalcognition hashtag in their tweets.