Friday, March 13, 2015

What can we learn from (a) studying public perceptions of the risks of technologies the public hasn't heard of & (b) studying studies that do that?

Tamar Wilner has posted another very perceptive and provocative essay in reaction to the readings for Science of Science Communication 2.0, this time in relation to Session 8, on “emerging technologies.”  I’ve posted the first portion of it, plus a link to her site for continuation.

She also posed a very interesting question in the comments about an experiment that CCP did on nanotechnology risk perceptions.  I’ve posted my answer to her question below the excerpt from her own post.

1. Tamar Wilner on studying perceived risks of emerging technologies ...

read rest of post

2. Q&A on a CCP study of nanotechnology risk perceptions

Tamar's question:

[I]n your paper (Cultural Cognition of the Risks and Benefits of Nanotechnology) you say, “The ‘cultural cognition’ hypothesis holds that these same patterns [cultural polarization] are likely to emerge as members of the public come to learn more about nanotechnology.” But in your blog you repeatedly make the point that only a minority of public science topics end up getting polarized - that such polarization is “pathological” in its rarity. Why then did you hypothesize that such a pattern would be likely to emerge for nanotech?

I noticed that you start to address this later in the paper when you say, “At the same time, nothing in our study suggests that cultural polarization over nanotechnology is inevitable…” and point out that proper framing can help people to extract factual information. Does this indicate that the passages used in your study employed framing likely to encourage polarization? They seem to use pretty neutral language, to me. What about them makes them polarizing - and is it possible that some polarizing language is unavoidable? For example it seems like just talking about "risks of a new technology" taps into certain egalitarian/communitarian sensibilities, but since that's exactly what the topic of discussion is, I don't see how you would avoid it.

My response

This is a great question.  It raises some important general issues & also gives me a chance to say some things about how my own views of the phenomenon of cultural contestation over risk have evolved since performing the study.

The main motivation for the study, actually, was a position that we characterized as the “familiarity hypothesis”: that as people learned more about nanotechnology, their views were likely to be positive.

This was an inference from a consistent survey finding: although only a small percentage of the public reports having heard of nanotechnology, those who say they have tend to express very favorable views about the ratio of benefits to risks it is likely to involve.

That inference is specious: there is obviously something unusual about people who know about a technology that 80% of the rest of the public is unfamiliar with. It reflects poor reasoning not to anticipate that whatever disposes such people to become familiar with a novel technology might also dispose them to form a view that others, who lack their interest in technology, might not form when they eventually learn about it.

Our hypothesis, largely corroborated by the study, was that those who were already familiar with nanotechnology (or, actually, simply said they were familiar; the surveys used self-report measures) were likely people with a protechnology “individualist” cultural outlook, and that when individuals with anti-technology “egalitarian communitarian” outlooks were exposed to information on nanotechnology, they would likely form more negative reactions.
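To make the selection-effect point concrete, here is a minimal simulation sketch (entirely hypothetical; it is not from the study, and every parameter is made up) in which a single latent “technology interest” trait drives both self-reported familiarity and favorable risk-benefit judgments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical latent pro-technology interest (standardized)
interest = rng.normal(0, 1, n)

# Interest raises the odds of (self-reported) familiarity with the technology...
p_familiar = 1 / (1 + np.exp(-(-1.5 + 2.0 * interest)))
familiar = rng.random(n) < p_familiar

# ...and independently tilts risk-benefit judgments in a favorable direction
favorability = 0.8 * interest + rng.normal(0, 1, n)

print(f"share reporting familiarity:          {familiar.mean():.0%}")
print(f"mean favorability, familiar:          {favorability[familiar].mean():+.2f}")
print(f"mean favorability, unfamiliar:        {favorability[~familiar].mean():+.2f}")
print(f"mean favorability, whole population:  {favorability.mean():+.2f}")
```

Under these made-up parameters the “familiar” minority looks distinctly favorable even though the population as a whole does not, so projecting that minority’s views onto everyone else, as the familiarity hypothesis does, mistakes a selection effect for a learning effect.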

Okay, fine.

But Tamar’s perceptive question is why we expected people unfamiliar with a technology to react at all when exposed to such a small amount of information.

As she notes, only a small minority of potentially risky technologies excite polarization.  People tend to overlook this fact b/c they understandably fixate on the contested ones and ignore the vast majority of noncontroversial ones.

My answer, basically, is that I don’t think the research team really had a good grasp of that point at the time we did the study.  I know I didn’t! 

I think, actually, that I really did mistakenly believe that culturally infused, and hence opposing, reactions to putative risk sources were “the norm,” and that it was therefore likely our subjects would polarize in the way they did.

Looking back, I’d say the reason it was reasonable to expect subjects would polarize is that the study was putting them in the position of consciously evaluating risks and benefits.

Check out these polarized & nonpolarized risks -- all you need to do is click! C'mon! It won't hurt!

On the vast majority of putative risk sources on which there isn’t any meaningful level of polarization—from pasteurization of milk to medical x-rays to cell phone radiation to high-power transmission lines etc.—people don’t consciously think anything; they just model their behavior on what they see other people like them doing; when they do so, it’s rare for them to observe signs that give them reason to think there is anything to worry about.

A perfectly sensible approach, in my view, given that it makes sense to use far more information known to science in our lives than we have time to make sense of on our own.

But as I said, the study subjects were being prompted to do conscious risk assessment.  Apparently, in doing that, they reliably extracted from the balanced risk-benefit information culturally affective resonances that enabled them to assimilate this novel putative risk source—nanotechnology—to a class of risks, environmental ones, on which members of their group are in fact culturally polarized. 

Being made to expect, in effect, that there would be an issue here, the subjects reliably anticipated too what position “people like them” culturally speaking would likely take.

This interpretation raises a second point on which my thinking has evolved: the external validity of public opinion studies of novel technologies.

This was (as Tamar’s excellent blog post on the readings as a whole discusses) a major theme of the readings.  Basically, when pollsters ask people their views on technological risks that members of the public have never heard of and don’t discuss in their daily lives, they aren’t genuinely measuring a real-world phenomenon.

They are, in effect, modeling how people react to the strange experience of being asked questions about something they have not thought about.  To pretend that one can draw inferences from that about what actual people in the world are truly thinking is flat-out bogus. Serious social science researchers know this is a mistake; news-maker and advocacy pollsters either don’t know or don’t care.

One can of course try to anticipate how people, including ones with different cultural outlooks, might react to an emerging technology when they do learn about it.  Indeed, I think that is a very sensible thing to do; the failure to make the effort can result in disaster, as it did in the case of the HPV vaccine!

But to perform what amounts to a risk-perception forecasting study, one must use an experimental design that it is reasonable to think will induce in subjects the reaction that people in the real world will form when they learn about the technology—or could form depending on how they learn about it.  That is what one is trying to model.

A simple survey question, like the one about GM foods that Pew asked respondents in its recent public attitudes study, cannot plausibly be viewed as doing that.  The real-world conditions in which people learn things about a new technology will be much richer—much more dense with cues relating to the occasions for discussing an issue, the setting in which the discussion is being had, and the identity and perceived motivations of the information sources—than are accounted for in a simple survey question.

I think it is possible to do forecasting studies that reasonable people can reasonably rely on.  I think our HPV vaccine risk study, e.g., which tried to model how people would likely react depending on whether they learned about the vaccine in conditions that exposed them to cues of group conflict or not, was like that.

But I think it is super hard to do it.

Frankly, I now don’t think our nanotechnology experiment design was sufficiently rich with the sorts of contextual background to model the likely circumstances in which people would form nanotechnology risk perceptions!

The study helped to show that the “familiarity hypothesis,” as we styled it, was simplistic.  It also supported the inference that people might come to assimilate nanotechnology to the sorts of technological-risk controversies that now polarize members of different groups.

But the stimulus was too thin to be viewed as modeling the conditions in which that was actually likely to happen.

We should be mindful of hindsight bias, of course, but the fact that nanotechnology has not provoked any sort of cultural division in what is now approaching two decades of its use in commercial manufacturing helps show the limited strength of the inferences about the likelihood of conflict that can be drawn from experiments like the one we did.

As Tamar notes, we were careful in our study to point out that the experimental result didn’t imply that conflict over nanotechnology was “inevitable” or necessarily even “likely.”

But I myself am very willing—eager even—to acknowledge that we viewed the design we used as more informative about the likely career of nanotechnology than it could reasonably have been expected to be.

I have acknowledged this before in fact. 

In doing so, too, I pointed out that that doesn’t mean studies like the ones we and other researchers did on nanotechnology risk perceptions weren’t or aren’t generally useful.  It just means that the value people can get from those studies depends on researchers and readers forming a valid understanding of what designs of that sort are modeling and what they are not.  

In order for that to happen, moreover, researchers must reflect on their own studies over time, to see what the fit between those studies and experience tells them about what is involved in modeling real-world processes in a manner that is most supportive of real-world inferences.

Speaking for myself, at least, I acknowledge that, despite my best efforts, I cannot guarantee anyone I will always make the right assessment of the inferences that can be drawn from my studies.  I can promise, though, that when I figure out that I didn’t, I’ll say so—not just to set the record straight but also to help enlarge understanding of the phenomena that it is in fact my goal to make sense of.

Of course, if a cultural conflagration over nanotechnology ignites in the future, I suppose I’ll have to acknowledge that the “me” I was then had a better grasp of things than the “me” I am now; I doubt that will happen—but life, thank goodness, is filled with surprises!


Reader Comments (2)

Thanks for your kind words, Dan, and your thoughtful response. A few responses of my own, on two fronts: 1) polarization and the nanotech study; 2) the intersection of knowledge and opinion.

1) Polarization and nanotech:

I'm intrigued by the passage where you say:
"Looking back, I’d say the reason it was reasonable to expect subjects would polarize is that the study was putting them in the position of consciously evaluating risks and benefits.... Apparently, in doing that, they reliably extracted from the balanced risk-benefit information culturally affective resonances... Being made to expect, in effect, that there would be an issue here, the subjects reliably anticipated too what position “people like them” culturally speaking would likely take."

My understanding of your general thesis about why issues get polarized is that a polluted science communication environment alerts people that they should be taking a stance on the issue in line with their cultural affiliations. If that's the case, where's the polluted sci comm here - is it the very fact of "being made to expect... that there would be an issue here"? That's a far cry from some of the sci comm pollution we saw in the case of the HPV vaccine, for example. And if that's all it takes to create a polluted sci comm environment, then such an environment is unavoidable (at least for just about any technology worth covering in the media). It would hardly even be fair to call it "polluted" - maybe "risk framing," or "controversy-raising?"

2) Knowledge and opinion:

I think I managed to spit out a lot of words on this without really being clear on what my major question/problem is - apologies. Yes, I think you are right to be worried about "modeling how people react to the strange experience of being asked questions about something they have not thought about." But my point wasn't simply that we should "anticipate how people... might react to an emerging technology when they do learn about it." I also really want to know how people *feel about an issue* (or think? not sure if affect or opinion is a better measure) *given that they know next to nothing*! (Think about the receptionist, and how strongly she felt about fracking.)

Why is this research area important? As I see it, one of the major problems in our science communication environment is that know-littles are populous (unavoidable - most of us are know-little on a lot of issues, myself included) - yet know-littles spread opinions and (mis)information.

So the usual survey technique may not be the way to do it, but we need some way of discerning the opinions/feelings of people in various knowledge categories on a particular issue, and seeing how opinions/feelings and knowledge correlate; and then doing that again with other issues to try and discern patterns. Another useful line of inquiry would be in the information sciences, to see how one's likelihood of sharing (mis)information varies according to both knowledge level and opinion/affect.

March 16, 2015 | Unregistered Commenter Tamar Wilner

Dan -

==> "Being made to expect, in effect, that there would be an issue here, the subjects reliably anticipated too what position “people like them” culturally speaking would likely take."

This makes me think a bit of stereotype threat.


==> "I think it is possible to do forecasting studies that reasonable people can reasonably rely on. "

Seems to me that what's mostly missing are longitudinal studies - and in particular prospective longitudinal studies. Forecasting studies that are informed by and based on cross-sectional analyses seem a bit like a cat chasing its tail.

March 20, 2015 | Unregistered Commenter Joshua
