What’s that hiding behind the poll? Perceiving public perceptions of biotechnology
Tuesday, June 24, 2014 at 8:22AM
Dan Kahan

Hey look! Here's something you won't find on any of those other blogs addressing how cultural cognition shapes perceptions of risk: guest posts from experts who actually know something! This one, by Jason Delborne, addresses two of my favorite topics: first, the meaning (or lack thereof) of surveys purporting to characterize public attitudes on matters ordinary people haven't thought about; & second, GM foods (including delectable mosquitoes! MMMMM MMM!)

Jason Delborne:

Whether motivated by a democratic impulse or a desire to tune corporate marketing, a fair amount of research has focused on measuring public perceptions of emerging technologies. In many cases, results are reported in an opinion-poll format: X percentage of the surveyed population support the development of Y technology (see examples on GM food, food safety, and stem cells/cloning).

But what is behind such numbers, and what are they meant to communicate?

From a democratic perspective, perhaps we are supposed to take comfort in majority support for a technology that seems to be coming our way. Support of 51% or more would seem to suggest that our government “by and for the people” is somehow functioning, whereas we are supposed to feel concerned if our regulators permit a technology to move forward with 49% support or less. A more nuanced view might interpret all but the most lopsided results as indicative of a need for greater deliberation, public dialogue, and perhaps political compromise.

From a marketing perspective, the polling number offers both an indicator of commercial potential and a barometer of success or failure in the shaping of public perceptions. An opponent of a given technology will interpret high approval numbers as a call to arms – “Clearly we have to get the word out about this technology to warn people of how bad it is!” And they will know they are succeeding if the next polling study shows diminished support.

Below the headline, however, lie two aspects of complexity that may disrupt the interpretations described above. First, survey methodologies vary in their validity and reliability, and in the strategic choices that construct “the public” in particular ways. Much has been written on this point (e.g., The Illusion of Public Opinion and The Opinion Makers), and it’s worth a critical look. A second concern, however, is whether such measures of support provide any meaningful insight into the “public perception” of a technology.

Several of my colleagues recently conducted a survey in Key West, FL, where the Mosquito Control Board has proposed the use of Oxitec’s genetically modified mosquitoes as a strategy to reduce the spread of dengue fever (see “Are Mutant Mosquitos the Answer in Key West?”). My colleagues have not yet published their research, but they kindly shared some of their results with me and gave me permission to discuss them in limited fashion at the 2014 Biotechnology Symposium and in this blog post. They were thoughtful in their development of a survey instrument and in their strategic choices for defining a relevant public. They also brought a reflexive stance to their research design that nicely illustrates the potential disconnect between measures of public perception and the complexity of public perception.

Reporting on a door-to-door survey, Elizabeth Pitts and Michael Cobb (unpublished manuscript) asked whether residents supported the public release of GM mosquitoes. The results would seem to comfort those who support the technology.

With a clear majority in support, and opposition from under 25% of survey respondents, we might assume that little needs to be done – either by the company developing the mosquito or the state agency that wishes to try it. Only the anti-GM campaigners have a lot of work to do – or maybe such numbers suggest that they should just give up and focus on something else.

But the story does not and should not end there. The survey protocol also asked respondents to describe the benefits and risks of GM mosquitoes – enabling their open-ended responses to be coded into the categories summarized in the next two tables.

These tables do not exactly offer rock-solid pillars to support the apparently straightforward “polling numbers.” First, despite having just been told a short version of how GM mosquitoes would work to control the spread of dengue fever, very few respondents seemed to have internalized or understood this key point. In fact, we should not even take solace in the fact that 40% of respondents mentioned “mosquito control” as a benefit – the GM mosquito is designed to reduce the population only of the species of mosquito that transmits dengue fever, which may have little impact on residents’ experience of mosquitoes (of all species) as pesky blood-sucking pests. Second, nearly one-third of respondents had no response at all to either the benefits or hazards questions – suggesting a lack of engagement with, or knowledge of, the topic. Third, nearly 40% of respondents expressed one or more concerns, many of which are at least superficially reasonable (e.g., questions about ecological consequences or unintended impacts on human health). While the survey data do not tell us how concerned residents were, such concerns have the potential to torpedo the 60% support figure, depending on subsequent dissemination of information and opinions.
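For readers curious how open-ended answers become the kinds of percentages discussed above, here is a minimal sketch of tallying hand-coded responses into benefit and risk tables. The category labels and example responses are hypothetical illustrations, not the actual instrument or coding scheme used by Pitts and Cobb.

```python
# A minimal sketch (not the authors' actual analysis) of tallying hand-coded
# open-ended survey responses into benefit/risk tables like those described above.
# Categories and example data are hypothetical.
from collections import Counter

# Each respondent's answer, hand-coded into zero or more categories.
coded_benefits = [
    ["mosquito control"],                           # respondent 1
    ["disease prevention"],                         # respondent 2
    [],                                             # respondent 3: no response
    ["mosquito control", "disease prevention"],     # respondent 4
    [],                                             # respondent 5: no response
]
coded_risks = [
    ["ecological consequences"],
    [],
    [],
    ["unintended human health impacts"],
    ["ecological consequences"],
]

def tally(coded_responses):
    """Return, per category, the count and percentage of respondents mentioning it."""
    n = len(coded_responses)
    counts = Counter(cat for cats in coded_responses for cat in set(cats))
    no_response = sum(1 for cats in coded_responses if not cats)
    table = {cat: (count, 100.0 * count / n) for cat, count in counts.items()}
    table["(no response)"] = (no_response, 100.0 * no_response / n)
    return table

for label, data in [("Perceived benefits", coded_benefits),
                    ("Perceived risks", coded_risks)]:
    print(label)
    for cat, (count, pct) in sorted(tally(data).items()):
        print(f"  {cat}: {count} ({pct:.0f}%)")
```

Because respondents can mention several categories (or none), the percentages need not sum to 100 – which is exactly why a single headline approval number flattens so much of what these tables reveal.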

To me, these data reveal the superficiality of the “approval rating” as a measure of public perception; yet those are the data that are easiest to measure and most tempting for the media to report and for pundits to interpret. It makes for a lovely sound bite to sum up a technology assessment with a poll measuring support or approval.

As someone who has practiced and studied public engagement (for example, see 2013a, 2013b, 2012, 2011a, 2011b, 2011c, 2011d), I would argue that if we truly care about how non-experts perceive an emerging technology – whether for democratic or commercial purposes – we need to focus on messier forms of measurement and engagement. These might be more expensive, less clear-cut, and perhaps somewhat internally inconsistent, but they will give us more insight. We also must at least entertain the idea that opinion formation may reflect an evaluative process that does not rely only upon “the facts.” My hope is that such practices would promote further engagement rather than produce quick numbers that merely reassure or provoke existing camps of partisans.

Jason Delborne is an Associate Professor of Science, Policy, and Society in the Department of Forestry and Environmental Resources, and an affiliated faculty member in the Center on Genetic Engineering and Society, at North Carolina State University.

 
