Unlike our myriad competitors, the CCP blog now & again gets genuine experts to come in & address complicated stuff that these commentators actually know something about. We've been criticized for this, but sometimes I'm too busy to write myself & have no choice. Anyway, the following is an expert guest post from a commentator making his second "guest" appearance. Kevin Arceneaux's last essay, Partisan Media Are Not Destroying America (though subsequently disproven by events), was the most popular post ever on this blog, being read by an estimated 19.3 billion readers. Now he's back to address related issues of study design and causal inference in assessments of the impact of partisan news coverage on public opinion. Arceneaux is the author, with Martin Johnson, of the acclaimed Changing Minds, Changing Channels (Univ. Chicago Press 2013).
News and entertainment media have the dubious distinction of serving as both a whipping boy and a potential savior. They are often treated as the source of many social ills. Beauty magazines perpetuate unhealthy body images; political advertisements inveigle; partisan news programs mislead and confuse (especially if we happen to disagree with them). We also imagine that their power can be put to good use. Media can serve as a catalyst for positive change, however defined.
As seductive as these narratives are, the problem is that they are difficult to evaluate empirically. How could this be? In modern advanced democracies, like the United States, we are surrounded by media. Traditional forms of mass media – newspapers, magazines, radio, television – operate alongside newer forms of interactive media on the Internet. Can’t we just observe how people respond to all these forms of news media?
We can certainly observe what people consume and what they do, but we can’t always infer the effects of media consumption on their behavior. Observational research is inherently beset by many threats to causal inference, and the current media environment only makes it worse. The study of media effects could easily be the poster child for the dictum that correlation does not necessarily imply causation.
The biggest hurdle to divining the effects of media from observational research is the fact that people, by and large, choose what to consume. For instance, we know that conservatives say that they consume conservative media at higher rates than other Americans. But because conservatives are consciously choosing to view conservative media and construct conservative networks on social media, it is difficult to sort out how much of their conservatism comes from their personal predispositions and how much of it comes from the messages that they encounter.
To muddy the waters further, the ability to select among news and entertainment alternatives creates incentives for media producers to fashion content that will appeal to particular segments of the population. To take a current-day example, Fox News has received its fair share of criticism for how it has covered the threats posed by the Islamic State of Iraq and Syria (ISIS) and the Ebola epidemic. It is easy to accuse Fox News and other media outlets of whipping up hysteria, but we must also entertain the possibility that they are just giving their viewers what they think they want. People who are chronically worried about threats need a place to turn to for answers, and outlets like Fox News are happy to oblige.
From the standpoint of causal inference, it is difficult to pinpoint the effects of Fox News, because people who aren’t predisposed to be worried about Ebola are happily consuming different media content, and if, for some reason, they happen across Fox News coverage of the Ebola epidemic, they may find it more amusing than worrying.
The problem here is so bad that statisticians refer to it as the Fundamental Problem of Causal Inference. In a nutshell, the only way we can really know the effect of media content is to observe two states of the world: one where the person consumes it and one where the person does not. Of course, that’s impossible. The only way forward for intrepid researchers is to figure out how to construct a comparable group of people – for example, people who are just like Fox News viewers but who do not watch Fox News. That is easier said than done.
Fancy statistical models that try to address the problem by accounting for people’s viewing preferences (i.e., “control variables”) can actually cause more harm than good. At the very least, this approach rests on the strong assumption that one has accounted for everything, and we can never know if we have.
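To see why self-selection can swamp the true effect, here is a minimal toy simulation (mine, not Arceneaux's – the numbers and the "threat sensitivity" trait are invented for illustration). An unobserved disposition drives both the choice to watch partisan news and the level of worry, while watching itself has zero true effect; the naive watcher-vs-non-watcher comparison nonetheless finds a large "effect," and no set of control variables can fix it, because the confounder is never measured:

```python
import random

random.seed(0)

# Toy world: an unobserved trait ("threat sensitivity") drives BOTH the
# choice to watch partisan news and the outcome (worry). The true causal
# effect of watching is zero by construction.
n = 100_000
watchers, non_watchers = [], []
for _ in range(n):
    threat_sensitivity = random.gauss(0, 1)                    # unobserved confounder
    watches = threat_sensitivity + random.gauss(0, 1) > 0.5    # self-selection into viewing
    worry = 2.0 * threat_sensitivity + random.gauss(0, 1)      # note: watching adds nothing
    (watchers if watches else non_watchers).append(worry)

# Naive observational comparison: mean worry of watchers minus non-watchers.
naive_effect = sum(watchers) / len(watchers) - sum(non_watchers) / len(non_watchers)
print(f"naive observational 'effect': {naive_effect:.2f}")  # well above zero
```

The naive difference comes out strongly positive even though watching does nothing here – the comparison is really measuring who chooses to watch, which is exactly the trap the paragraph above describes.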
Another approach that fares a little better is observing the same people over time. In doing so, we can get a before-and-after take on their behavior. Yet this approach makes strong assumptions, too, and as Tobias Konitzer points out in a recent conference paper, even if we can make those assumptions, we need lots of observations across time. Panel surveys are rare, and long-running panel surveys even rarer.
Many scholars, including myself, have pointed to randomized experiments as a way forward. Experiments use random assignment to construct comparable groups of individuals. Some people are exposed to media content while others are not. Because people were assigned to groups at random, we know that they should have similar tastes and similar responses. So, if we see one group behaving differently than another, we can more credibly infer that the difference was caused by the treatment that we administered.
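Continuing the same invented toy world from above (again, my illustration, not the author's), randomly assigning exposure with a coin flip breaks the link between the unobserved trait and viewing, so a simple difference in group means recovers the true effect – here a built-in effect of +1 unit of worry:

```python
import random

random.seed(0)

# Same toy world, but now a coin flip decides who watches, severing the
# link between the unobserved trait and exposure. We build in a true
# causal effect of +1.0 unit of worry for the treated group.
n = 100_000
treated, control = [], []
for _ in range(n):
    threat_sensitivity = random.gauss(0, 1)      # still unobserved, now irrelevant to assignment
    assigned_to_watch = random.random() < 0.5    # random assignment
    worry = (2.0 * threat_sensitivity
             + (1.0 if assigned_to_watch else 0.0)
             + random.gauss(0, 1))
    (treated if assigned_to_watch else control).append(worry)

# With randomization, the simple difference in means is an unbiased estimate.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"experimental estimate of a true +1.0 effect: {estimate:.2f}")  # close to 1.0
```

Because the groups are comparable by construction, the estimate lands near the true value without any control variables at all – which is the whole appeal of the design.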
While randomized experiments do allow us to say with more confidence that exposure to, say, partisan news content causes people to do X, Y, or Z, they are not a panacea. For one, experimentalists generally construct comparable groups and then ask people to do things that they would not always do, or expose them to things that they may not have encountered but for the intervention of the researcher. Consequently, we cannot be certain that people would not behave differently if the treatment had unfolded through natural means. Field experiments and natural “experiments” (i.e., observational designs that have plausibly exogenous treatments) do better on this score, but they are often difficult to employ.
Another limitation is that experiments are better at pinpointing the effect of a particular intervention than at measuring the cumulative effect of media exposure. So, the upshot here should be familiar: nothing is perfect and there is no silver bullet. It may be trite, but it is true. We learn the most through the triangulation of methods.