Using Experiments to Understand How Agency Influences Media Effects

Summary and Keywords

Students of public opinion tend to focus on how exposure to political media, such as news coverage and political advertisements, influences the political choices that people make. However, the expansion of news and entertainment choices on television and via the Internet makes the decisions that people make about what to consume from various media outlets a political choice in its own right. While the current-day hyperchoice media landscape opens new avenues of research, it also complicates how we should approach, conduct, and interpret this research. More choice means a greater ability to select media content based on one’s political preferences, exacerbating the selection bias and endogeneity inherent in observational studies. Traditional randomized experiments offer compelling ways to obviate these challenges to making valid causal inferences, but at the cost of minimizing the role that agency plays in how people make media choices.

Recent research modifies the traditional experimental design for studying media effects in ways that incorporate agency over media content. These modifications require researchers to weigh different trade-offs when choosing among design features, creating both advantages and disadvantages. Nonetheless, this emerging line of research offers a fresh perspective on how people’s media choices shape their reactions to media content and their political decisions.

Keywords: media choice, media effects, experiments, causal inference, selective exposure

Introduction

The study of political behavior is principally about understanding the choices people make. A vast literature is devoted to explaining why people choose to hold the attitudes that they do, vote for particular candidates, and choose to participate in the political arena in the first place. In the study of public opinion, there is a tendency to treat political media as an explanatory variable, with scholars focusing on the effects of news coverage, political advertisements, partisan talk shows, and, more recently, social media. Yet, as many political scientists recognize, exposure to political media is itself a choice (Prior, 2007; Zaller, 1992). People have a good bit of agency over their consumption of political media, and the rapid expansion of options for news and entertainment over the past 20 years has only made this fact more relevant for the study of political behavior.

Since the 1990s, Americans have witnessed a sea change in the media landscape. On television, the number of channels swelled exponentially as cable and satellite companies expanded their reach and offerings. Before the 1990s, the typical American household had access to a handful of channels, and people primarily consumed content on the three major broadcasting networks. Today, the average household has access to well over a hundred channels. The rise of the Internet has only compounded the options available to consumers. This expansion of choices is politically relevant for two fundamental reasons. First, people have political content—news and opinion—available at their fingertips 24 hours a day. In the past, people received their news on a regimented schedule dictated by newspapers and broadcast networks. Second, there are far more entertainment options and diversions available today. Not long ago, many people consumed political news because there were few interesting alternatives. In many locales, the only thing on television in the early evening was broadcast network news. Sure, one could have read a book instead of turning on the television, but many opted for the warm glow of their television sets as the path of least resistance. Today, people are freer to tailor their media consumption to their specific preferences. News junkies can watch 24-hour news shows on cable, read political news on a myriad of news and opinion websites, participate in political discussion boards on the Internet, and share their opinions on a host of social media sites. Conversely, people who are uninterested in politics can construct a media environment nearly devoid of politics if they so choose (Bennett & Iyengar, 2008).

The current-day media environment creates greater variance in media consumption and, in doing so, broadens the research agenda for students of political behavior. What affects people’s diet of political news? How do media choices influence the political effects of media content? How do differences in media consumption influence aggregate patterns in political knowledge, electoral outcomes, and governance?

By the same token, while today’s hyperchoice media environment opens new avenues of media effects research, it also complicates how we should approach, conduct, and interpret this research. Observational research designs that infer media effects from correlations between exposure to political content (or, often, self-reported exposure) and political attitudes or behavior have always been bedeviled by endogeneity and selection bias. For instance, a positive correlation between exposure to political advertising and voter turnout could reflect the effects of political advertisements or merely the fact that the kinds of people who vote are also the kinds of people who consume political advertisements (Ansolabehere, Iyengar, & Simon, 1999; Vavreck, 2007). The proliferation of choices on television and the Internet only makes this problem more acute, because it allows people to sort more effectively.

Traditional randomized experiments address these shortcomings by removing choice. Subjects are exposed at random to political content, making sorting impossible by design. If a group of individuals randomly selected to view newscasts knows more about politics than a group of individuals randomly selected to do something else, we can confidently infer that exposure to news programs informs viewers. Nonetheless, the hyperchoice media environment also complicates what we learn from traditional experimental designs. Because people have many options to actively seek out and avoid political content in real-world settings, traditional experiments create an artificial world in which people are forced to do things they normally would not do. In doing so, their findings may not travel to real-world contexts where people can behave as they wish.

This article unfolds as follows. We begin with a discussion of why experiments offer a superior approach to estimating the causal effects of political media. Next, we pivot to consider how the expansion of media choices in the real world undermines the generalizability of traditional experimental designs. We then spend the bulk of the article discussing novel approaches to incorporating choice into experimental designs, paying particular attention to the advantages and disadvantages of various approaches. The article concludes with some thoughts about future directions in media effects research.

Why Experiments?

Let us say up front that this is not a polemic on the unflagging virtues of experiments and the inherent vices of observational research. Well-crafted and appropriately interpreted observational designs offer valuable insights. They allow scientists to gain a clearer picture of political phenomena. As the dictum goes, correlation is not causation, but knowing what is correlated with what is still useful information. We must have some descriptive understanding of the world before we can explain it.

Nonetheless, the ultimate goal of scientific research is making causal inferences (King, Keohane, & Verba, 1994). Observational research is, as a general matter, poorly suited for this task. With respect to media effects research, two serious threats to causal inference bedevil observational designs. First and foremost, people create variation in exposure to political messages by selecting different media content. Some people choose to watch the news, while others choose to watch something else. In and of itself, this fact does not vitiate causal inference. However, if particular characteristics of individuals simultaneously shape their media preferences and the political attitudes and behaviors under investigation, observational designs are not very good at sorting out the causes of media choices from the effects of media exposure. In this instance, self-selection biases correlational estimates away from the true causal effect.


Figure 1: Example of selection bias

Figure 1 helps illustrate the problem that selection bias poses for correlational research. Consuming conservative news media correlates positively with voting for Republican candidates (Barker, 2002; Hopkins & Ladd, 2014). Conservative media may very well cause some members in its audience to vote for Republican candidates. Yet if conservatives are more likely to select conservative news, while liberals are more likely to select mainstream news, Republican voting and conservative media consumption may arise from pre-existing political preferences. In the face of selection bias, we cannot infer the causal impact of conservative news media from this correlation.1

In addition to selection bias, endogeneity muddies our ability to infer the direction of causal effects from correlations. To continue with the example in Figure 1, imagine if a detailed analysis of the content on conservative news media showed that conservative talk show hosts extol the virtues of Republican candidates while denigrating Democratic candidates. It would be tempting to draw a straight line from the content featured on these media outlets to the voting behavior of its audience members. Unfortunately, one cannot, on the basis of this evidence alone, rule out an alternative explanation: conservative outlets cater to their audience members’ preferences. If so, the composition of talk show hosts’ audiences may cause them to produce pro-Republican content, rather than pro-Republican content altering audience members’ vote preferences.

Separating cause and effect is no easy matter, because every attempt to do so must contend with the fundamental problem of causal inference (Holland, 1986). Epistemologically speaking, the only way we can really know the causal effect of an action is to perform the impossible. We must simultaneously observe the behavior of an individual after she takes the action and her behavior in an alternative universe where she does not take the action. Continuing with our example, in order to really know the causal effect of exposure to a radio talk show, one would need to take an individual, expose him to talk radio, and observe his voting behavior. The researcher would then need to go back in time, take the same individual, prevent him from listening to talk radio, and then observe his voting behavior. Barring the invention of time travel, this research design is the stuff of science fiction.

The crux of the fundamental problem of causal inference is the inability to construct a true counterfactual. We only observe one state of the world. Some people choose to consume conservative media, while others do not. Causal inference requires that we also know what people would have done had they made a different choice, often called the potential outcome. The inability to know the potential outcomes in this counterfactual state of the world means that we cannot know if a particular action changed people’s behavior in the way implied by a correlation. The example in Table 1 shows why. Imagine that we observe that 90% of people who consume conservative media vote for Republicans, while only 10% of mainstream news consumers vote for Republicans. This would be a powerful correlation. Nonetheless, depending on the counterfactual, exposure to conservative media could have no effect on support for Republican candidates, a positive effect, or even a negative effect. Conservative media would have no effect if we would have observed the same 90/10 split in an alternative universe where conservative media does not exist. It would have a positive effect if conservative viewers would have had a lower Republican voting rate but for their exposure to conservative media. And it would have a negative effect if the opposite were true.

Table 1: The Importance of Knowing the Counterfactual

Media Selection | Observed | No Effect Counterfactual | Positive Effect Counterfactual | Negative Effect Counterfactual
Conservative | 90 | 90 | 80 | 100
Mainstream | 10 | 10 | 20 | 0

Note: Cell values are the probability of voting for Republican candidates (in percent). The counterfactual columns give the potential outcomes under the different counterfactual scenarios.
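To make the arithmetic in Table 1 concrete, the short Python sketch below (the numbers come straight from the table; the labels and code are our illustration) computes the causal effect implied by each counterfactual scenario for the conservative-media audience.

```python
# Observed Republican voting rate among conservative-media consumers
# (from Table 1, in percent).
observed_conservative = 90

# What that audience would have done absent conservative media, under
# each counterfactual scenario in Table 1.
counterfactuals = {
    "no effect": 90,
    "positive effect": 80,
    "negative effect": 100,
}

for scenario, rate_if_unexposed in counterfactuals.items():
    # Causal effect = observed rate minus the counterfactual rate.
    effect = observed_conservative - rate_if_unexposed
    print(f"{scenario}: implied effect = {effect:+d} percentage points")

# Prints: no effect: +0, positive effect: +10, negative effect: -10
```

The same observed 90/10 split is thus compatible with an effect of zero, plus ten, or minus ten points, which is exactly why the correlation alone is uninformative about causation.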

So what are researchers to do? The answer is that they must find a way to construct a credible estimate of the counterfactual. Observational designs do this by comparing naturally occurring groups of individuals. In our example, the estimated counterfactual for consuming conservative news media would be the group of individuals who consume something else. The problem with this approach, as explained above, is that selection bias and endogeneity generate an inaccurate estimate of the counterfactual. If people sort into different audiences as a function of their political preferences, we cannot reasonably make the assumption that these two groups of individuals are identical save for the fact that one group happens to consume conservative media and the other happens to consume something else.2 Consequently, a common analytical strategy for observational data is to use statistical modeling to “control” for covariates that predict media consumption and political behavior as a way to account for people’s media choices and isolate the causal effects of media exposure.3 Unfortunately, this strategy makes the strong assumption that the researcher observes all of the relevant covariates, an assumption that is untestable for the vast majority of social and political phenomena (Arceneaux, Gerber, & Green, 2006; Gerber & Green, 2012). To compound matters, if selection bias is present, adding an incomplete set of covariates could actually exacerbate the bias in correlational estimates (Achen, 1986). In short, there are no easy or straightforward solutions when attempting to estimate causal effects from observational data.

Randomized experiments offer a more credible approach to estimating the counterfactual. In an experiment, the researcher directly manipulates the variable of theoretical interest. In a media effects study, the variable of theoretical interest is typically exposure to media content, so the researcher exposes study participants to different media content. In doing so, the researcher—and not the study participant—is in full control of the selection process, obviating concerns about selection bias and endogeneity. Randomized experiments assign subjects to different levels of the variable of theoretical interest at random. Random assignment creates groups of individuals with similar distributional characteristics. For instance, across experimental groups, random assignment ensures that there are roughly the same proportions of liberals and conservatives, young and old, politically aware and unaware, and so on. As a result, random assignment offers a method for simulating the counterfactual. Although we are not able to observe directly people’s potential outcomes, we are able to observe how similar groups of individuals respond to different stimuli. Had we randomly assigned individuals to different treatment groups and then failed to administer the treatment, we would expect the outcomes to be the same across experimental groups. Therefore, if we do observe differences in outcomes across treatment groups after administering the treatment, we can reasonably make the inference that it was caused by the treatment.4
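The following simulation sketch (ours, with every quantity invented for illustration) shows the logic in miniature: a lurking trait drives both media choice and the outcome, so the naive observational comparison is badly biased, while a comparison based on random assignment recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 5.0  # invented "true" effect of exposure

# A lurking trait (say, prior conservatism) shapes both the outcome and
# the decision to seek out the content.
trait = rng.normal(size=n)
y0 = 50 + 20 * trait + rng.normal(scale=5, size=n)  # outcome if unexposed
y1 = y0 + true_effect                               # outcome if exposed

# Observational world: people self-select based on the lurking trait.
selects_in = trait + rng.normal(size=n) > 0
naive = y1[selects_in].mean() - y0[~selects_in].mean()

# Experimental world: exposure is assigned by coin flip.
assigned = rng.random(n) < 0.5
randomized = y1[assigned].mean() - y0[~assigned].mean()

print(f"naive observational estimate: {naive:.1f}")   # wildly inflated
print(f"randomized estimate:          {randomized:.1f}")  # close to 5.0
```

No amount of extra data shrinks the naive estimate toward the truth here; only breaking the link between the trait and exposure—by randomization—does.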

Randomized experiments have been put to very good use studying media effects. In fact, media effects researchers embraced experimentation well before other subfields in political science. Carl Hovland and his colleagues’ (Hovland & Janis, 1959; Hovland, Janis, & Kelley, 1953; Hovland, Lumsdaine, & Sheffield, 1949) innovative experiments in the 1940s and 1950s investigated the factors that make messages more or less persuasive and generated insights that continue to animate current-day research (e.g., source credibility, one-sided versus two-sided messages, and appeals to fear). These experiments set the mold for media effects studies. For example, Hovland and Weiss (1951) isolated the effects of source credibility by asking college students to read persuasive messages on various topics (e.g., selling antihistamine drugs over the counter), with a randomly selected group led to believe that the messages came from a high-credibility source and the remainder led to believe that the messages came from a low-credibility source. Iyengar and Kinder (1987) used similar techniques in their series of experiments on the effects of television news broadcasts some 30 years later. Although the subjects were non-student adults and the medium was different, they edited the news broadcasts in ways that isolated a single factor (e.g., the presence or absence of a story about the economy) and exposed different experimental groups to different versions of nearly identical broadcasts.

This simple experimental design has served media effects researchers quite well over the past 75 years. It has been used to study the effects of television news (e.g., Iyengar & Kinder, 1987; Mutz, 1998), political advertising (see Lau, Sigelman, & Rovner, 2007), persuasive messages (see Chong & Druckman, 2007), and many other phenomena in the study of political psychology and communication. Yet as the media landscape becomes more fragmented and people have more opportunities to personalize their diet of political information, it is necessary to consider the limitations of the traditional media effects experiment.

Does the Expansion of Media Choices Pose a Problem for Media Effects Experiments?

Although the recent expansion of news and entertainment choices has been impressive, people have always had choices. Even in the 1950s, when there were a limited number of options on radio and television, people could choose from a number of periodicals with identifiable ideological agendas (Noel, 2014), and newspapers continued to reflect editorial position-taking and partisanship through the 20th century (Druckman & Parkin, 2005). Researchers at this time uncovered evidence that people gravitated toward like-minded political messages (Klapper, 1960; Lazarsfeld, Berelson, & Gaudet, 1948). Yet with the growth of television as the medium of choice for news in the latter half of the 20th century, a solid majority of Americans tuned into the evening news on the major broadcast networks and scholars either became less concerned about or less interested in selectivity. The reach of broadcast network news meant that these shows had the power to guide the priority of issues and the definition of problems (Iyengar & Kinder, 1987; Mutz & Martin, 2001).

Broadcast news networks no longer enjoy such a privileged position in American society. Purveyors of partisan and ideological content on cable networks and Internet sites have broken the near-monopoly of mainstream news. There is some debate over whether people actively seek out like-minded political information (Garrett, Carnahan, & Lynch, 2013; Gentzkow & Shapiro, 2011; Stroud, 2011). Yet from the perspective of research design, people’s mix of partisan news is less important than the fact that news audiences have been shrinking as entertainment options have expanded (Hindman, 2008; Prior, 2007).

Traditional media effects experiments generally take place in laboratories or are embedded in surveys. In these settings, everyone in the treatment group is exposed to the experimental stimuli. To the extent that the “captive audience” in the experimental setting differs from the audience one would observe in naturalistic settings, experimental findings are less likely to generalize to real-world contexts (Hovland, 1959). As entertainment seekers remove themselves from news audiences, traditional experimental designs are more likely to overestimate the effects of political news for two reasons. First, and most straightforwardly, traditional experiments overestimate the rate of exposure. Captive audience designs simulate a world where everyone is exposed. In the real world, however, only a fraction of the country consumes news, especially partisan news. Consequently, traditional experiments that seek to estimate the effects of partisan news tell us more about what would happen if everyone started watching partisan news than about its effects in the media environment as it exists (Arceneaux & Johnson, 2013).

There is a second reason that captive audience designs may overestimate the effects of news media. As Hovland (1959, p. 9) explains, “Some of the individuals in a captive audience experiment would, of course, expose themselves in the course of natural events to a communication of the type studied; but many others would not. The group which does expose itself is usually a highly biased one,” because people tend to seek out media that fits with their interests. The types of people who gravitate to news tend to have stable political preferences and, relative to those who gravitate toward entertainment, are less likely to change their opinions. Entertainment seekers are more likely to change their opinions but also less likely to expose themselves to mind-altering information (Zaller, 1992). Although there may be exceptions to this general pattern (see Levendusky, 2013), it should hold for established political debates that dominate news reporting (see Arceneaux & Johnson, 2013). As such, traditional experimental designs end up artificially inflating the proportion of people in the captive audience who are more likely to be open to persuasion.

Incorporating Choice into Experimental Designs

In experimental design, as in architecture, form should follow function. Captive audience designs are most appropriate when researchers are interested in establishing the mere existence of a political communication effect. They are less appropriate when one wants to make generalizations to real-world contexts where selective exposure predominates. In this instance, researchers should incorporate choice into their experimental design. The challenge is doing so in a way that maintains the integrity of random assignment and, therefore, our ability to construct a credible counterfactual. In this section we sketch out various approaches that bring choice into the design.

Field Experiments

Figure 2 displays a simplified representation of a randomized field experiment with a pure control group. Like most experiments, the research design begins with a target population (e.g., individuals living in a particular geographic area) and randomly assigns individuals (or groups of individuals) to a treatment group or a control group (dotted lines). The key feature of a field experiment is that the treatment takes place in a real-world context, often without the knowledge of study participants. As a result, field experiments incorporate choice in the most natural way possible: people decide whether they are going to expose themselves to the treatment (solid lines). From a statistical standpoint, the problem of non-compliance with the treatment makes analyzing field experiments more complicated than the typical forced exposure laboratory study. However, from a theoretical standpoint, non-compliance in field experiments allows us to study the effects of media content in the face of selective exposure.


Figure 2: Schematic of randomized field experiment with treatment and control group

(Note: Dotted lines indicate random assignment and solid lines indicate self-selection).

From the potential outcomes perspective, there are four types of compliance, as displayed in Table 2. Compliers receive the treatment if they are assigned to the treatment condition and do not receive it if they are assigned to the control condition. Never-takers refuse the treatment even if they are assigned to the treatment condition. Defiers do the opposite of what they are asked to do (take the treatment in the control condition and refuse it in the treatment condition). Always-takers always take the treatment irrespective of their assignment status. Because these are potential types of compliance, we only observe an individual’s behavior once. We cannot know how someone in the control group would have behaved if they were assigned to the treatment group. Consequently, we only observe compliance or non-compliance. In the treatment group we cannot sort out never-takers from defiers or always-takers from compliers, and in the control group we cannot sort out always-takers from defiers or never-takers from compliers.

Table 2: Potential Types of Compliance

Behavior if Assigned to Treatment | Not Treated if Assigned to Control | Treated if Assigned to Control
Not Treated | Never-taker | Defier
Treated | Complier | Always-taker
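The typology in Table 2 amounts to a simple lookup on a unit’s pair of potential treatment statuses, as the brief sketch below illustrates (our own construction, not code from the literature).

```python
def compliance_type(treated_if_control: bool, treated_if_treated: bool) -> str:
    """Classify a unit by its pair of potential treatment statuses (Table 2).

    For a real subject only one of these two statuses is ever observed,
    which is why the types cannot be identified individually.
    """
    if treated_if_treated and not treated_if_control:
        return "complier"
    if not treated_if_treated and not treated_if_control:
        return "never-taker"
    if treated_if_treated and treated_if_control:
        return "always-taker"
    return "defier"  # treated under control, untreated under treatment

assert compliance_type(treated_if_control=False, treated_if_treated=True) == "complier"
assert compliance_type(treated_if_control=True, treated_if_treated=False) == "defier"
```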

Some field experimental designs make it impossible to observe the behavior of defiers or always-takers. These designs exhibit one-sided non-compliance. For example, Panagopoulos and Green (2008) studied 28 cities with upcoming mayoral elections, randomly assigning 14 to receive non-partisan radio advertisements about the election that mentioned the candidates running and their party affiliation (if applicable) and the remaining 14 to a control group that received no advertisements. The only way one could be an always-taker or defier in the control group would be to purposely travel to a city in the treatment group, which, given the fact that the study was not widely known, would have been highly improbable if not impossible.

Designs that cannot restrict access to the treatment create the conditions for two-sided non-compliance. For example, Gerber, Karlan, and Bergan (2009) investigated the effects of political news by randomly assigning individuals living near Washington, D.C., who were not newspaper subscribers to receive the conservative Washington Times, the relatively more liberal Washington Post, or no newspaper subscription (control group). Because anyone can obtain a subscription to either of these newspapers by merely requesting one and making a payment, Gerber et al. could not prevent individuals in the control group from doing just that.

How does one analyze field experiments with non-compliance? Comparing the mean outcome in the treatment group to the mean outcome in the control group is the most straightforward approach. Although this is the same method often used to analyze forced-exposure experiments, the interpretation of the result is different. In the presence of non-compliance, the difference-of-means approach does not tell us—as it does in a forced exposure design—how everyone in the study would respond to the treatment on average (known as the average treatment effect, or ATE). Rather, it tells us the effect on those we intended to treat—some of whom we did (the compliers and always-takers) and some of whom we did not. For this reason, statisticians refer to this estimator as the intent-to-treat (ITT) analysis. Random assignment allows us to assume that we observe roughly equal proportions of compliers, defiers, never-takers, and always-takers across the treatment groups. Consequently, the behavior of these individuals should average out, and we should observe the overall effects of the treatment, given non-compliance.
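In code, the ITT is just a difference of means by assignment, ignoring who actually complied. A minimal sketch with wholly invented records:

```python
import numpy as np

# Invented records: assignment (1 = treatment group), actual exposure,
# and a 0/1 outcome. Exposure diverges from assignment because of
# non-compliance, but the ITT deliberately ignores exposure.
assigned = np.array([1, 1, 1, 1, 0, 0, 0, 0])
exposed = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # unused by the ITT
outcome = np.array([1, 1, 0, 1, 0, 1, 1, 0])

itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
print(f"ITT estimate: {itt:.2f}")  # 0.75 - 0.50 = 0.25
```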

For researchers interested in the overall effects of political media given selective exposure, ITT offers the answer. It represents the effects of some stimulus in a context where some people choose to not receive it as a function of their own agency. Returning to the radio ads example introduced above, Panagopoulos and Green (2008) found that every gross ratings point—a standard way of quantifying how often ads are aired in a media market—of non-partisan ads that mentioned the names of both mayoral candidates decreased the incumbent mayoral candidate’s vote share by 0.078 percentage points. This result demonstrates that merely informing voters who the challenger is helps boost support for that candidate. More important for the purpose of this article, these results also indicate the effects of political advertisements in elections where only a fraction of the electorate likely heard those ads.
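To put the coefficient in perspective, a hypothetical buy of 100 gross ratings points (a figure we choose purely for illustration) would imply a shift of 100 × 0.078 ≈ 7.8 percentage points away from the incumbent.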

Although relatively small in number, field experiments using ITT analysis have informed our knowledge of a range of media effects. Encouraging people to read either the Washington Post or the Washington Times in 2005 increased support for Democratic candidates, suggesting that as far as newspapers are concerned, ideological slant matters less than the topics they cover (Gerber, Karlan, & Bergan, 2009). A number of field experiments, in addition to Panagopoulos and Green (2008), have assessed the effects of radio programs on political attitudes and behavior. A field experiment conducted in post-genocide Rwanda discovered that exposure to a radio soap opera, even though not explicitly political, increased listeners’ willingness to express dissent (Paluck & Green, 2009) but not their personal beliefs (Paluck, 2009). In the war-torn Democratic Republic of Congo, a political talk radio program that encouraged listeners to be more tolerant actually backfired, causing some listeners to be less tolerant (Paluck, 2010). In Ghana, inadvertent exposure to opposing views on partisan talk radio randomly assigned on buses caused people to moderate their political opinions (Conroy-Krutz & Moehler, 2015; Moehler & Conroy-Krutz, 2015). On television, political advertisements in support of the sitting governor of Texas were randomly rolled out during the 2006 campaign. In the days following the ads, polls showed a considerable increase in support for the governor, but these gains rapidly decayed and were no longer evident after a week (Gerber, Gimpel, Green, & Shaw, 2011). Moving to the world of social media, Bond et al. (2012) partnered with Facebook to conduct a large-scale randomized experiment in which some users saw an encouragement to vote at the top of their newsfeed during the 2010 U.S. congressional elections. The authors found that the encouragement to vote only increased voter turnout if the banner also announced which of their friends had reported that they voted.

Although the ITT offers useful information about the effects of political media given selective exposure, many researchers remain interested in estimating media effects among those who choose to be exposed to them. In order to accomplish this, researchers must be able to observe exposure in the treatment group. For many media effects field experiments, it is difficult to obtain this information because one cannot feasibly observe whether someone reads a newspaper or watches an advertisement. Albertson and Lawrence (2009) were able to collect these data by analyzing encouragement design field experiments conducted by the National Opinion Research Center. In an encouragement design, subjects are recruited (e.g., through a telephone survey) and encouraged (or not encouraged) to expose themselves to a real-world intervention (e.g., watch a particular television program). When the subjects are resurveyed after the intervention, it is possible to measure who exposed themselves to the treatment. With this information, it is possible to estimate the complier average causal effect (CACE) by using random assignment as an instrumental variable to compare the behavior of the compliers to the non-compliers (for a more in-depth discussion, see Gerber & Green, 2012). This approach comes with some assumptions. First, one must be able to assume that treatment assignment can only influence the outcome variable via the treatment administered (this is often called the exclusion restriction assumption). For instance, provided we had information about participants’ exposure in the radio ads study, we would need to assume that the advertisements themselves influenced voters, rather than the candidates changing their behavior in response to the placement of the advertisements. Second, in the case of two-sided non-compliance, we must assume that there are no defiers (this is often called the monotonicity assumption). For instance, in the NORC field experiments analyzed by Albertson and Lawrence (2009), we must assume that there are no individuals who would have chosen to watch a television show had they been assigned to the control group but not if they were assigned to the treatment group, evincing a sort of “you can’t tell me what to do” behavior. Finally, estimating the CACE assumes that the assigned units (e.g., individuals, cities, etc.) cannot influence each other’s behavior (often called the stable unit treatment value assumption). Had neighbors in Gerber et al.’s (2009) study been assigned to different experimental groups and shared their newspapers with each other, it would undermine our ability to estimate the causal effects of the newspapers separately.
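Under these assumptions, the CACE estimate reduces in the simplest case to the Wald estimator: the ITT effect on the outcome divided by the ITT effect on exposure itself. A minimal sketch, again with invented data:

```python
import numpy as np

assigned = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # random assignment
exposed = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # actual exposure
outcome = np.array([1, 1, 0, 1, 0, 1, 1, 0])

# Effect of assignment on the outcome (the ITT).
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Effect of assignment on exposure: the estimated share of compliers,
# given monotonicity (no defiers).
first_stage = exposed[assigned == 1].mean() - exposed[assigned == 0].mean()

cace = itt / first_stage
print(f"ITT = {itt:.2f}, first stage = {first_stage:.2f}, CACE = {cace:.2f}")
# ITT = 0.25, first stage = 0.50, CACE = 0.50
```

Intuitively, the ITT is “diluted” by never-takers and always-takers, whose behavior does not respond to assignment; dividing by the compliance rate undoes that dilution.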

Lab and Survey Experiments

By studying media effects in the wild, field experiments offer the most compelling experimental design for assessing the real-world effects of media. By design, field experiments obviate the captive audience problem. Nonetheless, field experiments are not always feasible or desirable. With respect to feasibility, randomized field experiments tend to be costly and logistically difficult (if not impossible). For instance, it is unlikely that many scholars will be able to convince a major political candidate or major news media outlet to randomize aspects of their campaigns or news reporting. With respect to desirability, social scientists are not simply interested in cataloging causal effects. They are also interested in developing and testing theories about how people form political attitudes and behave. Researchers generally have fewer degrees of freedom when placing an experiment in the wild than they do when placing it in a controlled setting. Yet, as we discussed above, control brings with it the problems caused by the artificiality of the experimental context, and, when it comes to understanding the effects of selective exposure, it creates the captive audience problem.

Researchers have taken essentially three approaches to incorporating choice into lab and survey experiments. The first, which we call the selective exposure experiment (Figure 3), begins with the standard forced exposure experiment in which study participants are randomly exposed to a political stimulus (e.g., a news show) or a control stimulus (e.g., an entertainment show). In order to study the effects of selective exposure, a subset of subjects is randomly assigned to a choice condition in which they can choose among the treatment and control stimuli. This experimental protocol was pioneered by Dolf Zillmann to study selective exposure (see Zillmann, Hezel, & Medoff, 1980) and modified by us to study political media effects (Arceneaux & Johnson, 2013). The design essentially embeds the failure-to-treat feature of field experiments in a lab setting by allowing a random subset of subjects the ability to choose, albeit among a highly constrained set of alternatives.


Figure 3: Schematic of selective exposure experiment

(Note: Dotted lines indicate random assignment and solid lines indicate self-selection).

There are two ways to design the choice condition: (1) allow choice throughout the session or (2) lock in one’s choice for the session. In our research we chose the first option, maximizing the mundane realism of the choice condition by simulating the fluidity of choice in natural settings. On television, people can flip among channels, choosing to stay on one option, surf among the many, or something in between. The same is true for the Internet and social media. People can always revisit a web page or a post. Accordingly, we allow people in this maximal-choice condition to consume as much or as little as they want of the political stimuli. This approach allows one to estimate the ATE of the political stimuli by comparing the forced treatment group to the forced control group. From the potential outcomes perspective, the ATE answers the question: What would happen if everyone in the sample were exposed to the treatment? By comparing the maximal-choice condition to the control group, one is able to estimate the ITT effect—the effect of political stimuli in a context where there is choice. Comparing the choice condition to the forced treatment group provides an estimate of the difference between the ATE and the ITT—the degree to which selective exposure alters the overall effect of the political stimuli. Across a number of selective exposure experiments using maximal-choice conditions, we found that including entertainment choices in the mix attenuated the effects of partisan news media (Arceneaux & Johnson, 2013).
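In terms of estimation, the three comparisons map onto three differences of means across the randomly assigned arms. A minimal sketch with invented outcome data (the arm labels and values are ours, for illustration only):

```python
import numpy as np

# Invented outcomes by randomly assigned arm (higher values = stronger
# response to the political stimulus).
arms = {
    "forced treatment": np.array([6.0, 7.0, 5.5, 6.5]),
    "forced control": np.array([4.0, 3.5, 4.5, 4.0]),
    "choice": np.array([5.0, 4.5, 4.0, 5.5]),
}
mean = {name: y.mean() for name, y in arms.items()}

ate = mean["forced treatment"] - mean["forced control"]  # if everyone watched
itt = mean["choice"] - mean["forced control"]            # effect given choice
attenuation = ate - itt  # how much selective exposure dampens the effect

print(f"ATE = {ate:.2f}, ITT = {itt:.2f}, attenuation = {attenuation:.2f}")
# ATE = 2.25, ITT = 0.75, attenuation = 1.50
```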

All experimental designs have benefits and drawbacks, and one must make inevitable trade-offs when choosing among design features. One drawback to our design is that, in mimicking natural settings, the maximal-choice condition creates a great deal of variance in how people expose themselves to the political stimuli and vitiates our ability to estimate the CACE of political stimuli among the subset of individuals who want to expose themselves to them. If one is interested in estimating the CACE, Gaines and Kuklinski (2011) suggest an alternative way to design the choice condition, which involves asking participants to make a discrete choice. Once subjects in the choice condition select a stimulus, they are fully exposed to it. This approach sacrifices some degree of mundane realism in order to allow the researcher to estimate the CACE using the same methods used in field experiments.

There are two unfortunate downsides to discrete-choice selective exposure experiments. Estimating the CACE requires a fairly large sample size (Gaines & Kuklinski, 2011), and administering the treatment directly after obtaining people’s choices may introduce demand characteristics as subjects try to guess what the researcher is studying (see Shadish, Cook, & Campbell, 2001). Consequently, other researchers have drawn on an ingenious design developed by medical researchers (Macias et al., 2009; Torgerson & Sibbald, 1998). This approach, which we dubbed the patient preference experiment (Arceneaux & Johnson, 2013), employs a standard forced exposure experiment in which participants are asked to rank (or choose among) the experimental stimuli on the pretest questionnaire (see Figure 4). By asking for participants’ preferences before randomly assigning them to treatments, we are able to estimate the effects of the treatments conditional on viewing preferences. Using subgroup analysis, researchers can simply compare the effects of the treatment among those who would have chosen it and among those who would have chosen something else. This approach has more statistical power (and therefore requires fewer observations) than the discrete-choice selective exposure experiment. The patient preference experiment also allows flexibility in where researchers place the preference questions on the pre-test instrument. It need not be placed directly before the stimulus is administered (as it is in the discrete-choice selective exposure design), which means that researchers can increase the distance between recording people’s preferences and administering the treatment to minimize demand effects. Researchers have used the participant preference experiment—or designs that closely resemble it—to study the effects of partisan news media (e.g., Arceneaux & Johnson, 2013; Druckman, Levendusky, & McLain, 2015; Levendusky, 2013) as well as the effects of political arguments (Druckman & Leeper, 2012).


Figure 4: Schematic of participant preference experiment

(Note: Dotted lines indicate random assignment and solid lines indicate self-selection).
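Analytically, the patient preference design boils down to estimating treatment effects within preference subgroups measured at pre-test. A minimal sketch, with invented preferences and outcomes:

```python
import numpy as np

# Invented records: pre-test preference, random assignment to partisan
# news (1) or entertainment (0), and an opinion outcome.
preference = np.array(["news"] * 4 + ["entertainment"] * 4)
treated = np.array([1, 1, 0, 0, 1, 1, 0, 0])
outcome = np.array([7.0, 6.0, 6.5, 6.0, 5.5, 6.5, 3.0, 3.5])

for group in ("news", "entertainment"):
    in_group = preference == group
    effect = (outcome[in_group & (treated == 1)].mean()
              - outcome[in_group & (treated == 0)].mean())
    print(f"effect among would-be {group} choosers: {effect:+.2f}")

# News seekers: +0.25 (small); entertainment seekers: +2.75 (large),
# the pattern the captive audience critique anticipates.
```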

Richard Lau and David Redlawsk introduced a third major approach to studying choices over information types and choices in the process of political decision-making with the decision board technique (see, e.g., Lau & Redlawsk, 1997, 2001). Developed for the study of decision-making processes (Carroll & Johnson, 1990), the decision board presents a grid or dynamic list of information links to participants, allowing researchers to monitor participants’ information search strategies and their use of heuristics as they collect and use political information. Decision board studies have provided great insight into the decision processes of members of the mass public and political elites (Mintz, Geva, Redd, & Carnes, 1997). They also offer a hybrid design, combining random assignment to information with carefully monitored observation of choice behavior.

As a field, we have just begun to incorporate choice into lab and survey experiments. A number of researchers are actively studying various ways to accomplish the task. For instance, Adam Berinsky and his colleagues have proposed combining aspects of the selective exposure experiment with the patient preference experiment as a way to leverage both designs’ advantages while minimizing their downsides (Knox, Yamamoto, Baum, & Berinsky, 2014). Taking a step back, Stroud, Wojcieszak, Feldman, and Bimber (2014) consider the psychological effects of offering (or failing to offer) choices in experimental settings. They find that many subjects respond to stimuli differently when they have a choice than they do when they are held captive by the experimental protocol, presenting researchers with an additional set of challenges to consider.

Natural Experiments

Although randomized experiments offer researchers the surest way to establish causal effects, in rare instances observational designs can approximate experimental designs. A natural experiment exists if researchers are able to make a credible claim that non-random assignment to some naturally occurring treatment was made in a haphazard fashion unrelated to the outcome of interest (Dunning, 2008). When researchers are fortunate, natural experiments can point to how media shape political attitudes and behavior in a messy world. Nonetheless, we must keep in mind that, as with any observational design, one can never be certain whether selection bias or endogeneity is afoot. We can only minimize these concerns.

A number of media effects studies have identified the haphazard rollout of the Fox News network in the United States as a natural experiment. In its early days, the cable news station appeared in only a handful of cable markets, and it appears that audience penetration rather than political considerations influenced its presence (DellaVigna & Kaplan, 2007). If so, one can use the presence and absence of Fox News in its initial years to gauge its effects. Extant research suggests that Fox News may have increased Republican vote share in the 2000 election (DellaVigna & Kaplan, 2007; Hopkins & Ladd, 2014; Martin & Yurukoglu, 2014) and induced members of the U.S. House to cast votes that sided with the Republican Party (Arceneaux, Johnson, Lindstädt, & Vander Wielen, 2016; Clinton & Enamorado, 2014).

Researchers have exploited similar geographic discontinuities to estimate the effects of campaign coverage on political interest (Butler & De La O, 2011), the impact of presidential advertisements on voting (Krasno & Green, 2008) and election outcomes (Huber & Arceneaux, 2007), the political effects of increasing the number of escapist entertainment options in an authoritarian context (Kern & Hainmueller, 2009), the effects of newspaper coverage (Campbell, Alford, & Henry, 1984; Mondak, 1995; Snyder & Strömberg, 2010), and the effects of selective exposure to partisan news outside of the United States (Trilling, van Klingeren, & Tsfati, 2016; Tsfati & Chotiner, 2016). Others have combined panel data with exogenous shifts in news coverage to gauge how much people learn about public policy from the news (Barabas & Jerit, 2009; Hopmann, Wonneberger, Shehata, & Höijer, 2016; Lenz, 2009), how news coverage shapes support for political leaders (Ladd & Lenz, 2009), and how it affects rates of political participation (Gentzkow, 2006).

Conclusion

The expansion of news and entertainment choices has simultaneously made the study of human agency and media effects more relevant and more difficult. As tempting as it is to use the variance in news media exposure created by agency to study its causal effects, researchers must be even more attuned to selection bias and endogeneity, which are also a product of human agency. With the experimental approach taking root in political science (Druckman, Green, Kuklinski, & Lupia, 2006), many media effects scholars are actively working to incorporate choice within the experimental framework.

In some ways, the alliance between choice and experimentation is an awkward one. The hallmark of experimentation is control. Experiments enable researchers to address the fundamental problem of causality by closely controlling the selection process. In doing so, the experimenter can rule out selection bias, endogeneity, and other confounds as alternative explanations. Introducing choice into the experimental design requires that the researcher let go of this control to some degree. Yet if done carefully, it is possible to maintain control over important aspects of the selection process—namely, randomization—while allowing some degree of human agency into the way the treatment is administered and received. Embedding choice into the experimental protocol adds complications to the design, in how treatments are administered and in the analysis of the data that result. We believe that these added complications are worth the benefits.

Predicting the future may be a foolish game, but we strongly suspect that the role that choice plays in shaping the reach and influence of news media is going to become more prominent. Fewer Americans are buying cable television subscriptions, and more are getting their entertainment through social media and algorithm-directed searches. These trends may further weaken the grip of traditional news media institutions (e.g., television channels and newspapers), while raising the importance of people’s tastes, the composition of their social networks, and the unpredictable waves of viral content that course through social media, episodically focusing people’s attention. If we are correct, there is all the more reason for political scientists and political communication scholars to attend more closely to the effects of human agency.

References

Achen, C. H. (1986). The statistical analysis of quasi-experiments. Berkeley: University of California Press.

Albertson, B., & Lawrence, A. (2009). After the credits roll: The long-term effects of educational television on public knowledge and attitudes. American Politics Research, 37(2), 275–300.

Ansolabehere, S. D., Iyengar, S., & Simon, A. (1999). Replicating experiments using aggregate and survey data: The case of negative advertising and turnout. American Political Science Review, 93(4), 901–909.

Arceneaux, K., Gerber, A., & Green, D. P. (2006). Comparing experimental and matching methods using a large-scale voter mobilization experiment. Political Analysis, 14(1), 37–62.

Arceneaux, K., & Johnson, M. (2013). Changing minds or changing channels? Partisan news in an age of choice. Chicago: University of Chicago Press.

Arceneaux, K., Johnson, M., Lindstädt, R., & Vander Wielen, R. J. (2016). The influence of news media on political elites: Investigating strategic responsiveness in Congress. American Journal of Political Science, 60(1), 5–29.

Barabas, J., & Jerit, J. (2009). Estimating the causal effects of media coverage on policy-specific knowledge. American Journal of Political Science, 53(1), 73–89.

Barker, D. C. (2002). Rushed to judgment? Talk radio, persuasion, and American political behavior. New York: Columbia University Press.

Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political communication. Journal of Communication, 58(4), 707–731.

Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E., et al. (2012). A 61-million-person experiment in social influence and political mobilization. Nature, 489(7415), 295–298.

Butler, D. M., & De La O, A. L. (2011). The causal effect of media-driven political interest on political attitudes and behavior. Quarterly Journal of Political Science, 5(4), 321–337.

Campbell, J. E., Alford, J. R., & Henry, K. (1984). Television markets and congressional elections. Legislative Studies Quarterly, 9(4), 665–678.

Carroll, J. S., & Johnson, E. J. (1990). Decision research: A field guide. Beverly Hills, CA: SAGE.

Cassino, D., Woolley, P., & Jenkins, K. (2012). What you know depends on what you watch: Current events knowledge across popular news sources. Public Mind Poll. Teaneck, NJ: Fairleigh Dickinson University.

Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review of Political Science, 10(1), 103–126.

Clinton, J. D., & Enamorado, T. (2014). The national news media’s effect on Congress: How Fox News affected elites in Congress. Journal of Politics, 76(4), 928–943.

Conroy-Krutz, J., & Moehler, D. C. (2015). Moderation from bias: A field experiment on partisan media in a new democracy. The Journal of Politics, 77(2), 575–587.

DellaVigna, S., & Kaplan, E. (2007). The Fox News effect: Media bias and voting. Quarterly Journal of Economics, 122(3), 1187–1234.

Druckman, J. N., Green, D. P., Kuklinski, J. H., & Lupia, A. (2006). The growth and development of experimental research in political science. American Political Science Review, 100(4), 627–636.

Druckman, J. N., & Leeper, T. J. (2012). Is public opinion stable? Resolving the micro/macro disconnect in studies of public opinion. Daedalus, 141(4), 50–68.

Druckman, J. N., Levendusky, M. S., & McLain, A. (2015). No need to watch: How the effects of partisan media can spread via inter-personal communication. Unpublished manuscript.

Druckman, J. N., & Parkin, M. (2005). The impact of media bias: How editorial slant affects voters. Journal of Politics, 67(4), 1030–1049.

Dunning, T. (2008). Improving causal inference: Strengths and limitations of natural experiments. Political Research Quarterly, 61(2), 282–293.

Gaines, B. J., & Kuklinski, J. H. (2011). Experimental estimation of heterogeneous treatment effects related to self-selection. American Journal of Political Science, 55(3), 724–736.

Garrett, R. K., Carnahan, D., & Lynch, E. K. (2013). A turn toward avoidance? Selective exposure to online political information, 2004–2008. Political Behavior, 35(1), 113–134.

Gentzkow, M., & Shapiro, J. M. (2011). Ideological segregation online and offline. The Quarterly Journal of Economics, 126(4), 1799–1839.

Gentzkow, M. (2006). Television and voter turnout. The Quarterly Journal of Economics, 121(3), 931–972.

Gerber, A., Karlan, D. S., & Bergan, D. (2009). Does the media matter? A field experiment measuring the effect of newspapers on voting behavior and political opinions. American Economic Journal: Applied Economics, 1(2), 35–52.

Gerber, A. S., Gimpel, J. G., Green, D. P., & Shaw, D. R. (2011). How large and long-lasting are the persuasive effects of televised campaign ads? Results from a randomized field experiment. American Political Science Review, 105(1), 135–150.

Gerber, A. S., & Green, D. P. (2012). Field experiments: Design, analysis, and interpretation. New York: Norton.

Hindman, M. (2008). The myth of digital democracy. Princeton, NJ: Princeton University Press.

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.

Hopkins, D. J., & Ladd, J. M. (2014). The consequences of broader media choice: Evidence from the expansion of Fox News. Quarterly Journal of Political Science, 9(1), 115–135.

Hopmann, D. N., Wonneberger, S., Shehata, A., & Höijer, J. (2016). Selective media exposure and increasing knowledge gaps in Swiss referendum campaigns. International Journal of Public Opinion Research, 28(1), 73–95.

Hovland, C. I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15(4), 635–650.

Hovland, C. I., & Janis, I. L. (1959). Personality and persuasibility. New Haven, CT: Yale University Press.

Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.

Hovland, C. I., Lumsdaine, A. A., & Sheffield, F. D. (1949). Experiments on mass communication, Vol. 3. Princeton, NJ: Princeton University Press.

Hovland, C. I. (1959). Reconciling conflicting results derived from experimental and survey studies of attitude change. American Psychologist, 14(1), 8–17.

Huber, G. A., & Arceneaux, K. (2007). Identifying the persuasive effects of presidential advertising. American Journal of Political Science, 51(4), 961–981.

Iyengar, S., & Kinder, D. R. (1987). News that matters: Television and American opinion. Chicago: University of Chicago Press.

Kern, H. L., & Hainmueller, J. (2009). Opium for the masses: How foreign media can stabilize authoritarian regimes. Political Analysis, 17(4), 377–399.

King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.

Klapper, J. (1960). The effects of mass media. Glencoe, IL: The Free Press.

Knox, D., Yamamoto, T., Baum, M. A., & Berinsky, A. J. (2014). Design, identification, and sensitivity analysis for patient preference trials. Unpublished manuscript.

Krasno, J. S., & Green, D. P. (2008). Do televised presidential ads increase voter turnout? Evidence from a natural experiment. The Journal of Politics, 70(1), 245–261.

Ladd, J. M., & Lenz, G. S. (2009). Exploiting a rare communication shift to document the persuasive power of the news media. American Journal of Political Science, 53(2), 394–410.

Lau, R. R., & Redlawsk, D. P. (1997). Voting correctly. American Political Science Review, 91(3), 585–599.

Lau, R. R., & Redlawsk, D. P. (2001). Advantages and disadvantages of cognitive heuristics in political decision making. American Journal of Political Science, 45(4), 951–971.

Lau, R. R., Sigelman, L., & Rovner, I. B. (2007). The effects of negative political campaigns: A meta-analytic reassessment. Journal of Politics, 69(4), 1176–1209.

Lazarsfeld, P. F., Berelson, B. R., & Gaudet, H. (1948). The people’s choice (2d ed.). New York: Columbia University Press.

Lenz, G. S. (2009). Learning and opinion change, not priming: Reconsidering the priming hypothesis. American Journal of Political Science, 53(4), 821–837.

Levendusky, M. (2013). How partisan media polarize America. Chicago: University of Chicago Press.

Macias, C., Gold, P. B., Hargreaves, W. A., Aronson, E., Bickman, L., Barreira, P. J., et al. (2009). Preference in random assignment: Implications for the interpretation of randomized trials. Administration and Policy in Mental Health and Mental Health Services Research, 36(5), 331–342.

Martin, G. J., & Yurukoglu, A. (2014, December). Bias in cable news: Real effects and polarization. NBER Working Paper No. 20798.

Mintz, A., Geva, N., Redd, S. B., & Carnes, A. (1997). The effect of dynamic and static choice sets on political decision making: An analysis using the decision board platform. American Political Science Review, 91(3), 553–566.

Moehler, D. C., & Conroy-Krutz, J. (2015). Partisan media and engagement: A field experiment in a newly liberalized system. Political Communication.

Mondak, J. J. (1995). Nothing to read: Newspapers and elections in a social experiment. Ann Arbor: University of Michigan Press.

Mutz, D. C., & Martin, P. S. (2001). Facilitating communication across lines of political difference: The role of mass media. American Political Science Review, 95(1), 97–114.

Mutz, D. C. (1998). Impersonal influence: How perceptions of mass collectives affect political attitudes. Cambridge, U.K.: Cambridge University Press.

Noel, H. (2014). Political ideologies and political parties in America. Cambridge, U.K.: Cambridge University Press.

Paluck, E. L. (2009). Reducing intergroup prejudice and conflict using the media: A field experiment in Rwanda. Journal of Personality and Social Psychology, 96(3), 574–587.

Paluck, E. L. (2010). Is it better not to talk? Group polarization, extended contact, and perspective taking in Eastern Democratic Republic of Congo. Personality and Social Psychology Bulletin, 36(9), 1170–1185.

Paluck, E. L., & Green, D. P. (2009). Deference, dissent, and dispute resolution: An experimental intervention using mass media to change norms and behavior in Rwanda. American Political Science Review, 103(4), 622.

Panagopoulos, C., & Green, D. P. (2008). Field experiments testing the impact of radio advertisements on electoral competition. American Journal of Political Science, 52(1), 156–168.

Prior, M. (2007). Post-broadcast democracy: How media choice increases inequality in political involvement and polarizes elections. Cambridge, U.K.: Cambridge University Press.

Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference (2d ed.). Boston: Wadsworth.

Snyder, J. M., Jr., & Strömberg, D. (2010). Press coverage and political accountability. Journal of Political Economy, 118(2), 355–408.

Stroud, N. J., Wojcieszak, M., Feldman, L., & Bimber, B. (2014). Why choice matters in experimental designs with political stimuli. Paper presented at the American Political Science Association, Political Communication Division, Washington, DC.

Stroud, N. J. (2011). Niche news: The politics of news choice. Oxford: Oxford University Press.

Torgerson, D., & Sibbald, B. (1998). Understanding controlled trials: What is a patient preference trial? British Medical Journal, 316(7128), 360.

Trilling, D., van Klingeren, M., & Tsfati, Y. (2016). Selective exposure, political polarization, and possible mediators: Evidence from the Netherlands. International Journal of Public Opinion Research.

Tsfati, Y., & Chotiner, A. (2016). Testing the selective exposure–polarization hypothesis in Israel using three indicators of ideological news exposure and testing for mediating mechanisms. International Journal of Public Opinion Research, 28(1), 1–24.

Vavreck, L. (2007). The exaggerated effects of advertising on turnout: The dangers of self-reports. Quarterly Journal of Political Science, 2(4), 325–343.

Zaller, J. (1992). The nature and origins of mass opinion. Cambridge, U.K.: Cambridge University Press.

Zillmann, D., Hezel, R. T., & Medoff, N. J. (1980). The effect of affective states on selective exposure to televised entertainment fare. Journal of Applied Social Psychology, 10(4), 323–339.

Notes:

(1.) One could make a parallel example for negative selection bias. People who watch the conservative cable network Fox News tend to know less about politics than people who do not watch Fox News (Cassino, Woolley, & Jenkins, 2012). If uninformed people are more likely to select Fox News than more informed individuals, the correlation merely shows that Fox News viewers are different from non-viewers. Indeed, Fox News could even increase the political knowledge of those individuals.

(2.) Stated more formally, the strong ignorability assumption requires the treatment variable (e.g., exposure to conservative media) to be uncorrelated with individuals’ potential outcomes (Rosenbaum & Rubin, 1983).

(3.) This approach makes the conditional ignorability assumption, which requires the treatment variable to be uncorrelated with individuals’ potential outcomes after conditioning on covariates (Rosenbaum & Rubin, 1983).
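In potential outcomes notation (a standard formalization, not notation used in the original notes), with $Y(1)$ and $Y(0)$ denoting the potential outcomes, $D$ the treatment, and $X$ the observed covariates, the independence conditions in notes 2 and 3 can be written as:

$$\bigl(Y(1),\, Y(0)\bigr) \;\perp\; D \qquad \text{(strong ignorability, note 2)}$$

$$\bigl(Y(1),\, Y(0)\bigr) \;\perp\; D \mid X \qquad \text{(conditional ignorability, note 3)}$$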

(4.) In a single experiment, random assignment does not guarantee that experimental groups will be identical in every way. Sampling variability may cause some groups to differ from one another. Fortunately, randomization enables researchers to quantify the likelihood that differences in outcomes among experimental groups occurred by chance (as opposed to being caused by the treatment).