Media, Electoral Accountability, and Issue Voting

Summary and Keywords

Issue voting concerns the extent to which citizens reward or punish elected officials for their actions or inaction on legislative issues. There are debates about styles of issue voting as well as whether it takes place in the United States, but nearly all theoretical models elevate the role of political knowledge. That is, voters must know where politicians stand on policy issues as well as their own positions. While there are a variety of ways citizens could learn about policy positions and actions, the mass media are presumed to play an important role. Yet, demonstrating the empirical linkages has been difficult in the past due to ever-present challenges with data and research designs. More research is needed to understand the various mechanisms underpinning representative democracy.

Keywords: political knowledge, information, voting behavior, media

Introduction

Most conceptions of democracy assume a relationship between elected officials and the people they represent. This “electoral connection” follows a simple logic: voters elect candidates who represent their interests and punish those who do not (e.g., by voting them out of office). According to this view, citizens lead—that is, they express policy preferences and then “judge, compare, and vote on candidates’ policy platforms” (Lenz, 2009, p. 1). This idea is enshrined in the spatial theory of voting, the central assumption of which is that citizens cast their vote for the candidate whose policy position is closest to their ideological views. An alternative but normatively contested possibility is that citizens follow the views of their preferred politician or party (i.e., they adopt the policy positions of their favorite politician).

In order for citizens to “lead,” however, they need to know where their representatives stand on the major issues of the day, or, at a minimum, be familiar with their representative’s ideological tendencies. Variously called “policy voting” or “issue voting,” the phenomenon implies that ordinary people are able to hold elected officials accountable through their voting decisions. Recent research on issue voting in the United States is reviewed here.1 The discussion is organized around several gaps in the literature that relate to the mass media and voter learning.2 After the major theoretical models are introduced, the empirical evidence regarding the extent of issue voting is examined. That evidence is somewhat mixed, but representatives do seem to be rewarded or punished on the basis of their voting behavior—at least some of the time and by certain kinds of voters. That said, the existing literature is largely silent on how voters acquire information about their representatives, a deficit that might be corrected by greater attention to the mass media. Finally, several challenges (related to data, design, and statistical considerations) that scholars are likely to face as they continue to study this important topic are considered.

Varieties of Issue Voting

Before turning to the question of whether citizens use information from the media in voting decisions, we must first define some central concepts. There are many theories of issue (or proximity) voting. The spatial theory associated with Downs (1957) posits that voters minimize the difference between their ideal point and the elected official’s estimated position. This is a precise calculus that would seem to require a considerable amount of political information as well as sophistication. Variants on this “gold standard” include the Grofman (1985) “discounting” model, as in discounting campaign pledges (Adams, Merrill, & Grofman, 2005), which specifies that voters choose the candidate they believe will move policy closest to their ideal point. Typically this means voters choose a candidate whose own position is more extreme than their own. This model imposes even greater cognitive burdens on the voter since she has to know whether the candidate in question has the wherewithal to move the status quo. Another variant is the Rabinowitz and MacDonald (1989) “directional” model. Their theoretical contribution makes fewer demands upon citizens because it assumes only that voters can place candidates to the left or right of the ideological midpoint, and being on the same side becomes the basis for vote choice. However, nearly all of the models privilege what Barabas (2008, pp. 198–199) calls “positional knowledge,” which is an accurate sense of where political actors stand on political issues, rather than other types of knowledge such as civics awareness, policy details, or knowledge of recent events that require surveillance on the part of citizens (for more, see Barabas, Jerit, Pollock, & Rainey, 2014).
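
To make the differences between these decision rules concrete, the sketch below implements each on a one-dimensional left-right scale. The positions, status quo, neutral point, and discount factor are hypothetical values chosen for illustration, not figures drawn from the studies cited above.

```python
# Minimal sketch of three issue-voting decision rules on a left-right scale.
# All numbers below are hypothetical.

def proximity_utility(voter, candidate):
    """Downsian proximity: prefer the candidate closest to the voter's ideal point."""
    return -abs(voter - candidate)

def directional_utility(voter, candidate, neutral=0.0):
    """Directional voting: reward candidates on the voter's side of the neutral
    point, more strongly the more intense both positions are."""
    return (voter - neutral) * (candidate - neutral)

def discounting_utility(voter, candidate, status_quo, discount=0.5):
    """Discounting: the candidate is expected to move policy only part of the way
    from the status quo toward the announced platform."""
    expected_policy = status_quo + discount * (candidate - status_quo)
    return -abs(voter - expected_policy)

# A voter at +0.3 chooses between a moderate (+0.2) and an extreme (+1.0) candidate;
# the status quo sits at -0.5.
voter, moderate, extreme, status_quo = 0.3, 0.2, 1.0, -0.5

print("proximity   ->", "moderate" if proximity_utility(voter, moderate) > proximity_utility(voter, extreme) else "extreme")
print("directional ->", "moderate" if directional_utility(voter, moderate) > directional_utility(voter, extreme) else "extreme")
print("discounting ->", "moderate" if discounting_utility(voter, moderate, status_quo) > discounting_utility(voter, extreme, status_quo) else "extreme")
# Proximity favors the moderate; directional and discounting favor the extreme
# candidate, illustrating why discounting voters may prefer candidates whose
# platforms are more extreme than their own views.
```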

There have been many attempts to adjudicate between these models, and we do not review them here (but see Strömberg, 2015). However, Tomz and Van Houweling (2008) survey much of the work in this area in their experimental study of the frequency with which voters engage in different types of issue voting. They conclude that proximity voting is the most commonly observed decision strategy, used far more widely than discounting or directional voting. Accordingly, most of the discussion that follows centers on proximity-based issue voting.

Does Issue Voting Take Place?

Does issue voting occur—that is, are citizens able to judge, compare, and vote for candidates on the basis of their policy views? One may wonder how such an important topic could remain unsettled, but as will be discussed, there are methodological challenges involved in answering this question (see Lenz, 2012 or Mitchell, 2009 for discussion). Perhaps as a result, the existing literature offers a range of (sometimes contradictory) answers to the question of whether issue voting occurs.

Early research cast doubt upon the notion that American citizens could identify where their elected representatives stood, at least on the issues of social welfare, racial integration, and foreign policy (Miller & Stokes, 1963). This lack of knowledge would seem to preclude issue voting (e.g., see Mitchell, 2009). However, small elite and mass sample sizes in the Miller and Stokes (1963) study rendered the original conclusions ambiguous (Erikson, 1978). In recent years, new methods and survey data have allowed political scientists to revisit this topic with greater precision. For example, Ansolabehere and Jones (2010) improve upon the Miller and Stokes (1963) study with surveys that ask people about specific roll-call votes that congressional legislators faced in the year leading up to the survey. The surveys measure respondents’ preferences on these issues (i.e., how they would vote if faced with the decision), their beliefs about how their legislators voted, and the legislators’ actual voting behavior. Across nearly a dozen roll-call votes, Ansolabehere and Jones (2010) find evidence for issue voting. Nearly all the respondents in their surveys had preferences on important bills before Congress, and most held beliefs about their legislators’ roll-call votes that they in turn used to hold their legislators accountable (in terms of vote choice). According to the authors, this constitutes “strong evidence that constituents choose representatives with whom they agree and vote against those with whom they disagree” (Ansolabehere & Jones, 2010, p. 592; also see Ansolabehere, Rodden, & Snyder, 2008).3
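
The measurement logic of this design can be summarized in a few lines of code. The sketch below uses simulated data (not the CCES) to show how a perceived-agreement score across roll calls might be constructed and related to support for the incumbent; all quantities are invented for illustration.

```python
# Illustrative sketch with simulated data (not the CCES): build a perceived-agreement
# score across roll calls and relate it to reported support for the incumbent.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_bills = 1000, 10

own_preference = rng.integers(0, 2, size=(n_respondents, n_bills))   # how the respondent would vote
perceived_vote = rng.integers(0, 2, size=(n_respondents, n_bills))   # belief about the incumbent's vote

# Perceived agreement: share of bills on which the respondent believes the
# incumbent voted the way the respondent would have.
agreement = (own_preference == perceived_vote).mean(axis=1)

# Simulate vote choice that rises with perceived agreement (the issue-voting pattern).
p_support = 1 / (1 + np.exp(-(4 * agreement - 2)))
supports_incumbent = rng.binomial(1, p_support)

low, high = agreement < 0.5, agreement >= 0.5
print("incumbent support, low perceived agreement :", round(supports_incumbent[low].mean(), 2))
print("incumbent support, high perceived agreement:", round(supports_incumbent[high].mean(), 2))
```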

However, several recent studies challenge this optimistic conclusion. Hollibaugh, Rothenberg, and Rulison (2013) find that accountability is conditional on whether there is a viable challenger who better represents the district’s interests. Likewise, Hawley (2013) uses the same data source as Ansolabehere and Jones (the 2006 Cooperative Congressional Election Study) and finds little evidence of issue voting among Latinos. In particular, Latinos were not more likely to vote against Republican incumbents with anti-immigration records.4 Finally, the wide-ranging empirical analyses in Gabriel Lenz’s (2012) Follow the Leader call into question the ability of citizens to hold public officials accountable. Lenz seeks to determine whether people bring their support for politicians in line with their earlier stated views (“leading”) or whether they modify their opinions to match their party identification or candidate preferences (“following”). Across a variety of contemporary and historical cases, Lenz (2012) shows that people change their opinions to be in line with their party allegiances or prior candidate evaluations, though this pattern is most evident in the policy domain. When it comes to performance issues such as the economy, voters exert more accountability over elected officials.5

The work of Jessee (2009, 2010) also offers a nuanced view of electoral accountability. Using a data set that allows him to create measures of individuals’ and elected officials’ ideal points (based on issues that came before the U.S. Senate), Jessee evaluates whether citizens’ vote choice in the 2004 election was consistent with spatial theories of voting. Overall, there was a tendency for people to vote for the party that was closest to them on major policy issues. However, partisans (especially those with low levels of political knowledge) display a bias: they show a preference toward the candidate of their party, even in situations in which the other party’s candidate is ideologically closer to their own position (see Simas, 2013 for related findings on the U.S. House of Representatives). Taken together, the evidence for issue voting is varied, with scholars reaching slightly different conclusions depending on which voters they study, the data and methods employed, and even the types of outcomes examined.

Insofar as electoral accountability does take place through issue voting—and this seems to be the case at least some of the time and for certain kinds of voters—the existing literature is largely silent on how people acquire information about their representatives. For example, Ansolabehere and Jones (2010, p. 596) state that they “are agnostic about how people learn about the voting behavior of their members of Congress. We suspect it is based partly on facts learned from the media and campaigns and partly on inferences.” Yet there is enough heterogeneity in the empirical record that it may be fruitful to take more explicit account of the information environment. Why is it, for example, that people lead on performance topics but not policy issues (Lenz, 2012)? There may be differences in how the mass media cover these topics, either in terms of the volume of coverage or some other feature of media reporting (e.g., Barabas & Jerit, 2009; Jerit, 2009; Soroka, 2006), but this remains an open question. Similarly, the mass media are the presumed, though untested, mechanism in Nyhan, McGhee, Sides, Masket, and Greene’s (2012) examination of issue voting on controversial legislation. That study found that legislators who are out of step, even on a single salient vote, can be voted out of office. Nyhan et al. (2012, p. 863) speculate that this is because of “the publicity that [such votes] generate in the news media and in campaign communications” though they do not establish this relationship with their data. As will be elaborated, closer attention to the mass media might illuminate the conditions under which voters hold their elected officials accountable.

The Mass Media and Voter Learning: The Missing Link?

Given most citizens’ low levels of general political knowledge (Delli Carpini & Keeter, 1996) and the cognitive demands of issue voting (Mitchell, 2009), it is not a foregone conclusion that the typical citizen will be able to punish or reward representatives on the basis of their legislative record. For example, a non-trivial number of respondents in the Ansolabehere and Jones (2010) study either had no opinion or were mistaken about how their representative voted on salient roll-call votes (17% and 25%, respectively).6 There is some evidence that people with higher levels of general political knowledge are better able to choose the candidate who is closest to their ideological preferences (Jessee, 2009; Simas, 2013; Singh & Roy, 2014). But one recent study found that the group of people who were the most likely to know how their senator voted—namely, the politically interested—were also the most likely to be misinformed when their senator took an “atypical” position (Dancey & Sheagley, 2013). Thus, one important task for future researchers is to develop a better understanding of the relationship between a person’s level of political knowledge or political engagement and his or her ability to hold elected officials accountable.

Furthermore, while a variety of studies purport to show that voters learn over the course of an election campaign (e.g., Craig, Kane, & Gainous, 2005; Hirano, Lenz, Pinkovskiy, & Snyder, 2015; Hopmann, Wonneberger, Shehata, & Höijer, 2016; Patterson & McClure, 1976; Sears & Chaffee, 1979), there is debate over whether learning affects how people vote. Lenz (2012, p. 109) is perhaps the most pessimistic, stating: “Even when voters have just learned candidates’ positions on the most prominent issues in these elections, they do not change their votes or their candidate/party evaluations accordingly” (also see Mitchell, 2009). In contrast, in their study of statewide primary campaigns, Hirano et al. (2015) report that voters learn about the ideologies of candidates and this learning influences vote choice in Senate and gubernatorial contests. Some portion of this confusion likely stems from variation in how researchers measure knowledge and learning. For example, Hirano et al. (2015) operationalize learning as an increase in the ability of survey respondents to place the conservative candidates to the right of the liberal candidates. But this approach may be sensitive to the initial placements of survey respondents, the actual positions of the candidates in any given year, or both factors.
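
A simple version of that operationalization is sketched below with made-up placements: learning is measured as the between-wave increase in the share of respondents who order the candidates correctly on a left-right scale. The scale, placements, and handling of “don’t know” responses are assumptions for illustration, not the Hirano et al. (2015) coding rules.

```python
# Hedged sketch (invented placements) of a correct-ordering learning measure:
# the share of respondents who place the conservative candidate to the right
# of the liberal candidate, compared across survey waves.
import numpy as np

def correct_ordering_rate(liberal_placements, conservative_placements):
    """Share of respondents who order the two candidates correctly on a 1-7 scale."""
    lib = np.asarray(liberal_placements, dtype=float)
    con = np.asarray(conservative_placements, dtype=float)
    answered = ~(np.isnan(lib) | np.isnan(con))        # drop "don't know" responses
    return np.mean(con[answered] > lib[answered])

# Wave 1 versus wave 2 placements for five hypothetical respondents.
wave1 = correct_ordering_rate([4, 5, np.nan, 3, 6], [5, 4, 6, 5, np.nan])
wave2 = correct_ordering_rate([3, 4, 2, 3, 5], [6, 5, 6, 5, 7])
print(f"wave 1: {wave1:.2f}, wave 2: {wave2:.2f}, learning: {wave2 - wave1:+.2f}")
```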

A more pressing challenge to understanding the conditions under which citizens engage in issue voting involves theorizing and measuring the role of the mass media. Nearly all previous work acknowledges that the mass media play a central role in disseminating information about legislator behavior. However, the conceptualization of the “media” or “information environment” is rather crude, with scholars either asserting that there was coverage of a particular vote or piece of legislation (e.g., Lenz, 2012; Nyhan, McGhee, Sides, Masket, & Greene, 2012), including a proxy for the information environment, such as level of competition across the U.S. states (e.g., Jones, 2013), or using a self-reported measure of media attention to represent a voter’s likelihood of being exposed to relevant news coverage (e.g., Alvarez & Gronke, 1996; Hawley, 2013). It is important to operationalize “media coverage” more explicitly, however, because that may hold the key to understanding when knowledge and engagement facilitate or hinder issue voting (see Jerit & Barabas, 2011 for a lengthy discussion as well as Dancey & Sheagley, 2013 or Elenbaas, de Vreese, Schuck, & Boomgaarden, 2014).

Although most researchers working in this area do not include data regarding news coverage of legislator behavior, there are some notable exceptions (e.g., Arnold, 2004; Hutchings, 2003; Lipinski, 2001). For example, Snyder and Strömberg (2010) show that when press coverage of members of Congress in the United States declines, citizens are less likely to recall the name of their representative and less likely to rate him or her. Likewise, Wachtel and Barabas (2012) augment the 2006 Cooperative Congressional Election Study with data about how often the vote record of the Senate incumbent was portrayed in the news.7 Using data from InfoWeb NewsBank and the Audit Bureau of Circulations, Wachtel and Barabas (2012) conducted a content analysis of the top two circulating newspapers from each state for one week before and after the Senate vote. Thus, the authors have data on how much news coverage respondents in any given state might have been exposed to in the two-week period bracketing the Senate vote. Wachtel and Barabas (2012) report that survey respondents in information-rich environments were better able to reward senators who champion their interests in Congress and punish those who do not (see Hutchings, 2003 for a related finding regarding issue publics). The Wachtel and Barabas (2012) study is an advance over work that assumes the relevant information was available to voters. However, important questions remain as to which aspects of the news merit scholarly attention. While it is natural to consider the number of news stories mentioning a particular vote or a legislator’s behavior, other content, such as frames or persuasive arguments, might provide information that voters can use to make inferences about a legislator’s behavior. In general, there is a dearth of theorizing about which aspects of the media environment are most relevant for understanding voter behavior in this domain.8
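
The basic data-management step in this kind of study, merging a state-level content analysis onto individual survey respondents, can be illustrated with a short, entirely hypothetical example. The story counts, states, newspapers, and knowledge measure below are invented and do not reproduce the Wachtel and Barabas (2012) data.

```python
# Hypothetical example of linking a state-level content analysis to survey data:
# count stories about a Senate roll call in each state's top newspapers during a
# two-week window, merge onto respondents, and compare recall by coverage level.
import pandas as pd

stories = pd.DataFrame({
    "state": ["FL", "FL", "OH", "MN"],
    "newspaper": ["Paper A", "Paper B", "Paper C", "Paper D"],
    "vote_stories": [12, 7, 2, 0],               # stories mentioning the vote (invented counts)
})
coverage = stories.groupby("state", as_index=False)["vote_stories"].sum()

respondents = pd.DataFrame({
    "resp_id": [1, 2, 3, 4],
    "state": ["FL", "OH", "MN", "FL"],
    "recalls_vote": [1, 0, 0, 1],                # correctly recalls the senator's vote
})
merged = respondents.merge(coverage, on="state", how="left")
merged["info_rich"] = merged["vote_stories"] >= merged["vote_stories"].median()
print(merged.groupby("info_rich")["recalls_vote"].mean())
```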

Methodological Challenges: Data, Design, and Statistical Considerations

Survey Data

Many of the studies reviewed rely upon survey data collected by online polling organizations, particularly the data collection efforts of YouGov (formerly “Polimetrix”), the survey firm that regularly conducts what is known as the Cooperative Congressional Election Study (hereafter, CCES). This survey is a collaborative effort that pools resources from dozens of universities and organizations in regularly scheduled election surveys, providing common content to all consortium members as well as the opportunity to pose more detailed questions to sub-samples of respondents. Given the reliance of numerous authors upon CCES, and YouGov data more generally (e.g., Ansolabehere & Jones, 2010; Dancey & Sheagley, 2013; Hawley, 2013; Hollibaugh, Rothenberg, & Rulison, 2013; Jones, 2013; Simas, 2013; Jessee, 2009), it is important to note some of the strengths and weaknesses of this data source.

Survey response rates have been falling in the 21st century for a variety of reasons, including the introduction of new technologies such as cell phones and caller identification systems that allow potential respondents to screen incoming calls (Kohut, Keeter, Doherty, Dimock, & Christian, 2012). In response to these trends, survey firms such as YouGov have created their own online panels of potential respondents who take surveys in exchange for payment, or perhaps because they have an interest in the topic or in expressing their opinions. As of 2015, YouGov has an online panel of over 3 million respondents in more than 30 countries, and the U.S. panel has over 1.6 million respondents.9

Unlike most other surveys, the CCES is large enough to give scholars many respondents per district, which addresses what has been called the “Miller and Stokes problem.” Early scholars studying congressional representation (e.g., Miller & Stokes, 1963) were forced to rely on a single national survey. In their classic study, Miller and Stokes (1963, p. 46) report, “a sample of less than two thousand constituents has been divided among 116 districts.” This meant that, on average, there were roughly 17 respondents per congressional district. In contrast, the CCES has tens of thousands of respondents, resulting in much larger samples within election districts. In 2006, there were 36,500 adults in the common content module. By 2012, the common content sample surpassed 54,000 adults.
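
The arithmetic behind this comparison is simple and is reproduced below using the figures quoted in the text (and the 435 House districts in recent Congresses); it is a back-of-the-envelope illustration rather than an exact calculation of district-level sample sizes.

```python
# Back-of-the-envelope respondents-per-district comparison using figures from the text.
miller_stokes = 2000 / 116     # "less than two thousand constituents" over 116 districts
cces_2006 = 36500 / 435        # 2006 CCES common content spread across all 435 districts
cces_2012 = 54000 / 435

print(f"Miller-Stokes (1963): ~{miller_stokes:.0f} respondents per district")
print(f"CCES 2006:            ~{cces_2006:.0f} respondents per district")
print(f"CCES 2012:            ~{cces_2012:.0f} respondents per district")
```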

While there are many reasons to be enthusiastic about the CCES and other online national data collection platforms (e.g., Vavreck & Rivers, 2008; Vavreck & Iyengar, 2011; Iyengar & Vavreck, 2012), there are also some potential limitations. First, while the online panels are massive in size relative to other surveys like the National Election Studies (NES), survey respondents are not sampled randomly from the population. Instead, respondents opt into the panels and then those who participate are “matched” to national benchmarks in order to produce a more representative sample (see Rivers, 2007). One researcher trying to emulate this procedure with simulations suggests the need for caution (Bethlehem, 2015; cf. Ansolabehere & Rivers, 2013).10 Likewise, a leading survey research organization warns against opt-in polls for estimating population characteristics (Baker et al., 2010),11 and some researchers worry that those who participate in opt-in panels are “professional respondents” (Hillygus, Jackson, & Young, 2014). All of these issues may introduce bias into studies of voter learning. For example, in his study of the 2004 election, Jessee (2009, p. 64) reports “a notable difference” in the level of knowledge between the respondents in his survey (commissioned by YouGov) and respondents in the 2004 American National Election Studies (ANES) sample. He observes that “Almost all respondents in this study were able to identify the party in control of the House and Senate, whereas NES respondents were significantly less likely to know these answers” (Jessee, 2009, p. 65), and the author later acknowledges that his YouGov sample was “more informed than the voter population as a whole” (2009, p. 65). Matching may have moved the opt-in online panelists closer to the national benchmarks demographically, but imbalances may remain on other dimensions such as political knowledge or interest. This is problematic insofar as knowledge is a crucial link in the representational chain (i.e., people can punish or reward representatives only if they know something about their records in office).
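
The concern about residual imbalance can be illustrated with a stylized nearest-neighbor matching exercise. The sketch below is not YouGov’s proprietary procedure; it simply matches a benchmark target sample to opt-in panelists on two observed covariates and shows that a trait left out of the match (here, political interest) can remain unbalanced.

```python
# Stylized sketch of sample matching (toy data, not an actual vendor procedure):
# match each unit in a benchmark target sample to the closest opt-in panelist on
# observed covariates, then check a covariate that was never part of the match.
import numpy as np

rng = np.random.default_rng(1)

target = rng.normal(size=(50, 2))                      # target frame: age and education (standardized)
panel_covariates = rng.normal(size=(2000, 2))          # opt-in panelists' age and education
panel_interest = rng.uniform(0.5, 1.0, 2000)           # panelists skew high on political interest

matched_interest = []
for t in target:
    distances = np.linalg.norm(panel_covariates - t, axis=1)   # match on observed covariates only
    matched_interest.append(panel_interest[np.argmin(distances)])

print("mean political interest, opt-in panel  :", round(float(panel_interest.mean()), 2))
print("mean political interest, matched sample:", round(float(np.mean(matched_interest)), 2))
# Both are well above an assumed population mean of roughly 0.5: matching on age
# and education leaves the panel's high political interest intact because interest
# was never used in the match.
```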

Causality and Research Design

Many of the works cited here would be characterized as “observational” studies. That is, they rely upon techniques that do not manipulate the key explanatory factors in a random fashion as one does in an experiment. Instead, studies in this area tend to rely on cross-sectional data, which means they are able to document an association between policy views and vote choice. When attitudes (e.g., policy views, performance judgments) are measured at the same time as outcomes such as vote choice and presidential approval, it is difficult to determine which came first. As Lenz (2012) observes, the public may be leading politicians by rewarding or punishing their policy stands. However, citizens may first decide whether they like a politician and then adopt his or her views. The two possibilities are observationally equivalent in cross-sectional survey data.

Scholars have developed a variety of solutions for the problem of observational equivalence. Ansolabehere and Jones (2010) use instrumental variable (IV) techniques, a method of estimation that is used in the presence of measurement error, omitted variables, or simultaneity bias. In order to use IV techniques, the researcher must find a variable that predicts the outcome, but only through the potentially endogenous variable (see Alvarez & Glasgow, 1999; Sovey & Green, 2011). In the Ansolabehere and Jones (2010, p. 586) study, the authors use legislators’ roll-call votes as the instruments for respondents’ perceptions of roll-call votes.
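
A minimal two-stage least squares example, using simulated data rather than the Ansolabehere and Jones survey, shows the logic: the legislator’s actual roll-call vote serves as the instrument for the (error-prone, possibly endogenous) perception of that vote.

```python
# Minimal 2SLS sketch with simulated data: the actual roll-call vote instruments
# for the respondent's perception of that vote when predicting incumbent support.
import numpy as np

rng = np.random.default_rng(2)
n = 5000

actual_vote = rng.integers(0, 2, n).astype(float)            # instrument: legislator's roll call
confounder = rng.normal(size=n)                              # unobserved trait driving both variables
perception = 0.6 * actual_vote + 0.4 * confounder + rng.normal(scale=0.5, size=n)
support = 1.0 * perception + 1.0 * confounder + rng.normal(size=n)   # true effect of perception = 1.0

def ols(y, x):
    """Intercept and slope from a simple least-squares fit."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress perceptions on the instrument.  Stage 2: use the fitted values.
a0, a1 = ols(perception, actual_vote)
perception_hat = a0 + a1 * actual_vote
naive_slope = ols(support, perception)[1]
iv_slope = ols(support, perception_hat)[1]

print("naive OLS estimate:", round(naive_slope, 2))   # biased upward by the confounder
print("2SLS estimate:     ", round(iv_slope, 2))      # close to the true effect of 1.0
```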

Some scholars leverage the passage of time in establishing the causal relationship between voter perceptions and vote choice. For example, Hirano, Lenz, Pinkovskiy, and Snyder Jr. (2015) use observations separated by weeks to search for “learning” at a second time point relative to the first. Lenz (2012) uses yearly panel data (e.g., ANES surveys in 1990, 1991, and 1992) to examine change in support for politicians from one wave to the next (or over various waves) as a function of policy views or performance in the previous wave. But panels are not always available and there can be large time gaps between panels, which introduces the possibility of confounding influences. An additional concern with panel data relates to the tendency for people to attrite (drop out of the panel). This is especially worrisome if respondents with particular characteristics (e.g., low levels of political knowledge or interest) are less likely to participate in subsequent waves of a survey. Accordingly, many of the studies cited previously explicitly acknowledge the dangers of relying upon observational data and call for more attention to causality. For instance, Lenz (2012, pp. 232–233) writes, “When it comes to issue voting, even our field’s top journals continue to commit the sin of assuming that correlation implies causation.… A focus on causation, however, would probably speed up the generation of knowledge.”12

Curiously, although experimental designs would provide the strongest causal evidence (e.g., Barabas & Jerit, 2010; Iyengar & Kinder, 1987; Jerit & Barabas, 2013), they are rare in this area. Lenz (2012) reports on a two-wave experiment in 2007 on the State Children’s Health Insurance Program (SCHIP) that informs half of the subjects about President George W. Bush’s views and then examines presidential approval. Lenz (2012) finds large learning effects (roughly 17 percentage points), but he argues that individuals nevertheless continue to adopt the positions of their party or preferred candidate as their own. There is, however, a substantial time gap between the two waves. In fact, six months separates the information treatments and the second-wave outcome measures, which means that some respondents may have forgotten this information.13 Other survey experiments providing roll-call information to Americans around the nation demonstrate double-digit learning effects and show that citizens reward or punish congressional incumbents on the basis of this information (Barabas, Pollock, & Wachtel, 2012; also see Chen et al., 2014 for related evidence).

Overall, these experiments add to what we know about the potential for voters to hold their elected officials accountable. However, there remains room for research that explores the psychological mechanisms of punishment and reward (i.e., how legislator behavior affects attitudes and vote choice). For example, Nyhan, McGhee, Sides, Masket, and Greene (2012) argue that legislators’ votes on salient bills cause a change in voters’ perceptions of their representative’s ideology. Using statistical techniques designed to detect mediating relationships in observational data, Nyhan et al. (2012) show that Democratic legislators supporting health care reform were perceived as more extreme and ideologically distant than they otherwise would have been based on their overall voting records. An experimental test of this claim would involve exogenously manipulating both a legislator’s voting record as well as perceptions of ideological distance (the theorized mediator). More generally, there is debate among public opinion scholars over how citizens process and store information. Research shows that information may influence citizens’ decisions (such as vote choice) even if people are not able to recall the specific reasons for their choice (Lodge, Steenbergen, & Brau, 1995; see Coronel et al., 2012 for recent evidence). This body of work poses a challenge for researchers seeking to determine whether people choose among candidates on the basis of roll-call behavior (i.e., policy-specific information) or on the basis of partisanship.14
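
The mediation logic can be made explicit with a short simulation. The sketch below uses a simple product-of-coefficients decomposition on invented data; it is not Nyhan et al.’s estimator, only an illustration of how a roll-call vote could affect support through perceived ideological distance.

```python
# Hedged mediation sketch (simulated data, product-of-coefficients decomposition):
# a salient roll-call vote raises perceived ideological distance, which in turn
# lowers support for the incumbent.
import numpy as np

rng = np.random.default_rng(3)
n = 4000

voted_for_bill = rng.integers(0, 2, n).astype(float)
perceived_distance = 0.8 * voted_for_bill + rng.normal(size=n)                   # mediator
support = -0.5 * perceived_distance - 0.2 * voted_for_bill + rng.normal(size=n)  # outcome

def slopes(y, regressors):
    """OLS slopes; `regressors` is a list of 1-d arrays."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes(perceived_distance, [voted_for_bill])[0]        # vote -> perceived distance
b, direct = slopes(support, [perceived_distance, voted_for_bill])  # distance -> support, plus direct effect

print("indirect (mediated) effect:", round(a * b, 2))   # roughly 0.8 * -0.5 = -0.4
print("direct effect:             ", round(direct, 2))  # roughly -0.2
```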

Statistical Considerations

Thus far issues with measurement, data quality, and causality have been discussed. It is certainly important to measure concepts well, to use representative data, and to employ designs that give researchers leverage on internal validity, but without adequate statistical power (e.g., Cohen, 1988) valiant efforts in other domains might be for naught. Few have demonstrated this as convincingly in the arena of media and elections as John Zaller (2002) has. Using simulations in which he attempts to recover known media effects (i.e., there is a “true” effect that the author attempts to recover in statistical models), Zaller shows that typical surveys, like the ANES, are too small to detect the effects of media exposure unless the effects are very large (i.e., 10 points or above), and even then only under weak statistical standards (p < .10, one-tailed), with much depending upon the expected relationships (e.g., are effects uniform or heterogeneous?). Thus, low statistical power in traditional surveys could lead to artificial null results (i.e., insignificant findings even though there is really a media effect, akin to a Type II statistical error). This is an important consideration for researchers seeking to include media data in studies of voter learning and issue voting.15
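
A stripped-down version of this kind of power simulation is sketched below. The sample sizes, effect sizes, and the one-tailed test at p < .10 are stylized choices meant to echo the setup described above, not a reproduction of Zaller’s models.

```python
# Stylized power simulation: how often does a one-tailed test at p < .10 detect a
# true exposure effect on a binary outcome, across effect sizes and sample sizes?
import numpy as np

rng = np.random.default_rng(4)
CRITICAL_Z = 1.2816   # one-tailed critical value at p < .10

def power(n, effect, sims=1000):
    detections = 0
    for _ in range(sims):
        exposed = rng.integers(0, 2, n).astype(bool)
        y = rng.binomial(1, np.where(exposed, 0.50 + effect, 0.50))
        diff = y[exposed].mean() - y[~exposed].mean()
        se = np.sqrt(y[exposed].var() / exposed.sum() + y[~exposed].var() / (~exposed).sum())
        detections += diff / se > CRITICAL_Z
    return detections / sims

for effect in (0.05, 0.10):
    for n in (400, 1500, 10000):
        print(f"effect = {effect:.2f}, n = {n:>5}: power = {power(n, effect):.2f}")
```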

A related power issue concerns model specification. Because the content of media coverage varies across time and space, media effects often are contextual. That is, the effects of the media depend upon the type of messages that are in the information environment. To model these effects, interactions are generally needed, which means even more observations might be necessary (McClelland & Judd, 1993). In this situation, multilevel models might help, borrowing strength from areas with numerous observations or even using population data to better inform estimates (Fridkin & Kenney, 2011; Ghitza & Gelman, 2013; Keele & Wolak, 2008; Park, Gelman, & Bafumi, 2004). However, these models have their own limitations, and much depends on how many contextual units there are, how the contextual units are selected, and how the models are estimated (e.g., Buttice & Highton, 2013; Stegmueller, 2013).
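
The interaction problem can be seen in a small simulation: with an interaction coefficient of the same size as a main effect, the interaction is detected far less often at a given sample size. The coefficients and samples below are invented for illustration.

```python
# Simulated illustration of the McClelland-Judd point: interaction effects are
# detected far less reliably than main effects of the same magnitude.
import numpy as np

rng = np.random.default_rng(5)

def detection_rates(n, sims=500, critical_z=1.96):
    main_hits, interaction_hits = 0, 0
    for _ in range(sims):
        exposure = rng.integers(0, 2, n).astype(float)       # saw the coverage or not
        context = rng.integers(0, 2, n).astype(float)        # e.g., high- vs. low-coverage state
        y = 0.2 * exposure + 0.2 * exposure * context + rng.normal(size=n)
        X = np.column_stack([np.ones(n), exposure, context, exposure * context])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        residuals = y - X @ coef
        sigma2 = residuals @ residuals / (n - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        main_hits += abs(coef[1] / se[1]) > critical_z
        interaction_hits += abs(coef[3] / se[3]) > critical_z
    return main_hits / sims, interaction_hits / sims

for n in (1000, 4000):
    main, interaction = detection_rates(n)
    print(f"n = {n}: power for main effect = {main:.2f}, for interaction = {interaction:.2f}")
```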

Conclusion

Do citizens know where their representatives stand on the major issues of the day and use this information to reward (or punish) them at the voting booth? This is a fundamental question for scholars interested in the quality of representative democracy, but this review suggests that the answer is not entirely clear. There is evidence (from a variety of studies) that voters are able to hold their elected officials accountable; however, this pattern might be best described as conditional. The extent to which people engage in issue voting depends on a host of factors, both at the individual level (e.g., a person’s level of political knowledge) and the contextual level (e.g., the presence of a challenger, the amount of electoral competition). Going forward, it will be important to devote greater attention to how voters acquire information about their representatives. Part of this effort will require more explicit attention to the information to which people are exposed (e.g., in the mass media). An additional, and fruitful, line of inquiry involves an examination of the psychological mechanisms at work when people evaluate and compare candidates. Further research in both areas would help clarify the pathways and prospects for electoral accountability.

Acknowledgment

We thank Caitlin Davies for research assistance on this article.

References

Adams, J., Merrill, S., & Grofman, B. (2005). A unified theory of party competition. New York: Cambridge University Press.

Althaus, S., & Kim, Y. M. (2006). Priming effects in complex information environments: Reassessing the impact of news discourse on presidential approval. Journal of Politics, 68, 960–976.

Alvarez, M., & Glasgow, G. (1999). Two-stage estimation of nonrecursive choice models. Political Analysis, 8, 147–165.

Alvarez, M., & Gronke, P. (1996). Constituents and legislators: Learning about the Persian Gulf War resolution. Legislative Studies Quarterly, 21, 105–127.

Ansolabehere, S., & Jones, P. E. (2010). Constituents’ responses to congressional roll-call voting. American Political Science Review, 104, 583–597.

Ansolabehere, S., & Rivers, D. (2013). Cooperative survey research. Annual Review of Political Science, 16, 307–329.

Ansolabehere, S., Rodden, J., & Snyder, J. M. (2008). The strength of issues: Using multiple measures to gauge preference stability, ideological constraint, and issue voting. American Political Science Review, 102, 215–232.

Arnold, R. D. (2004). Congress, the press, and political accountability. Princeton, NJ: Princeton University Press.

Baker, R., Blumberg, S. J., Brick, M. J., Couper, M. P., Courtright, M., Dennis, J. M., & Zahs, D. (2010). AAPOR report on online panels. Available at https://www.aapor.org/AAPOR_Main/media/MainSiteFiles/AAPOROnlinePanelsTFReportFinalRevised1.pdf.

Barabas, J. (2008). Presidential policy initiatives: How the public learns about state of the union proposals from the mass media. Presidential Studies Quarterly, 38, 195–222.

Barabas, J., & Jerit, J. (2009). Estimating the causal effects of media coverage on policy-specific knowledge. American Journal of Political Science, 53, 73–89.

Barabas, J., & Jerit, J. (2010). Are survey experiments externally valid? American Political Science Review, 104, 226–242.

Barabas, J., Jerit, J., Pollock, W., & Rainey, C. (2014). The question(s) of political knowledge. American Political Science Review, 108, 840–855.

Barabas, J., Pollock, W., & Wachtel, J. (2012). Rewarding representation: The effects of roll-call information on voting for congressional incumbents. Paper presented at the annual meeting of the American Political Science Association.

Bartels, L. M. (2006). Priming and persuasion in presidential campaigns. In H. E. Brady & R. Johnston (Eds.), Capturing campaign effects (pp. 78–114). Ann Arbor: University of Michigan Press.

Bethlehem, J. (2015). Solving the nonresponse problem with sample matching? Social Science Computer Review, 33, 1–19.

Brady, H. E., Johnston, R., & Sides, J. (2006). The study of political campaigns. In H. E. Brady & R. Johnston (Eds.), Capturing campaign effects (pp. 1–28). Ann Arbor: University of Michigan Press.

Buttice, M. K., & Highton, B. (2013). How does multilevel regression and poststratification perform with conventional national surveys? Political Analysis, 21, 449–467.

Canes-Wrone, B., Brady, D. W., & Cogan, J. F. (2002). Out of step, out of office: Electoral accountability and House members’ voting. American Political Science Review, 96, 127–140.

Carson, J. L., Koger, G., Lebo, M., & Young, E. (2010). The electoral costs of party loyalty in Congress. American Journal of Political Science, 54, 598–616.

Chen, P. G., Appleby, J., Borgida, E., Callaghan, T. H., Ekstrom, P., Farhart, C. E., & Housholder, E. (2014). The Minnesota multi-investigator 2012 presidential election panel study. Analyses of Social Issues and Public Policy, 14, 78–104.

Claassen, R. L. (2011). Political awareness and electoral campaigns: Maximum effects for minimum citizens? Political Behavior, 33, 203–233.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2d ed.). New York: Routledge.

Coronel, J. C., Duff, M. C., Warren, D. E., Federmeier, K. D., Gonsalves, B. D., Tranel, D., & Cohen, N. J. (2012). Remembering and voting: Theory and evidence from amnesic patients. American Journal of Political Science, 56, 837–848.

Craig, S. C., Kane, J. G., & Gainous, J. (2005). Issue-related learning in a gubernatorial campaign: A panel study. Political Communication, 22, 483–503.

Dalton, R. J., Beck, P. A., & Huckfeldt, R. (1998). Partisan cues and the media: Information flows in the 1992 presidential campaign. American Political Science Review, 92, 111–126.

Dancey, L., & Sheagley, G. (2013). Heuristics behaving badly: Party cues and voter knowledge. American Journal of Political Science, 57, 312–325.

Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.

Downs, A. (1957). An economic theory of democracy. New York: Harper.

Elenbaas, M., Boomgaarden, H., Schuck, A., & de Vreese, C. (2013). The impact of media coverage and motivation on performance-relevant information. Political Communication, 30, 1–16.

Elenbaas, M., de Vreese, C., Schuck, A., & Boomgaarden, H. (2014). Reconciling passive and motivated learning: The saturation-conditional impact of media coverage and motivation on political information. Communication Research, 41, 481–504.

Erikson, R. S. (1978). Constituency opinion and congressional behavior: A reexamination of the Miller-Stokes representation data. American Journal of Political Science, 22, 511–535.

Franz, M., & Ridout, T. (2007). Does political advertising persuade? Political Behavior, 29, 465–491.

Fridkin, K. L., & Kenney, P. J. (2011). Variability in citizens’ reactions to different types of negative campaigns. American Journal of Political Science, 55, 307–325.

Gerber, A. S., & Malhotra, N. (2008). Do statistical reporting standards affect what is published? Publication bias in two leading political science journals. Quarterly Journal of Political Science, 3, 313–326.

Gerber, A. S., Malhotra, N., Dowling, C. M., & Doherty, D. (2010). Publication bias in two political behavior literatures. American Politics Research, 38, 591–613.

Ghitza, Y., & Gelman, A. (2013). Deep interactions with MRP: Election turnout and voting patterns among small electoral subgroups. American Journal of Political Science, 57, 762–776.

Grofman, B. (1985). The neglected role of the status quo in models of issue voting. Journal of Politics, 47, 230–237.

Hawley, G. (2013). Issue voting and immigration: Do restrictionist policies cost congressional Republicans votes? Social Science Quarterly, 94, 1187–1206.

Hillygus, D. S., Jackson, N., & Young, M. (2014). Professional respondents in nonprobability online panels. In M. Callegaro, R. Baker, J. Bethlehem, A. S. Göritz, J. A. Krosnick, & P. J. Lavrakas (Eds.), Online panel research: A data quality perspective (pp. 219–237). West Sussex, U.K.: John Wiley.

Hirano, S., Lenz, G. S., Pinkovskiy, M., & Snyder, J. M., Jr. (2015). Voter learning in state primary elections. American Journal of Political Science, 59, 91–108.

Hollibaugh, G. E., Rothenberg, L. S., & Rulison, K. K. (2013). Does it really hurt to be out of step? Political Research Quarterly, 66, 856–867.

Hopmann, D., Wonneberger, A., Shehata, A., & Höijer, J. (2016). Selective media exposure and increasing knowledge gaps in Swiss referendum campaigns. International Journal of Public Opinion Research, 28, 73–95.

Hutchings, V. L. (2003). Public opinion and democratic accountability. Princeton, NJ: Princeton University Press.

Iyengar, S., Curran, J., Lund, A. B., Salovaara-Moring, I., Hahn, K. S., & Coen, S. (2010). Cross-national versus individual-level differences in political information: A media systems perspective. Journal of Elections, Public Opinion & Parties, 20, 291–309.

Iyengar, S., Hahn, K., Bonfadelli, H., & Marr, M. (2009). “Dark areas of ignorance” revisited: Comparing international affairs knowledge in Switzerland and the United States. Communication Research, 36, 341–358.

Iyengar, S., & Kinder, D. (1987). News that matters. Chicago: University of Chicago Press.

Iyengar, S., & Vavreck, L. (2012). Online panels and the future of political communication research. In H. A. Semetko & M. Scammell (Eds.), The Sage handbook of political communication (pp. 225–240). Thousand Oaks, CA: SAGE.

Jerit, J. (2009). Understanding the knowledge gap: The role of experts and journalists. Journal of Politics, 71, 442–456.

Jerit, J., & Barabas, J. (2011). Exposure measures and content analysis in media effects studies. In R. Y. Shapiro & L. R. Jacobs (Eds.), Oxford handbook of public opinion and the mass media (pp. 139–155). New York: Oxford University Press.

Jerit, J., & Barabas, J. (2013). Partisan perceptual bias and the information environment. Journal of Politics, 74, 672–684.

Jerit, J., Barabas, J., & Bolsen, T. (2006). Citizens, knowledge, and the information environment. American Journal of Political Science, 50, 266–282.

Jessee, S. (2009). Spatial voting in the 2004 presidential election. American Political Science Review, 103, 59–82.

Jessee, S. (2010). Partisan bias, political information and spatial voting in the 2008 presidential election. Journal of Politics, 72, 327–340.

Jessee, S. (2012). Ideology and spatial voting in American elections. New York: Cambridge University Press.

Jones, P. E. (2013). The effect of political competition on democratic accountability. Political Behavior, 35, 481–515.

Kassow, B., & Finocchiaro, C. J. (2011). Responsiveness and electoral accountability in the U.S. Senate. American Politics Research, 39, 1019–1044.

Keele, L., & Wolak, J. (2008). Contextual sources of ambivalence. Political Psychology, 29, 653–673.

Kohut, A., Keeter, S., Doherty, C., Dimock, M., & Christian, L. (2012). Assessing the representativeness of public opinion surveys. Washington, DC: Pew Research Center.

Lenz, G. (2009). Learning and opinion change, not priming: Reconsidering the evidence for the priming hypothesis. American Journal of Political Science, 53, 821–837.

Lenz, G. (2012). Follow the leader: How voters respond to politicians’ policies and performance. Chicago: University of Chicago Press.

Lipinski, D. (2001). The effect of messages communicated by members of Congress: The impact of publicizing votes. Legislative Studies Quarterly, 26, 81–100.

Lodge, M., Steenbergen, M. R., & Brau, S. (1995). The responsive voter: Campaign information and the dynamics of candidate evaluation. American Political Science Review, 89, 309–326.

McClelland, G. H., & Judd, C. M. (1993). Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 114, 376–390.

Miller, W. E., & Stokes, D. E. (1963). Constituency influence in Congress. American Political Science Review, 57, 45–56.

Mitchell, D. (2009). Perceptions and realities of issue voting. In J. J. Mondak & D. Mitchell (Eds.), Fault lines: Why the Republicans lost Congress (pp. 111–127). New York: Routledge.

Nyhan, B., McGhee, E., Sides, J., Masket, S., & Greene, S. (2012). One vote out of step? The effects of salient roll call votes in the 2010 election. American Politics Research, 40, 844–879.

Park, D. K., Gelman, A., & Bafumi, J. (2004). Bayesian multilevel estimation with poststratification: State-level estimates from national polls. Political Analysis, 12, 375–385.

Patterson, T. E., & McClure, R. D. (1976). The unseeing eye: The myth of television power in national politics. New York: G. P. Putnam’s Sons.

Rabinowitz, G., & MacDonald, S. E. (1989). A directional theory of issue voting. American Political Science Review, 83, 93–121.

Rivers, D. (2007). Sampling for web surveys. Paper presented at the Joint Statistical Meetings, Section on Survey Research Methods, Salt Lake City, UT.

Romer, D., Kenski, K., Winneg, K., Adasiewicz, C., & Jamieson, K. H. (2006). Capturing campaign dynamics, 2000 and 2004: The National Annenberg Election Study. Philadelphia: University of Pennsylvania Press.

Sears, D. O., & Chaffee, S. H. (1979). Uses and effects of the 1976 debates: An overview of empirical studies. In S. Kraus (Ed.), The great debates: Carter vs. Ford, 1976 (pp. 223–261). Bloomington: Indiana University Press.

Simas, E. (2013). Proximity voting in the 2010 U.S. House elections. Electoral Studies, 32, 708–717.

Singh, S., & Roy, J. (2014). Political knowledge, the decision calculus, and proximity voting. Electoral Studies, 34, 89–99.

Snyder, J. M., & Strömberg, D. (2010). Press coverage and political accountability. Journal of Political Economy, 118, 355–408.

Soroka, S. N. (2006). Good news and bad news: Asymmetric response to economic information. Journal of Politics, 68, 372–385.

Sovey, A. J., & Green, D. P. (2011). Instrumental variables estimation in political science: A reader’s guide. American Journal of Political Science, 55, 188–200.

Stegmueller, D. (2013). How many countries for multilevel modeling? A comparison of frequentist and Bayesian approaches. American Journal of Political Science, 57, 748–761.

Strömberg, D. (2015). Media and politics. Annual Review of Economics, 7, 173–205.

Tomz, M., & Van Houweling, R. (2008). Candidate positioning and voter choice. American Political Science Review, 102, 303–318.

Vavreck, L., & Iyengar, S. (2011). The future of political communication research: Online panels and experimentation. In R. Y. Shapiro, L. R. Jacobs, & G. C. Edwards III (Eds.), Oxford handbook of American public opinion and the media (pp. 156–170). New York: Oxford University Press.

Vavreck, L., & Rivers, D. (2008). The 2006 Cooperative Congressional Election Study. Journal of Elections, Public Opinion & Parties, 18, 355–366.

Vivyan, N., & Wagner, M. (2012). Do voters reward rebellion? The electoral accountability of MPs in Britain. European Journal of Political Research, 51, 235–264.

Wachtel, J., & Barabas, J. (2012). Political knowledge, representation, and the mass media. Paper presented at the annual meeting of the International Society for Political Psychology, Chicago.

Zaller, J. (2002). The statistical power of election studies to detect media exposure effects in political campaigns. Electoral Studies, 21, 297–329.

Notes:

(1.) Due to editorial limits, we focus on political information as it relates to voting decisions (i.e., “voter learning”) as opposed to knowledge more generally among members of the public (e.g., Delli Carpini & Keeter, 1996; Iyengar, Hahn, Bonfadelli, & Marr, 2009; Elenbaas, de Vreese, Schuck, & Boomgaarden, 2014). We also concentrate upon electoral choices rather than turnout.

(2.) Our review focuses on individual-level research (see Canes-Wrone, Brady, & Cogan, 2002; Carson, Koger, Lebo, & Young, 2010; Kassow & Finocchiaro, 2011 for examples of aggregate-level research in this area).

(3.) Our discussion focuses on the United States, but Vivyan and Wagner (2012) provide evidence from the United Kingdom that is broadly consistent with Ansolabehere and Jones (2010).

(4.) Mitchell (2009) finds little evidence of issue voting among respondents in the 2006 Cooperative Congressional Election Study, but she analyzes different questions than Ansolabehere and Jones (2010).

(5.) For example, Lenz (2012) shows that when the mass media devote attention to the economy (say, in a downturn), voters’ economic perceptions are an important determinant of presidential approval.

(6.) These figures are from the 2006 Cooperative Congressional Election Study (Ansolabehere & Jones, 2010, pp. 586–587; also see Alvarez & Gronke, 1996).

(7.) The 2006 Cooperative Congressional Election Study asked about a series of high-profile votes in the U.S. Senate.

(8.) Other scholars have methodically coded the media environment (e.g., Barabas & Jerit, 2009; Elenbaas, Boomgaarden, Schuck, & de Vreese, 2013; Jerit, Barabas, & Bolsen, 2006), and these efforts could be instructive for researchers seeking to do the same in studies of electoral accountability (e.g., see Althaus & Kim, 2006; Dalton, Beck, & Huckfeldt, 1998; Franz & Ridout, 2007; or Fridkin & Kenney, 2011).

(9.) Another data set used by researchers (e.g., Claassen, 2011; Bartels, 2006; Brady, Johnston, & Sides, 2006) is the National Annenberg Election Study (NAES). The NAES features daily rolling cross-sections as well as panels. The 2008 NAES was a blend of telephone interviews (nearly 58,000) and online surveys (nearly 29,000) by Knowledge Networks, a firm that uses probabilistic sampling to recruit respondents. In earlier years, the NAES was conducted via random-digit dialing, by Schulman, Ronca, and Bucuvalas, Inc., in 2004 and by Princeton Survey Research Associates in 2000 (Romer, Kenski, Winneg, Adasiewicz, & Jamieson, 2006).

(10.) Bethlehem (2015, p. 18) observes that sample matching “will only be able to totally remove the nonresponse bias if the set of auxiliary variables is capable of explaining the participation behavior completely. If the set of auxiliary variables only explains part of the participation behavior, the bias will be reduced but it will not vanish.”

(11.) According to this report, “Researchers should avoid nonprobability online panels when one of the research objects is to accurately estimate population values. There currently is no generally accepted theoretical basis from which to claim that survey results using samples from nonprobability online panels are projectable to the general population. Thus, claims of ‘representativeness’ should be avoided when using these samples” (Baker et al., 2010, p. 5).

(12.) Similarly, Jessee (2012, p. 177) states, “Ultimately, the goal of future work studying ideology and spatial voting should be not only to build on the findings presented here, but also to disentangle the (likely complex) causal relationships underlying this work.”

(13.) Lenz (2012) describes two other experiments on warrantless wiretapping and the economy in which people did not appear to change their attitudes to be in line with prior approval for President Bush. He notes that ceiling effects may be to blame.

(14.) Mitchell (2009, p. 115) elaborates on the problem: “Voters attend to new information [e.g., about policy positions] long enough to update their summary evaluation of a candidate. Later, the summary evaluation is recalled—voters know to what extent they favor or oppose a given candidate—even though the specific substantive data that informed that summary judgement is not stored in memory.”

(15.) Such concerns might be lessened in studies using the CCES because the sample sizes are much larger. We raise them here, however, because of the concern with the “file drawer” problem in other literatures (Gerber & Malhotra, 2008; Gerber, Malhotra, Dowling, & Doherty, 2010).