Real-Time Responses to Campaign Communication
Summary and Keywords
Real-time response measurement (RTR), sometimes also called continuous response measurement (CRM), is a computerized survey tool that continuously measures short-term perceptions while political audiences are exposed to campaign messages by using electronic input devices. Combining RTR data with information about the message content allows for tracing viewers’ impressions back to single arguments or nonverbal signals of a speaker and thus for showing which kinds of arguments or nonverbal signals are most persuasive. In the context of applied political communication research, RTR is used by political consultants to develop persuasive campaign messages and prepare candidates for participating in televised debates. In addition, TV networks use RTR to identify crucial moments of televised debates and sometimes even display RTR data during their live debate broadcasts.
In academic research most RTR studies deal with the persuasive effects of televised political ads and especially televised debates, sometimes including hundreds of participants rating candidates’ performances during live debate broadcasts. In order to capture features of human information processing, RTR measurement is combined with other data sources like content analysis, traditional survey questionnaires, qualitative focus group data, or psychophysiological data. Those studies answer various questions on the effects of campaign communication including which elements of verbal and nonverbal communication explain short-term perceptions of campaign messages, which predispositions influence voters’ short-term perceptions of campaign messages, and the extent to which voters’ opinions are explained by short-term perceptions versus long-term predispositions. In several such studies, RTR measurement has proven to be reliable and valid; it appears to be one of the most promising research tools for future studies on the effects of campaign communication.
Recent theories of information processing suggest that there is no direct influence of campaign messages on voters’ opinions. Instead, the effect of campaign communication is contingent on voters’ short-term reactions to the messages they receive, which are called perceptions or impressions. More specifically, voters’ opinions after being exposed to campaign messages seem to be more or less the result of several short-term perceptions or impressions they register while being exposed to them. Short-term perceptions can be triggered by either verbal or nonverbal communication signals, e.g., political candidates’ arguments presented during a televised debate, the rhetorical strategies they use, their gestures, or their facial expressions. Put simply, candidates elicit a positive impression every time they display favorable attributes, e.g., smile or make an argument which is shared by the audience. Moreover, one can assume that the more often they generate positive impressions, the more favorable voters’ opinions about them are afterwards. However, the outcome is typically more complex: different voters react differently to campaign messages because their short-term perceptions are also influenced by individual predispositions, including their involvement with the debate, their party identification, and their prior opinions about the candidates. Voters seem to react to messages in light of their predispositions and already-existing knowledge and opinions.
This model of campaign communication raises at least three questions on the role of short-term perceptions in the context of campaign communication: The first question concerns the influence of particular elements of verbal and nonverbal communication on voters’ short-term perceptions of campaign messages. The second question concerns the influence of predispositions on voters’ short-term perceptions of campaign messages. And the third question concerns the extent to which voters’ opinions of the candidates are explained by their short-term perceptions compared to their longstanding predispositions.
Although short-term perceptions are thought to play a crucial role in mediating the effects of campaign communication, they are rarely measured. This is probably because opinions and predispositions are typically measured by traditional survey questionnaires, a method that does not capture short-term perceptions. Measurement of voters’ short-term conscious reactions to campaign messages requires a tool that registers voters’ perceptions continuously while they are exposed to the stimulus. One such tool is real-time response (RTR) or continuous response measurement (CRM). Generally, RTR is nothing more than a computerized version of a questionnaire. While in traditional surveys several questions are posed either before or after exposure to campaign communication, in RTR studies one single question (e.g., whether participants have a positive or negative impression of the candidates taking part in a televised debate) is answered continuously during a given time period by respondents using electronic input devices. The status of each respondent’s input device is continuously recorded and sent to a central computer in intervals defined by the researcher (typically second by second).
Once the data is collected, it is converted into a trend line showing the aggregated mean impression of the participants’ second-by-second responses. This trend can then be matched to the onset of particular stimuli, e.g., on a television screen by video overlay. Next, the collected data is transferred into a data set allowing for further analysis at the individual and the aggregate level. Merging the RTR data with information about the verbal and nonverbal displays of the debaters allows the researcher to trace back viewers’ perceptions to the onset of arguments or nonverbal signals of the speaker. Therefore, while traditional pre- and postdebate surveys can answer whether a debate changed the opinions about the candidates, RTR analyses shed light on why opinions changed. This information is highly valued by candidates and their consultants because it can be used as a barometer of success and a starting point for improving campaign messages. In this context, candidates may learn what precisely impresses the audience when looking at RTR data showing reactions of a test group of voters to their communication. In the context of academic research, the RTR data is valuable because it helps to uncover the effects of different argumentation styles, rhetorical strategies, and nonverbal communication signals. Using this data collection technique, social scientists obtain important insights into human information processing, which would remain undiscovered if they relied exclusively on traditional survey data.
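The aggregation step described above can be sketched in a few lines. The following is a minimal illustration, not the code of any actual RTR system; the data values and function name are hypothetical, assuming each respondent’s dial position is sampled once per second on a 0–100 scale.

```python
# Sketch of second-by-second aggregation of RTR readings into a mean
# trend line. Assumes each respondent's dial position was sampled once
# per second on a 0-100 scale; all names and values are illustrative.

def mean_trend_line(readings):
    """readings: list of per-respondent lists, one dial value per second.
    Returns the aggregate mean impression for each second."""
    n_seconds = len(readings[0])
    trend = []
    for t in range(n_seconds):
        values = [r[t] for r in readings]
        trend.append(sum(values) / len(values))
    return trend

# Three hypothetical respondents, five seconds of a debate clip
readings = [
    [50, 55, 60, 40, 30],
    [50, 65, 70, 50, 40],
    [50, 60, 80, 60, 50],
]
print(mean_trend_line(readings))  # [50.0, 60.0, 70.0, 50.0, 40.0]
```

The resulting per-second means are what is overlaid on the video and later merged with content-analytic codes.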
Originally, RTR was developed in the context of applied communication research and has been in use for that purpose for quite a long time. Starting in the 1930s, radio and TV stations employed it to test their programming, Hollywood studios to assess the potential profit margin for their movies, and advertising agencies to evaluate their commercials (e.g., Gitlin, 1994). Simultaneously, RTR became standard practice for political consultants who wanted to develop especially effective advertisements or campaign statements and to prepare and coach candidates participating in televised debates. In fact, several TV stations and networks around the world have used RTR data to identify the “winner” or crucial moments of televised debates and sometimes even display the results of RTR measurement during their live debate broadcasts (e.g., Clark, 2000; Mitchell, 2015; Schill & Kirk, 2009).
The use of RTR in social science is a more recent development. It started in the late 1980s and early 1990s when Lynda Lee Kaid and Frank Biocca introduced RTR as an academic research tool measuring the persuasive effects of political ads and televised debates (e.g., Biocca, David, & West, 1994; McKinnon, Tedesco, & Kaid, 1993). While the initial RTR studies were primarily descriptive and criticized for a lack of theoretical foundation, more recently several promising efforts have been made to situate RTR research within theoretical models of information processing and opinion formation. Moreover, several new research designs have been developed that combine RTR analysis with other data sources including content analysis, traditional surveys, focus group interviews, or psychophysiological measurement. RTR seems to be one of the most promising research tools in political communication, although it is still used only infrequently and sometimes subject to controversial methodological discussions.
The primary objective of this article is to make the RTR methodology more transparent to the political communication research community and thereby promote its use. It focuses on several methodological issues concerning RTR including a comparison of different RTR systems and research designs, as well as a discussion of the reliability, validity, and reactivity of RTR measurements. It also summarizes findings of RTR studies conducted in the context of political campaigns. This includes studies tracing back viewers’ short-term perceptions to candidates’ rhetorical strategies, studies measuring the influence of viewers’ predispositions on their perceptions, and also studies measuring the influence of viewers’ short-term perceptions on their long-term opinions. It also discusses studies dealing with the question of whether RTR data displayed during live broadcasts of televised debates influences postdebate opinions. In closing, it sums up the strengths and weaknesses of RTR measurement and points to directions for future research.
RTR measurement was invented by Paul F. Lazarsfeld and Frank N. Stanton in the 1930s. In their Princeton Radio Research Project they used their aptly named program analyzer to identify radio listeners’ immediate reactions to music and spoken word programs so that they might adapt the program to the taste of the audience. Test listeners were asked to indicate their impressions continuously using two push buttons—a green one for positive and a red one for negative impressions (Levy, 1982; Millard, 1992; Mitchell, 2015). Since the 1950s, the input devices have been steadily improved, especially to enable participants to provide more differentiated judgments. This seemed necessary, as most subjects used the buttons only in case of extremely positive or extremely negative impressions (Hallonquist & Suchman, 1944). As a result, in the 1950s the Televac system was developed, a joystick-controlled device that measures the audience’s responses on a 4-point scale ranging from “very favorable” to “very unfavorable.” Not pushing the joystick in any direction indicates a neutral impression. Nowadays, most RTR systems work with dial input devices or sliders including metric scales. The two most frequently used systems in academic research are Microcomputer Interface’s Poller, which uses a fixed scale ranging from 0 to 7, and Dialsmith’s Perception Analyzer, which uses a variable scale that can range from 0 up to 100. Both use scales with a midpoint in order to give participants the chance to intentionally indicate a neutral impression.
From a methodological standpoint, the use of the two latter dials changed the nature of RTR measurement much more than it seems at first glance. Push buttons and the Televac system recorded respondents’ impressions only when they struck a key or pushed the joystick. If no input was given, no signal was transmitted and the system automatically returned to the neutral position. This kind of measurement has been called the reset mode. In contrast, dial devices and sliders remain in their last position if no input is given by the subjects, and this most recent impression is recorded continuously. Therefore, participants are asked to change the position of their input devices whenever their impressions change. Because these devices do not move back to the neutral position on their own, this has been called the latched mode. Generally, the latched mode generates much more data because respondents can give graduated answers and, therefore, use the dials more often. On the other hand, it has been argued that dial devices measuring metric dimensions require more cognitive resources from participants, especially in studies on televised debates when more than one speaker has to be judged. In this case, participants might get several positive and negative impressions at the same time, which have to be weighed against each other to find the appropriate position of the input device (Baggaley, 1987). But in fact, findings from RTR studies using the latched versus the reset mode do not show big differences in participants’ impressions (Maier & Faas, 2009; Maier, Maurer, Reinemann, & Faas, 2007).
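The difference between the two recording modes can be made concrete with a small sketch. This is an illustrative simplification, not the logic of any specific RTR system: inputs are hypothetical dial events keyed by second, with 0 as the neutral position.

```python
# Illustrative contrast between reset-mode and latched-mode recording.
# `events` maps a second index to a dial input; values are hypothetical,
# with 0 as the neutral position.

def reset_mode(events, n_seconds):
    """Record an input only in the second it occurs; otherwise neutral."""
    return [events.get(t, 0) for t in range(n_seconds)]

def latched_mode(events, n_seconds):
    """Hold the most recent dial position until the respondent moves it."""
    series, current = [], 0
    for t in range(n_seconds):
        current = events.get(t, current)
        series.append(current)
    return series

events = {1: 2, 4: -1}          # positive input at t=1, negative at t=4
print(reset_mode(events, 6))    # [0, 2, 0, 0, -1, 0]
print(latched_mode(events, 6))  # [0, 2, 2, 2, -1, -1]
```

The same two inputs thus yield very different time series, which is why the latched mode produces denser data from identical respondent behavior.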
Early RTR studies were typically conducted on small focus groups with about 20 participants. Nowadays, many studies on televised debates include hundreds of subjects watching the debate on a big screen in university classrooms or auditoriums under live conditions. In these studies, RTR measurement is usually accompanied by pre- and postdebate surveys measuring subjects’ predispositions and postdebate opinions. Because participants watch the debate’s live broadcast, its influence can be measured in the context of the real campaign and compared to the influence of postdebate media coverage. For example, in a study of the second 1976 U.S. presidential debate, Steeper (1978) showed that President Ford’s awkward statement on the Soviet dominance of Eastern Europe, which is often viewed as a turning point in the 1976 campaign, was not immediately noticed by debate viewers. Instead, Ford only lost in the polls after the mass media heavily discussed his erroneous statement.
More recently, RTR systems have also been administered through networked media (Boydstun, Glazier, Pietryka, & Resnik, 2014; Iyengar, Jackman, & Hahn, 2016; Jasperson, Gollins, & Walls, 2016). Subjects follow a debate at home using mobile apps including online sliders in order to indicate their perceptions. Those studies are not restricted in the selection of participants and therefore allow for representative samples. While these mobile tools seem to work well from a technical standpoint, the fact that subjects cannot be monitored when using the dials at home might be a problem for the validity of the data (for a deeper discussion, see Questions of Measurement: Reliability, Validity, and Reactivity).
The simplest and still most common way of analyzing RTR data is called peak analysis. While watching a televised debate, viewers will be affected by certain arguments or episodes more than by others. Additionally, some candidates’ gestures and facial expressions will have a stronger influence on viewers’ impressions than others. These crucial or defining moments of a debate are called “peaks.” In a peak analysis, the peaks first have to be identified by analyzing the mean perception trend line of all subjects. For example, means differing by one standard deviation or more from the overall mean during the debate can be regarded as peaks (for a deeper discussion, see Biocca et al., 1994). In a second step, these peaks can be explained by having a closer look at what happened during these crucial moments. This can be accomplished by using the video overlay generated by the RTR system. The simplest analyses just cite candidates’ arguments during a peak. This is usually done in applied research, e.g., when RTR data is analyzed by TV stations to guide postdebate discussions about who has won a debate and why. More elaborate studies try to find patterns which appear during the peaks, e.g., analyzing whether peaks in televised debates are caused more by image or issue content (e.g., McKinnon et al., 1993; Schill & Kirk, 2014) or caused by vagueness versus specificity in statements (e.g., Reinemann & Maurer, 2005). In this case, all peaks measured during a debate are compared to find similarities in the rhetorical strategies used. If, e.g., most of them include issue content or vague statements, it can be assumed that these characteristics are especially responsible for viewers’ impressions. In sum, peak analysis can be a first step when working with RTR data but is, of course, subject to several limitations.
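The peak-identification rule mentioned above—seconds whose mean impression lies at least one standard deviation from the overall mean—can be sketched as follows. The data and function name are hypothetical, and a real analysis would typically group adjacent peak seconds into episodes.

```python
# Sketch of peak identification: flag seconds whose aggregate mean
# impression differs from the overall mean by at least one standard
# deviation. Trend values are hypothetical.
import statistics

def find_peaks(trend, threshold=1.0):
    """trend: aggregate mean impression per second.
    Returns the indices of seconds counting as peaks."""
    overall = statistics.mean(trend)
    sd = statistics.pstdev(trend)
    return [t for t, v in enumerate(trend)
            if abs(v - overall) >= threshold * sd]

trend = [50, 51, 49, 50, 75, 50, 49, 25, 50, 51]
print(find_peaks(trend))  # [4, 7]
```

Seconds 4 and 7 would then be matched, via the video overlay, to whatever the candidates said or did at those moments.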
The most obvious limitation is that it inverts the actual research process: instead of predicting effects from content analyzing a debate, it tries to explain effects by looking at the content ex post facto. In order to figure out the causes of peaks, some degree of speculation is needed because there is no statistical procedure available. For example, researchers may conclude that talking about issues has caused a peak, although the given argument or nonverbal behavior may have also included several other message characteristics that could be responsible for viewers’ reactions. Moreover, only looking at the peaks may be misleading because the same strategies and behaviors may have been used in other parts of the debate. In this case, the conclusion that the strategies appearing during the peaks have caused the peaks would be incorrect. Therefore, at a minimum, a comparison between peaks and other parts of the debate is necessary so as to determine whether strategies and behaviors during peaks differ from those during less impactful moments.
Given the clearly visible limitations of peak analyses, some recent studies have combined RTR data with content analysis of campaign messages. In these studies, several content characteristics in each second or segment of a televised ad or televised debate are measured using quantitative content analysis. Later, RTR scores elicited by candidates’ statements in which, say, a positive tone is evident are compared to RTR scores during statements in which a negative tone is evident. When significant differences in mean RTR scores occur, the difference is attributed to the tone of the statements (e.g., Fahr, 2008; Kaid, 2009). While these studies remain on a bivariate level, this design has been extended to the multivariate case using a complex statistical procedure in a recent study by Nagel, Maurer, and Reinemann (2012). The authors combined a second-by-second RTR analysis with a second-by-second content analysis of about 50 verbal attributes (e.g., the use of rhetorical strategies like attacks, emotional appeals, and strategic ambiguity), visual displays (e.g., certain gestures and facial expressions), and vocal qualities (e.g., speech rate and pitch) appearing during a televised debate. Hence, the dataset consists of multiple time-series: one variable for each of the 50 message elements coded (independent variables) and one variable resulting from RTR measurement (the dependent variable). Therefore, the effect of each communication element on viewers’ perceptions while watching the debate can be calculated by a combination of time-series and regression analysis. Possible analyses include the effect of every verbal, visual, and vocal communication element and the relative effect of the three communication channels (verbal, visual, and vocal) on viewers’ perceptions. Moreover, interaction effects between the use of the different communication elements can be calculated.
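The bivariate design described above—comparing mean RTR scores between seconds carrying one content code versus another—can be sketched with a toy example. All data are hypothetical; a real analysis would use significance tests and, in the multivariate case, time-series regression with lags and autocorrelation corrections.

```python
# Sketch of the bivariate design: compare mean RTR scores during seconds
# coded as positive tone with those coded as negative tone. The aggregate
# RTR values and content codes are hypothetical.
import statistics

rtr = [50, 55, 60, 45, 40, 58, 62, 44]  # aggregate impression per second
tone = ["pos", "pos", "pos", "neg", "neg", "pos", "pos", "neg"]

def mean_rtr_by_code(rtr, codes, value):
    """Mean RTR score across all seconds carrying a given content code."""
    return statistics.mean(v for v, c in zip(rtr, codes) if c == value)

diff = mean_rtr_by_code(rtr, tone, "pos") - mean_rtr_by_code(rtr, tone, "neg")
print(round(diff, 2))  # 14.0
```

A positive difference would, on this logic, be attributed to the tone of the statements—subject to the caveat that other co-occurring message characteristics must be controlled in a multivariate model.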
Another possible research design is the combination of RTR measurement with either qualitative focus group interviews or quantitative survey data. In Lazarsfeld’s early studies on radio research, RTR measurement was often accompanied by qualitative focus group interviews, in which study participants discussed their ratings in depth. In these studies, peaks or other critical incidents are shown to the participants after an RTR study has been finished. Participants are then asked to explain what they found especially impressive and why they rated candidates’ arguments and nonverbal behavior as they did during the study (Hughes & Bucy, 2016).
As explained, RTR measurement and traditional survey data may be combined for two reasons. On the one hand, the influence of viewers’ predispositions on their short-term perceptions can be measured. In this case, a short survey is conducted before an RTR study, measuring viewer characteristics like party identification, issue involvement, or social variables potentially influencing viewers’ individual impressions while being exposed to campaign messages. By comparing the RTR data of different groups of participants, the influence of predispositions on individual perceptions can be uncovered (e.g., Jarman, 2005; Kaid, 2009).
On the other hand, the influence of viewers’ short-term perceptions on their long-term opinions can also be measured. In this case, a survey is conducted after subjects have been exposed to campaign communication, measuring viewers’ opinions concerning their overall evaluation of a televised ad, their opinion about the candidates taking part in a televised debate, or their opinion about who has won the debate. By integrating the RTR overall mean in the survey dataset, the correlation between impressions and opinions can be calculated. When an RTR analysis is combined with a predebate and postdebate survey, even the influence of short-term perceptions on opinion changes can be measured. Moreover, several other viewer characteristics can be controlled in multivariate models, for example, comparing the effects of predispositions and short-term perceptions on opinions about the candidates (e.g., Maier, Faas, & Maier, 2014; Maier & Strömbäck, 2009; Reinemann & Maurer, 2005). The combination of RTR analysis and traditional survey data offers opportunities that are clearly beyond the scope of simply monitoring candidates’ success in televised debates. Indeed, mixed methods research involving RTR can be used to test various theories of information processing, including concepts like selective perception and selective processing of information, which would otherwise remain a black box.
Insights into human information processing can be deepened further by combining RTR analysis with psychophysiological data measuring emotional reactions to political messages. In the most comprehensive study of that kind, Wang, Morey, and Srivastava (2014) conducted a time-series analysis including RTR, heart rate, skin conductance level, and facial electromyography in order to uncover the interplay between message elements in televised ads, viewers’ predispositions, and their emotional reactions. Interestingly, RTR in this case was used to measure ad content, e.g., whether the ad in any given second was emotionally arousing, negative, or positive. A more straightforward application of that design might be to measure the activational dimension of emotions using physiological data, while measuring the valence of an emotion by using RTR. This would clearly improve the measurement of emotional reactions to campaign messages because the emotional arousal measured by physiological data is not always easy to interpret on its own. Another promising research design is the combination of RTR data and data on viewers’ facial expressions. Using Face Reader, an electronic research tool based on the Facial Action Coding System, Ottler, Mousa-Kazemi, and Resch (2016) found correlations between viewers’ emotions and the frequency of using RTR dials.
Finally, RTR measurement has been integrated in classical experimental research designs, especially in studies comparing the effects of verbal and nonverbal communication. In many of these studies, one experimental group is exposed to an audiovisual version of a political message, while the other is exposed to an audio-only version. The effect of nonverbal communication is measured by comparing the differences in the opinions of those who watched a televised debate with full audio-video to those who only listened to it. The stronger the difference, the stronger the effect of the missing visuals. This is usually measured by using traditional survey questionnaires after participants have been exposed to the entire stimulus. In addition to filling out a questionnaire, in some studies subjects are instructed to use RTR devices during their exposure to the stimulus to continuously indicate their impressions of the speaker. By comparing the two groups’ RTR graphs during the debate, the differences in the short-term impressions of both groups can be measured second by second. Therefore, it can be examined which nonverbal signals cause differences between viewers and listeners. Moreover, it can be examined whether the relevance of nonverbal communication for political impression formation generally changes during the course of a debate or remains about the same (e.g., Maier & Maier, 2009; Maurer, 2016).
Questions of Measurement: Reliability, Validity, and Reactivity
As RTR is a relatively new research tool, there is ongoing discussion about its value for political communication research. While more and more researchers are aware of the benefits described above, others remain skeptical about whether to trust data generated by RTR measurement. This skepticism is best illustrated by Republican political consultant Mike Murphy’s comment on CNN’s use of RTR measurement in the 2008 U.S. presidential debates. Murphy stated that “. . . turning this voodoo into a television spectacular completely distorts whatever limited research value a group might provide” (cited in Schill & Kirk, 2009, p. 163). Generally speaking, the skepticism is based on two concerns: the first is whether subjects seriously use RTR dials to rate their impressions of the candidates or rather play around with the dials. The second concern is whether data from a small, nonrepresentative group of subjects collected more or less in a laboratory-like setting under close scrutiny can be generalized to a large audience watching the debate at home. Therefore, to systematically assess the benefits of RTR measurement, three issues need to be addressed: the question of reliability, the question of validity, and the question of reactivity of RTR measurement.
Reliability of measurement concerns whether repeated measures of the same construct produce the same results. Generally, the reliability of measurements can be tested in three ways: test-retest designs, split-half designs, and parallel-test designs. However, measuring test-retest reliability may be problematic when RTR is concerned. Since the goal of RTR is to measure spontaneous impressions, in many cases participants will react differently to a second presentation of the same stimulus. Nevertheless, a study by Fenwick and Rice (1991) found quite high levels of test-retest reliability for RTR measurements. In split-half designs, participants of one RTR study are randomly assigned to two different groups in order to compare their perceptions. Early as well as recent studies using this approach (e.g., Hallonquist & Suchman, 1944; Papastefanou, 2013) have obtained high reliability, with coefficients typically in the 0.80–0.90 range. Finally, reliability can be measured by parallel-test designs. In the case of RTR, this has been performed in a study by Maier et al. (2007). By comparing the results of two RTR studies conducted independently on the same German televised debate and using different sets of measurement devices (push buttons versus dial devices) and different instructions, the authors still found high correlations between the impressions that the two groups had of the candidates. This was especially the case during the crucial moments of the debate, when the RTR trend lines in both studies moved exactly parallel. Overall, based on the given evidence, the reliability of RTR measurement appears to be high.
Concerning the validity of measurements, external and internal validity must be distinguished. External validity concerns the question whether the results of studies using experimental designs can be generalized to natural settings. In most RTR studies, subjects are exposed to the stimulus in a kind of laboratory setting, with a greater or lesser degree of formality, but in any case different from their usual TV exposure at home. Subjects also are instructed to follow the stimulus and thus may be more attentive. Furthermore, they may be exposed to a stimulus they would not have been exposed to in real life because they cannot leave the room or choose to watch another program. Therefore, just like every experimental study, studies using RTR measurement might tend to overestimate the effects of campaign messages. This effect is probably reduced when using mobile RTR systems that allow subjects to watch the stimulus at home.
An additional problem concerning external validity might occur because participants are required to use dial input devices which might distract their attention from the visual part of the stimulus. As a consequence, the perceptions of subjects taking part in RTR studies might differ from those of ordinary television viewers because they could be based to a greater extent on verbal message components. Whether this is the case has been examined in an experimental study by Reinemann and Maurer (2009). They exposed two groups of subjects to the same short excerpt of a political talk show. While one group used RTR dials during the reception, the other group watched the excerpt without using dials. Immediately after watching the stimulus each group was posed several questions assessing the recall of verbal and nonverbal information conveyed during the clip. All in all, no differences in recall occurred between both groups, suggesting that using RTR dials does not distract viewers from visual message elements. Nevertheless, before starting an RTR study subjects should be trained on how to use the control units until they are so familiar with the dials that they are able to use them without looking down.
Internal validity concerns whether a study really measures what it is supposed to measure. In the case of RTR, this might be especially problematic because due to technical reasons and the limited capacity of human information processing RTR can only measure a single dimension (Baggaley, 1987). For example, continuously rating the overall impression of two candidates during a 90-minute televised debate is already a very demanding task. Adding a second dimension, e.g., the level of one’s emotional arousal, would completely overwhelm the participants. The question is whether a complex process like viewers’ information processing can be depicted by measuring only one single variable. Crucial decisions that have to be made include the selection and definition of the construct that is to be measured and, furthermore, the verbalization of the instructions given to participants. Definitions and instructions need to be precise so that participants know what is expected from them. At the same time, instructions must leave some degree of freedom in order to measure recipients’ individual reactions to the stimulus. Thus, the question of whether RTR measurement yields valid results can only be answered on a study-by-study basis.
In their comparison of two RTR studies, Maier et al. (2007) analyzed different aspects of internal validity. Construct validity is indicated when the results of the measurement correlate with other variables in ways that one would expect. In this case, the authors found that supporters of a candidate had much more positive impressions of him compared to nonsupporters. Criterion or prognostic validity is given when the results of the measurement correlate with a manifest external criterion in a way that one would expect. Here, the authors found that respondents’ individual impressions during the debate strongly predicted their postdebate opinions. This held true for both studies despite the fact that they used different instructions. These results have been more or less replicated by a recent study performing the same analyses but using three other debates as examples (Papastefanou, 2013).
Reactivity occurs when subjects alter their behavior because they are aware of being observed. Traditional surveys are subject to reactivity because respondents’ answers may be influenced by the wording of a question, the appearance of an interviewer, the sponsor of the survey, or other factors. As RTR is a computerized survey, it is most likely subject to reactivity as well. Therefore, as in every survey, the instructions should be worded carefully. In the case of RTR, an additional source of reactivity is the use of dials, a departure from the normal viewing experience. Participants in RTR studies might be overchallenged, feel burdened, or even be distracted from the stimulus by the task of operating the dials. Thus, the technical component of RTR studies may add reactivity problems of its own. In an experimental study comparing subjects watching the same stimulus with and without RTR dials, Fahr and Fahr (2009) found that RTR seems to be a rather unobtrusive tool: subjects using it rated the test situation almost exactly the same way as those who did not, and those who used RTR also stated that they did not find it difficult or distracting. To further test the reactivity of RTR measurement, the authors compared the skin conductance levels of subjects who watched the stimulus with and without RTR dials. Interestingly, they found no general influence of using the dials on emotional arousal. Rather, only at the beginning of the study did subjects using the dials show more arousal than those who did not; later, once subjects had habituated to the task, the pattern reversed.
In a similar study, the influence of using RTR dials on emotional arousal was measured using functional magnetic resonance imaging (Hutcherson et al., 2005). In this research, the brain activities of those instructed to rate their emotions while watching amusing and sad films using RTR dials were compared to the brain activities of those who watched the same films without rating their emotions. All in all, no differences between the two groups were found, again suggesting that the technical demands of RTR measurement do not seem to add much to the reactivity also present in traditional survey studies.
Existing research suggests that RTR studies are reliable and valid when they are carefully designed, subjects are carefully instructed, and the data is carefully analyzed. Moreover, reactivity does not seem to be any greater than in traditional survey studies. As is always the case in empirical research, internal and external validity have to be reconciled: RTR studies conducted in a laboratory setting secure internal validity because media stimuli can be controlled, whereas online RTR studies secure external validity because they can be conducted with a large number of subjects exposed to the stimulus at home. However, these conclusions are based on a limited number of studies. In particular, the validity of mobile and networked RTR systems has not yet been sufficiently examined, and more research on these issues is needed.
Effects of Message Content on Voters’ Short-Term Perceptions
The majority of RTR studies deal with the question of how different verbal and nonverbal message elements influence voter reactions to campaign communication. This holds true for applied research by political consultants testing candidate statements as well as for academic research on the persuasive power of different message strategies. With one exception, which will be discussed later, these studies try to explain voters’ overall impressions of televised ads or their overall impressions of the candidates taking part in televised debates. Questions concerning four kinds of message elements have frequently been under examination: (1) whether image or issue content is more persuasive, (2) whether positive or negative messages are more effective, (3) whether vague or concrete statements are more persuasive, and (4) whether verbal or nonverbal communication is more influential.
(1) In several RTR studies it is hypothesized that issue content, e.g., statements about a candidate’s issue stands, rather than image content, e.g., statements about a candidate’s personality, is evaluated more positively in televised ads and leads to a more positive impression of the speaking candidate in televised debates. The argument is usually based on normative theories of democracy, which hold that citizens should take candidates’ issue stands into account when making political decisions. In fact, however, the evidence is rather mixed. On the one hand, the assumption that voters prefer issue content has been supported for a 1990 Oklahoma gubernatorial debate (Bystrom, Roper, Gobetz, Massey, & Beall, 1991), the 1996 U.S. presidential debates (McKinnon & Tedesco, 1999), and the 2008 and 2012 U.S. presidential debates (Schill & Kirk, 2014). On the other hand, image content was more effective in moving viewers in the 1992 U.S. presidential debates (McKinnon et al., 1993). And in the case of the televised ads in the 1996, 2000, and 2004 U.S. presidential elections, Kaid (2009) found no differences in the evaluation of image and issue content at all.
(2) Whether positive or negative campaign messages are more effective is one of the most frequently discussed issues in political communication research. On the one hand, attacking the political opponent attracts more attention than discussing one’s own issue stands and, therefore, probably also has a stronger impact on voters’ opinion formation (e.g., Skowronski & Carlston, 1989). On the other hand, attacking the opponent may cause backlash effects, as most voters say they do not like political candidates who attack (Allen & Burrell, 2002). The verdict from RTR studies is quite straightforward: attacking the opponent in televised debates is ineffective because it polarizes the audience. While attacks cause positive impressions among those who already support the speaking candidate, they turn off the rest of the audience. This is especially true for supporters of the other candidate, who may feel attacked themselves and therefore cling to their initial opinion more than ever. But it also holds true for the undecided, who generally dislike mudslinging.
In a study dealing with that question, Reinemann and Maurer (2005) compared the characteristics of the most effective and the most polarizing statements in a televised debate during the 2002 German national election. They found that in 14 of the 17 most positively rated statements, candidates were speaking positively about their own political plans. In contrast, 9 of the 13 most polarizing statements were attacks. Attacks also prompted negative audience reactions in the 1992 (Delli Carpini, Keeter, & Webb, 1997) and 2008 (Schill & Kirk, 2014) U.S. presidential debates; the 2000 Republican primary debate (McKinney, Kaid, & Robertson, 2001); the 2005 (Maurer, Reinemann, Maier, & Maier, 2007) and 2013 (Bachl, 2016) German national election debates; and the 2006 Swedish national election debate (Maier & Strömbäck, 2009). In the case of the 2012 U.S. presidential debates, the evidence is mixed: while Schill and Kirk (2014) found that attacks in particular were rated unfavorably by viewers, a study by Hughes and Bucy (2016) suggests that acclaims received the lowest level of liking. Viewers also rate campaign advertisements more favorably during positively than during negatively valenced message content (Iyengar et al., 2016; Kaid, 2009). Overall, attacking one’s opponent might be a useful strategy only when the audience largely consists of voters already supporting the speaker. In front of a mixed audience—the typical case for televised debates and televised ads—it is clearly more effective to speak positively about one’s own issue stands.
(3) The theory of political ambiguity (Page, 1976) suggests that speaking about one’s own plans is risky when these plans are explained in detail, because concrete issue stands (e.g., “I want to raise/lower taxes”) will be rejected by large parts of the audience. The theory further assumes that only those segments of the audience that already agree with a speaker’s issue stands will rate the speaker positively. Consequently, detailed issue statements or policy positions will lead to polarized reactions, as voters can easily compare a speaker’s issue stand to their own. Instead of explaining an issue stand in detail, it may be more effective to describe one’s plans vaguely (e.g., “I want fair taxes”) because voters may then simply project their own views onto candidates’ positions, leading to broad support. The effectiveness of strategic ambiguity has been examined in a study by Reinemann and Maurer (2005). They found that nearly all of the most successful statements during the televised debate in the 2002 German national election were characterized by the use of strategic ambiguity. For example, candidates generated positive reactions from almost all viewers when advocating equal opportunities for children from disadvantaged families or stating that political leaders need to maintain the balance between the interests of owners and workers—statements almost nobody could disagree with. These findings have been replicated in a study of the 2005 German national election debate (Maurer et al., 2007). When watching a debate, viewers are not necessarily aware that such statements are rather meaningless truisms. Instead, they are often impressed by the candidates, who appear to share their own views.
(4) Finally, RTR has been used to compare the effects of verbal and nonverbal communication. Most of these studies integrate RTR measurement into experimental designs by comparing the short-term perceptions of viewers (audio-visual group) and listeners (audio-only group) of a televised debate or political advertisements. Early studies were primarily descriptive, showing that viewers’ and listeners’ perceptions sometimes differed and that nonverbal communication sometimes matters (Faas & Maier, 2004; Maier & Maier, 2009; Roessing, Jackob, & Petersen, 2009). Using a German local election debate with relatively unknown candidates, Maurer (2016) added a third group of subjects who were exposed to the visuals without hearing the sound (video-only group). By analyzing the differences between the impressions of the audio-visual group and the two groups exposed to an incomplete stimulus, the effects of missing visual and missing verbal signals can be compared. The analyses show that visual communication is especially important during the first 30 seconds of a debate—probably because at the beginning of a debate viewers are not yet fully aware of the verbal context. In the later stages, verbal communication becomes dominant. Combining a second-by-second content analysis of about 50 verbal, visual, and vocal message elements with a second-by-second RTR analysis of the 2005 German national election debate, Nagel et al. (2012) found that verbal message elements had by far the strongest influence on viewers’ perceptions of the candidates. That held especially true for the issues discussed and the argumentative structures used. Visual and vocal message elements like gestures, facial expressions, pitch, and speech rate showed additional independent effects on voters’ perceptions of the candidates, but these were much weaker than those of verbal communication.
A notable exception to studies using RTR to measure recipients’ impressions of political candidates is a study by Fahr (2008). She combined a content analysis of several characteristics of the arguments used by political talk-show guests with two separately conducted RTR studies, one measuring the perception of being informed and one measuring the perception of being entertained. According to her findings, recipients feel especially informed when talk-show guests use strong arguments accompanied by dynamic gestures. In contrast, humor and controversy are the best predictors of feeling entertained.
Effects of Voters’ Predispositions on Short-Term Perceptions
Various theories of information processing assume that messages are perceived selectively in light of prior knowledge, opinions, and other predispositions. This might especially hold true for campaign communication because parties and candidates are usually well known and most voters have strong opinions about them. The influence of voters’ predispositions on short-term perceptions can easily be calculated when RTR measurement is combined with a traditional predebate questionnaire. Most studies considering predispositions focus on the role of party identification as a predictor of different short-term perceptions. For example, Jarman (2005) compared the short-term reactions of Republicans and Democrats during a 2004 U.S. presidential debate and found strong and significant differences in more than 90% of the debate time. In each case Republicans rated the Republican candidate much more positively than Democrats did, and vice versa. Much the same results have been found for a 1992 U.S. presidential debate (Delli Carpini et al., 1997), the 2012 U.S. presidential debates (Boydstun et al., 2014; Jasperson et al., 2016), the 2006 Swedish national election debate (Maier & Strömbäck, 2009), several German national election debates (Bachl, 2016; Maier et al., 2014; Maurer et al., 2007; Reinemann & Maurer, 2005), and the televised ads of the 2006 U.S. Senate campaign (Iyengar et al., 2016).
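The per-second group comparison described above can be sketched in a few lines of code. The following is a minimal illustration with synthetic data — group sizes, rating scale, and effect sizes are invented, and a fixed |t| > 2.0 cutoff stands in for a proper p < .05 significance test — not the procedure of any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic second-by-second RTR ratings (7-point scale) for a 90-minute
# debate: 30 Republicans and 30 Democrats. Group means are shifted to
# mimic partisan divergence; all numbers are invented for illustration.
seconds, n_per_group = 5400, 30
reps = rng.normal(5.0, 1.0, size=(n_per_group, seconds))
dems = rng.normal(3.0, 1.0, size=(n_per_group, seconds))

def welch_t(a, b):
    """Welch's t statistic for each second (columns are seconds)."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / a.shape[0] + vb / b.shape[0])

t = welch_t(reps, dems)
# |t| > 2.0 approximates a two-sided p < .05 test at ~58 degrees of freedom.
share_significant = float(np.mean(np.abs(t) > 2.0))
print(f"Share of debate seconds with a significant partisan gap: {share_significant:.0%}")
```

Jarman-style figures such as "significant differences in more than 90% of the debate time" are simply this share computed over the real rating streams.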
Some of these studies also test the influence of another predisposition: viewers’ assumptions about which candidate is going to win the debate. In general, these studies show that during a debate viewers have a more positive impression of the candidate they expect to win. However, this effect is much smaller than the effect of party identification. Another relevant predisposition is interest in the debate. Psychological models of information processing such as the Elaboration Likelihood Model (Petty & Cacioppo, 1986) posit that highly involved recipients are more likely to focus on verbal message elements, such as the strength of the presented arguments, when processing and using information. In contrast, recipients with low involvement are more likely to rely on peripheral cues like the visual or image elements of a message. Using that model as a theoretical basis, Maurer and Reinemann (2015) analyzed the influence of recipients’ involvement on the relative contributions of verbal and nonverbal communication to viewers’ short-term impressions of the candidates in the 2005 German national election debate. As predicted by the model, the effect of nonverbal communication was clearly stronger for subjects with low involvement in the debate than for those with high involvement. However, in both groups viewers’ short-term impressions were much more strongly influenced by verbal than by nonverbal communication.
In a study of a 2012 U.S. presidential debate, Jasperson et al. (2016) found that engagement with the debate, a variable similar to debate interest, enhanced its polarizing effects: Democrats and Republicans differed in their short-term reactions to candidate messages especially when they were engaged with the debate. Finally, Boydstun et al. (2014) also found an effect of viewers’ interest in the issues discussed during a debate: those who considered the economy an important political issue rated candidate messages concerning the economy more positively than those not interested in the economy.
Effects of Voters’ Short-Term Perceptions and Predispositions on Opinions
One of the most crucial questions in RTR research is whether short-term perceptions measured by RTR influence long-term opinions measured by postdebate questionnaires. If the two measures were largely uncorrelated, one would have to conclude that voters’ short-term perceptions have no long-term consequences and are essentially irrelevant. Surprisingly, only a few studies have so far addressed this question, and most of them have been carried out in the context of German televised debates (Maier et al., 2014; Maier & Strömbäck, 2009; Maurer et al., 2007; Reinemann & Maurer, 2005). Instead of calculating simple correlations between RTR and postdebate self-reports, these studies also include viewers’ predispositions measured by predebate questionnaires in order to compare the effects of impressions and predispositions. Multivariate models are used to explain viewers’ opinions about who won the debate by their party identification, their predebate assumptions about who is going to win, and their short-term impressions of the candidates during the debate as measured by RTR. Each of these studies shows strong correlations between RTR-based impressions and postdebate opinions, usually ranging from .50 to .60—even when viewers’ above-mentioned predispositions are controlled. In almost all cases, the effect of short-term perceptions exceeded the effects of predispositions.
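The multivariate logic of these studies can be illustrated with a small simulation. All data below are synthetic, and the weights in the data-generating step are invented so that short-term impressions dominate, as the cited studies report; this is a sketch of the analytic strategy, not a reanalysis of any study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical debate viewers; all data are simulated

# Predispositions (predebate questionnaire) and RTR-based impressions.
party_id    = rng.choice([-1.0, 0.0, 1.0], size=n)   # opponent / none / supporter
expectation = rng.normal(0.3 * party_id, 1.0)        # predebate "who will win?"
rtr_mean    = rng.normal(0.4 * party_id, 1.0)        # mean dial position

# Postdebate verdict constructed so that short-term impressions dominate,
# mirroring the .50-.60 correlations reported in the text (invented weights).
verdict = 0.2 * party_id + 0.1 * expectation + 0.6 * rtr_mean + rng.normal(0.0, 0.8, n)

def standardized_ols(y, *xs):
    """OLS on z-scored variables; returns one standardized beta per predictor."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(len(y))] + [z(x) for x in xs])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta[1:]  # drop the intercept

betas = standardized_ols(verdict, party_id, expectation, rtr_mean)
for name, b in zip(["party id", "expectation", "RTR impression"], betas):
    print(f"{name:>15}: beta = {b:+.2f}")
```

Because the predictors are correlated (partisans rate "their" candidate higher on the dials as well), the standardized betas show the unique contribution of each variable, which is how the studies separate short-term impressions from predispositions.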
Using a slightly different research design, Maurer and Reinemann (2006) tried to explain why some viewers of a televised debate in the 2002 German national election were misled by one candidate frequently citing incorrect facts about the state of the economy while others were not. By asking several questions on the state of the economy before and after the debate, they identified those who changed their assessments of the state of the economy in the direction of the misleading statements. By taking RTR data into account, they found that those who had a positive impression of the candidate during his misleading statements were especially likely to change their assessments of the economy in the direction of the incorrect statements. Obviously, only some of the viewers noted that the statements were wrong and indicated this by turning the dials into negative territory and later sticking to their initial assessments of economic conditions.
Effects of Displayed RTR Data on Voters’ Opinions
One of the more controversial aspects of using RTR measurement in applied research is the fact that some TV stations and networks have displayed RTR data during their live broadcasts of televised debates. A small number of usually undecided voters participate in these studies, and the aggregated RTR trend line they generate is shown live to the debate audience at home. From a television broadcaster’s standpoint, offering the audience peer impressions in real time is a useful practice because it makes watching the debate more interesting and vivid. From a normative standpoint, however, it has frequently been criticized for distracting viewers from debate content and strongly influencing their opinion formation about the candidates. A significant influence of information about others’ opinions on one’s own opinion formation is assumed by several theories in psychology and communication science. Spiral of silence theory suggests that individuals observe others’ opinions and avoid taking the minority position for fear of social isolation. The so-called bandwagon effect holds that humans often express opinions siding with the majority position because they want to be on the winning side. Finally, individuals who are uncertain when making decisions may look for anchors to help them decide; other people’s opinions may serve as such anchors, as most people seem to believe that the majority’s actions and thoughts cannot be wrong (for a review of these explanations, see Weaver, Huck, & Brosius, 2009).
Several studies have investigated the question of whether displayed RTR data in fact influences viewers’ opinion formation. These studies usually show two groups of participants a televised debate including a manipulated RTR graph. In one group the graph is clearly biased toward one candidate, e.g., frequently showing peaks when the candidate is speaking. In the other group it is biased toward the other candidate. Nevertheless, both groups believe they are watching the unbiased perceptions of a neutral focus group. These studies show impressive effects of the RTR displays on viewers’ opinions about candidates’ performance. In one study using a 10-minute clip from a 1984 U.S. presidential debate between Ronald Reagan and Walter Mondale, those who watched the RTR graph biased toward Reagan rated Reagan’s performance clearly better than Mondale’s. In contrast, those who watched the graph biased in favor of Mondale rated Mondale’s performance clearly better than Reagan’s (Fein, Goethals, & Kugler, 2007). Similar results have been found in a German study using the 2006 Austrian national election debate as an example (Wolf, 2010). Moreover, this study suggests that RTR displays also show long-term effects because viewers stuck to their opinions about the candidates but forgot that they were largely based on the RTR graphs.
The only study testing the effects of RTR displays under live conditions was conducted during the 2010 British national election debate. While two groups of participants watched the debate live, the displayed RTR graph was biased once toward the incumbent prime minister, Gordon Brown, and once toward the Liberal Democrat candidate, Nick Clegg. In the Brown-biased group, 47% reported that Brown won the debate, while in the Clegg-biased group 79% reported that Clegg won. This substantial effect remained stable even when undecided voters were excluded from the analysis (Davis, Bowers, & Memon, 2011). Evidently, many debate viewers tend to trust the RTR displays more than their own ears and eyes. The only study not showing straightforward effects of RTR displays is an early experiment on spiral of silence theory (Gonzenbach, 1992). In this study, participants were exposed to a news report on the Iran-Contra affair discussing Vice President George H. W. Bush’s role and an interview the journalist Dan Rather conducted with Bush about his involvement in the affair. While one group watched the original interview, two other groups watched the interview with a displayed RTR graph biased toward Rather or toward Bush. While opinions about Rather were influenced by the RTR graph, no effect occurred for Bush.
The role of RTR measurement as a research tool used in empirical studies on debate effects and its role as a visualization tool used by television networks during debate broadcasts have to be disentangled. While there is little doubt about the value of RTR measurement as a measurement tool for academic research analyzing the effects of televised debates, there is an ongoing discussion about its potential misuse by television networks for entertainment purposes (Mitchell, 2015; Schill & Kirk, 2009).
RTR measurement is a unique and versatile research tool for analyzing voters’ immediate reactions during the reception of campaign messages. Combining RTR measurement with traditional surveys or psychophysiological data allows unique insights into the black box of human information processing and, therefore, strengthens the testing of different theories of campaign communication. Combining RTR measurement with content analysis to simultaneously compare the persuasive effects of varying verbal and nonverbal message elements might complement or even replace classical experiments that deal with the effect of each element in isolation. Moreover, RTR measurement seems to be reliable and valid, at least when it is carried out in a systematic way. Nevertheless, RTR measurement has been used only infrequently in the social sciences thus far. This could be due to several reasons: first, RTR measurement is quite complex and expensive. The hardware and software needed for RTR studies are costly and there are costs associated with assembling large numbers of participants, especially when studies of televised debates occur under live conditions.
Second, gathering and analyzing the data is time-consuming and labor-intensive. This holds especially true when RTR data is combined with other methods of data collection like questionnaires or content analyses. In these cases, multiple data sets must be merged and complex statistical procedures deployed, e.g., taking into account time delays in viewers’ reactions to different message elements or the autoregression of the RTR time series. Third, RTR studies under live conditions are somewhat risky because the study cannot be repeated in the event that technical problems occur or some participants do not show up.
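The time-series complications mentioned here — reaction delays and autoregression — can be sketched as follows. The series, the 2-second delay, and all coefficients are invented for illustration; real analyses would have to estimate the delay and typically use more elaborate dynamic models.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 600  # seconds of debate footage; everything here is simulated

# Second-by-second content code from a hypothetical content analysis:
# 1 while the candidate attacks the opponent, 0 otherwise.
attack = (rng.random(T) < 0.2).astype(float)

# Aggregate RTR series: autoregressive, with attacks depressing ratings
# after a 2-second reaction delay (all coefficients are invented).
delay = 2
rtr = np.zeros(T)
for t in range(delay, T):
    rtr[t] = 0.7 * rtr[t - 1] - 0.5 * attack[t - delay] + rng.normal(0.0, 0.3)

# Regress the series on its own lag (absorbing the autocorrelation) and on
# the content indicator shifted by the assumed reaction delay.
y = rtr[delay:]
X = np.column_stack([
    np.ones_like(y),
    rtr[delay - 1:-1],   # AR(1) term
    attack[:T - delay],  # attack indicator, lagged by the reaction delay
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"AR(1) coefficient: {beta[1]:+.2f}, attack effect: {beta[2]:+.2f}")
```

Omitting the lagged RTR term would inflate the apparent effect of the content code, which is why studies combining RTR with content analysis must model the autoregression explicitly.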
Given the relative novelty of the method, several relevant questions concerning RTR measurement are as yet unanswered. First, almost all RTR studies have thus far been conducted either in the United States or in Germany. Thus, not much is known about how people in other countries perceive candidates’ arguments in televised debates or react to different message strategies in televised ads or other political communications. More cross-national studies are needed to uncover cultural differences in the perception and processing of campaign messages. Second, many RTR studies remain on a descriptive level, e.g., conducting peak analyses that simply list the most and least successful statements in televised debates. In the future, more studies might use RTR measurement to test theories of information processing by taking into account viewers’ predispositions and long-term opinions. In this context, the dynamic character of messages might be further taken into account, e.g., by testing whether the perception of certain arguments early in a message influences the perception of arguments used later.
Third, the use of RTR measurement in political communication research is still heavily focused on analyzing the effects of message strategies in televised debates and campaign ads. Future studies might also take other forms of campaign communication into account, including public speeches or candidate appearances in television news. This might help to clarify whether there is an influence of message format on viewers’ perceptions of political candidates.
Fourth, in most studies RTR is used to measure the overall impression of a political candidate or a televised ad. In future studies, RTR might also be used to measure other variables like the perception of certain personality traits or emotional reactions. Because simultaneously rating more than one dimension by RTR is too demanding, RTR data on different dimensions might be collected in different rounds.
Fifth, in some recent studies alternative ways of measuring voter reactions to campaign communication in real time have been developed. This includes studies that measure second-by-second voter reactions to televised debates on social media like Twitter (Shah, Hanna, Bucy, Wells, & Quevedo, 2015) and studies measuring laughter and applause of the studio audience sometimes present during televised debates (Stewart, 2015). Thus far, nothing is really known about the validity of these new measures and how they are correlated with RTR measurement. Studies comparing these measures with RTR are needed.
Sixth, from a methodological standpoint there is still more to be learned about the reliability and validity of networked RTR measures. At present, we know little about whether online participants provide genuine responses or instead arbitrarily move the dials; studies that compare online to in-person RTR results are needed. Finally, future studies might also combine RTR data with other continuous measures like eyetracking analysis. In such studies, for example, differences in viewers’ immediate reactions to candidates’ nonverbal behavior could be traced back to their individual eye movements as an indicator of whether a certain candidate behavior has been noticed or not.
Allen, M., & Burrell, N. (2002). The negativity effect in political advertising: A meta-analysis. In J. P. Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 83–96). Thousand Oaks, CA: SAGE.
Bachl, M. (2016). How attacks and defenses resonate with viewers’ political attitudes in televised debates: An empirical test of the resonance model of campaign effects. In D. Schill, R. Kirk, & A. Jasperson (Eds.), Political communication in real time: Theoretical and applied research approaches. New York: Routledge.
Baggaley, J. (1987). Continual response measurement: Design and validation. Canadian Journal of Educational Communication, 16(3), 217–238.
Biocca, F., David, P., & West, M. (1994). Continuous response measurement (CRM): A computerized tool for research on the cognitive processing of communication messages. In A. Lang (Ed.), Measuring psychological responses to media messages (pp. 15–64). London: Routledge.
Boydstun, A. E., Glazier, R. A., Pietryka, M. T., & Resnik, P. (2014). Real-time reactions to a 2012 presidential debate: A method for understanding which messages matter. Public Opinion Quarterly, 78(S1), 330–343.
Bystrom, D. G., Roper, C., Gobetz, R., Massey, T., & Beall, C. (1991). The effects of a televised gubernatorial debate. Political Communication Review, 16, 57–80.
Clark, H. (2000). The worm that turned: New Zealand’s 1996 general election and the televised “worm” debates. In S. Coleman (Ed.), Televised election debates: International perspectives (pp. 122–129). New York: Palgrave Macmillan.
Davis, C. J., Bowers, J. S., & Memon, A. (2011). Social influence in televised election debates: A potential distortion of democracy. PLoS One, 6(3), 1–7.
Delli Carpini, M. X., Keeter, S., & Webb, S. (1997). The impact of presidential debates. In P. Norris (Ed.), Politics and the press: The news media and their influences (pp. 145–164). Boulder, CO: Rienner.
Faas, T., & Maier, J. (2004). Schröders Stimme, Stoibers Lächeln: Wahrnehmungen von Gerhard Schröder und Edmund Stoiber bei Sehern und Hörern der Fernsehdebatten im Vorfeld der Bundestagswahl 2002. In T. Knieper & M. G. Müller (Eds.), Visuelle Wahlkampfkommunikation (pp. 186–209). Cologne: Halem.
Fahr, A. (2008). Politische Talkshows aus Zuschauersicht: Informiertheit und Unterhaltung im Kontext der Politikvermittlung. Munich: Verlag Reinhard Fischer.
Fahr, A., & Fahr, A. (2009). Reactivity of real-time response measurement: The influence of employing RTR techniques on processing media content. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 45–61). Frankfurt: Peter Lang.
Fein, S., Goethals, G. R., & Kugler, M. B. (2007). Social influence on political judgments: The case of presidential debates. Political Psychology, 28(2), 165–192.
Fenwick, I., & Rice, M. D. (1991). Reliability of continuous measurement copy-testing methods. Journal of Advertising Research, 31(1), 23–29.
Gitlin, T. (1994). Inside prime time. New York: Routledge.
Gonzenbach, W. J. (1992). The conformity hypothesis: Empirical considerations for the spiral of silence’s first link. Journalism Quarterly, 69(3), 633–645.
Hallonquist, T., & Suchman, E. A. (1944). Listening to the listener: Experiences with the Lazarsfeld-Stanton program analyzer. In P. F. Lazarsfeld & F. Stanton (Eds.), Radio research 1942–1943 (pp. 265–334). New York: Essential Books.
Hughes, S. R., & Bucy, E. P. (2016). Moments of partisan divergence in presidential debates: Indicators of verbal and nonverbal influence. In D. Schill, R. Kirk, & A. Jasperson (Eds.), Political communication in real time: Theoretical and applied research approaches. New York: Routledge.
Hutcherson, C. A., Goldin, P. R., Ochsner, K. N., Gabrieli, J. D., Feldman Barrett, L., & Gross, J. J. (2005). Attention and emotion: Does rating emotion alter neural responses to amusing and sad films? NeuroImage, 27(3), 656–668.
Iyengar, S., Jackman, S., & Hahn, K. (2016). Polarization in less than thirty seconds: Continuous monitoring of voter response to campaign advertising. In D. Schill, R. Kirk, & A. Jasperson (Eds.), Political communication in real time: Theoretical and applied research approaches. New York: Routledge.
Jarman, J. W. (2005). Political affiliation and presidential debates: A real-time analysis of the effect of the arguments used in the presidential debates. American Behavioral Scientist, 49(2), 229–242.
Jasperson, A. E., Gollins, J., & Walls, D. (2016). Polarization in the 2012 presidential debates: A moment-to-moment, dynamic analysis of audience reactions in Ohio and Florida. In D. Schill, R. Kirk, & A. Jasperson (Eds.), Political communication in real time: Theoretical and applied research approaches. New York: Routledge.
Kaid, L. L. (2009). Immediate responses to political television spots in U.S. elections: Registering responses to advertising content. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 137–153). Frankfurt: Peter Lang.
Levy, M. R. (1982). The Lazarsfeld-Stanton program analyzer: A historical note. Journal of Communication, 32(4), 30–38.
Maier, J., & Faas, T. (2009). Measuring spontaneous reactions to media messages the traditional way: Uncovering political information processing with push button devices. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 15–26). Frankfurt: Peter Lang.
Maier, J., Faas, T., & Maier, M. (2014). Aufgeholt, aber nicht aufgeschlossen: Wahrnehmungen und Wirkungen von TV-Duellen am Beispiel von Angela Merkel und Peer Steinbrück 2013. Zeitschrift für Parlamentsfragen, 45(1), 38–54.
Maier, J., Maurer, M., Reinemann, C., & Faas, T. (2007). Reliability and validity of real-time response measurement: A comparison of two studies of a televised debate in Germany. International Journal of Public Opinion Research, 19(1), 53–73.
Maier, M., & Maier, J. (2009). Measuring the perception and the impact of verbal and visual content of televised political ads: Results from a study with young German voters in the run-up to the 2004 European parliamentary election. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 63–84). Frankfurt: Peter Lang.Find this resource:
Maier, M., & Strömbäck, J. (2009). Advantages and limitations of comparing audience responses to televised debates: A comparative study of Germany and Sweden. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 97–116). Frankfurt: Peter Lang.Find this resource:
Maurer, M. (2016). Visual dominance lasts about 30 seconds. What we can learn from using continuous response measurement in research about the effects of verbal and nonverbal political communication. American Behavioral Scientist.Find this resource:
Maurer, M., & Reinemann, C. (2006). Learning versus knowing. Effects of misinformation in televised debates. Communication Research, 33(6), 489–506.Find this resource:
Maurer, M., & Reinemann, C. (2015). Do uninvolved voters rely on visual message elements? A test of a central assumption of the ELM in the context of televised debates. Politische Psychologie/Journal of Political Psychology, 4(2), 71–87.Find this resource:
Maurer, M., Reinemann, C., Maier, J., & Maier, M. (2007). Schröder gegen Merkel. Wahrnehmung und Wirkung des TV-Duells 2005 im Ost-West-Vergleich. Wiesbaden, Germany: VS Verlag für Sozialwissenschaften.Find this resource:
McKinney, M. S., Kaid, L. L., & Robertson, T. A. (2001). The front-runner, contenders, and also-rans. Effects of watching a 2000 Republican primary debate. American Behavioral Scientist, 44(12), 2232–2251.Find this resource:
McKinnon, L. M., & Tedesco, J. C. (1999). The influence of medium and media commentary on presidential debate effects. In L. L. Kaid (Ed.), The electronic election: Perspectives on the 1996 campaign (pp. 191–206). Mahwah, NJ: Erlbaum.Find this resource:
McKinnon, L. M., Tedesco, J. C., & Kaid, L. L. (1993). The third 1992 presidential debate: Channel and commentary effects. Argumentation and Advocacy, 30(2), 106–118.Find this resource:
Millard, W. J. (1992). A history of handsets for direct measurement of audience response. International Journal of Public Opinion Research, 4(1), 1–17.Find this resource:
Mitchell, G. R. (2015). Public opinion, thinly sliced and served hot. International Journal of Communication, 9, 21–45.Find this resource:
Nagel, F., Maurer, M., & Reinemann, C. (2012). Is there a visual dominance in political communication? How verbal, visual, and vocal communication shape viewers’ impressions of political candidates. Journal of Communication, 62(5), 833–850.Find this resource:
Ottler, S., Mousa-Kazemi, R., & Resch, R. (2016). Measuring the effects of candidates on voters in Germany. A methodological comparison between real-time response measurement and facial coding. In D. Schill, R. Kirk, & A. Jasperson (Eds.), Political communication in real time: Theoretical and applied research approaches. New York: Routledge.
Page, B. I. (1976). The theory of political ambiguity. American Political Science Review, 70(3), 742–752.
Papastefanou, G. (2013). Reliability and validity of RTR measurement device. GESIS-Working Papers (27).
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion. Central and peripheral routes to attitude change. New York: Praeger.
Reinemann, C., & Maurer, M. (2005). Unifying or polarizing? Short-term effects and post-debate consequences of different rhetorical strategies in televised debates. Journal of Communication, 55(4), 775–794.
Reinemann, C., & Maurer, M. (2009). Is RTR biased towards verbal message components? An experimental test of the external validity of RTR measurements. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 27–44). Frankfurt: Peter Lang.
Roessing, T., Jackob, N., & Petersen, T. (2009). The explanatory power of RTR graphs: Measuring the effects of verbal and nonverbal presentation in persuasive communication. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 85–95). Frankfurt: Peter Lang.
Schill, D., & Kirk, R. (2009). Applied dial testing: Using real-time response to improve media coverage of debates. In J. Maier, M. Maier, M. Maurer, C. Reinemann, & V. Meyer (Eds.), Real-time response measurement in the social sciences (pp. 155–173). Frankfurt: Peter Lang.
Schill, D., & Kirk, R. (2014). Courting the swing voter: “Real time” insights into the 2008 and 2012 U.S. presidential debates. American Behavioral Scientist, 58(4), 536–555.
Shah, D. V., Hanna, A., Bucy, E. P., Wells, C., & Quevedo, V. (2015). The power of television images in a social media age: Linking biobehavioral and computational approaches via the second screen. Annals of the American Academy of Political and Social Science, 659(1), 225–245.
Skowronski, J. J., & Carlston, D. E. (1989). Negativity and extremity biases in impression formation. A review of explanations. Psychological Bulletin, 105(1), 131–142.
Steeper, F. T. (1978). Public responses to Gerald Ford’s statement on Eastern Europe in the second debate. In G. F. Bishop, R. G. Meadow, & M. Jackson-Beeck (Eds.), The presidential debates. Media, electoral, and policy perspectives (pp. 81–101). New York: Praeger.
Stewart, P. A. (2015). Do the presidential primary debates matter? Measuring candidate speaking time and audience reactions during the 2012 primaries. Presidential Studies Quarterly, 45(2), 361–381.
Wang, Z., Morey, A. C., & Srivastava, J. (2014). Motivated selective attention during political ad processing: The dynamic interplay between emotional ad content and candidate evaluation. Communication Research, 41(1), 119–156.
Weaver, J. B., Huck, I., & Brosius, H.-B. (2009). Biasing public opinion: Computerized continuous response measurement displays impact viewers’ perceptions of media messages. Computers in Human Behavior, 25(1), 50–55.
Wolf, B. (2010). Beurteilung politischer Kandidaten in TV-Duellen. Effekte rezeptionsbegleitender Fremdmeinungen auf Zuschauerurteile. Baden-Baden, Germany: Nomos.