Modern Populism: Research Advances, Conceptual and Methodological Pitfalls, and the Minimal Definition
Takis S. Pappas
Populism is one of the most dynamic fields of comparative political research. Although its study began in earnest only in the late 1960s, it has since developed through four distinct waves of scholarship, each addressing different empirical phenomena and each with its own methodological and theoretical priorities. Today, the field is in need of a comprehensive general theory that can capture the phenomenon specifically within the context of our contemporary democracies. This, however, requires breaking away from recurring conceptual and methodological errors and, above all, a consensus about the minimal definition of populism.
All in all, the study of populism has been plagued by 10 drawbacks: (1) unspecified empirical universe, (2) lack of historical and cultural context specificity, (3) essentialism, (4) conceptual stretching, (5) unclear negative pole, (6) degreeism, (7) defective observable-measurable indicators, (8) a neglect of micromechanisms, (9) poor data and inattention to crucial cases, and (10) normative indeterminacy. Most, if not all, of the foregoing methodological errors are cured if we define, and study, modern populism simply as “democratic illiberalism,” which also opens the door to understanding the malfunctioning and pathologies of our modern-day liberal representative democracies.
More Than Mixed Results: What We Have Learned From Quantitative Research on the Diversionary Hypothesis
Benjamin O. Fordham
In the three decades since Jack Levy published his seminal review essay on the topic, there has been a great deal of quantitative research on the proposition that state leaders can use international conflict to enhance their political prospects at home. The findings of this work are frequently described as “mixed” or “inconsistent.” This characterization is superficially correct, but it is also misleading in some important respects. With respect to two of Levy’s most important concerns about previous research, there has been substantial progress in our understanding of this phenomenon.
First, as Levy suggests in his essay, researchers have elaborated a range of different mechanisms linking domestic political trouble with international conflict rather than a single diversionary argument. Processes creating diversionary incentives bear a family resemblance to one another but can have different behavioral implications. Four of them are (1) in-group/out-group dynamics, (2) agenda setting, (3) leader efforts to demonstrate competence in foreign policy, and (4) efforts to blame foreign leaders or perhaps domestic minorities for problems. In addition, researchers have identified some countervailing mechanisms that may inhibit state leaders’ ability to pursue diversionary strategies, the most important of which is the possibility that potential targets may strategically avoid conflict with leaders likely to behave aggressively.
Second, research has identified scope conditions that limit the applicability of diversionary arguments, another of Levy’s concerns about the research he reviewed. Above all, diversionary uses of military force (though not other diversionary strategies) may be possible for only a narrow range of states. Though very powerful states may pursue such a strategy against a wide range of targets, the leaders of less powerful states may have this option only during fairly serious episodes of interstate hostility, such as rivalries and territorial disputes. A substantial amount of research has focused exclusively on the United States, a country that clearly has the capacity to pursue this strategy. While the findings of this work cannot be generalized to many other states, they have revealed some important nuances in the processes that create diversionary incentives. The extent to which these incentives hinge on highly specific political and institutional characteristics points to the difficulty of applying realistic diversionary arguments to a large sample of states. Research on smaller, more homogeneous samples or individual states is more promising, even though it will not produce an answer to the broad question of how prevalent diversionary behavior is. As with many broad questions about political phenomena, the only correct answer may be “it depends.” Diversionary foreign policy happens, but not in the same way in every instance and not in every state in the international system.
Josep M. Colomer
Logical models and statistical techniques have been used for measuring political and institutional variables, quantifying and explaining the relationships between them, testing theories, and evaluating institutional and policy alternatives. A number of cumulative and complementary findings refer to major institutional features of a political process of decision-making: from the size of the assembly to the territorial structure of the country, the electoral system, the number of parties in the assembly and in the government, the government’s duration, and the degree of policy instability. Mathematical equations based on sound theory are validated by empirical tests and can predict precise observations.
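One well-known instance of such a predictive institutional equation is Taagepera’s “cube-root law,” by which the size of a national assembly tends to approximate the cube root of the country’s population. Whether this particular equation figures in the article is an assumption; the sketch below simply illustrates what “a mathematical equation that predicts precise observations” looks like in this literature (the population figure is illustrative):

```python
# Taagepera's cube-root law: assembly size ~ population ** (1/3).
# An illustrative example of a theoretically grounded, empirically
# validated institutional equation of the kind described above.

def predicted_assembly_size(population: int) -> int:
    """Predict the number of seats in a national assembly."""
    return round(population ** (1 / 3))

# A country of roughly 67 million inhabitants (illustrative figure).
print(predicted_assembly_size(67_000_000))  # → 406
```

The prediction is close to the observed size of several real-world lower chambers, which is the sense in which such equations are “validated by empirical tests.”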
Sabine C. Carey and Neil J. Mitchell
Pro-government militias are a prominent feature of civil wars. Governments in Colombia, Syria, and Sudan recruit irregular forces in their armed struggle against insurgents. The United States collaborated with Awakening groups to counter the insurgency in Iraq, just as colonizers used local armed groups to fight rebellions in their colonies. An emerging cross-disciplinary literature on pro-government non-state armed groups generates a variety of research questions for scholars interested in conflict, political violence, and political stability: Does the presence of such groups indicate a new type of conflict? What are the dynamics that drive governments to align with informal armed groups and that make armed groups choose to side with the government? Given the risks entailed in surrendering a monopoly of violence, is there a turning point in a conflict when governments enlist these groups? How successful are these groups? Why do governments use these non-state armed actors to shape foreign conflicts, whether as insurgents or counterinsurgents abroad? Are these non-state armed actors always useful to governments, or are they perhaps even an indicator of state failure?
We examine the demand for and supply of pro-government armed groups and the legacies that shape their role in civil wars. The enduring pattern of collaboration between governments and these armed non-state actors challenges conventional theory and the idea of an evolutionary process of the modern state consolidating the means of violence. Research on these groups and their consequences began with case studies, and these continue to yield valuable insights. More recently, survey work and cross-national quantitative research contribute to our knowledge. This mix of methods is opening new lines of inquiry for research on insurgencies and the delivery of the core public good of effective security.
Qualitative Comparative Analysis (QCA) is a method developed by the American social scientist Charles C. Ragin since the 1980s. It has since enjoyed great and ever-increasing success in research applications across various political science subdisciplines and in teaching programs, and it counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses in such a way that they can be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases can only be full members or full non-members, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is, for example, possible to distinguish highly developed democracies from less developed democracies that are, nevertheless, more democracies than not. This means that fuzzy sets account for differences in degree without giving up differences in kind. In the end, QCA produces configurational statements, acknowledging that conditions usually appear in conjunction and that there can be more than one conjunction that implies an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm has been developed and implemented in various software packages; it takes into account the complexity of the social world surrounding us, acknowledging that not every theoretically possible combination of explanatory factors also exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. There is also a range of graphical tools that help to illustrate the results of a QCA. Set theory goes well beyond its application in QCA, but QCA is certainly its most prominent variant.
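The parameters of fit mentioned here have simple set-theoretic formulas: for a sufficiency claim “X → Y,” consistency is Σ min(x, y) / Σ x and coverage is Σ min(x, y) / Σ y over the cases’ fuzzy membership scores. A minimal Python sketch, using hypothetical membership scores rather than data from any real QCA study:

```python
# Fuzzy-set parameters of fit for a sufficiency claim "X -> Y".
# Membership scores below are illustrative, not from a real study.

def consistency(x, y):
    """Degree to which X is a subset of Y (sufficiency): sum(min)/sum(x)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of the outcome Y accounted for by X: sum(min)/sum(y)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Hypothetical fuzzy membership scores in [0, 1] for five cases;
# 0.5 marks the crossover between "more in" and "more out" of the set.
condition = [0.9, 0.8, 0.7, 0.4, 0.2]
outcome   = [1.0, 0.9, 0.6, 0.6, 0.1]

print(round(consistency(condition, outcome), 3))  # → 0.933
print(round(coverage(condition, outcome), 3))     # → 0.875
```

A consistency near 1 means the condition’s membership scores are (almost) always matched or exceeded by the outcome’s, which is the fuzzy-set rendering of sufficiency; crisp sets are the special case where every score is exactly 0 or 1.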
There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA), in which no a priori decision is taken on which factor is the condition and which the outcome, as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development activities should target research design or technical issues. From this, a methodological agenda can be derived that asks about the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, but also about increased efforts toward a shared understanding of the mission of QCA.
Katelyn E. Stauffer and Diana Z. O'Brien
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Politics.
Definitions of feminist research are wide-ranging and incorporate an array of approaches and perspectives. While there is great diversity within feminist scholarship, the work of Sharlene Nagy Hesse-Biber, in particular, examines the social and institutional norms and practices that shape women’s and men’s lived experiences. Feminist researchers thus challenge disciplinary norms and practices that ignore the roles that gender and sex play in developing and testing broader theoretical frameworks. Feminist political science, in particular, seeks to incorporate sex and gender into classic political science paradigms and to use a feminist approach to offer new insights about politics.
At its heart, feminist political science is rooted in the desire to understand how men and women experience politics differently, often in ways that systematically disadvantage women. This concern with systematic disadvantages lends itself to quantitative research, which relies on statistical methods to create abstract, simplified representations of political systems and institutions in order to allow for clearer inferences. Indeed, looking at all articles published in Politics & Gender, the journal of the Women and Politics Research Section of the American Political Science Association, we see that feminist political science research has increasingly drawn on quantitative methods. While there is some fear that statistical abstraction is inadequate for understanding women’s lived political experiences, when guided by feminist research principles, quantitative methods have proven useful for the study of gender and politics.
Our analysis dispels the myth that feminist political science research is hostile to quantitative methods; to the contrary, it has embraced these tools. Building on this analysis, we then ask whether quantitative political science has similarly embraced feminist research. We look at articles published in Political Analysis, the journal of the Society for Political Methodology. We find that these articles rarely address questions of gender and politics. Though gender and politics scholars have accepted statistical methods, applied statisticians within political science have not adopted a feminist approach to studying politics. Gender and politics researchers, moreover, are using statistical tools but not spearheading the development of these techniques.
After providing this overview of the state of the discipline, we offer insights for feminist scholars aiming to conduct quantitative research, as well as for quantitative researchers who would like to conduct feminist research. We argue that quantitative methods provide support for feminist conceptions of politics—beliefs that often require quantitative data in order to be tested. Similarly, applying feminist research principles can inform quantitative work. At a minimum, a feminist approach requires quantitative methods to account for gender and sex in both experimental and observational data. Incorporating these characteristics reveals how the personal is political; failure to do so leads to an incomplete understanding of political behavior and institutions. We believe that the two frameworks can (and should) be used in tandem, resulting in theoretically and methodologically richer and more rigorous work.
Jon C.W. Pevehouse
Scholarship in international relations has taken a more quantitative turn in the past four decades. The field of foreign policy analysis was arguably the forerunner in the development and application of quantitative methodologies in international relations. From public opinion surveys to events data to experimental methods, many of the earliest uses of quantitative methodologies can be found in foreign policy analysis. On substantive questions ranging from the causes of war to the dynamics of public opinion, the analysis of data quantitatively has informed numerous debates in foreign policy analysis and international relations. Emerging quantitative methods will be useful in future efforts to analyze foreign policy.
Nazli Choucri and Gaurav Agarwal
The term lateral pressure refers to any tendency (or propensity) of states, firms, and other entities to expand their activities and exert influence and control beyond their established boundaries, whether for economic, political, military, scientific, religious, or other purposes. Framed by Robert C. North and Nazli Choucri, the theory addresses the sources and consequences of such a tendency. This chapter presents the theory’s core features—assumptions, logic, core variables, and dynamics—and summarizes the quantitative work undertaken to date. Some aspects of the theory are more readily quantifiable than others. Some are consistent with conventional theory in international relations. Others are based on insights and evidence from other areas of knowledge, thus departing from tradition in potentially significant ways.
Initially applied to the causes of war, the theory focuses on the question of who does what, when, how, and with what consequences. The causal logic in lateral pressure theory runs from the internal drivers (i.e., the master variables that shape the profiles of states) through the intervening variables (i.e., aggregated and articulated demands given prevailing capabilities) to outcomes that often generate added complexities. To the extent that states expand their activities outside territorial boundaries, driven by a wide range of capabilities and motivations, they are likely to encounter other states similarly engaged. The intersection among spheres of influence is the first step in complex dynamics that lead to hostilities, escalation, and eventually conflict and violence.
The quantitative analysis of lateral pressure theory consists of six distinct phases. The first phase began with a large-scale, cross-national, multiple-equation econometric investigation of the 45 years leading to World War I, followed by a system of simultaneous equations representing conflict dynamics among competing powers in the post–World War II era. The second phase is a detailed econometric analysis of Japan over the span of more than a century and two World Wars. The third phase involves system dynamics modeling of the growth and expansion of states from the 1970s to the end of the 20th century and explores the use of fuzzy logic in this process. The fourth phase focuses on the state-based sources of anthropogenic greenhouse gases in order to endogenize the natural environment in the study of international relations. The fifth phase presents a detailed ontology of the driving variables shaping lateral pressure and their critical constituents in order to (a) frame their interconnections, (b) capture knowledge on sustainable development, (c) create knowledge management methods for the search, retrieval, and use of knowledge on sustainable development, and (d) examine the use of visualization techniques for knowledge display and analysis. The sixth, and most recent, phase examines the new realities created by the construction of cyberspace and its interactions with the traditional international order.
Kevin Arceneaux and Martin Johnson
Students of public opinion tend to focus on how exposure to political media, such as news coverage and political advertisements, influences the political choices that people make. However, the expansion of news and entertainment choices on television and via the Internet makes the decisions that people make about what to consume from various media outlets a political choice in its own right. While the present-day hyperchoice media landscape opens new avenues of research, it also complicates how we should approach, conduct, and interpret this research. More choices mean a greater ability to select media content based on one’s political preferences, exacerbating the selection bias and endogeneity inherent in observational studies. Traditional randomized experiments offer compelling ways to obviate these challenges to making valid causal inferences, but at the cost of minimizing the role that agency plays in how people make media choices.
Recent research modifies the traditional experimental design for studying media effects in ways that incorporate agency over media content. These modifications require researchers to consider different trade-offs when choosing among design features, creating both advantages and disadvantages. Nonetheless, this emerging line of research offers a fresh perspective on how people’s media choices shape their reactions to media content and their political decisions.
Rational choice theory may seem like a separate theoretical approach with its own forbidding mathematics. However, the central assumptions of rational choice theory are very similar to those in mainstream political behavior and even interpretive sociology. Indeed, many of the statistical methods used in empirical political behavior assume axiomatic models of voter choice. When we consider individual voting behavior, the contribution of rational choice has been to formalize what empirical political scientists do anyway, and provide some new tools. However, it is when we consider collective voting choice—what elections mean and what kind of policy outcomes result—that rational choice leads to new, counterintuitive insights. Rational choice also has a normative dimension. Without voter rationality the traditional understanding of democracy as popular choice makes little sense.