Qualitative Comparative Analysis (QCA) is a method, developed by the American social scientist Charles C. Ragin since the 1980s, that has since enjoyed great and ever-increasing success in research applications across various political science subdisciplines and teaching programs. It counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses in a way that allows them to be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases can only be full members or full non-members, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is possible, for example, to distinguish highly developed democracies from less developed democracies that are nevertheless more democracies than not. Fuzzy sets thus account for differences in degree without giving up differences in kind. In the end, QCA produces configurational statements that acknowledge that conditions usually appear in conjunction and that there can be more than one conjunction implying an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm has been developed and implemented in various software packages; it takes into account the complexity of the social world surrounding us, acknowledging that not every theoretically possible combination of explanatory factors also exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. A range of graphical tools also helps to illustrate the results of a QCA. Set theory goes well beyond its application in QCA, but QCA is certainly its most prominent variant.
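The consistency and coverage parameters mentioned above have standard set-theoretic definitions for fuzzy sets: consistency of a condition X as sufficient for an outcome Y is the sum of the case-wise minima of the two membership scores divided by the sum of the X scores, and coverage divides the same numerator by the sum of the Y scores. A minimal Python sketch (the membership scores for the five cases are invented for illustration):

```python
def consistency(x, y):
    """Sufficiency consistency: degree to which membership in the
    condition set X stays within membership in the outcome set Y."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage: share of the outcome Y that is accounted for by X."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Hypothetical fuzzy membership scores for five cases:
# x = membership in the condition set, y = membership in the outcome set.
x = [0.9, 0.8, 0.6, 0.3, 0.1]
y = [1.0, 0.9, 0.7, 0.2, 0.4]

print(round(consistency(x, y), 3))
print(round(coverage(x, y), 3))
```

High consistency with modest coverage would suggest that X is (close to) sufficient for Y but explains only part of the outcome's occurrence, which is exactly the configurational, equifinal reading QCA encourages.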
There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA, in which no a priori decision is taken on which factor is the explanatory condition and which the outcome) as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development activities should rather target research design or technical issues. From this, a methodological agenda can be derived that asks about the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, but also calls for increased efforts toward a shared understanding of the mission of QCA.
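The Quine-McCluskey algorithm that standard QCA software applies to the truth table works by repeatedly merging condition configurations that differ in exactly one condition, replacing that condition with a "don't care" marker. A toy sketch of this core combining step (the three-condition configurations below are invented; full implementations additionally solve a prime-implicant chart to pick a minimal cover):

```python
def combine(a, b):
    """Merge two implicants that differ in exactly one literal.
    Dash ('don't care') positions must align; otherwise return None."""
    if any((ca == "-") != (cb == "-") for ca, cb in zip(a, b)):
        return None
    diff = [i for i, (ca, cb) in enumerate(zip(a, b)) if ca != cb]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def prime_implicants(configurations):
    """Combine configurations round by round until nothing merges;
    terms that never merged in a round are prime implicants."""
    terms = set(configurations)
    primes = set()
    while terms:
        merged, used = set(), set()
        for a in terms:
            for b in terms:
                c = combine(a, b)
                if c is not None:
                    merged.add(c)
                    used.update({a, b})
        primes |= terms - used
        terms = merged
    return primes

# Hypothetical truth-table rows linked to the outcome: each string encodes
# presence (1) or absence (0) of three conditions A, B, C.
rows = {"110", "111", "011"}
print(sorted(prime_implicants(rows)))
```

Here the three rows reduce to the implicants "11-" and "-11", i.e. the configurational solution A*B + B*C, illustrating how minimization yields the conjunctions and the equifinality characteristic of QCA results.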
Richard Ned Lebow
Counterfactuals seek to alter some feature or event of the past and, by means of a chain of causal logic, show how the present might, or would, be different. Counterfactual inquiry—the consideration of counterfactual situations—is essential to any causal claim. More importantly, counterfactual thought experiments are essential to the construction of analytical frameworks. Policymakers routinely use them to identify problems, work their way through problems, and select responses. Good foreign-policy analysis must accordingly engage and employ counterfactuals.
There are two generic types of counterfactuals: minimal-rewrite counterfactuals and miracle counterfactuals. Both are relevant when formulating propositions and probing contingency and causation. This article also presents a set of protocols for using both kinds of counterfactuals toward these ends and illustrates these uses and protocols with historical examples. Policymakers invoke counterfactuals frequently, especially with regard to foreign policy, both to choose policies and to defend them to key constituencies. However, they tend to use counterfactuals in a haphazard and unscientific manner, so it is important to learn more about how they think about and employ counterfactuals in order to understand foreign policy.
Comparative public policy (CPP) is a multidisciplinary enterprise aimed at policy learning through lesson drawing and theory building or testing. We argue that CPP faces the challenge of conceptual and analytical standardization if it is to make a significant contribution to the explanation of policy decision-making. This argument is developed in three sections based on the following questions: What is CPP? What is it for? How should it be done? We begin with a presentation of the historical evolution of the field, its conceptual heterogeneity, and the persistence of two distinct bodies of literature made up of basic and applied studies. We proceed with a discussion of the logics operating in CPP, their approaches to causality and causation, and their contribution to middle-range theory. Next, we explain the fundamental problems of the comparative method, starting with a synthesis of the main methodological pitfalls and the problems of case selection and then reviewing the main protocols in use. We conclude with a reflection on the contribution of CPP to policy design and policy analysis.
Process tracing is a research method for tracing causal mechanisms using detailed, within-case empirical analysis of how a causal process plays out in an actual case. Process tracing can be used both for case studies that aim to gain a greater understanding of the causal dynamics that produced the outcome of a particular historical case and to shed light on generalizable causal mechanisms linking causes and outcomes within a population of causally similar cases. This article breaks down process tracing as a method into its three core components: theorization about causal mechanisms linking causes and outcomes; the analysis of the observable empirical manifestations of the operation of theorized mechanisms; and the complementary use of comparative methods to enable generalizations of findings from single case studies to other causally similar cases. Three distinct variants of process tracing are developed, illustrated by examples from the literature.
Sabine C. Carey and Neil J. Mitchell
Pro-government militias are a prominent feature of civil wars. Governments in Colombia, Syria, and Sudan recruit irregular forces in their armed struggle against insurgents. The United States collaborated with Awakening groups to counter the insurgency in Iraq, just as colonizers used local armed groups to fight rebellions in their colonies. An emerging cross-disciplinary literature on pro-government non-state armed groups generates a variety of research questions for scholars interested in conflict, political violence, and political stability: Does the presence of such groups indicate a new type of conflict? What are the dynamics that drive governments to align with informal armed groups and that make armed groups choose to side with the government? Given the risks entailed in surrendering a monopoly of violence, is there a turning point in a conflict when governments enlist these groups? How successful are these groups? Why do governments use these non-state armed actors to shape foreign conflicts, whether as insurgents or counterinsurgents abroad? Are these non-state armed actors always useful to governments, or are they perhaps even an indicator of state failure?
We examine the demand for and supply of pro-government armed groups and the legacies that shape their role in civil wars. The enduring pattern of collaboration between governments and these armed non-state actors challenges conventional theory and the idea of an evolutionary process of the modern state consolidating the means of violence. Research on these groups and their consequences began with case studies, and these continue to yield valuable insights. More recently, survey work and cross-national quantitative research contribute to our knowledge. This mix of methods is opening new lines of inquiry for research on insurgencies and the delivery of the core public good of effective security.
Caroline A. Hartzell
Civil wars have typically been terminated by a variety of means, including military victories, negotiated settlements and ceasefires, and “draws.” Three very different historical trends in the means by which civil wars have ended can be identified for the post–World War II period. A number of explanations have been developed to account for those trends, some of which focus on international factors and others on national or actor-level variables. Efforts to explain why civil wars end as they do are considered important because one of the most contested issues among political scientists who study civil wars is how “best” to end a civil war if the goal is to achieve a stable peace. Several factors have contributed to this debate, among them conflicting results produced by various studies on this topic as well as different understandings of the concepts of war termination, civil war resolution, peace-building, and stable peace.
Recent methodological work on systematic case selection techniques offers ways of choosing cases for in-depth analysis such that the probability of learning from the cases is enhanced. This research has undermined several long-standing ideas about case selection. In particular, random selection of cases, paired or grouped selection of cases for purposes of controlled comparison, typical cases, and extreme cases on the outcome variable all appear to be much less useful than their reputations have suggested. Instead, it appears that scholars gain the most in terms of making new discoveries about causal relationships when they study extreme cases on the causal variable or deviant cases.
Modern Populism: Research Advances, Conceptual and Methodological Pitfalls, and the Minimal Definition
Takis S. Pappas
Populism is one of the most dynamic fields of comparative political research. Although its study began in earnest only in the late 1960s, it has since developed through four distinct waves of scholarship, each pertaining to distinct empirical phenomena and with specific methodological and theoretical priorities. Today, the field is in need of a comprehensive general theory that will be able to capture the phenomenon specifically within the context of our contemporary democracies. This, however, requires our breaking away from recurring conceptual and methodological errors and, above all, a consensus about the minimal definition of populism.
All in all, the study of populism has been plagued by 10 drawbacks: (1) unspecified empirical universe, (2) lack of historical and cultural context specificity, (3) essentialism, (4) conceptual stretching, (5) unclear negative pole, (6) degreeism, (7) defective observable-measurable indicators, (8) a neglect of micromechanisms, (9) poor data and inattention to crucial cases, and (10) normative indeterminacy. Most, if not all, of the foregoing methodological errors are cured if we define, and study, modern populism simply as “democratic illiberalism,” which also opens the door to understanding the malfunctioning and pathologies of our modern-day liberal representative democracies.