
PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, POLITICS. © Oxford University Press USA, 2016. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see the applicable Privacy Policy and Legal Notice).

date: 28 April 2017

Comparative Public Policy

Summary and Keywords

Comparative public policy (CPP) is a multidisciplinary enterprise aimed at policy learning through lesson drawing and theory building or testing. We argue that CPP faces the challenge of conceptual and analytical standardization if it is to make a significant contribution to the explanation of policy decision-making. This argument is developed in three sections based on the following questions: What is CPP? What is it for? How should it be done? We begin with a presentation of the historical evolution of the field, its conceptual heterogeneity, and the persistence of two distinct bodies of literature comprising basic and applied studies. We proceed with a discussion of the logics operating in CPP, their approaches to causality and causation, and their contribution to middle-range theory. Next, we explain the fundamental problems of the comparative method, starting with a synthesis of the main methodological pitfalls and the problems of case selection, and then reviewing the main protocols in use. We conclude with a reflection on the contribution of CPP to policy design and policy analysis.

Keywords: Comparative politics, policy analysis, policy design, process tracing, qualitative comparative analysis (QCA)


Comparative public policy (hereafter CPP) encompasses many dimensions of the political system; hence, it can refer to national, subnational, or regional levels of action in order to explain the decision-making process and the role played by institutions, actors, and context. The increasing importance of comparison in policy studies responds first to the intensification of transnational relationships and interdependence caused by globalization. It also responds to the demand for accountability expressed by civil society organizations, while taking advantage of the availability and accessibility of information made possible by the Internet and social networks.

Taking into consideration new policy approaches allows a better understanding of other political traditions. This “benchmarking role” (Dodds, 2013, p. 7) can focus on their institutional dimensions or collective action; likewise, it can explain policy success or failure, policy change or continuity, policy diffusion and convergence, and so forth. Since CPP offers a virtually infinite number of cases and variables to research, there is no theoretical limit to the number of logical propositions it can address.

From its beginnings in the 1960s and 1970s, this multidisciplinary enterprise has been highly influenced by comparative politics and economics. However, since the 1980s and 1990s, historical and sociological neoinstitutionalism have made significant contributions to policy studies in general, and to CPP in particular. Since the 2000s, the field has been increasingly influenced by international studies, even though international relations has always taken a natural interest in comparison.

This epistemological layout had decisive effects on the research agenda, beyond the influence of international cooperation and financial organizations on local demand-driven reforms (Heidenheimer, 1985). The early focus on macroeconomics and social policies was later displaced by an interest in institutional systems and policy networks, before transnational interactions, multilevel governance, and policy transfer became major issues. It also had important consequences for the selected methods. Initially, large-N comparison and statistical research dominated the field, until small-N comparison and, more recently, multimethod research came of age. Today, CPP makes extensive use of case-study methods, which include both within-case and small-N comparative analysis (George & Bennett, 2005, p. 18).

CPP is now a cluster of many research groups, networks, and academic journals that recently converged into the International Public Policy Association. Yet after four decades of cross-fertilization, CPP still lacks a common language and common protocols that would produce substantial cumulative knowledge for decision-making and theory building or testing (Dodds, 2013, p. 324; Schmitt, 2013; Engeli & Rothmayr Allison, 2014b, p. 2). It also needs to improve its methods, in particular to bridge the two traditional bodies of literature produced by basic research on the one hand, and by policy analysis on the other (Etzioni, 2006).

This paper aims to contribute to the debate, based on the following questions: What is CPP? What is it for? How should it be done? We begin with a presentation of the historical evolution of the field, its conceptual heterogeneity, and the persistence of two distinct bodies of literature. We proceed with a discussion of the logics operating in CPP, their approaches to causality and causation, and their contribution to middle-range theory. Next, we explain the fundamental problems of the comparative method, starting with a synthesis of the main methodological pitfalls and the problems of case selection, and then reviewing the main protocols in use. The paper concludes with a reflection on the contribution of CPP to policy design and policy analysis.

What Is CPP?

The Comparative Turn in Policy Studies

It is hard to define a state of the art of CPP, given the heterogeneity of policy studies and the polysemy of comparison as method and substance (but see Heidenheimer, 1985; Hassenteufel, 2005; DeLeon, 2006; Engeli & Rothmayr Allison, 2014a). CPP emerged from the convergence of two traditions in policy studies. The European tradition stems from the cameral sciences and the state’s need for command and control, which were followed by early studies on bureaucracy as the ideal type of the rational and instrumental logic (Lascoumes & Le Galès, 2007). The North American tradition stems from the professionalization of civil servants and the Woodrow Wilson administration’s need for strategic assessment (DeLeon, 2006). This convergence toward comparative studies would only become possible after World War II, in the wake of Keynesian economics and the development of the welfare state on both sides of the Atlantic. Foreign affairs and domestic policies became intertwined with the production of a specialized body of literature, while public administration developed new instruments through statistics and planning.

The 1970s were the actual “launching decade of comparative policy studies” (Heidenheimer, 1985, p. 445). Political scientists familiar with the international arena sought to explain how the state and political variables (such as party system configurations) shape public policies (see, for instance, Flora & Heidenheimer, 1981; Rose & Shiratori, 1986; Castles, 1998). Their first publications consisted of descriptive analyses of the impact of major policies of similar national agencies; the role of parties, interest groups, and bureaucracies in policy outputs; and long-term patterns of change and continuity over time and across regimes. Yet their theoretical perspectives, their concepts of society and policy, and their research methods and units of analysis show that CPP was already “more than the sum of its parts” (Donald Hancock, 1983, quoted in Heidenheimer, 1985, pp. 445–446).

Among the classical examples referred to by methodological textbooks is scholarship on democracy and democratization, the determinants of economic development, the democratic peace, the relationships between social movements and state reform, party systems, and political regimes (Geddes, 2003; Ragin, 2008a; Rihoux & Ragin, 2009; Landman, 2013; Berg-Schlosser, 2012). Basic issues include the objectives of public policies, policy content and instruments, the ways in which a policy is adopted, and the interplay between the public and the dominant actors of public policies.

The evaluation of policy outputs, focusing on policy design (Schneider & Ingram, 1988) and policy implementation (May, 1992), was rather prescriptive and aimed at drawing lessons from policy failure or success. It could be related to policy areas (like health, pensions, incomes, education, housing, taxation, and employment), short-term policy impact effectiveness or long-term systemic consequences (like the welfare state, corporatism, and the crisis of democracy), and the administrative relationships between national and local governments.

Since the “comparative turn” in policy studies (Engeli & Rothmayr Allison, 2014b, p. 3) CPP has been dominated by the comparison of sectorial policies like macroeconomics (Hall, 1986), social welfare (Esping-Andersen, 1990; Castles, 1994), foreign policy (George & Bennett, 2005), and the environment (Steinberg, 2003). Yet, CPP has also produced a considerable volume of knowledge about policy problems across countries, such as institutional systems (Mény & Surel, 2009), implementation gaps (Lindquist, 2006), policy instruments (Dodds, 2013; Lodge, 2007), and bureaucracies (Peters & Pierre, 2004; Pierre & Ingraham, 2010).

Contemporary research questions deal with the international and transnational dimensions of the policy process, particularly the inclusion of new geographical areas in Asia and Latin America, and new transversal issues from gender and cultural studies (Dodds, 2013; Engeli & Rothmayr Allison, 2014b). These transnational dimensions are an essential aspect of both globalization and regional integration, which have been widely studied since the 1990s by research on democratic governance (March & Olsen, 1995; Pierre & Peters, 2000; Kooiman, 2002). Further, scholars are more and more concerned with institutions (Scharpf, 2000), policy convergence, and policy diffusion as a result of a process articulating the transnational, national, and subnational levels with vertical or horizontal modalities of coercion (Hassenteufel, 2005, pp. 122–124).

A Persistent Conceptual Heterogeneity

Analytical frameworks are mostly used for comparative purposes, like the Advocacy Coalition Framework (Weible & Nohrstedt, 2013; Douglas, Ingold, Nohrstedt, & Weible, 2014), comparative policy agendas (Baumgartner & Green-Pedersen, 2008), the Institutional Analysis and Development Framework (Ostrom, 2011), and even the multiple-streams approach, which enjoys a new popularity among studies of the policy cycle (Béland & Howlett, 2016; Howlett, McConnell, & Perl, 2016). However, CPP is still a heterogeneous field of research along many dimensions (Peters & Pierre, 2006b), including the conceptualization of public policies, the perspectives on patterns of decision-making, the analytical frameworks, and the methods of causal explanation (Schmitt, 2013, pp. 30–32).

Such heterogeneity stems from the difference between “policy research” and “basic research,” where the former is expected to change the world and the latter to understand it (Etzioni, 2006, p. 833). It also reflects two traditions of policy research (Parsons, 2005), opposing the “policy analysis” inherited from Lasswell’s “policy sciences” (Lasswell, 1971) to policy studies, which are more practically oriented to decision-making. Hence, they treat governments’ decision-making differently, depending on whether the focus is on a policy’s impact, its outcomes, or its outputs.

Both traditions depart from different research questions and pursue different objectives, so they make different uses of comparison (Engeli & Rothmayr Allison, 2014b, p. 2). Thus, CPP can be defined in a pluralist fashion as “the use of the comparative approach to investigate policy processes, outputs, and outcomes,” whereby any research counts when either explicitly or implicitly contrasting those aspects from one or more units (Dodds, 2013, p. 13). However, according to a more restrictive definition, it is more likely “a logic of doing research, namely a commitment to the systematic investigation across states, domains, and time, not a particular method in terms of research strategies and instruments” (Lodge, 2007, p. 276). Like comparative politics, it implies a combination of substance and method (Mair, 1996; Caramani, 2014, pp. 3–4).

Either definition may be legitimate, but comparative research commands a “conscious” acceptance of its theoretical and methodological implications (Sartori, 1970). Yet four common errors keep undermining transnational CPP: fiction, distance, reduction, and selection bias (Hassenteufel, 2005, pp. 117–118). “Fictitious” comparison is the most frequent in collective publications on sectorial policies and institutional aspects, where the presentation of national cases is not based on a homogeneous analytical framework. “Distant” comparison is another common strategy that lacks credibility, for it relies on secondary sources, surveys, or interviews with local so-called policy experts instead of direct observation. A “reductive” comparison is based on quantitative indicators that hardly capture the complexity of a causal relationship by reducing it to dichotomous categories. A comparison is “biased” when scholars select a few cases to test a theory without actually considering other possible alternatives, which turns these cases into mere illustrations.

Finally, this dualism is also due to the expansion of comparative politics worldwide and the consequent conceptual stretching of policy and politics. Most of all, it underlines the need for a common language and a widely accepted conceptual framework to describe the relationships between substantive policies and the structural and environmental factors influencing the policy process (Fritz Scharpf, 1977, in Heidenheimer, 1985, p. 455). In particular, the lack of typologies—such as Theodore Lowi’s typology of public policies by degrees and modes of coercion (Lowi, 2008), which eventually fails to describe most non–United States arenas—still hinders cross-area CPP.

What Is CPP For?

Explaining Causation

Comparison is about searching for differences and similarities across cases. When comparing public policies, scholars usually seek to gain more knowledge on each process rather than to identify general patterns of explanation for many processes. By forcing them to verify their answers to research questions beyond single case studies, comparison widens their understanding of potential policy options and helps them to evaluate competing responses to a common event (Engeli & Rothmayr Allison, 2014b, p. 3). Hence, CPP can make a major contribution to the understanding and changing of the real world through theory building and testing, considering the real world as a “laboratory” (Peters, 2013, p. 3) to identify causal relationships through natural experiments in order to determine (at least theoretically) the best solutions for policy problems.

There are different ways to proceed, but when it comes to explaining causality, they all seek to “maximize experimental variance, minimize error variance and control extraneous variance” (Peters, 2013, p. 31). Experimental variance points to variance in the outcomes (Y) of a research question as a result of a specific cause (X). Maximizing it means being certain that Y actually varies and avoiding the bias caused by selecting cases on their value of Y. Error variance refers to the random effects of an unmeasured variable or factor (R) on the theoretically relevant outcomes. Minimizing it means avoiding wrong observations and misinterpretation. Extraneous variance refers to the spuriousness caused by a variable (Z) that has a systematic relationship with X or Y. Controlling it means favoring parsimonious theories and carefully selecting and interpreting observations (Peters, 2013, pp. 33–34).
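These three kinds of variance can be made concrete in a toy simulation (a hypothetical sketch, not drawn from the sources cited here): an outcome Y depends on a cause X, a confounder Z correlated with X, and random noise R. Ignoring Z inflates the estimated effect of X, while controlling for it recovers the true effect.

```python
import numpy as np

# Illustrative sketch: Y depends on the cause X (experimental variance),
# a confounder Z correlated with X (extraneous variance), and random
# noise R (error variance). All names and values are hypothetical.
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)            # extraneous variable Z
x = 0.8 * z + rng.normal(size=n)  # X is systematically related to Z
r = rng.normal(size=n)            # error variance R
y = 2.0 * x + 1.5 * z + r         # true causal effect of X on Y is 2.0

# Naive estimate of X's effect, ignoring Z: biased upward by the confounder.
naive = np.polyfit(x, y, 1)[0]

# Controlling for Z (multiple regression) recovers the true effect.
coefs, *_ = np.linalg.lstsq(np.column_stack([x, z, np.ones(n)]), y, rcond=None)
controlled = coefs[0]

print(round(naive, 2), round(controlled, 2))  # naive > 2.0, controlled ≈ 2.0
```

The sketch only illustrates the logic: in observational comparison, "controlling extraneous variance" amounts to conditioning on Z, whether statistically or through case selection.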

That being said, CPP can follow two different conceptions of causation that lead to different methods of comparison (Beach & Pedersen, 2016). One is probabilistic, and it states that an outcome or a cause (hence a causal relationship) can vary in degree over different situations (like a political crisis, economic development, or democracy). The other is deterministic, and it states that an outcome or a cause can vary in nature over different situations, which means that it can happen or not (like a war between two countries or a government change). These approaches have been qualified as “neo-positivist” and “realist” epistemologies (Furlong & Marsh, 2010, p. 20). Yet from a philosophical perspective, both belong to the same philosophical realm of dualism (as opposed to monism) in the relation between mind and world (Jackson, 2016, p. 33).

According to the probabilistic stance, the conditions of a theoretically relevant outcome are expected to be observed across as many cases as possible to identify mean causal effects. Therefore, the larger the number of cases—synonymous with empirical observations, for statisticians—the stronger the theory. According to the deterministic stance, a causal relationship requires as detailed a description as possible to be proven. Therefore, the more attributes of a single case are documented, the better the theory, if one aims at providing a sufficient explanation. Hence, a probabilistic logic leads to a variable-oriented research design, while a deterministic logic leads to case studies. Of course, this means a trade-off between leverage (or external validity) and precision among case studies, each of which might support a different claim for theory testing or theory building.

Theory Building and Theory Testing

Since World War II, the methodological developments of the social sciences, particularly in political science, have increasingly resorted to statistical methods and standard regression models, while comparative studies were evolving toward conceptions of causality contrary to the assumptions required for these methods, leading, for instance, to the theories of rational choice1 and path dependence2. The problem comes from the widening gap between the methods of comparative research and the ontologies that have inspired scholars in comparative politics since the “comparative revolution” of the 1970s (Hall, 2003) and the neoinstitutional rise of the 1980s.

This difference was epitomized by Lijphart’s definition of “the” (sic) comparative method, as opposed to experiments and statistics (Lijphart, 2008), which radicalized former neopositivist statements on comparative research designs (see, for instance, Przeworski & Teune, 1970). The experimental method is based on group comparisons (at least one experimental group versus one control group), a well-known method in medicine that is difficult to replicate in the social sciences due to practical and ethical limitations. The statistical method is a common substitute for experiments in the social sciences, when a large population of cases is available and allows for testing the behavior of a limited number of variables across time or space. The comparative method, or small-N comparison, is another substitute, when the number of available cases is more limited.

These methods derive from different scientific ontologies, which entail different conceptions of theory and causation (Hall, 2003, pp. 382–383; Beach & Pedersen, 2016). Statistical CPP derives from neo-positivism, aiming at developing law-like theories and assimilating a causal explanation to a predictable covariation (Jackson, 2016, p. 59). Small-N CPP derives from critical realism, aiming at developing middle-range theories and defining a causal explanation in terms of asymmetrical patterns of causation (Jackson, 2016, pp. 88–89), which means that the absence of a cause does not allow one to predict the absence of an effect.

On the one hand, large-N comparative studies often focus on policy impacts and long-term cycles of welfare state expansion and retrenchment (Heidenheimer, Heclo, & Adams, 1990; Lodge, 2007, p. 275; Schmitt, 2013, p. 37). They are akin to econometrics and statistics, inasmuch as they rely on quantitative measurement to examine the relationships between social, economic, and political phenomena (Breunig & Ahlquist, 2014, p. 109). Their descriptions of single variables summarize the key properties of a distribution, such as central tendency (mean) and dispersion (standard deviation). They use regression models to identify and assess causal relationships and to forecast policy impacts. Eventually, they seek to capture distinct types of a phenomenon along several dimensions, through typology formation and classification.

On the other hand, within-case longitudinal comparison and small-N comparative studies are particularly akin to political economy and historical sociology, since they rely on a deep knowledge of combinations of causal factors in a specific national context or a sectorial area (Dodds, 2013; Kennett, 2006). They are better at explaining causality in particular cases than at producing theories on the general causal effects and causal weight of variables across cases (George & Bennett, 2005, p. 26; Steinberg, 2007). In Lijphart’s typology, comparison is the poor relation of the proper methods of political studies, due to the “degree of freedom”3 problem, which does not allow one to prove causality beyond the specific cases being studied. Yet to overcome this limitation, multimethod designs have spread across political science and policy studies, combining quantitative and qualitative techniques to establish causal relationships (Brady & Collier, 2010; Berg-Schlosser, 2012; Engeli & Rothmayr Allison, 2014a; Bennett & Checkel, 2015), turning the definition of sufficiency and necessity into a much more complex issue than when it is treated as dichotomous, without giving up on causal explanation.

Case-centered research designs are commonly used in CPP to achieve high levels of “conceptual validity” (George & Bennett, 2005, p. 19) regarding the indicators that best characterize the theory to be tested. Hence the increasing interest of policy analysis in process tracing and the quest for causal mechanisms in policy outputs and outcomes (Blatter & Haverland, 2014; Kay & Baker, 2015; Charbonneau, Henderson, Ladouceur, & Pichet, 2016). Today, case studies are used as a complement to data sets and regression analysis in statistical research designs (King, Keohane, & Verba, 1994, p. 89, note 11; Gerring, 2007, p. 88, 2009). Statistics are also used in case-study research and small-N research designs to strengthen causal theories (Beach & Rohlfing, 2015).

The statistical treatment of necessary and sufficient conditions in small-N and intermediate-N studies includes probabilistic testing of deterministic causes, based on Bayesian logic4 (Bennett, 2008, 2010), and fuzzy sets in configurational research or qualitative comparative analysis (QCA), based on Boolean logic5 (Ragin, 2008a, 2008b; Rihoux & Ragin, 2009; Berg-Schlosser, 2012; Engeli & Rothmayr Allison, 2014c). In both cases, the objective of those tests is to increase the degree of empirical likelihood of a causal relationship, comparable to the quest for proof in a criminal investigation. When causes (X) and outcomes (Y) are dichotomous (1; 0), the cases providing the best test of necessary causation are those in which Y occurs (based on a “positive on outcome design”), while the cases providing the main test of sufficient causation are those in which X occurs (based on a “positive on cause design”) (Charles Ragin, 2000, in Collier, Brady, & Seawright, 2010, p. 146).
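With dichotomous data, these two test designs can be sketched mechanically (the cases below are hypothetical, not drawn from the literature cited): necessity fails only if Y occurs without X, and sufficiency fails only if X occurs without Y.

```python
# Illustrative sketch with hypothetical (X, Y) observations, 1 = present,
# 0 = absent. The "positive on outcome" design checks necessity; the
# "positive on cause" design checks sufficiency.
cases = [
    (1, 1), (1, 1), (0, 0), (1, 0), (0, 0),
]

def necessary(cases):
    """X is necessary for Y if Y never occurs without X (every Y=1 case has X=1)."""
    return all(x == 1 for x, y in cases if y == 1)

def sufficient(cases):
    """X is sufficient for Y if X is always followed by Y (every X=1 case has Y=1)."""
    return all(y == 1 for x, y in cases if x == 1)

print(necessary(cases), sufficient(cases))  # True False: the (1, 0) case
# refutes sufficiency, while no (0, 1) case exists to refute necessity.
```

The asymmetry is visible in the filters: each test only ever consults the "positive" cases on one side of the relationship, which is why case selection on Y (or on X) is informative here despite the general warnings against it.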

How Should CPP Be Done?

Methodological Pitfalls

The importance of small-N and intermediate-N comparative designs in CPP might raise objections to its ability to produce strong theories and tests. But these methods actually foster middle-range theory building and testing, contributing to bridging the gap between policymaking and basic research (George & Bennett, 2005, p. 220). Here, case selection is key and should follow strict criteria to avoid common methodological pitfalls.

Besides the degree of freedom problem, four general conditions should be met in order to explain a causal relation. First, unit homogeneity is necessary so that what is compared is actually comparable (King, Keohane, & Verba, 1994, p. 91).6 Second, there should be no correlation between the variables X1 included in the analysis and any variable Xn excluded from it but correlated with Y. Third, all cases should be fully independent, so that X in a case A is not affected by the value of X or Y in other cases. Fourth, there should be no endogeneity7 or reciprocal causation between X and Y.

There are many ways to cope with those problems, although each one affects the leverage of any conclusion to be drawn. Endogeneity can be corrected statistically by parsing Y and studying only the parts that are actually a consequence of X, or by parsing X to ensure that only the exogenous parts are taken into account (King, Keohane, & Verba, 1994, p. 187). Another way to cope with it is to transform the endogeneity problem into an omitted variable and control for this variable. It is also possible to select at least some observations without endogeneity.

To increase the degree of freedom, it is possible to reduce the “property-space” of the analysis by combining two or more variables into a single one (Lijphart, 2008, p. 250). More generally, scholars can focus on “comparable” cases, similar in a large number of important features but different with regard to the variables under investigation (Lijphart, 2008, p. 250). This technique makes the area approach of particular interest for CPP—for instance, to compare sets of countries like the members of the Commonwealth, the European Union, Latin America, or southeastern Asia.

Other methodological pitfalls are more specific to CPP. First, although most public policies are affected by transnational structures and agencies, their design and implementation actually rely on local factors that scarcely travel across countries (ranging from the political system to the civic culture). For instance, the heterogeneity of statistics due to different unit definitions and measurement techniques from one country to another is a source of “ecological fallacy” (William S. Robinson, 1950, cited in Caramani, 2014, p. 13) that prevents one from generalizing conclusions drawn from single-case studies and therefore restricts the scope of comparison to a limited number of cases.

Further, the potential lack of independence between the selected cases, also known as “Galton’s problem” (George & Bennett, 2005, p. 31), can make it difficult to sort out diffusion effects caused by extraneous variables. This is a common issue regarding the effects of transnational factors and globalization on local processes, such as the prevalence of presidential regimes in Latin America due to the diplomatic influence of the United States, or the convergence of fiscal policies within the euro zone (Peters, 2013, p. 44; Dodds, 2013, p. 328; Keman, 2014, p. 57).

Last but not least, the orientation toward case-study methods in CPP raises problems of selection bias, which is known to be the Achilles heel of qualitative methods. In statistical terms, case selection is biased when some form of selection process “results in inferences that suffer from systematic error” (George & Bennett, 2005, p. 23). This is common when cases are selected from a population according to the “dependent variable” (Geddes, 2003, p. 89), or the outcome of a causal relationship, without a systematic analysis of the whole universe of cases. In particular, in a statistical data set, selection on the dependent variable tends to limit observations to a partial range of variation where an extreme positive or negative value of Y is observed, which alters the estimated average causal effect, a bias also known as “regression toward the mean” (James J. Heckman, 1976, 1979, in Goertz & Mahoney, 2012, pp. 178–179).
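A minimal simulation (hypothetical data, not from the sources cited) illustrates this bias: when only the high values of Y are retained, the estimated effect of X is attenuated relative to the full sample.

```python
import numpy as np

# Illustrative sketch of selection on the dependent variable: truncating
# the sample on Y restricts its range of variation and shrinks the
# estimated causal effect toward zero.
rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)  # true effect of X on Y is 1.0

full_slope = np.polyfit(x, y, 1)[0]

# "Selecting on the dependent variable": keep only high-Y cases.
keep = y > 1.0
truncated_slope = np.polyfit(x[keep], y[keep], 1)[0]

print(round(full_slope, 2), round(truncated_slope, 2))
# The truncated sample yields a slope well below the true value of 1.0.
```

This is the statistical intuition behind the caveat that follows: variation on Y must be preserved, or the bias introduced by the selection must be corrected.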

Hence, as a general caveat, statisticians recommend avoiding selecting cases on the outcome (Y), or at least being aware of the risks implied by such a technique (King, Keohane, & Verba, 1994, pp. 129–130). This does not mean that values of Y should not be taken into account in a research design, but that the possible bias introduced by such a selection needs correction. Above all, the case selection should always allow some degree of variation on Y.

Still, there are good reasons to favor a “causes-of-effects” approach (Goertz & Mahoney, 2012, p. 42) in a qualitative research design in order to explain the outcomes or study the effects of particular causal factors in individual cases or small-N sets of cases. First, the different types of research questions are translations of why X causes Y (Goertz & Mahoney, 2012, p. 43). While a large-N research design seeks to explain the average effect of X1 on Y1 within a population of cases, a small-N research design deals with the factors (Xn) that explain Y1 for one case or a few specific cases.

Second, case selection on the outcome is useful to identify and discard variables according to their sufficient and necessary relationship with this outcome. It depends on what is claimed regarding the necessity and sufficiency of causal conditions. When looking for potential causal paths and variables leading to a specific outcome (Y1), CPP focuses on combinations of conditions (Xn) to produce multivariate explanations rather than experimentally isolated variables (George & Bennett, 2005, p. 23). A single variable can be necessary or sufficient for an outcome, considering an entire population of cases, a particular historical context, or a conjunction of variables (George & Bennett, 2005, p. 26).

Third, the relevance of selecting a case on X or Y for theory testing depends on what the theory states (Ragin & Schneider, 2011). If it states a necessary condition, it is logical to select a case on a positive outcome (Y=1), but if it states a sufficient condition, it is better to select on a positive causal factor (X=1) (Goertz & Mahoney, 2012, p. 181). Therefore, a precise justification of how and why cases are selected for comparison is the best guarantee against selection bias.

Case Selection

Case selection and variable definition are two critical issues in CPP since random case selection tends to be unrepresentative and uninformative. In this respect, the most common technique is the selection of “typical” cases, representative of a broader set of cases that exemplify a typical set of values (Gerring, 2007, p. 91). However, a typical case is not necessarily representative of a research problem, and its validity as “a case of” some theoretical problem cannot be taken for granted.

On the contrary, cases might be chosen for being unrepresentative of a theory, so that their study could lead to the identification of new variables and hypotheses (George & Bennett, 2005, p. 20). The selection of “extreme” cases is based on extreme values of X1 or Y1; that is to say, observations that lie far from the mean (Gerring, 2007, p. 101). They are often considered to be paradigmatic of a phenomenon, but contrary to typical cases, they are chosen to maximize variance as an exploratory method. Likewise, “deviant” cases are useful for conducting an exploratory analysis inasmuch as they reveal anomalies regarding a general model. Finally, “influential” cases, which focus on unusual causes relative to a theory, may appear to invalidate it; but they may as well be the cases that prove the rule and, as such, end in a reinterpretation of the case for circumstances exogenous to the theory (Gerring, 2007, pp. 108–110). Therefore, their selection is based on the expectation of substantially changing the resulting estimates.

Another alternative to typical cases is the selection of “diverse” cases to achieve maximum variance along relevant dimensions. When the variables are categorical (such as in a yes/no dichotomy), the identification is straightforward, but when they are continuous (such as degrees of democracy, development, etc.), it requires the use of extreme values and the mean or median (Gerring, 2007, p. 98). Another subset of diverse cases includes multiple variables instead of a single one, following a logic of typological theorizing to reveal the pathways through which particular types relate to specific outcomes8. Cross-case techniques of case selection may include stratified random sampling through QCA, to introduce variation on the key variables of interest. Likewise, the need for internal validity of descriptive or causal inferences regarding the selected set of cases requires selecting variables or theoretical factors in a typological fashion (Seawright & Collier, 2010, p. 334).
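For a single continuous variable, the identification of diverse cases can be sketched as picking the extremes plus the median (the case names and scores below are hypothetical):

```python
# Illustrative sketch: "diverse" case selection on one continuous variable,
# taking the low extreme, the median, and the high extreme so that the
# chosen cases span the variable's observed range.
scores = {"A": 0.12, "B": 0.95, "C": 0.48, "D": 0.33, "E": 0.71}

ranked = sorted(scores, key=scores.get)              # cases ordered by score
diverse = [ranked[0], ranked[len(ranked) // 2], ranked[-1]]

print(diverse)  # ['A', 'C', 'B']: minimum, median, maximum
```

With multiple variables, the same idea generalizes to picking one case per cell of a typology rather than per point of a single distribution.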

A "crucial" case can be selected for two very different reasons: a confirming or a disconfirming purpose (Eckstein, 1975, cited in Gerring, 2007, p. 115). Either way, crucial cases are chosen because they are expected to fit a theory closely on all dimensions except the dimension of theoretical interest; the objective is to uncover relevant variables not considered previously (Lijphart, 2008, p. 257). A most-likely case can be used for disconfirming purposes when a cause is predicted to produce a certain outcome and yet fails to do so; selecting this kind of case puts a theory to a demanding test, which may be necessary when many cases remain to be studied. Symmetrically, a least-likely case can be used for confirming purposes when a cause is predicted not to produce a certain outcome and yet it does; selecting this kind of case strengthens a theory where it is weakest, which may be necessary when many cases have already been studied.

Slightly different is the logic of elimination at work in a "pathway" case (Gerring, 2007, p. 122), which is useful for elucidating causal mechanisms when a covariation is well established but still requires a congruent explanation. When dealing with binary variables, pathway analysis always focuses on a single causal factor (X1). Following the logic of crucial cases, the causal factor of interest in a pathway case correctly predicts the outcome's positive value (Y1 = 1), while the other possible causes of Y1 (Xn) make wrong predictions (Gerring, 2007, p. 125). When dealing with continuous variables, two criteria ought to be met: the selected case should not be an outlier in the general model, and the case's score on the outcome should be strongly influenced by the theoretical variable of interest (X1), taking all other factors (Xn) into account (Gerring, 2007, p. 126).
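Gerring's two criteria for continuous variables can be paraphrased computationally. In the hypothetical sketch below, the cases, residuals, and cutoff are all invented for illustration; the pathway case is the one whose outcome the full model (including X1) predicts well but the reduced model (omitting X1) predicts badly:

```python
# Invented residuals for four hypothetical cases:
# (residual of the full model with X1, residual of the reduced model without X1).
residuals = {
    "A": (0.1, 0.2),
    "B": (0.2, 1.9),   # well predicted only when X1 is included
    "C": (1.5, 1.6),   # outlier even in the full model
    "D": (0.3, 0.9),
}

def pathway_score(case, cutoff=1.0):
    full, reduced = residuals[case]
    if abs(full) > cutoff:           # criterion 1: not an outlier overall
        return float("-inf")
    return abs(reduced) - abs(full)  # criterion 2: X1 drives the prediction

pathway = max(residuals, key=pathway_score)
print(pathway)  # "B"
```

Case C is excluded despite its large reduced-model residual because it violates the first criterion, while B maximizes the gap between the two models and is therefore the strongest pathway candidate under these assumptions.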

Comparing Differences and Similarities

The protocols for identifying sufficient and necessary conditions, coined by Mill (1843) as the methods of "difference" and "agreement," are the cornerstone of most CPP research designs (see note 9). The method of difference consists of comparing cases that differ with respect to X or Y but do not differ across comparable cases with respect to other variables (see note 10). By selecting cases that are similar in as many aspects as possible, following a "ceteris paribus" principle (Keman, 2014, pp. 52–54), this protocol aims at isolating a sufficient cause of a predicted outcome. Skocpol's (1979) explanation of the coming of social revolutions in countries as different as France, China, and Russia famously combined this protocol with the method of agreement.

The method of agreement consists of comparing cases in order to detect relationships between X and Y that remain stable despite differences in the other features of the cases compared (see note 11). The objective here is to identify the factors that can produce similarities across different sets of elements, such as presidential or parliamentary systems in Anglo-American democracies, or social regimes in Scandinavian countries (Peters, 2013, p. 40). By selecting cases that are as different as possible, following a "no matter what" principle (Keman, 2014, pp. 53–54), this protocol aims at isolating a necessary condition of a predicted outcome.
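Mill's two protocols can be illustrated as simple set operations over a toy truth table of binary conditions. The cases and conditions below are invented for illustration only; real applications involve far more careful conceptualization and measurement:

```python
# Toy truth table (invented data): binary conditions A, B, C and outcome Y.
cases = {
    "case1": {"A": 1, "B": 1, "C": 0, "Y": 1},
    "case2": {"A": 1, "B": 0, "C": 1, "Y": 1},
    "case3": {"A": 1, "B": 1, "C": 1, "Y": 1},
    "case4": {"A": 0, "B": 1, "C": 0, "Y": 0},
}
conditions = ("A", "B", "C")

# Method of agreement: conditions present in every case where Y occurs,
# however different those cases otherwise are (candidate necessary conditions).
positives = [c for c in cases.values() if c["Y"] == 1]
necessary = {k for k in conditions if all(c[k] == 1 for c in positives)}

# Method of difference: compare a positive and a negative case that agree on
# every condition but one; that condition is a candidate sufficient cause.
def differing_condition(pos, neg):
    diff = [k for k in conditions if pos[k] != neg[k]]
    return diff[0] if len(diff) == 1 else None

sufficient = differing_condition(cases["case1"], cases["case4"])
print(necessary, sufficient)  # {'A'} A
```

In this fabricated table, condition A survives both protocols: it is shared by all positive cases and is the only condition separating case1 (revolution occurs, so to speak) from the otherwise identical case4.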

Both protocols operate differently depending on whether the research design is case-oriented (comparing systems) or variable-oriented (comparing units of a system) (Collier, Brady, & Seawright, 2010, p. 145; Beach & Pedersen, 2013, p. 77). In a case-oriented design, they can be used either to identify the cause of an observed outcome (causes-of-effects design) or the effects of a known cause (effects-of-causes design) (Goertz & Mahoney, 2012). In practical terms, both methods are akin to a "causes-of-effects" approach, explaining outcomes produced by a combination of intervening variables, or a "recipe" (Ragin, 2008a, p. 109), except that the method of difference is better at explaining similar outcomes, while the method of agreement is better at explaining different outcomes (Berg-Schlosser, 2012, p. 36).

In a variable-oriented design, they can be used in either a "most-different systems" or a "most-similar systems" design (Przeworski & Teune, 1970, pp. 33–39). In a most-similar systems design, scholars maximize the number of common elements between cases and treat them as controlled extraneous variables. This application of the method of difference is thus a way to isolate the causal factor behind a divergence between two or more cases, yet it does not provide a strong theory of the effects of a specific independent variable. A common recommendation in comparative politics is therefore to favor a most-different systems design, maximizing the variance between cases that present the same theoretical causal relation. This application of the method of agreement allows one to isolate the causal factor behind a convergence between two or more cases, and it is better at isolating the predictable effect of an independent variable.

Ultimately, each method is of particular interest for theory building and testing (Peters, 2013, pp. 41, 43). The logic of a most-similar systems design is to control for extraneous variance through case selection; it can identify many possible causes without being able to eliminate any of them. Hence, it is better suited to theory building than to theory testing. The logic of a most-different systems design, on the other hand, is one of falsification, eliminating possible causes rather than searching for positive relationships between X and Y. Its principal task is to find relationships among variables that can travel across many cases.


Conclusion

CPP offers a wide field for theory building and testing with regard to policy design and policy learning. However, it reflects the methodological and substantive heterogeneity of policy studies, where lawlike comparative studies following deductive inferences coexist with descriptive studies closer to interpretive inference. Such heterogeneity may be read as a sign of the importance of the comparative turn, in a context of increasing policy diffusion caused by globalization. Yet it also hinders the development of the discipline, inasmuch as it neither generates a consistent and unified body of material nor facilitates academic discussion between different schools of thought and so-called analytical frameworks or models.

This comparative turn encompasses both applied and fundamental research. The experimental logic of comparison leaves enough room for both approaches, since it allows one to manipulate the real world through the selection of cases in order to determine regularities and causal relationships in the policy process. However, these traditions do not deal with causation problems in the same way. On the one hand, fundamental studies are still influenced by the principles and methods of comparative politics, in search of lawlike relations between policy inputs, outputs, and outcomes. On the other hand, applied research still follows a pluralist tradition in which thick descriptions serve contextual explanations aimed at providing insights into specific policy areas.

This extremely stylized representation purposefully underlines the conceptual stretching of CPP caused by differences in the treatment of governments' decision-making, depending on whether the focus is on a policy's impact, outcomes, or outputs. Most of all, it calls for a common language, techniques, and protocols that would actually make comparative studies comparable. This is particularly challenging for small-N comparisons, which have been the most common scale since large-N statistical research ceased to dominate the field in the 1980s.

The alignment of ontology with methodology appears to be the main challenge if CPP is to produce studies suitable for lesson drawing. The increasing use of multimethod research designs, combining large-N techniques (like standard regression) with case-study techniques (like fuzzy sets and process tracing), raises problems of internal and external consistency that have been dealt with unevenly. Using quantitative and qualitative techniques to explain the causal relationships between structural, behavioral, and institutional factors and policy outputs and outcomes means more than turning facts into figures or using triangulation as a way of validating hypotheses. It implies developing analytical frameworks aimed at middle-range theory building or testing, probably at the cost of renouncing macrotheories based on predictive models.


Acknowledgments

Thanks to Derek Beach and Patrick T. Jackson for their comments. Any misinterpretations or personal statements remain my own responsibility.


References

Baumgartner, F., & Green-Pedersen, C. (Eds.). (2008). Comparative studies of policy agendas. Oxon, U.K.: Routledge.

Beach, D., & Pedersen, R. B. (2013). Process tracing methods: Foundations and guidelines. Ann Arbor: University of Michigan Press.

Beach, D., & Pedersen, R. B. (2016). Causal case study methods: Foundations and guidelines for comparing, matching, and tracing. Ann Arbor: University of Michigan Press.

Beach, D., & Rohlfing, I. (2015). Integrating cross-case analyses and process tracing in set-theoretic research: Strategies and parameters of debate. Sociological Methods and Research, online version.

Béland, D., & Howlett, M. (2016). The role and impact of the multiple-streams approach in comparative policy analysis. Journal of Comparative Policy Analysis: Research and Practice, 18(3), 221–227.

Bennett, A. (2008). Process tracing: A Bayesian perspective. In J. Box-Steffensmeier, H. Brady, & D. Collier (Eds.), Oxford handbook of political methodology (pp. 702–721). Oxford: Oxford University Press.

Bennett, A. (2010). Process tracing and causal inference. In H. Brady & D. Collier (Eds.), Rethinking social inquiry: Diverse tools, shared standards (pp. 207–220). Lanham, MD: Rowman & Littlefield.

Bennett, A. (2015). Appendix: Disciplining our conjectures. Systematizing process tracing with Bayesian analysis. In A. Bennett & J. Checkel (Eds.), Process tracing: From metaphor to analytic tool (pp. 276–298). Cambridge, U.K.: Cambridge University Press.

Bennett, A., & Checkel, J. (Eds.). (2015). Process tracing: From metaphor to analytic tool. Cambridge, U.K.: Cambridge University Press.

Berg-Schlosser, D. (2012). Mixed methods in comparative politics: Principles and applications. London: Palgrave Macmillan.

Blatter, J., & Haverland, M. (2014). Case studies and (causal-)process tracing. In I. Engeli & C. Rothmayr Allison (Eds.), Comparative policy studies: Conceptual and methodological challenges (pp. 59–83). London: Palgrave Macmillan.

Brady, H., & Collier, D. (Eds.). (2010). Rethinking social inquiry: Diverse tools, shared standards. Lanham, MD: Rowman & Littlefield.

Breunig, C., & Ahlquist, J. S. (2014). Quantitative methodologies in public policy. In I. Engeli & C. Rothmayr Allison (Eds.), Comparative policy studies: Conceptual and methodological challenges (pp. 109–129). London: Palgrave Macmillan.

Caramani, D. (2014). Introduction to comparative politics. In D. Caramani (Ed.), Comparative politics (pp. 1–17). Oxford: Oxford University Press.

Castles, F. (1994). Is expenditure enough? On the nature of the dependent variable in comparative public policy analysis. Journal of Commonwealth and Comparative Politics, 32(3), 349–363.

Castles, F. (1998). Comparative public policy: Patterns of post-war transformation. Cheltenham, U.K.: Edward Elgar.

Charbonneau, E., Henderson, A. C., Ladouceur, B., & Pichet, P. (2016). Process tracing in public administration: The implications of practitioner insights for methods of inquiry. International Journal of Public Administration, online version.

Collier, D., Brady, H., & Seawright, J. (2010). Critiques, responses, and trade-offs: Drawing together the debate. In H. Brady & D. Collier (Eds.), Rethinking social inquiry: Diverse tools, shared standards (pp. 135–160). Plymouth, U.K.: Rowman & Littlefield.

DeLeon, P. (2006). The historical roots of the field. In M. Moran, M. Rein, & R. Goodin (Eds.), Oxford handbook of public policy (pp. 39–57). Oxford: Oxford University Press.

Dodds, A. (2013). Comparative public policy. London: Palgrave Macmillan.

Douglas, A., Ingold, K., Nohrstedt, D., & Weible, C. (2014). Policy change in comparative contexts: Applying the advocacy coalition framework outside of Western Europe and North America. Journal of Comparative Policy Analysis: Research and Practice, 16(4), 299–312.

Engeli, I., & Rothmayr Allison, C. (Eds.). (2014a). Comparative policy studies: Conceptual and methodological challenges. London: Palgrave Macmillan.

Engeli, I., & Rothmayr Allison, C. (2014b). Conceptual and methodological challenges in comparative public policy. In I. Engeli & C. Rothmayr Allison (Eds.), Comparative policy studies: Conceptual and methodological challenges (pp. 1–13). London: Palgrave Macmillan.

Engeli, I., & Rothmayr Allison, C. (2014c). Intermediate-N comparison: Configurational comparative methods. In I. Engeli & C. Rothmayr Allison (Eds.), Comparative policy studies: Conceptual and methodological challenges (pp. 85–107). London: Palgrave Macmillan.

Esping-Andersen, G. (1990). The three worlds of welfare capitalism. Oxford: Polity Press.

Etzioni, A. (2006). The unique methodology of policy research. In M. Moran, M. Rein, & R. Goodin (Eds.), Oxford handbook of public policy (pp. 833–843). Oxford: Oxford University Press.

Flora, P., & Heidenheimer, A. (1981). The development of welfare states in Europe and America. New Brunswick, NJ, and London: Transaction Publishers.

Furlong, P., & Marsh, D. (2010). A skin, not a sweater: Ontology and epistemology in political science. In G. Stoker & D. Marsh (Eds.), Theory and methods in political science (pp. 184–211). Basingstoke, U.K.: Palgrave Macmillan.

Geddes, B. (2003). Paradigms and sand castles: Theory building and research design in comparative politics. Ann Arbor: University of Michigan Press.

George, A., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Gerring, J. (2007). Case study research: Principles and practices. Cambridge, U.K.: Cambridge University Press.

Gerring, J. (2009). The case study: What it is and what it does. In C. Boix & S. Stokes (Eds.), Oxford handbook of comparative politics (pp. 90–122). Oxford: Oxford University Press.

Goertz, G., & Mahoney, J. (2012). A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton, NJ: Princeton University Press.

Hall, P. (1986). Governing the economy: The politics of state intervention in Britain and France. Oxford: Oxford University Press.

Hall, P. (2003). Aligning ontology and methodology in comparative research. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 373–406). Cambridge, U.K.: Cambridge University Press.

Hassenteufel, P. (2005). De la comparaison internationale à la comparaison transnationale: Le déplacement de la construction d'objets comparatifs en matière de politiques publiques. Revue Française de Science Politique, 55(1), 113–132.

Heidenheimer, A. (1985). Comparative public policy at the crossroads. Journal of Public Policy, 5(4), 441–465.

Heidenheimer, A., Heclo, H., & Adams, C. T. (1990). Comparative public policy: The politics of social choice in America, Europe and Japan. New York: St. Martin's Press.

Hindmoor, A. (2006). Rational choice. Basingstoke, U.K.: Palgrave Macmillan.

Howlett, M. (2011). Designing public policies: Principles and instruments. New York: Routledge.

Howlett, M., McConnell, A., & Perl, A. (2016). Weaving the fabric of public policies: Comparing and integrating contemporary frameworks for the study of policy processes. Journal of Comparative Policy Analysis: Research and Practice, 18(3), 273–289.

Jackson, P. T. (2008). Foregrounding ontology: Dualism, monism, and IR theory. Review of International Studies, 34, 129–153.

Jackson, P. T. (2016). The conduct of inquiry in international relations: Philosophy of science and its implications for the study of world politics (2d ed.). London: Routledge.

Kay, A., & Baker, P. (2015). What can causal process tracing offer to policy studies? A review of the literature. Policy Studies Journal, 43(1), 1–21.

Keman, H. (2014). Comparative research methods. In D. Caramani (Ed.), Comparative politics (pp. 47–59). Oxford: Oxford University Press.

Kennett, P. (2006). A handbook of comparative social policy. Cheltenham, U.K.: Edward Elgar.

King, G., Keohane, R., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.

Kooiman, J. (Ed.). (2002). Governing as governance. London: SAGE.

Landman, T. (2013). Issues and methods in comparative politics: An introduction. New York: Routledge.

Lascoumes, P., & Le Galès, P. (2007). Introduction. Understanding public policy through its instruments: From the nature of instruments to the sociology of public policy instrumentation. Governance, 20(1), 1–21.

Lasswell, H. (1971). A pre-view of policy sciences. New York: Elsevier.

Lijphart, A. (2008). Thinking about democracy: Power sharing and majority rule in theory and practice. Oxon, U.K.: Routledge.

Lindquist, E. (2006). Organizing for policy implementation: The emergence and role of implementation units in policy design and oversight. Journal of Comparative Policy Analysis: Research and Practice, 8(4), 311–324.

Lodge, M. (2007). Comparative public policy. In F. Fischer, G. Miller, & M. Sidney (Eds.), Handbook of public policy analysis: Theory, politics and methods (pp. 273–288). Boca Raton, FL: CRC Press.

Lowi, T. (2008). Arenas of power. Boulder, CO, and London: Paradigm.

Mahoney, J. (2000). Path dependence in historical sociology. Theory and Society, 29, 507–548.

Mair, P. (1996). Comparative politics: An overview. In R. E. Goodin & H.-D. Klingemann (Eds.), A new handbook of political science (pp. 309–335). Oxford: Oxford University Press.

March, J., & Olsen, J. (1995). Democratic governance. New York: Free Press.

May, P. (1992). Policy learning and failure. Journal of Public Policy, 12(4), 331–354.

Mény, Y., & Surel, Y. (2009). Politique comparée. Paris: Montchrestien.

Mill, J. S. (1843). A system of logic, ratiocinative and inductive: Being a connected view of the principles of evidence and the methods of scientific investigation. Vol. 1. London: Harrison & Co.

Ostrom, E. (2011). Background on the institutional analysis and development framework. Policy Studies Journal, 39(1), 7–27.

Parsons, W. (2005). Public policy: An introduction to the theory and practice of policy analysis. Cheltenham, U.K.: Edward Elgar.

Peters, B. G. (2013). Strategies for comparative research in political science. London: Palgrave Macmillan.

Peters, B. G., & Pierre, J. (2004). Politicization of the civil service in comparative perspective: The quest for control. New York: Routledge.

Peters, B. G., & Pierre, J. (2006). Introduction. In B. G. Peters & J. Pierre (Eds.), Handbook of public policy (pp. 3–9). London: SAGE.

Pierre, J., & Peters, B. G. (2000). Governance, politics, and the state. London: Macmillan.

Pierre, J., & Ingraham, P. (Eds.). (2010). Comparative administrative change and reform: Lessons learned. Quebec, Canada: McGill-Queen's University Press.

Pierson, P. (2000). Increasing returns, path dependence, and the study of politics. American Political Science Review, 94(2), 251–267.

Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. New York: Wiley.

Ragin, C., & Schneider, A. G. (2011). Case-oriented theory-building and theory-testing. In M. Williams & W. P. Vogt (Eds.), Sage handbook of innovation in social science research (pp. 150–166). London: SAGE.

Ragin, C. C. (2008a). Redesigning social inquiry: Fuzzy sets and beyond. Chicago and London: University of Chicago Press.

Ragin, C. C. (2008b). Measurement versus calibration: A set-theoretic approach. In J. Box-Steffensmeier, H. Brady, & D. Collier (Eds.), Oxford handbook of political methodology (pp. 174–198). Oxford: Oxford University Press.

Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques. London: SAGE.

Rihoux, B., Rezsöhazy, I., & Bol, D. (2011). Qualitative comparative analysis (QCA) in public policy analysis: An extensive review. German Policy Studies, 7(3), 9–82.

Rose, R., & Shiratori, R. (1986). The welfare state east and west. Oxford: Oxford University Press.

Sartori, G. (1970). Concept misformation in comparative politics. American Political Science Review, 64(4), 1033–1053.

Scharpf, F. (2000). Institutions in comparative policy research. Comparative Political Studies, 33(6/7), 762–790.

Schmitt, S. (2013). Comparative approaches to the study of public policy-making. In E. Araral Jr., S. Fritzen, M. Howlett, M. Ramesh, & X. Wu (Eds.), Routledge handbook of public policy (pp. 29–43). London and New York: Routledge.

Schneider, A., & Ingram, H. (1988). Systematically pinching ideas: A comparative approach to policy design. Journal of Public Policy, 8(1), 61–80.

Seawright, J., & Collier, D. (2010). Glossary. In H. Brady & D. Collier (Eds.), Rethinking social inquiry: Diverse tools, shared standards (pp. 313–359). Plymouth, U.K., and Lanham, MD: Rowman & Littlefield.

Skocpol, T. (1979). States and social revolutions: A comparative analysis of France, Russia, and China. Cambridge, U.K.: Cambridge University Press.

Steinberg, P. F. (2003). Understanding policy change in developing countries: The spheres of influence framework. Global Environmental Politics, 3(1), 11–32.

Steinberg, P. F. (2007). Causal assessment in small-N policy studies. Policy Studies Journal, 35(2), 181–204.

Weible, C., & Nohrstedt, D. (2013). The advocacy coalition framework: Coalitions, learning, and policy change. In E. Araral Jr., S. Fritzen, M. Howlett, M. Ramesh, & X. Wu (Eds.), Routledge handbook of public policy (pp. 125–137). London: Routledge.

Yamasaki, S., & Rihoux, B. (2009). A commented review of applications. In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques (pp. 123–146). London: SAGE.


Notes

(1.) Rational choice theories deal with strategic interactions in which individuals base their behavior on an anticipated cost-benefit calculus weighted by the odds of success. (For an extensive presentation of rational choice applications to political science, see Hindmoor, 2006.)

(2.) Path dependence theories are concerned with causal sequences of historical events in which, following a critical juncture, lock-in mechanisms generate increasing returns. (For a discussion of this theory, see Mahoney, 2000; Pierson, 2000.)

(3.) In statistics, a negative number of degrees of freedom means that the number of observations is smaller than the number of parameters or characteristics of the studied population (George & Bennett, 2005, p. 28). This is a source of indeterminacy, since it makes it impossible to control for spuriousness (King, Keohane, & Verba, 1994, pp. 118–122). Since the degrees of freedom are always negative in within-case, small-N, and even intermediate-N studies, it is impossible to establish the necessity and sufficiency of a causal condition beyond these cases (George & Bennett, 2005, p. 29).

(4.) The objective of tests drawing on Bayesian statistics is to increase the degree of likelihood of a causal relationship through the multiplication and qualification of causal process observations, comparable to pieces of evidence in a criminal investigation. (For an extended presentation of the Bayesian logic used in causal process tracing, see Bennett, 2015.)

(5.) The objective of tests drawing on Boolean algebra is to deal with complex causality by combining a set of independent variables or conditions and calibrating their respective contributions to the outcome to be explained. (For a detailed explanation of the use of Boolean algebra and fuzzy sets in configurational analysis and CPP, see Yamasaki & Rihoux, 2009; Rihoux, Rezsöhazy, & Bol, 2011.)

(6.) The causal homogeneity of a case study means that "a given set of values for the explanatory variables always produces the same expected value for the dependent variable within a given set of cases" (Seawright & Collier, 2010, p. 316).

(7.) Endogeneity occurs when the values of the dependent variable or the outcome (Y) are also a consequence of the independent variable or cause (X) (King, Keohane, & Verba, 1994, p. 187).

(8.) A typological theory is “a theory that specifies independent variables, delineates them into the categories for which the researcher will measure the cases and their outcomes, and provides not only hypotheses on how these variables operate individually, but also contingent generalizations on how and under what conditions they behave in specified conjunctions or configurations to produce effects on specified dependent variables” (George & Bennett, 2005, p. 192).

(9.) A third protocol, known as the method of "concomitant variation," is less common in CPP, partly because it does not by itself allow one to cope with endogeneity. According to Mill (1843, p. 471), "[w]hatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation."

(10.) In Mill's terms: "If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon" (Mill, 1843, pp. 455–456).

(11.) In Mill's words: "If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon" (Mill, 1843, p. 453).