
Qualitative Comparative Analysis (QCA) and Set Theory

Summary and Keywords

Qualitative Comparative Analysis (QCA) is a method, developed by the American social scientist Charles C. Ragin since the 1980s, which has since enjoyed great and ever-increasing success in research applications across various political science subdisciplines and in teaching programs. It counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses in a way that allows them to be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases can only be full members or full non-members, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is possible, for example, to distinguish highly developed democracies from less developed democracies that are nevertheless democracies rather than not. This means that fuzzy sets account for differences in degree without giving up differences in kind. In the end, QCA produces configurational statements that acknowledge that conditions usually appear in conjunction and that there can be more than one conjunction that implies an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm has been developed and implemented in various software packages; it takes into account the complexity of the social world surrounding us and acknowledges that not every theoretically possible combination of explanatory factors also exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. There is also a range of graphical tools that help to illustrate the results of a QCA. Set theory goes well beyond its application in QCA, but QCA is certainly its most prominent variant.

There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA, in which no a priori decision is taken about which factors are the explanatory conditions and which is the outcome) as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development activities should target research design or technical issues. From this, a methodological agenda can be derived that asks about the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, but also calls for increased efforts in reaching a shared understanding of the mission of QCA.

Keywords: Qualitative Comparative Analysis (QCA), set theory, fuzzy sets, sufficient and necessary conditions, mixed methods designs, case-oriented research

Approaching Qualitative Comparative Analysis, Set Theory, and Configurative Comparative Methods

The Central Interest of the Article

The importance of Qualitative Comparative Analysis (QCA) as a social science method cannot be doubted. Alluding to the American social scientist Charles C. Ragin, who introduced QCA to a wider audience (for his seminal and best-known contributions, see Ragin, 1987, 2000, 2006, 2008), observers even coined the term “Ragin revolution” (Vaisey, 2009), highlighting QCA’s success story over recent years. In terms of research, comprehensive mapping shows an ever-increasing number of applications across various social science (sub)disciplines (Rihoux, Álamos-Concha, Bol, Marx, & Rezsöhazy, 2013; see also the numbers on this in Wagemann, Buche, & Siewert, 2016, p. 2533).1 This is complemented by intensive teaching and diffusion activities. Indeed, many course syllabi all over the world include single sessions or whole courses on QCA, and prestigious summer schools such as the ones organized by the professional organizations International Political Science Association (IPSA), European Consortium of Political Research (ECPR), and the Institute for Qualitative and Multi-Methods Research (IQMR) offer courses on QCA on a regular basis. Textbooks giving comprehensive overviews exist in English (Schneider & Wagemann, 2012), French (DeMeur, Rihoux, & Yamasaki, 2002), and German (Schneider & Wagemann, 2007); software applications are continuously refined and published (e.g., Thiem & Duşa, 2013b). Moreover, if we take the existence of lively discussions about a method as an indicator of its rising prominence in research, then the numerous contributions to the debate lend insight (for just a selection, see Collier, 2014; DeMeur, Rihoux, & Yamasaki, 2009; Hug, 2013; Lieberson, 2004; Munck, 2016; Paine, 2016a, 2016b; Seawright, 2005).

This article follows the usual logic of contributions in this encyclopedia and includes information on the state of the art of QCA, best practices, problems in need of attention, and avenues for new research. Guidelines of best practice that have been elaborated and published over recent years (Schneider & Wagemann, 2008, 2010, 2012; Wagemann & Schneider, 2015) are mentioned throughout the presentation of the state of the art, whenever appropriate. This article also considers set theory, since it is the underlying logic of QCA and other methodological approaches. Subsequently, the problems in need of attention are discussed together with the avenues for new research, since the latter are direct consequences of the former. An outlook concludes the article.

Configurative Comparative Methods and Complex Explanations

Due to its key characteristics, QCA has become an essential part of the social science methodological repertoire within a very short time. However, it is not entirely new or something completely separate from other methodological approaches. With his book title The Comparative Method (Ragin, 1987), Ragin takes up Lijphart’s (1971) seminal contribution, which had presented the “comparative method” as an inferior alternative if the assumptions for experiments or statistical analysis were not met, and proposes QCA (not yet called QCA) as a methodologically sound and formalized way to compare social science phenomena and thus frames the term positively. The procedure that Ragin introduced rendered the underlying principles evident and thus illustrated that comparison can unfold in a systematic and reliable way.

Some years later, Hall (2003, p. 389) made yet another point, namely that QCA helped to deal with multiple conjunctural causation. With a similar motivation, Rihoux and Ragin (2009) introduced the term Configurative Comparative Methods (CCM). Briefly put, “configurative” means that the main focus of CCMs is not on individual explanatory factors but on configurations thereof. First, this approach acknowledges that causes rarely operate in isolation. For example, a hypothesis could claim that two conditions A and B both have to be present if a certain outcome, e.g., social protests, is to be observed; with only A or only B present, the outcome is not observed. This is usually referred to as “conjunctural” (see also Schneider & Wagemann, 2012, pp. 78, 324), as Hall (2003, p. 389) had originally put it. However, the term “configurative” goes beyond conjunctions. It also includes, second, the idea of equifinality (Schneider & Wagemann, 2012, pp. 78, 326), according to which there is not only one but several possible explanations, all of which are equally valid. For example, there might be different trajectories of democratization processes in different parts of the world, captured in various explanatory paths. This also refers to theories of the middle range (Merton, 1957; for contingent generalizations, see also George & Bennett, 2005, p. 216), which only apply to certain parts of the universe. It goes without saying that such a configurative perspective, which does not look at causes in isolation (conjunctural causation) and permits alternative explanations (equifinality), corresponds well to the complexity of the social world. Elsewhere, conditions that fulfill these requirements have been called INUS conditions, that is, insufficient but necessary parts of a condition which is itself unnecessary but sufficient for the result (Mackie, 1965, p. 246), and they are regarded as particularly advanced representations of empirical reality (Schneider & Wagemann, 2012, pp. 4, 79–80). In a similar way, although not actually referring to equifinality and conjunctural causation in the sense just laid out, SUIN conditions, that is, sufficient but unnecessary parts of a factor that is insufficient but necessary for the result (Mahoney, Kimball, & Koivu, 2009, p. 126), represent another variant of a “configuration” in that they allow for alternatively necessary components of an explanation (e.g., for the development of a strong welfare state, it might be necessary for a country to have strong social-democratic political parties or strong trade unions). QCA (and other CCMs) are well equipped to deal with INUS and SUIN conditions. Additionally, it is held that QCA also provides the user with asymmetric2 conclusions, i.e., it is assumed that the outcome and its complement have to be explained separately, possibly even relying on different explanatory factors (Schneider & Wagemann, 2012, pp. 78, 322). If we base our comparison on an ontology that sees social phenomena as configurative and on an epistemology that defines causality as asymmetric, then we can align ontology, epistemology, and methodology (Hall, 2003) through the use of CCMs.

In addition to QCA, the methodological family of CCMs also includes (among others) the well-known and well-established most similar and most different systems designs (Berg-Schlosser & DeMeur, 2009; Przeworski & Teune, 1970), so it would be mistaken to refer to this group of methods as a new invention. The common principles of these methods are rooted in the characteristics of set theory, which are introduced subsequently.

State of the Art and Best Practice: Set Theory

Crisp Sets and Fuzzy Sets

Since the publication of Ragin (2000), QCA has been presented in terms of set theory. This shifted the perspective on QCA somewhat: earlier, QCA was framed as a “Boolean approach” (Ragin, 1987, p. 85), based on Boolean algebra, whereas the emphasis later moved to sets. Nowadays, QCA makes use of a mixture of the terminology and the notational systems of Boolean (and fuzzy) algebra, set theory, and the logic of propositions (Schneider & Wagemann, 2012, pp. 54–55); however, these approaches are closely linked.

The usefulness of set theory is not limited to QCA. Goertz and Mahoney (2012) identify the logic of set-theoretical thinking as a core feature of one of their two cultures of social science methods (with the other being defined through probability calculus). Mahoney and Vanderpoel (2015) show how broadly set theory can be applied (and is applied, although mostly implicitly) in the social sciences, namely in “process tracing, concept formation, counterfactual analysis, [and] sequence elaboration” (Mahoney and Vanderpoel, 2015, p. 65).

Central to the use of sets in the social sciences is that sets define the membership of cases in them.3 For example, the set of countries that belong to the European Union (EU) contains France, Spain, and Lithuania, but not Norway, China, or Peru. This is a so-called crisp set in which membership and non-membership are clearly differentiated, simply because the phenomenon of being part of the EU is dichotomous. If the focus is on formal membership, then a crisp set makes sense. However, there is relevant and irrelevant variation in data (Ragin, 2008, p. 83). The relevance of a variation cannot be defined per se but depends on the research context. Indeed, the EU exhibits much more variation than can be captured through a dichotomy because of the various modes of participating in common policies (not all countries participate in the euro currency), association agreements, and bilateral treaties (Leuffen, Rittberger, & Schimmelfennig, 2012). Consequently, EU membership is less dichotomous than often thought.

While the exclusive focus on crisp sets limited the broader recognition of Ragin’s (1987) first publication on QCA (Bollen, Entwisle, & Alderson, 1993; Goldthorpe, 1997), the introduction of fuzzy sets in Ragin (2000) was a major breakthrough. Fuzzy sets were nothing new at the time; Zadeh (1965, 1968) had already introduced them to a broader public, and Cioffi-Revilla (1981) had even worked with them from a social science perspective long before Ragin wrote on fuzzy set QCA (fsQCA).

With fuzzy sets, cases can have partial memberships in sets. For example, leaving aside the famous discussion of whether it is conceptually more fruitful to distinguish only between democracies and non-democracies, or whether a more differentiated view should be adopted (see Collier & Adcock, 1999; Sartori, 1970), our daily observation of the world tells us that some democracies are more democratic than others and that non-democracies also differ in how undemocratic they are. This example already draws our attention to the fundamental principle of fuzzy sets: while they maintain the dichotomy between democracy and non-democracy, they further differentiate within it. Consequently, Schneider and Wagemann (2012, p. 27) present fuzzy sets as a combination of differences-in-kind and differences-in-degree.

Technically speaking, numbers are assigned to cases in order to represent the degree of set membership. This process is called “calibration” (Ragin, 2008; Schneider & Wagemann, 2012). Fuzzy values can range anywhere between 0 (full non-membership) and 1 (full membership), while in crisp sets 1 (fully in) and 0 (fully out) are the only values that can be assigned. Therefore, a crisp set is a special case of a fuzzy set; the rules that apply to fuzzy sets also apply to crisp sets but not vice versa.

Most concepts in the social sciences are fuzzy. They combine the differences-in-kind perspective (i.e., they are dichotomous in principle) with the differences-in-degree perspective. Through the latter, research acknowledges the complexity of the social world surrounding us by applying more fine-grained understandings. Major political science concepts such as peace, compliance, goal achievement, neo-corporatism, hegemonic political parties, or contentious politics can be mentioned in this context. The many indices used in the social sciences demonstrate this.

While the use of fuzzy sets represents the current state of the art in set theory, crisp sets (which are nothing other than very constrained fuzzy sets anyway) have not lost their analytical importance. Indeed, fuzzy sets should not automatically be the prime way to go. Deciding in favor of crisp sets means making clearer statements about characteristics of cases (e.g., on the termination of a civil war). With such clarity, it is easier and analytically more beneficial to include these factors in arguments. If researchers are vague about the categories they use, the results will also be vague.

Calibration of Sets

If sets are central components of set-theoretic methods, the question arises how set membership values are decided and determined. This process is called “calibration” (Ragin, 2008).

There can certainly not be a formal recipe for assigning fuzzy values to sets. Following Ragin, “[i]n the hands of a social scientist […], a fuzzy set can be seen as a fine-grained, continuous measure that has been carefully calibrated using substantive and theoretical knowledge” (Ragin, 2000, p. 7). We therefore need both theoretical considerations and empirical data (“substantive knowledge”) for calibration. If, for example, we want to establish a fuzzy membership value for France in the set of all democracies, we need (theoretical) knowledge about the concept we want to describe and (substantive) information on the case (France). Sometimes, as for the French political system, case knowledge is easy to obtain, while this can be more difficult for fairly unknown or even new phenomena.4 Set theory is eclectic in the manner in which data are collected (and thus a truly pluralistic method), but researchers have to be explicit about how empirical information is converted into fuzzy values (Schneider & Wagemann, 2012, pp. 277–278; Wagemann & Schneider, 2015, p. 39). Generally, when it comes to concept formation, there is a vast literature full of rules and recommendations about how to derive concepts, but without any univocal recipes (Collier & Mahon, 1993; Goertz, 2006; Mair, 2008; Sartori, 1970, 1984); this is also why proceedings in concept formation are so complex (for a stepwise procedure, see Adcock & Collier, 1991). When, additionally, fuzzy values are used as numerical representations of the presence and absence of concepts, things become even more difficult, since researchers are deprived of the possibility of using longer verbal descriptions, which may communicate the complex nature of concepts more easily than numbers.

Not all researchers might be happy about such vague indications of how to calibrate. Indeed, Ragin (2008) later proposed a semi-automatic procedure, the so-called “direct calibration.” This is an alternative option if quantitative raw data are available (e.g., GDP values as proxies for economic wealth or various democracy scores). Here, the researcher needs to define theoretically the so-called qualitative anchors, i.e., the fuzzy values of 1, 0.5 (as the point of maximum indifference, where the dichotomy changes), and 0. A logistic function, based on the log odds of membership, is then used to transform the remaining raw data into fuzzy set values.
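
To make the logic of this transformation concrete, the following minimal Python sketch mimics a logistic direct calibration under the widespread convention of mapping the anchors for full membership and full non-membership to log odds of +3 and -3; the function name, the anchor values, and the GDP figures are hypothetical and not taken from any published study or software package.

```python
import math

def direct_calibrate(raw, full_non, crossover, full_in):
    """Logistic ("direct") calibration sketch: the three qualitative anchors
    are mapped to log odds of -3, 0, and +3, and the logistic function turns
    log odds into fuzzy membership scores between 0 and 1."""
    if raw >= crossover:
        log_odds = 3.0 * (raw - crossover) / (full_in - crossover)
    else:
        log_odds = -3.0 * (crossover - raw) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical anchors for GDP per capita (in 1,000 USD) as a proxy for wealth;
# the anchor choice is the researcher's theoretical decision, not the method's.
for gdp in (2, 10, 20, 40):
    print(gdp, round(direct_calibrate(gdp, full_non=2.5, crossover=10, full_in=30), 3))
```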

Despite the fact that quantitative raw data are not always available and that fuzzy values with more than one decimal digit can introduce too much differentiation into the concept (e.g., there can hardly be any analytically useful substantive distinction between a democracy with a fuzzy value of 0.61 and another one with 0.62), there is yet another pitfall involved: the danger that concepts are no longer thought through carefully. Instead, it is assumed that the values other than 0, 0.5, and 1 follow a mathematical function that represents the differentiated concept. Such an assumption is highly arbitrary and resembles the often regrettably hasty use of proxies in quantitative work. Yet, this strategy is currently on the rise and is frequently used in disciplines that are heavily inspired by quantitative methods, such as business and management studies (Wagemann et al., 2016). However, as Wagemann et al. (2016) also show, this does not per se come with an improvement in the quality of the analysis—quite the contrary. Therefore, while today’s state of the art seems to be to opt for direct calibration whenever possible, it can certainly not be considered a best practice.

Set Operations

So far, we have shown that case properties can be represented in sets. However, the usefulness of set theory for Configurative Comparative Methods mainly goes back to the operations that can be executed with sets.

A first helpful operation creates set intersections. The intersection of two (or more) sets contains those elements that are common to the individual sets. If we create, e.g., the set of all federal democracies, then the set will contain all countries that are both democratic and federal. Intersections are based on the logical AND. The mathematical rule (Klir, St. Clair, & Yuan, 1997, p. 93) states that the fuzzy membership value of a given case in an intersection is equal to the minimum of its membership values in the single sets. The rationale behind this is that a country that lacks federalism cannot compensate for this by being a democracy. Since France is a democracy, but not a particularly federal country, it must also receive a low membership value in the intersection.

Yet another operation produces set unions. A union includes all those cases that are members of at least one of the individual sets and is based on the logical OR. An example of such a union is to define a left-leaning country through the presence of high vote shares for left-wing political parties or, alternatively, high membership rates of trade unions. Mathematically, the fuzzy membership value of a case in a union is the maximum of its single membership values (Klir et al., 1997, p. 92).
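
The minimum and maximum rules just described can be written down directly. The short Python sketch below illustrates both; the membership scores assigned to France are invented for illustration and not taken from any calibrated data set.

```python
def fuzzy_and(*memberships):
    """Intersection (logical AND): the minimum of the single memberships."""
    return min(memberships)

def fuzzy_or(*memberships):
    """Union (logical OR): the maximum of the single memberships."""
    return max(memberships)

# Invented fuzzy memberships of France in two sets.
democracy = 0.9
federalism = 0.2

print(fuzzy_and(democracy, federalism))  # 0.2: low membership in "federal democracies"
print(fuzzy_or(democracy, federalism))   # 0.9: high membership in the union of both sets
```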

Set operations help to create more sophisticated social science concepts. Sartori (1970) shows that concepts can be refined by adding more attributes, and Collier and Mahon (1993) introduce the notion of radial concepts; both variants can be represented through set intersections. Goertz (2006) enlarges this by proposing more complex ways of constructing concepts: necessary components of concepts can be combined through intersections (since intersections require the presence of all elements), while the idea of unions refers to concepts in which certain elements are fully substitutable.

While intersections and unions are useful for descriptive purposes such as concept formation, subset and superset relations enable us to draw further inferences, since they represent “if … then” statements. As for formal definitions, all elements that are part of a subset are also contained in the superset, but not vice versa.5 For example, all countries that are members of the EU are also members of the Council of Europe, but not all members of the Council of Europe are part of the EU. In other words, “if” a country is part of the EU, “then” it is also a member of the Council of Europe. Membership in the Council of Europe (superset) can be automatically deduced from EU membership (subset). The subset thus represents the “if component” of an “if . . . then” statement, and the superset the “then component.” While this is, at first sight, nothing more than a specific relation between two (or more) sets, this situation enjoys a particular interpretation (Schneider & Wagemann, 2012, p. 53). Let us imagine one of the sets represents an outcome we want to explain (e.g., successful democratic consolidation), while the other set stands for an explanatory factor we hypothesize to be somehow causally responsible for the presence of the outcome (e.g., a well-working capitalist economy). If the condition is a subset of the outcome, i.e., if all capitalist economies are also consolidated democracies, then we can interpret the condition to be sufficient for the outcome. If, on the other hand, we suspect a causal relation between EU membership (outcome) and a ban on capital punishment (condition), then we observe the opposite set relation: the set of EU member countries is a subset of the set of countries without capital punishment. In such a situation, the condition is interpreted as necessary: if the outcome is there, then the condition is also there. In brief, subset and superset relations help us to establish sufficient and necessary conditions.
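
For fuzzy sets, the subset relation boils down to a case-by-case comparison of membership scores. The following sketch, with invented values, checks in which direction the relation holds and thus whether a condition would be read as sufficient (condition as a subset of the outcome) or necessary (outcome as a subset of the condition).

```python
def is_subset(x, y):
    """X is a (fuzzy) subset of Y if every case's membership in X is
    smaller than or equal to its membership in Y."""
    return all(xi <= yi for xi, yi in zip(x, y))

# Invented memberships of four cases in a condition and an outcome.
condition = [0.2, 0.6, 0.8, 0.4]
outcome = [0.3, 0.7, 0.9, 0.4]

print(is_subset(condition, outcome))  # True: condition <= outcome everywhere -> sufficiency
print(is_subset(outcome, condition))  # False: the outcome is not a subset of the condition
```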

Applicability of Set Theory for the Social Sciences

Set theory is suitable for many social science applications (for a broad overview, see Goertz & Mahoney, 2012; Mahoney & Vanderpoel, 2015). Here, concept formation, the creation of typologies, and QCA are presented as important selected examples.

First, as shown, the logic of intersections and unions makes it possible to arrive at complex concepts characterized both by necessary components and other elements that are mutually substitutable. Since set membership can be numerically defined, it is also possible to calculate the membership of a given case in a set designed in a more complex way, using the rules of fuzzy algebra. An exhaustive overview of these procedures is given in Goertz (2006). In this way, set theory helps to mirror the complex social world in concepts that can then be used for social science analyses. Advances in set theory imply an enhanced state of the art in concept formation.

Second, set membership values allow us to identify which cases are similar to one another and, thus, have to be grouped within the same type. This idea is inspired by Lazarsfeld’s (1937) thoughts on property spaces, which locate cases with regard to the empirical representations of their properties, as they can be expressed through set membership values (see also Barton, 1955). Following this, similar cases occupy neighboring locations in a property space. Kvist (2007) developed this approach further into what is called “fuzzy set ideal type analysis” (see also Fiss, 2011, who applies fuzzy sets to the building of typologies in organization research). This is yet another example of how the use of set theory has led to developments in another methodological research field.

Third, QCA is also an application of set theory and, following the main argument of the remainder of this article, probably currently its most prominent one. Among two other characteristics (the use of truth tables and the application of procedures of logical minimization), Schneider and Wagemann (2012, pp. 8–9) differentiate QCA from other set-theoretic applications through its claim to be causally oriented. While concept formation and the identification of typologies do not have direct causal implications, this is different for QCA. Certainly, QCA can also just be used as a way to reduce the complexity of a dataset, but the use of terminology such as “conditions,” “outcomes,” “explanatory paths,” and “contradictions” indicates an orientation toward causal inference. The term “condition” as opposed to “cause” seems somewhat cautious and careful, but this does not mean that QCA’s central interest is not causal. Rather, the use of “condition” instead of “cause” draws our attention to the fact that identifying (causal) conditions does not automatically equal finding causal mechanisms that give us reasons why the condition implies the outcome (or vice versa) (on causal mechanisms, see Bennett & Checkel, 2015b, pp. 3–4; Blatter & Haverland, 2012; Elster, 1998; George & Bennett, 2005, p. 214; Gerring, 2008, 2014; Hedström & Ylikoski, 2010; Mahoney, 2001; Mayntz, 2004; Rohlfing, 2012; Waldner, 2012).

State of the Art and Best Practice: Qualitative Comparative Analysis (QCA)

Before Executing the Algorithm

Wagemann and Schneider (2010) distinguish between QCA as a “research approach” and as an “analytical technique” (see also Berg-Schlosser, deMeur, Rihoux, & Ragin, 2009). Their motivation for doing so might have been to make clear that the execution of the algorithm (the so-called “truth table analysis”) is just one step of a larger analytical design and that QCA should not be reduced to a simple formal procedure. Indeed, similar to standard statistical techniques such as regression analysis, the QCA software produces a result in any instance, no matter how thoughtful or thoughtless the research design was. This means that, although parameters of fit such as consistency or coverage may draw attention to what has gone wrong in the preparatory phase prior to executing the algorithm, competence limited to the technical side of QCA is not enough.

With regard to non-technical issues, most problems typical of other methodological approaches are also relevant for QCA. For example, pitfalls such as flawed case selection, choosing conditions that are irrelevant for the outcome (or omitting relevant ones), basing the analysis on data from questionable sources, or not following rules of data collection are all problematic in a QCA. Therefore, there cannot be any QCA-specific indication about the choice of conditions or of cases; the methodological literature provides enough (textbook) rules about comparative research design (e.g., Gerring, 2012; Schmitter, 2008), so that no separate set of rules has to be invented for QCA.

However, a central design question is whether QCA is the appropriate method. Indeed, it is a shared understanding that QCA should be employed if researchers want to investigate social phenomena they assume to be equifinal, conjunctural, and asymmetric. Usually, this also implies references to “if . . . then” hypotheses. If researchers do not adhere to such thinking, then they should opt for alternatives. Additionally, it has become a selling point of QCA to claim that it is above all useful for mid-sized case numbers, while it would be more accurate to say that it can also be used for mid-sized case numbers (which are usually difficult to analyze with other designs; see Schneider & Wagemann, 2012, p. 12). One more criterion is that QCA only works with categories that are dichotomous in principle (no matter whether they are then further differentiated into fuzzy sets or not). If any of the conditions or the outcome cannot be thought of in terms of set memberships, then an application of QCA does not make much sense.6 Since QCA seeks to explain outcomes through the described complex configurations of conditions, it is reasonable to apply it above all (though not exclusively) in y-centered research in which causes of effects are sought (Gerring, 2007, p. 71; Rohlfing, 2008, p. 1505).

When speaking about “if . . . then” hypotheses, it is, of course, an illusion that, in practical research reality, researchers will be able to formulate their hypotheses ex ante in a way that makes assumptions about all possible complex configurations. It is much more plausible that researchers will limit their thoughts about hypotheses to the choice of conditions, since they assume them to be somehow part of the explanation, i.e., to be “causally relevant” (Baumgartner, 2009b, pp. 74–75), and that they will agree in principle with the logic of “if . . . then” hypotheses in general. However, this does not automatically lead to explicitly formulated and detailed hypotheses. The best practice is rather that researchers start with theoretical hunches and leave the details of the various explanatory configurations to the analysis. In this way, QCA becomes a continuous move between ideas and evidence (Ragin, 1994, p. 76) and combines inductive and deductive thinking (for the exploratory character of QCA, see Ragin, 2008, p. 190).

The decision as to whether QCA is the appropriate method is also linked to the research results it produces. As explicated further below, there are good reasons not to conduct re-analyses of QCA-based applications with other methods and vice versa (see also Buche, Buche, & Siewert, 2016; Thiem, Baumgartner, & Bol, 2016). Whether a method is “successful” or not depends very much on how much it contributes to the accumulated knowledge of a (sub‑)discipline (on this notion of “knowledge accumulation,” see Mahoney, 2012). However, the question of how far QCA applications have really made a difference in political science research is difficult to answer because we lack, first, the comparative case of a world without QCA and, second, a research agenda specific to QCA. In sociology, Ragin and Fiss (2017) undertook the effort to re-run the famous study on the Bell Curve (Fischer et al., 1996; Herrnstein & Murray, 1994) and found—not very surprisingly for highly visible QCA scholars—that intersectional or configurative methods really have an added value. If we look at applications in political science, Schneider (2009) impressively demonstrates the usefulness of QCA for the analysis of a mid-sized number of countries with regard to their democratic consolidation (a typical fuzzy set) and concludes with a goodness-of-fit argument that would have been difficult to obtain through other methods. More from a policy studies perspective, Cacciatore, Natalini, and Wagemann (2015) approach the topic of Europeanization, which is also modeled as a fuzzy set. Cebotari and Vink (2013) conduct research on a political sociology topic, namely protest by ethnic minorities, and stress their specific interest in conjunctural causation and a formal approach toward case studies (Cebotari & Vink, 2013, p. 299). Many more studies could be named, but a full overview is still missing (Rihoux et al., 2013, undertake an initial attempt at a mapping); already the sheer quantity of the production suggests that set-theoretic methods have been useful in deriving substantial results.

The Analysis of Necessary Conditions

The recommendation is to start the QCA with the analysis of necessary conditions. This is done in order to avoid the pitfalls of false or hidden necessary conditions (Schneider & Wagemann, 2012, pp. 220–232) or of untenable assumptions in the analysis of sufficiency that contradict the findings of the analysis of necessity (Schneider & Wagemann, 2012, pp. 201–203).

As mentioned before, necessary conditions are defined as supersets of the outcome. If the outcome is present, then the necessary condition is also present. While this means for crisp sets that there must not be any case in which the outcome is present but the necessary condition is not, the rule for fuzzy sets is that the fuzzy membership value of a given case in the condition must be greater than or equal to the case’s membership in the outcome (for the technicalities, see Schneider and Wagemann, 2012, pp. 75–76). However, in real research situations, it is often difficult to meet such a requirement perfectly. A limited number of cases might contradict the finding. While, deterministically speaking, a single deviant case already falsifies the existence of necessity, it is certainly important to note how many deviant cases there are and how deviant they are. In order to assess this, Ragin (2006) introduced the notion of consistency. Consistency varies between 0 and 1 and is 1 if there is no deviation.


Figure 1. XY Plot—Distribution of Cases for Necessary Condition X.

This can also be represented graphically. The XY plot in Figure 1 indicates the fuzzy values for a given condition and the outcome for all cases. XY plots can be visualized for all conditions and combinations thereof. If a condition is necessary and, therefore, all X values are greater than or equal to the Y values, all cases will fall below the diagonal. Cases above the diagonal deviate from the statement of necessity and lower the consistency value. A case in the upper left corner is more problematic than a case just a little above the diagonal. The consistency formula (Schneider & Wagemann, 2012, p. 141) accounts for this difference between deviant cases.

Although Schneider and Wagemann (2012, pp. 143, 227) advise users to consider as potentially necessary only those conditions that surpass a consistency threshold of 0.9, such a decision cannot be standardized. More reasoning is needed than just a parameter in order to conclude that a factor is necessary for a given outcome (Schneider & Wagemann, 2016); a consistency score only gives indications about the set relations but cannot substitute for a theory-guided assessment of the necessity of the condition. However, reading through QCA applications suggests that a confirmation of set relations is unfortunately too often equated with a decision on the status of a condition.

This is not the only problematic zone in the plot. Imagine that many cases can be found toward the right of the plot. While this indicates a good fit with the necessity requirement (because they are below the diagonal), this also means that the condition is nearly omnipresent and thus close to being banal. If we hold that being born is a necessary condition for becoming the president of the United States of America, then this is certainly true but, because of the trivialness of the necessary condition, also banal. The coverage value indicates whether such a problem of trivialness exists (for the formula, see Schneider & Wagemann, 2012, p. 144).7

As a best practice recommendation, it is common sense that coverage should only be calculated if consistency is high enough. Only if we can ascertain (through high consistency values) that the condition can be deemed truly necessary does it make sense to decide whether it is trivial or not.
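
The two parameters can be computed with a few lines of code. The sketch below implements the formulas as they are commonly reported in the literature: consistency of necessity as the sum of the case-wise minima of condition and outcome divided by the sum of the outcome memberships, and coverage with the sum of the condition memberships in the denominator; the membership scores are invented.

```python
def necessity_consistency(x, y):
    """Consistency of X as necessary for Y: sum of min(X, Y) over sum of Y."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

def necessity_coverage(x, y):
    """Coverage (relevance) of a necessary condition: sum of min(X, Y) over sum of X.
    Low values flag a nearly omnipresent and hence possibly trivial condition."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Invented fuzzy membership scores of five cases in condition X and outcome Y.
x = [0.9, 0.8, 0.7, 0.9, 0.6]
y = [0.7, 0.6, 0.7, 0.8, 0.2]

print(round(necessity_consistency(x, y), 2))  # 1.0: X >= Y in every case
print(round(necessity_coverage(x, y), 2))     # 0.77: X is considerably "larger" than Y
```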

The Analysis of Sufficient Conditions

When speaking about the state of the art of QCA, we have to mention a “sufficiency bias” (Schneider & Wagemann, 2012, p. 220) since QCA is often wrongly reduced to the analysis of sufficiency. Indeed, many QCA studies leave out the analysis of necessity but not a single one the analysis of sufficiency (Buche & Siewert, 2015; Wagemann et al., 2016).

The analysis of sufficiency starts from a truth table. Such a truth table is made up of all possible combinations of conditions and their complements, which form the truth table rows. If an analysis uses the conditions A, B, and C, then the truth table rows will be ABC, AB~C, A~BC, ~ABC, A~B~C, ~AB~C, ~A~BC, and ~A~B~C, with the tilde indicating the absence of the condition. The number of possible truth table rows is 2^k, with k being the number of conditions. While with three conditions the number of truth table rows is 8, there are 16 truth table rows in the case of four conditions and even 1,024 when there are 10 conditions. Each truth table row can be regarded as a potential sufficient condition. This means that 2^k sufficiency tests have to be executed.
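
A minimal Python sketch of this bookkeeping step follows: it enumerates the 2^k rows and assigns a case to the single row in which its fuzzy memberships exceed 0.5; the membership scores of the example case are invented.

```python
from itertools import product

def truth_table_rows(k):
    """Enumerate all 2^k combinations of presence (1) and absence (0) of k conditions."""
    return list(product((1, 0), repeat=k))

def best_fitting_row(case_memberships):
    """Each case belongs to exactly one truth table row: the one in which its fuzzy
    membership exceeds 0.5 (presence if the score is above 0.5, absence otherwise)."""
    return tuple(1 if m > 0.5 else 0 for m in case_memberships)

print(len(truth_table_rows(3)))           # 8 rows for the three conditions A, B, and C
print(best_fitting_row((0.8, 0.3, 0.6)))  # (1, 0, 1), i.e., the row A~BC
```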

The rules for assessing sufficiency are the inverse of the rules for necessity. A condition is necessary if it is a superset of the outcome; if it is always present when the outcome is present; if its fuzzy values are greater than or equal to the fuzzy values of the outcome (X ≥ Y); and if the dots in an XY plot are below the diagonal. By contrast, a condition is sufficient if it is a subset of the outcome (see above); if the outcome is always present when the condition is present (the condition implies the outcome); if the fuzzy values of the condition are smaller than or equal to the fuzzy values of the outcome (X ≤ Y); and if the dots in an XY plot are above the diagonal (Fig. 2).


Figure 2. XY Plot—Distribution of Cases for Sufficient Condition X.

Similar to the analysis of necessity, a perfect situation can hardly be achieved. Consistency values indicate how far the empirical data depart from a situation of perfect sufficiency (for the formula, see Schneider & Wagemann, 2012, p. 126). The formula accounts for the fact that a case in the lower right corner of the plot is particularly problematic, since the condition is fully there (high fuzzy values) but the outcome is not (low fuzzy values). Therefore, the condition is not sufficient to imply the outcome in this case. This is also called a “true logical contradiction” (Schneider & Wagemann, 2012, p. 334) and lowers the consistency value considerably.

Again similar to necessity, not all consistent cases provide us with the same analytical leverage. For example, the cases toward the left of the plot share the characteristic that the condition is rather absent (low fuzzy values on the x-axis). If these cases also show low values in the outcome, then they are not informative about sufficiency, since they share neither the sufficient condition nor the presumed outcome. If these cases instead show high values in the outcome, then the outcome is simply not explained, since the potential explanation has fuzzy values that are too low. This is captured by the coverage value (for the formula, see Schneider & Wagemann, 2012, p. 131). Different from the interpretation of coverage for necessity (see above), it tells us how much of the outcome is explained by the sufficient condition. Thus, it corresponds quite well to similar parameters in the quantitative tradition, such as R². Similar to the case of necessity, best practice is that coverage should only be calculated if consistency values are high, because nobody is interested in knowing how broadly applicable a bad explanation is.
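
The formulas mirror those for necessity, with the case-wise minimum in the numerator and the denominator swapped. The sketch below, again with invented membership scores, computes sufficiency consistency and coverage as they are commonly defined; x stands for one truth table row (or solution path) and y for the outcome.

```python
def sufficiency_consistency(x, y):
    """Consistency of X as sufficient for Y: sum of min(X, Y) over sum of X."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def sufficiency_coverage(x, y):
    """Coverage of a sufficient condition: sum of min(X, Y) over sum of Y,
    i.e., how much of the outcome is accounted for by the condition."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Invented fuzzy membership scores of five cases.
x = [0.2, 0.3, 0.6, 0.8, 0.4]
y = [0.4, 0.5, 0.7, 0.9, 0.3]

print(round(sufficiency_consistency(x, y), 2))  # 0.96: close to a perfect subset relation
print(round(sufficiency_coverage(x, y), 2))     # 0.79: the condition accounts for much of the outcome
```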

These procedures of calculating consistency (and coverage) values have to be executed for all 2^k combinations. The result is so-called raw consistency values, which indicate whether a given truth table row can be counted as a sufficient condition. In this way, it is possible to decide about every truth table row. Note that the evaluation of sufficiency should not only be based on the pure numerical value of the raw consistency score. Researchers might want to consider whether there are true logical contradictions or whether the deviant cases are theoretically important. If, for example, we explain something related to presidential political systems, and the United States is a deviant case in the XY plot, then we might not consider this configuration to be sufficient, no matter how high its raw consistency value is. Unfortunately, research practice seems to unfold in such a way that users rank the truth table rows in the order of their raw consistency values, look for a gap in the (high end of the) values, and define all truth table rows above that gap as sufficient and the others as non-sufficient. While admittedly this very often works out quite well, it should not be an automatic procedure (which is, however, unfortunately an option in many software applications). Research experience shows that all truth table rows with a raw consistency value above 0.95 can easily be considered fully sufficient, while those below 0.75 should not. As for the other raw consistency levels, the recommendation is not to opt for a fixed threshold but to check row by row. Having a look at the XY plot and identifying the single cases in the plot helps. This can also lead to a situation where a few rows with lower consistency scores are considered sufficient while others with higher scores are not.

This exercise will result in defining a number of rows as sufficient conditions. Following the idea of equifinality, combining them through a logical OR leads to an explanation of the outcome, because the outcome is implied by the first sufficient truth table row or the second one or the third one, and so forth. Usually, this results in a long equation of terms connected by logical ORs. Applying the rules of Boolean algebra, this long term is then minimized with the Quine-McCluskey algorithm (Schneider & Wagemann, 2012, pp. 104–111). The execution of this algorithm also makes use of the so-called Prime Implicant (PI) chart (Schneider & Wagemann, 2012, pp. 110–111), through which components that are redundant (but nevertheless sufficient) for the final solution are omitted, with the goal of arriving at the smallest number of potential sufficient conditions (Ragin, 1987, p. 97), so that the solution for sufficiency contains only non-redundant elements. Among QCA scholars, this has led to a very lengthy discussion about the usefulness of the algorithm, which is usually captured by the term “model ambiguity” (Baumgartner & Thiem, 2015b).
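
At the heart of the minimization lies the pairing rule of Boolean algebra: two sufficient rows that differ in exactly one condition can be merged, because that condition is logically redundant (e.g., ABC + AB~C reduces to AB). The sketch below illustrates only this single pairing step, not the full Quine-McCluskey procedure with its prime implicant chart.

```python
def combine(row_a, row_b):
    """Merge two truth table rows (tuples of 1/0 per condition) that differ in
    exactly one position; the differing condition is redundant and replaced by
    '-' (don't care). Returns None if the rows cannot be merged."""
    diffs = [i for i, (a, b) in enumerate(zip(row_a, row_b)) if a != b]
    if len(diffs) != 1:
        return None
    merged = list(row_a)
    merged[diffs[0]] = "-"
    return tuple(merged)

print(combine((1, 1, 1), (1, 1, 0)))  # ABC and AB~C differ only in C -> (1, 1, '-'), i.e., AB
print(combine((1, 0, 0), (0, 1, 1)))  # more than one difference -> None, no merging possible
```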

At the end of this operation, a (much) shorter term will result, which could have the (hypothetical) form:

A~B + ~ABC → Y

This is a configurative term. First, it is equifinal since it indicates two explanatory paths (A~B and ~ABC). Second, both of these paths are conjunctural because they require the presence of all their components. Third, all elements of this solution (A, ~B, ~A, B, and C) fulfill the requirements for INUS conditions (see above), which are neither necessary nor sufficient but are parts of such a configurative solution. For the overall solution, as well as for the two paths, consistency and coverage values can be calculated.
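
Reading such a term with fuzzy sets follows directly from the minimum and maximum rules introduced earlier: a case's membership in the solution is the maximum over the paths, and its membership in each path is the minimum over that path's components. A minimal sketch for the hypothetical solution above, with invented membership scores:

```python
def solution_membership(a, b, c):
    """Membership in the equifinal solution A~B + ~ABC: the maximum (OR) of the
    two paths, each of which is the minimum (AND) of its components."""
    path_one = min(a, 1 - b)      # A~B
    path_two = min(1 - a, b, c)   # ~ABC
    return max(path_one, path_two)

# Invented fuzzy memberships of one case in A, B, and C.
print(round(solution_membership(0.8, 0.3, 0.6), 2))  # 0.7: the path A~B dominates for this case
```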

However, there is a noteworthy problem with regard to the analysis of sufficiency, namely the phenomenon of “limited diversity” (Ragin, 1987, p. 104). It appears when not all truth table rows contain enough empirical information. Graphically, this corresponds to all dots clustering to the left of the x value of 0.5 in an XY plot; there is limited diversity if no case has a fuzzy membership of greater than 0.5 in the combination of conditions under investigation. The combination then has not a single empirical case as a reference and therefore counts as a “logical remainder.” This can simply happen because of the complexity of the social world that surrounds us. Since social scientists usually do not generate their data in laboratories, and because social phenomena only rarely behave as in an experimental situation, some combinations of conditions simply do not occur. There has never been a U.S. president younger than 40 years (although the constitution allows a 35-year-old). There is no African welfare state. There is no EU country with a system of direct democracy. Limited diversity may sometimes be the result of a poorly done research design, but more often than not, social reality is limited in its diversity.
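
Continuing the hypothetical truth table sketch from above, logical remainders can be identified mechanically as the rows to which no empirical case has been assigned; the row assignments below are invented for illustration.

```python
from itertools import product

def logical_remainders(all_rows, observed_rows):
    """A truth table row is a logical remainder if no empirical case holds a
    fuzzy membership above 0.5 in it, i.e., no case has been assigned to it."""
    observed = set(observed_rows)
    return [row for row in all_rows if row not in observed]

all_rows = list(product((1, 0), repeat=3))                    # the 8 rows for three conditions
observed_rows = [(1, 1, 0), (1, 0, 1), (1, 1, 0), (0, 0, 0)]  # invented case assignments

print(logical_remainders(all_rows, observed_rows))            # the five rows without any case
```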

This is, of course, a problem for the analysis of sufficiency. If there are no empirical instances of a combination of conditions (which also implies low coverage values), then it is difficult to decide whether this combination would be sufficient or not and should thus be included in the minimization. The combination does not really exist, so testing it is meaningless. Nevertheless, a decision on these truth table rows has to be made, since including or not including them has relevant effects on the result of the analysis.

Three procedures are proposed. The first is not to include any logical remainders in the minimization process, i.e., not to consider any single one as (potentially) sufficient. This is also called the “conservative solution” (Schneider & Wagemann, 2012, p. 162) and usually leads to very complex solution terms. A second approach is to include only those logical remainders in the minimization process that render the “most parsimonious solution” (Schneider & Wagemann, 2012, p. 165). Finally, an “intermediate solution” is proposed that, in addition to the criterion of parsimony, accepts into the minimization process only those remainders that are so-called “easy counterfactuals,” i.e., assumptions that can be easily made, given prior theoretical knowledge and empirical knowledge with regard to existing cases (on the technicalities, see Schneider & Wagemann, 2012, pp. 167–175; for the original proposal, see Ragin & Sonnett, 2004). This intermediate solution has become increasingly popular (Rubinson, 2013, p. 2866). All three options taken together are considered the current state of the art (Wagemann & Schneider, 2015, p. 40) and are called the “Standard Analysis.”

However, there are two further developments of the “Standard Analysis,” namely the “Enhanced Standard Analysis” (ESA; Schneider & Wagemann, 2012, pp. 200–211, 2013) and the “Theory-Guided Enhanced Standard Analysis” (TESA; Schneider & Wagemann, 2012, pp. 211–217). ESA is a way to avoid untenable assumptions about logical remainders, i.e., assumptions that contradict the analysis of necessity, the analysis of the complement of the outcome, or simply common sense. The central idea is to exclude these remainders from the minimization process before starting the Standard Analysis. Going beyond this, TESA also admits into the logical minimization those remainders that do not improve the parsimony of the final result, if theoretical assumptions recommend such a proceeding.

In general, the analysis of sufficiency results in more than one solution term, depending on how logical remainders are treated. Note that all these various solutions treat the empirical information equally; their differences derive solely from how the logical remainders are dealt with.

Problems in Need of Attention and Avenues for New Research

In this section, the topics of problems in need of attention and avenues for new research are combined since the latter are consequences of the former. First, the question of standards is raised as a central problem of QCA in general. Then, a distinction is made between QCA as a research approach and as an analytical technique (taking up the differentiation made in Wagemann & Schneider, 2010), although there is no clear separation. The problems that occur with regard to QCA as a research approach ultimately also affect the algorithm and other technical aspects and vice versa. For presentational purposes, the distinction is nevertheless maintained here.

The Question of Standards

Throughout this article, the question of standards has been addressed. Motivated by the not always thoughtful execution of QCA, Schneider and Wagemann (2010, 2012, pp. 276–284; for a concentration on transparency issues, see Wagemann & Schneider, 2015) have developed a list of recommendations for a “good-quality QCA.” Indeed, there is a need for such a list.

First, not all QCA analyses that are executed live up technically to the minimum standards of what would be expected. In other words, even the peer-review system allows for the publication of articles that use the technique wrongly, so that the results become doubtful even with regard to content, as Wagemann et al. (2016) demonstrate for the area of business and management studies.

Second, even if all technicalities are considered, recent modifications of the technique have invited users to apply QCA in a more mechanical way than previously. As the very few existing evaluations of QCA show (Buche & Siewert, 2015; Wagemann et al., 2016), there is a trend toward “direct calibration,” toward decisions on the sufficiency of truth table rows that are based exclusively on arbitrary cutoff values of raw consistency scores, and toward a default decision for the most parsimonious solution with all its problems, not to mention the lack of attention to untenable assumptions or transparency issues when it comes to limited diversity.

Third, the existence of an algorithm and the high degree of formalization in QCA (at least for a qualitative comparative method) might have deprived QCA of a central feature, namely of its goal to achieve valid explanations for patterns found in really existing cases. Cases have properties and stories, and social reality cannot be exclusively portrayed in a formula. Thus, the question of standards can also be extended to what the central mission of QCA is.

There is certainly a rank order in these necessities: while a technically correct execution of the proceedings and the algorithm is indispensable, the trend toward more standardized procedures might be unpleasant but at least does not produce wrong results. And whether QCA should really be regarded as a case-oriented method (and, if yes, how this would be displayed in the research design and the analysis) is arguable. Nevertheless, the discussion on standards gives applicants, readers, and reviewers a good list of issues to be aware of. And a debate about standards is certainly always fruitful for research communities because such a debate leads to the development of communication arenas which finally advance the method.

QCA as a Research Approach

An important development can be observed with regard to the “mixed methods” debate in the social sciences (Berg-Schlosser, 2012; Creswell, 2014; Lieberman, 2005; Tashakkori & Teddlie, 2010).

Comparisons between QCA and various statistical procedures (Fiss, Sharapov, & Cronqvist, 2013; Grofman & Schneider, 2009; Lucas & Szatrowski, 2014; Paine, 2016a, 2016b; Schneider, 2016; Seawright, 2005; Vis, 2012) have not resulted in a clear conclusion, perhaps because the methodological traditions of set-theoretic methods and statistical techniques are too different for such a comparison to make sense (for an example that compares statistical techniques and QCA, see Stockemer, 2013, and the response by Buche et al., 2016).

It seems more promising to investigate how to combine QCA and other case study designs (Rohlfing & Schneider, 2013; Schneider & Rohlfing, 2013, 2016). More precisely, this discussion shows how QCA results could be used to select cases for subsequent process tracing, i.e., for the identification of causal mechanisms. Indeed, although QCA is claimed to be a causal method, the result of the analysis is sometimes no more than a summary of the set relations in the data. Strictly speaking, the computer software produces a formula, for which narratives, interpretations, and conclusions have to be found subsequently. This can certainly also happen in a non-systematic way, with researchers making sense of the results through theory-guided argumentation or references to plausibility. However, there is both a literature on case selection (for an extremely coherent overview, see Gerring & Cojocaru, 2016) and on process tracing (for the most up-to-date publications, see Beach & Pedersen, 2012; Bennett & Checkel, 2015a; Blatter & Haverland, 2012; Mahoney, 2012). This literature is currently being connected to QCA: proposals are made about which cases to compare, or to look at individually, and subsequent process tracing is suggested (Schneider & Rohlfing, 2013). This is not a very surprising move, since process tracing can be considered the most important social science method for establishing causality in a case-oriented research setting (for an overview of the history of the method, see Collier, 2011). As George and Bennett’s (2005, pp. 205–206, 212, 215, 224) and Bennett and Checkel’s (2015b, p. 21) expositions of the properties of process tracing show, many of the basic assumptions are shared with QCA; this also holds for set theory in general (Mahoney & Vanderpoel, 2015). However, it is also observed that “[p]rocess tracing is fundamentally different from […] comparisons across cases” (George & Bennett, 2005, p. 207), not least since the focus of process tracing is on the unfolding of the causal process, rather than on the mere detection of regularities for moderate or large numbers of cases. Nevertheless, there seems to be a “division of labor” between two approaches that are similar enough in their epistemology to allow for a fruitful combination. George and Bennett (2005, p. 214) even explicitly propose controlled comparison and process tracing as complementary proceedings.

As its most prominent representatives state, process tracing has a buzzword problem (Bennett & Checkel, 2015b, p. 5). Although a systematic evaluation of applications is missing, some users might identify it with a simple reconstruction of a temporal process; the sequence of events, however, is only one focus of process tracing (George & Bennett, 2005, p. 212; see, instead, the much broader definition in Bennett & Checkel, 2015b, pp. 7, 12; George & Bennett, 2005, p. 137). This corresponds to the discovery of the importance of time for (comparative) case study designs in general.8 Thus, it is no wonder that the QCA community is striving for an explicit consideration of time. However, despite some attempts to introduce time into QCA, there is no powerful strategy for how to model time processes, not least since “time” as a purely quantitative variable is generally difficult to integrate into a qualitative, case-oriented research design. Schneider and Wagemann (2012, pp. 265–266) make some informal proposals, such as comparing the solution terms of various QCAs at different points in time, introducing additional conditions that capture aspects related to time, and differentiating between (temporally) remote and proximate conditions in a two-step approach (Schneider & Wagemann, 2006). A formalized attempt (temporal QCA, tQCA) was made by Caren and Panofsky (2005), who—with the help of an additional logical operator THEN—added a component to QCA that helped to represent the temporal order in a causal chain. Their example, however, was complicated, even though the authors permitted only two out of four conditions to appear in a different time order (Caren & Panofsky, 2005, pp. 158–159) in order “to limit the geometric explosion of possible configurations” (Caren & Panofsky, 2005, p. 159). While this sounds like a confession that the potential of tQCA might be very small, it is also an acknowledgment of the fact that it might be an illusion to think that such a complicated concept as time can successfully be added to an already complex configurative understanding of causality. What is more, in their rejoinder, Ragin and Strand (2008) show that the same results could have been achieved by adding only one more condition (one that captures the sequence of the two conditions whose time order is flexible) to a conventional fsQCA. This was an important moment in the development of QCA, since it showed, on the one hand, that time can be integrated into QCA but, on the other, that this was not a particularly promising operation.

While the discussion about tQCA has quieted down, there is rising interest in a procedure called Coincidence Analysis (CNA) (Baumgartner, 2009a, 2009b, 2013, 2015). CNA is based on its own algorithm (derived from formal logic) and does not distinguish between conditions and the outcome but treats them all equally as “factors” (Baumgartner, 2013, p. 14). This makes it possible to detect more complex causal structures in the data, such as causal chains and common causes (Baumgartner, 2009b, p. 91). As Baumgartner (2013, p. 14) himself notes, this might not be wanted by a researcher who prefers to start the analysis with a clear idea about the potential conditions and the outcome. This caveat is apt: social science research is rarely so exploratory, and researchers usually do not work on questions for which there is no prior assumption about what is the explanans and what the explanandum. Usually, QCA is y-centered in the sense that a given phenomenon (the “outcome”) is to be explained (see above), and various factors are hypothesized to be part of the explanation. Thus, a prior decision about the outcome is made and usually even constitutes the very reason for the research project. Nevertheless, CNA is an interesting and powerful way to shed light on causal structures for which researchers are reluctant to assume independence of conditions (Baumgartner, 2009b, p. 95). As a positive side effect, CNA interprets limited diversity as a natural phenomenon of causal structures (and thus does not have to deal with it), since, “[i]f there exists any kind of (deterministic) causal dependency among n factors, it follows that not all 2^n logically possible configurations of these factors are also empirically possible” (Baumgartner, 2013, p. 14). Limited diversity is thus interpreted as the absence of configurations that are logically possible but not compatible with the underlying causal structure.
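
The quoted claim can be made concrete with a minimal sketch that assumes, purely for illustration, a deterministic chain in which a factor A is necessary and sufficient for B and B is necessary and sufficient for C: of the 2^3 logically possible configurations, only two remain empirically possible. The factor names and the chain are hypothetical.

```python
# Sketch of the claim quoted from Baumgartner (2013): if factors are linked by
# deterministic dependencies, not all 2^n configurations can occur. Here a
# purely hypothetical chain is assumed in which A is necessary and sufficient
# for B, and B is necessary and sufficient for C.

from itertools import product

factors = ["A", "B", "C"]

def compatible_with_chain(config):
    a, b, c = config
    return b == a and c == b  # deterministic chain: A <-> B <-> C

all_configs = list(product([0, 1], repeat=len(factors)))
possible = [cfg for cfg in all_configs if compatible_with_chain(cfg)]

print(f"{len(all_configs)} logically possible configurations, "
      f"{len(possible)} compatible with the assumed chain:")
for cfg in possible:
    print(dict(zip(factors, cfg)))
```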

QCA as an Analytical Technique

When it comes to QCA as an analytical technique, there are, of course, numerous attempts at refinement. Some of them disappear as quickly as they have appeared, because their benefit turns out to be smaller than expected, because they merely combine creatively what is already around, or because they touch upon very minor and specialized issues. The focus here is on three discussions that have been around for quite a while or that have provoked lively debate in the QCA community.

A first discussion is connected to CNA and concerns the use of the Quine-McCluskey algorithm for logical minimization. As mentioned above, Baumgartner and Thiem (2015b, p. 5) criticize parts of it, namely the use of so-called prime implicant charts for identifying which prime implicants are redundant and which are necessary parts of the solution. They argue that if several solutions (“models,” as they call them) result from a QCA, researchers decide too quickly, and tacitly, in favor of one of the models, often steered by the makeup of the standard software (Baumgartner & Thiem, 2015b, p. 2). This is an important warning, although the actual danger of using the standard (Quine-McCluskey) algorithm is not entirely clear. While it is claimed that results are published “that [the] data did not warrant” (Baumgartner & Thiem, 2015b, p. 3), this statement is qualified by the acknowledgment that this does not mean that all results are incorrect (Baumgartner & Thiem, 2015b, p. 28n4). It is certainly true that most QCA users are not aware of so-called “model ambiguities” (Baumgartner & Thiem, 2015b, p. 4) and that the most frequently used software, fsQCA, does not render the multitude of models easily visible (although it is not impossible).
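
What such model ambiguity looks like can be demonstrated with an invented crisp-set truth table for which two different, equally parsimonious solutions cover exactly the same positive rows. The following brute-force sketch mimics the logic of a prime implicant chart; the data are made up, and the code illustrates the phenomenon rather than the behavior of any particular QCA software.

```python
# Minimal sketch of model ambiguity (invented crisp-set data): a brute-force
# search over prime implicants finds two equally parsimonious solutions that
# cover exactly the same positive truth-table rows.

from itertools import combinations, product

conditions = ["A", "B", "C"]
# Truth-table rows (a, b, c) with a positive outcome; all other rows are negative.
positive = {(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0)}
negative = set(product([0, 1], repeat=3)) - positive

def covers(implicant, row):
    # An implicant is a tuple of 0, 1, or None (None = condition eliminated).
    return all(i is None or i == r for i, r in zip(implicant, row))

# Valid implicants cover at least one positive row and no negative row.
candidates = [imp for imp in product([0, 1, None], repeat=3)
              if any(v is not None for v in imp)]
valid = [imp for imp in candidates
         if any(covers(imp, r) for r in positive)
         and not any(covers(imp, r) for r in negative)]

def subsumes(general, specific):
    # True if 'general' covers everything 'specific' covers (and is different).
    return general != specific and all(
        g is None or g == s for g, s in zip(general, specific))

# Prime implicants are valid implicants not contained in a more general valid one.
primes = [imp for imp in valid if not any(subsumes(o, imp) for o in valid)]

def label(imp):
    # QCA-style notation: upper case = presence, lower case = absence.
    return "".join(c if v == 1 else c.lower()
                   for c, v in zip(conditions, imp) if v is not None)

# Find the smallest sets of prime implicants that cover all positive rows.
for size in range(1, len(primes) + 1):
    models = [combo for combo in combinations(primes, size)
              if all(any(covers(imp, r) for imp in combo) for r in positive)]
    if models:
        print(f"{len(models)} equally parsimonious model(s) with {size} terms:")
        for model in models:
            print("  " + " + ".join(label(imp) for imp in model))
        break
```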

This discussion has since been extended by the claim that, in a situation of limited diversity, only the most parsimonious solution has at least the potential to provide causal explanations (and not the conservative/complex or the intermediate solution) (Baumgartner, 2015). This claim rests on the assumption that “[o]nly maximally parsimonious solution formulas can represent causal structures” (Baumgartner, 2015, p. 840), which takes up the “regularity understanding” of causality, i.e., one (but not the only) way to define causal patterns (Baumgartner, 2008; Maxwell, 2004).

Although the CNA algorithm is promising in many respects, it has not yet become the standard for analysis, perhaps because the available expositions of the procedure are highly formalized and not easily digestible for the ordinary user.

A second discussion looks at the stability of QCA results, which the quantitative tradition would call “robustness.” It addresses the question of how far (minor) changes in the setup of a QCA change the result (Goldthorpe, 1997; Hug, 2013; Lieberson, 2004; Maggetti & Levi-Faur, 2013; Seawright, 2005; Skaaning, 2011). “[Robustness] tests, however, need to stay true to the fundamental principles and nature of set-theoretic methods and thus cannot be a mere copy of robustness tests known to standard quantitative techniques” (Schneider & Wagemann, 2012, p. 285). In this spirit, Schneider and Wagemann (2012, pp. 287–295) ask researchers to pay attention to the effects of changing the calibration, changing raw consistency levels, and dropping or adding cases. Rohlfing (2016) proposes simulations to evaluate the stability and robustness of QCA results. Nevertheless, the discussion continues.
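
A very simple, case-related stability check consists of recomputing a parameter of fit while dropping one case at a time. The following sketch does this for the sufficiency consistency of a hypothetical term X with respect to an outcome Y; the fuzzy membership scores and case labels are invented, and the procedure is only a minimal illustration of the idea rather than a substitute for the systematic robustness protocols discussed in the literature.

```python
# Illustrative sketch (invented fuzzy-set data): a "drop one case at a time"
# check on the sufficiency consistency of a term X for an outcome Y, in the
# spirit of asking how sensitive a QCA result is to single cases.

# Hypothetical fuzzy memberships: (membership in term X, membership in outcome Y)
data = {
    "c1": (0.9, 0.8),
    "c2": (0.7, 0.9),
    "c3": (0.6, 0.4),   # an inconsistent case
    "c4": (0.2, 0.3),
    "c5": (0.8, 1.0),
}

def sufficiency_consistency(pairs):
    """Ragin's (2006) consistency of sufficiency: sum of min(X, Y) over sum of X."""
    numerator = sum(min(x, y) for x, y in pairs)
    denominator = sum(x for x, _ in pairs)
    return numerator / denominator

full = sufficiency_consistency(data.values())
print(f"Consistency with all cases: {full:.3f}")

for dropped in data:
    rest = [xy for case, xy in data.items() if case != dropped]
    print(f"Without {dropped}: {sufficiency_consistency(rest):.3f}")
```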

Third, there is still no conclusion as to how to deal with limited diversity beyond the Standard Analysis and the recommendations for avoiding its pitfalls made by Schneider and Wagemann (2013). This might also be due to the obvious difficulty that limited diversity is an effect of the complex world we live in and that no procedure can make up for information we simply do not possess.

Lastly, some words have to be dedicated to software development. The traditional program fsQCA still enjoys great popularity,9 although the visibility of QCA packages for the free software environment R (Duşa, 2007; Thiem & Duşa, 2013a, 2013b) is increasing rapidly. CNA is also implemented in R (Baumgartner & Thiem, 2015a). Unfortunately, various R packages currently co-exist and even compete with one another. This does not make the use of R any more popular, not least because some of these packages should not be loaded in one and the same R session. Other QCA software options are outdated (TOSMANA), are not regularly updated (Stata commands; Longest & Vaisey, 2008), or are not broadly used (Kirq).

QCA and Configurative Comparative Methods: Challenges Ahead

QCA is currently exposed to various challenges.

First, there are continuing discussions with scholars who are more oriented toward quantitative methods. The existence of such discussions is also a kind of appreciation of QCA, since QCA appears to be established and well enough known to put at risk the hegemony of standard statistics as the dominant, traditionally highly formalized methodological approach. It should be stressed, however, that this has never been the goal of QCA. Rather, it is meant to serve in a complementary role.

Second, the discussion also includes the other side of the methodological spectrum. QCA’s relation to other, rather small-N–oriented comparative and case study methods is certainly a big challenge. The question is whether case study methodologists acknowledge QCA as part of “their” family, and thus as a true case study method, or whether QCA should be placed somewhere between comparative case studies and quantitative methods. It seems that the high degree of formalization that QCA enjoys, not least since the introduction of fuzzy sets, has rendered the closeness of the approaches less visible.

Third, QCA, with Q standing for “qualitative,” can also become an object of attack for those qualitative methodologists who do not adhere to the empirical-analytical paradigm. This seems to be (mainly, but not exclusively) a problem of the European social sciences, where qualitative methods tend to be defined in interpretive terms (for an attempt at reconciliation, see Blatter, Langer, & Wagemann, 2017).

Fourth, from the inside, QCA is exposed to a debate between those members of the community who put more emphasis on the research design aspects of QCA as a social science method and others who concentrate more on the technicalities. Since both perspectives are central pillars of the popularity of QCA, communication between the two is definitely needed. Although the importance of technical discussion should not be denied, the importance of research design issues in QCA, in particular case knowledge and theoretical reasoning, is underlined here (see also Mahoney, 2010). This should ultimately also lead to a collaborative effort (and, needless to say, a common interest) to define standards for executing a QCA. For a rather young method whose success has not always gone hand in hand with appropriate application and execution, shared understandings are of great importance. It is not least the effectiveness of a method in helping researchers to analyze the social world that guarantees its enduring success.

Acknowledgments

The author is grateful to Markus B. Siewert and Lars Paulus for their helpful comments and editing services. Furthermore, the author thanks the two reviewers of this article very much for their useful input.

References

Adcock, R., & Collier, D. (1991). Measurement validity: A shared standard for qualitative and quantitative research. American Political Science Review, 95(3), 529–546.

Bartolini, S. (1993). On time and comparative research. Journal of Theoretical Politics, 5(2), 131–167.

Barton, A. H. (1955). The concept of property space in social research. In P. Lazarsfeld & M. Rosenberg (Eds.), The language of social research: A reader in the methodology of the social sciences (pp. 40–53). New York: The Free Press.

Baumgartner, M. (2008). Regularity theories reassessed. Philosophia, 36, 327–354.

Baumgartner, M. (2009a). Inferring causal complexity. Sociological Methods & Research, 38(1), 71–101.

Baumgartner, M. (2009b). Uncovering deterministic causal structures: A Boolean approach. Synthese, 170(1), 71–96.

Baumgartner, M. (2013). Detecting causal chains in small-N data. Field Methods, 25(1), 3–24.

Baumgartner, M. (2015). Parsimony and causality. Quality & Quantity, 49(2), 839–856.

Baumgartner, M., & Thiem, A. (2015a). Identifying complex causal dependencies in configurational data with coincidence analysis. The R Journal, 7(1), 176–184.

Baumgartner, M., & Thiem, A. (2015b). Model ambiguities in configurational comparative research. Sociological Methods & Research.

Beach, D., & Pedersen, R. B. (2012). Process-tracing methods: Foundations and guidelines. Ann Arbor: The University of Michigan Press.

Bennett, A., & Checkel, J. T. (Eds.). (2015a). Process tracing. From metaphor to analytical tool. Cambridge, U.K.: Cambridge University Press.

Bennett, A., & Checkel, J. T. (2015b). Process tracing: From philosophical roots to best practices. In A. Bennett & J. T. Checkel (Eds.), Process tracing. From metaphor to analytical tool (pp. 3–38). Cambridge, U.K.: Cambridge University Press.

Berg-Schlosser, D. (2012). Mixed methods in comparative politics. Houndmills, Basingstoke, U.K.: Palgrave Macmillan.

Berg-Schlosser, D., & deMeur, G. (2009). Comparative research designs. In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods (pp. 19–32). Thousand Oaks, CA: SAGE.

Berg-Schlosser, D., deMeur, G., Rihoux, B., & Ragin, C. C. (2009). Qualitative Comparative Analysis (QCA) as an approach. In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods (pp. 1–18). Thousand Oaks, CA: SAGE.

Blatter, J., & Haverland, M. (2012). Designing case studies. Explanatory approaches in small-N research. Houndmills, Basingstoke, U.K.: Palgrave Macmillan.

Blatter, J., Langer, P. C., & Wagemann, C. (2017). Qualitative Methoden in der Politikwissenschaft. Wiesbaden, Germany: VS Verlag für Sozialwissenschaften.

Bollen, K. A., Entwisle, B., & Alderson, A. S. (1993). Macro-comparative research methods. Annual Review of Sociology, 19, 321–351.

Buche, A., Buche, J., & Siewert, M. B. (2016). Fuzzy logic or fuzzy application? A response to Stockemer’s “Fuzzy Set or Fuzzy Logic?” European Political Science, 15(3), 357–378.

Buche, J., & Siewert, M. B. (2015). Qualitative Comparative Analysis (QCA) in der Soziologie—Perspektiven, Potentiale und Anwendungsbereiche. Zeitschrift für Soziologie, 44(6), 386–406.

Cacciatore, F., Natalini, A., & Wagemann, C. (2015). Clustered Europeanization and national reform programmes: A Qualitative Comparative Analysis. Journal of European Public Policy, 22(8), 1186–1211.

Caren, N., & Panofsky, A. (2005). TQCA. A technique for adding temporality to Qualitative Comparative Analysis. Sociological Methods & Research, 34(2), 147–172.

Cebotari, V., & Vink, M. P. (2013). A configurational analysis of ethnic protest in Europe. International Journal of Comparative Sociology, 54(4), 298–324.

Cioffi-Revilla, C. (1981). Fuzzy sets and models of international relations. American Journal of Political Science, 25(1), 129–159.

Collier, D. (2011). Understanding process tracing. Political Science and Politics, 44(4), 823–830.

Collier, D. (2014). Comment: QCA should set aside the algorithms. Sociological Methodology, 44(1), 122–126.

Collier, D., & Adcock, R. (1999). Democracy and dichotomies: A pragmatic approach to choices about concepts. Annual Review of Political Science, 2, 537–565.

Collier, D., & Mahon, J. (1993). Conceptual “stretching” revisited: Alternative views of categories in comparative analysis. American Political Science Review, 87(4), 845–855.

Creswell, J. W. (2014). A concise introduction to mixed methods research. Thousand Oaks, CA: SAGE.

Cronqvist, L., & Berg-Schlosser, D. (2009). Multi-value QCA (mvQCA). In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods. Qualitative Comparative Analysis (QCA) and related techniques (pp. 69–86). Thousand Oaks, CA: SAGE.

DeMeur, G., Rihoux, B., & Yamasaki, S. (2002). L’analyse quali-quantitative comparée (AQQC-QCA), approche, techniques et applications en sciences humaines. Louvain-La-Neuve, Belgium: Academia-Bruylant.

DeMeur, G., Rihoux, B., & Yamasaki, S. (2009). Addressing the critiques of QCA. In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods. Qualitative Comparative Analysis (QCA) and related techniques (pp. 147–167). Thousand Oaks, CA: SAGE.

Duşa, A. (2007). User manual for the QCA(GUI) package in R. Journal of Business Research, 60(5), 576–586.

Elster, J. (1998). A plea for mechanisms. In P. Hedström & R. Swedberg (Eds.), Social mechanisms: An analytical approach to social theory (pp. 45–73). Cambridge, U.K.: Cambridge University Press.

Emmenegger, P., Kvist, J., & Skaaning, S.-E. (2013). Making the most of configurational comparative analysis: An assessment of QCA applications in comparative welfare-state research. Political Research Quarterly, 66(1), 185–190.

Fischer, C. S., Hout, M., Sánchez Jankowski, M., Lucas, S. R., Swidler, A., & Voss, K. (1996). Inequality by design: Cracking the bell curve myth. Princeton, NJ: Princeton University Press.

Fiss, P. (2011). Building better causal theories: A fuzzy set approach to typologies in organization research. Academy of Management Journal, 54(2), 393–420.

Fiss, P., Sharapov, D., & Cronqvist, L. (2013). Opposites attract? Opportunities and challenges for integrating large-N QCA and econometric analysis. Political Research Quarterly, 66(1), 191–197.

George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Gerring, J. (2007). Case study research. Cambridge, U.K.: Cambridge University Press.

Gerring, J. (2008). The mechanistic worldview: Thinking inside the box. British Journal of Political Science, 38(1), 161–179.

Gerring, J. (2012). Social science methodology. Cambridge, U.K.: Cambridge University Press.

Gerring, J. (2014). Causal mechanisms: Yes, but … Comparative Political Studies, 43(11), 1499–1526.

Gerring, J., & Cojocaru, L. (2016). Selecting cases for intensive analysis: A diversity of goals and methods. Sociological Methods & Research, 45(3), 392–423.

Goertz, G. (2006). Social science concepts. A user’s guide. Princeton, NJ: Princeton University Press.

Goertz, G., & Mahoney, J. (2012). A tale of two cultures. Princeton, NJ: Princeton University Press.

Goldthorpe, J. H. (1997). Current issues in comparative macrosociology: A debate on methodological issues. Comparative Social Research, 16, 1–26.

Grofman, B., & Schneider, C. Q. (2009). An introduction to crisp set QCA, with a comparison to binary logistic regression. Political Research Quarterly, 62(4), 662–672.

Grzymala-Busse, A. (2011). Time will tell? Temporality and the analysis of causal mechanisms and processes. Comparative Political Studies, 44(9), 1267–1297.

Hall, P. (2003). Aligning ontology and methodology in comparative research. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 373–404). Cambridge, U.K.: Cambridge University Press.

Hedström, P., & Ylikoski, P. (2010). Causal mechanisms in the social sciences. Annual Review of Sociology, 36, 49–67.

Herrnstein, R. J., & Murray, C. A. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press.

Hug, S. (2013). Qualitative Comparative Analysis: How inductive use and measurement error lead to problematic inference. Political Analysis, 21(2), 252–265.

Klir, G. J., St. Clair, U. H., & Yuan, B. (1997). Fuzzy set theory. Foundations and applications. Upper Saddle River, NJ: Prentice Hall.

Kvist, J. (2007). Fuzzy set ideal type analysis. Journal of Business Research, 60(5), 474–481.

Lazarsfeld, P. (1937). Some remarks on typological procedures in social research. Zeitschrift für Sozialforschung, 6, 119–139.

Leuffen, D., Rittberger, B., & Schimmelfennig, F. (2012). Differentiated integration. Explaining variation in the European Union. Houndmills, Basingstoke, U.K.: Palgrave Macmillan.

Lieberman, E. S. (2005). Nested analysis as a mixed-method strategy for comparative research. American Political Science Review, 99(3), 435–452.

Lieberson, S. (2004). Comments on the use and utility of QCA. Qualitative Methods, 2(2), 13–14.

Lijphart, A. (1971). Comparative politics and the comparative method. American Political Science Review, 65(3), 682–693.

Longest, K. C., & Vaisey, S. (2008). Fuzzy: A program for performing qualitative comparative analyses (QCA) in Stata. Stata Journal, 8(1), 79–104.

Lucas, S. R., & Szatrowski, A. (2014). Qualitative comparative analysis in critical perspective. Sociological Methodology, 44(1), 1–79.

Mackie, J. L. (1965). Causes and conditions. American Philosophical Quarterly, 2, 245–264.

Maggetti, M., & Levi-Faur, D. (2013). Dealing with errors in QCA. Political Research Quarterly, 66(1), 198–204.

Mahoney, J. (2001). Review—beyond correlational analysis: Recent innovations in theory and method. Sociological Forum, 16(3), 575–593.

Mahoney, J. (2003). Knowledge accumulation in comparative historical research: The case of democracy and authoritarianism. In J. Mahoney & D. Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp. 131–174). Cambridge, U.K.: Cambridge University Press.

Mahoney, J. (2010). After KKV. The new methodology of qualitative research. World Politics, 62(1), 120–147.

Mahoney, J. (2012). The logic of process tracing tests in the social sciences. Sociological Methods & Research, 41(4), 566–590.

Mahoney, J., Kimball, E., & Koivu, K. L. (2009). The logic of historical explanation in the social sciences. Comparative Political Studies, 42(1), 114–146.

Mahoney, J., & Sweet Vanderpoel, R. (2015). Set diagrams and qualitative research. Comparative Political Studies, 48(1), 65–100.

Mair, P. (2008). Concepts and concept formation. In D. Della Porta & M. Keating (Eds.), Approaches and methodologies in the social sciences (pp. 177–197). Cambridge, U.K.: Cambridge University Press.

Marx, A., Cambré, B., & Fiss, P. C. (2013). Crisp-set qualitative comparative analysis in organizational studies. In P. C. Fiss, B. Cambré, & A. Marx (Eds.), Configurational theory and methods in organizational research (pp. 23–47). Bingley, U.K.: Emerald.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11.

Mayntz, R. (2004). Mechanisms in the analysis of macro-social phenomena. Philosophy of the Social Sciences, 34(2), 237–259.

Merton, R. K. (1957). On sociological theories of the middle range. In R. K. Merton (Ed.), On theoretical sociology. Five essays, old and new (pp. 39–72). New York: The Free Press.

Munck, G. L. (2016). Assessing set-theoretic comparative methods: A tool for qualitative comparativists? Comparative Political Studies, 49(6), 775–780.

Paine, J. (2016a). Set-theoretic comparative methods. Less distinctive than claimed. Comparative Political Studies, 49(6), 703–741.

Paine, J. (2016b). Still searching for the value added: Persistent concerns about set-theoretic comparative methods. Comparative Political Studies, 49(6), 793–800.

Przeworski, A., & Teune, H. (1970). Logic of comparative social inquiry. New York: Wiley.

Ragin, C. C. (1987). The comparative method. Moving beyond qualitative and quantitative strategies. Berkeley: University of California Press.

Ragin, C. C. (1994). Constructing social research. The unity and diversity of method. Thousand Oaks, CA: Pine Forge Press.

Ragin, C. C. (2000). Fuzzy-set social science. Chicago: University of Chicago Press.

Ragin, C. C. (2006). Set relations in social research: Evaluating their consistency and coverage. Political Analysis, 14, 291–310.

Ragin, C. C. (2008). Redesigning social inquiry. Fuzzy sets and beyond. Chicago: University of Chicago Press.

Ragin, C. C., & Fiss, P. (2017). Intersectional inequality. Chicago: University of Chicago Press.

Ragin, C. C., & Sonnett, J. (2004). Between complexity and parsimony: Limited diversity, counterfactual cases and comparative analysis. In S. Kropp & M. Minkenberg (Eds.), Vergleichen in der Politikwissenschaft (pp. 180–197). Wiesbaden, Germany: VS Verlag für Sozialwissenschaften.

Ragin, C. C., & Strand, S. I. (2008). Using qualitative comparative analysis to study causal order. Sociological Methods & Research, 36(4), 431–441.

Rihoux, B., Álamos-Concha, P., Bol, D., Marx, A., & Rezsöhazy, I. (2013). From niche to mainstream? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011. Political Research Quarterly, 66(1), 175–184.

Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational comparative methods. Qualitative Comparative Analysis (QCA) and related techniques. Thousand Oaks, CA: SAGE.

Rihoux, B., Rezsöhazy, I., & Bol, D. (2009). Qualitative Comparative Analysis (QCA) in public policy analysis: An extensive review. German Policy Studies, 7(3), 9–82.

Rohlfing, I. (2008). What you see and what you get: Pitfalls and principles of nested analysis in comparative research. Comparative Political Studies, 41(11), 1492–1514.

Rohlfing, I. (2012). Case studies and causal inference: An integrative framework. Houndmills, Basingstoke, U.K.: Palgrave Macmillan.

Rohlfing, I. (2016). Why simulations are appropriate for evaluating Qualitative Comparative Analysis. Quality & Quantity, 50(5), 2073–2084.

Rohlfing, I., & Schneider, C. Q. (2013). Improving research on necessary conditions: Formalized case selection for process tracing after QCA. Political Research Quarterly, 66(1), 220–235.

Rubinson, C. (2013). Contradictions in fsQCA. Quality & Quantity, 47(5), 2847–2867.

Sartori, G. (1970). Concept misformation in comparative politics. American Political Science Review, 64(4), 1033–1053.

Sartori, G. (1984). Guidelines for concept analysis. In G. Sartori (Ed.), Social science concepts (pp. 15–85). Beverly Hills, CA: SAGE.

Schmitter, P. C. (2008). The design of social and political research. In D. Della Porta & M. Keating (Eds.), Approaches and methodologies in the social sciences (pp. 263–295). Cambridge, U.K.: Cambridge University Press.

Schneider, C. Q. (2009). The consolidation of democracy: Comparing Europe and Latin America. New York: Routledge.

Schneider, C. Q. (2016). Real differences and overlooked similarities. Set-methods in comparative perspective. Comparative Political Studies, 49(6), 781–792.

Schneider, C. Q., & Rohlfing, I. (2013). Combining QCA and process tracing in set-theoretic multi-method research. Sociological Methods & Research, 42(4), 559–597.

Schneider, C. Q., & Rohlfing, I. (2016). Case studies nested in fuzzy-set QCA on sufficiency. Formalizing case selection and causal inference. Sociological Methods & Research, 45(3), 526–568.

Schneider, C. Q., & Wagemann, C. (2006). Reducing complexity in Qualitative Comparative Analysis (QCA), remote and proximate factors and the consolidation of democracy. European Journal of Political Research, 45, 751–786.

Schneider, C. Q., & Wagemann, C. (2007). QCA und fsQCA. Ein einführendes Lehrbuch für Anwender und jene, die es werden wollen. Opladen, Germany: Barbara-Budrich-Verlag.

Schneider, C. Q., & Wagemann, C. (2008). Standards guter Praxis in Qualitative Comparative Analysis (QCA) und fuzzy-sets. In S. Pickel, G. Pickel, H.-J. Lauth, & D. Jahn (Eds.), Methoden der vergleichenden Politik- und Sozialwissenschaft: Neue Entwicklungen und Anwendungen (pp. 361–386). Wiesbaden, Germany: Verlag für Sozialwissenschaften.

Schneider, C. Q., & Wagemann, C. (2010). Standards of good practice in Qualitative Comparative Analysis (QCA) and fuzzy sets. Comparative Sociology, 9(3), 397–418.

Schneider, C. Q., & Wagemann, C. (2012). Set-theoretic methods for the social sciences. A guide for Qualitative Comparative Analysis and fuzzy sets in social science. Cambridge, U.K.: Cambridge University Press.

Schneider, C. Q., & Wagemann, C. (2013). Doing justice to logical remainders in QCA: Moving beyond the standard analysis. Political Research Quarterly, 66(1), 211–220.

Schneider, C. Q., & Wagemann, C. (2016). Assessing ESA on what it is designed for: A reply to Cooper and Glaesser. Field Methods, 28(3), 316–321.

Seawright, J. (2005). Qualitative Comparative Analysis vis-à-vis regression. Studies in Comparative and International Development, 40(1), 3–26.

Skaaning, S.-E. (2011). Assessing the robustness of crisp-set and fuzzy-set QCA results. Sociological Methods & Research, 40(2), 391–408.

Stockemer, D. (2013). Fuzzy set or fuzzy logic? Comparing the value of Qualitative Comparative Analysis (fsQCA) versus regression analysis for the study of women’s legislative representation. European Political Science, 12(1), 86–101.

Tashakkori, A. M., & Teddlie, C. B. (Eds.). (2010). Sage handbook of mixed methods in social & behavioral research. Thousand Oaks, CA: SAGE.

Thiem, A. (2013). Clearly crisp, and not fuzzy: A reassessment of the (putative) pitfalls of multi-value QCA. Field Methods, 25(2), 197–207.

Thiem, A. (2014). Mill’s methods, induction and case sensitivity in Qualitative Comparative Analysis: A comment on Hug. Newsletter of the Qualitative Methods Section, American Political Science Association, 12(2), 19–24.

Thiem, A., Baumgartner, M., & Bol, D. (2016). Still lost in translation! A correction of three misunderstandings between configurational comparativists and regressional analysts. Comparative Political Studies, 49(6), 742–774.

Thiem, A., & Duşa, A. (2012). Introducing the QCA package: A market analysis and software review. Qualitative & Multi-Method Research, 10, 45–49.

Thiem, A., & Duşa, A. (2013a). QCA: A package for Qualitative Comparative Analysis. The R Journal, 5(1), 87–97.

Thiem, A., & Duşa, A. (2013b). Qualitative Comparative Analysis with R. A user’s guide. Wiesbaden, Germany: Springer.

Vaisey, S. (2009). QCA 3.0: The “Ragin revolution” continues. Contemporary Sociology. A Journal of Reviews, 38, 308–312.

Vink, M. P., & van Vliet, O. (2009). Not quite crisp, not yet fuzzy? Assessing the potentials and pitfalls of multi-value QCA. Field Methods, 21, 265–289.

Vis, B. (2012). The comparative advantages of fsQCA and regression analysis for moderately large-N analyses. Sociological Methods & Research, 41(1), 168–198.

Wagemann, C. (2014). Qualitative Comparative Analysis: What it is, what it does, and how it works. In D. Della Porta (Ed.), Methodological practices in social movement research (pp. 43–66). Oxford: Oxford University Press.

Wagemann, C., Buche, J., & Siewert, M. B. (2016). QCA and business research: Work in progress or consolidated agenda? Journal of Business Research, 69(7), 2531–2540.

Wagemann, C., & Schneider, C. Q. (2010). Qualitative Comparative Analysis (QCA) and fuzzy sets: The agenda for a research approach and a data analysis technique. Comparative Sociology, 9(3), 376–396.

Wagemann, C., & Schneider, C. Q. (2015). Transparency standards in Qualitative Comparative Analysis. Newsletter of the Qualitative Methods Section, American Political Science Association, 13(1), 38–42.

Waldner, D. (2012). Process tracing and causal mechanisms. In H. Kincaid (Ed.), The Oxford handbook of philosophy of social science (pp. 65–84). New York: Oxford University Press.

Williams, T., & Gemperle, S. M. (2016). Sequence will tell! Integrating temporality into set-theoretic multi-method research combining comparative process tracing and Qualitative Comparative Analysis. International Journal of Social Research Methodology, 20(2), 121–135.

Zadeh, L. (1965). Fuzzy sets. Information and Control, 8, 338–353.

Zadeh, L. (1968). Fuzzy algorithms. Information and Control, 12, 99–102.

Notes:

(1.) Overviews on QCA applications exist for public policy analysis (Rihoux et al., 2009), sociology (Buche & Siewert, 2015), organizational studies (Marx et al., 2013), business and management studies (Wagemann et al., 2016), welfare state research (Emmenegger et al., 2013) and—with fewer references to empirical research—social movement studies (Wagemann, 2014).

(2.) The use of asymmetry is different from non-symmetry, which implies that a sufficiency statement cannot be inverted (Baumgartner, 2009b, p. 75).

(3.) In this sense, sets represent borders for cases—some are in, others are out. This is also nicely expressed by the Latin word for border, namely finis, which draws our attention to the fact that definitions occur through the establishment of borders.

(4.) This shows that the claim “case knowledge per se is not strictly required at any stage of the analysis” (Thiem, 2014, p. 491) is misleading, and including the word “strictly” does not make the claim any better. If a method requires one to assign numerical values to empirically existing cases, then knowledge about these cases is of course a prerequisite for doing so. It is held that “[t]o avoid the danger of overly mechanical applications […], Ragin has repeatedly urged that the methods not be used without extensive case knowledge” (Mahoney, 2010, p. 135).

(5.) The formal rule in fuzzy set analysis is that a set A is a subset of another set B, if, for all cases, the fuzzy value of A is smaller than or equal to the fuzzy value of B.

(6.) With multi-value QCA (mvQCA), an alternative for non-dichotomous conditions was introduced (Cronqvist & Berg-Schlosser, 2009). However, next to some critique on the nature of mvQCA (Thiem, 2013; Vink & van Vliet, 2009), Schneider and Wagemann (2012, pp. 260–263) show that mvQCA does not solve any problems that remain unresolved in a conventional QCA.

(7.) Schneider and Wagemann (2012, pp. 233–237) show that the formula for the coverage value of necessary conditions is not without problems and develop an alternative measure. Their main critique of Ragin’s (2006) proposal is that coverage values might be flawed if the fuzzy values of the outcome increase. As a consequence, coverage values become rather high and wrongly indicate that there are no problems with regard to trivialness. So, while the state of the art is to use Ragin’s (2006) original formula, more research has to be conducted in order to correct for the obvious problems inherent in it.

(8.) Bartolini (1993, p. 131) starts his seminal article from the observation that variance over time is largely overlooked in comparative methodology. Gerring (2007, p. 28) then explicitly recognizes temporal variation as an important determinant of comparative case study designs. Williams and Gemperle (2016) propose the combination of QCA and process tracing as a way to add a temporal perspective to QCA. Grzymala-Busse (2011) provides us with a typology of understandings of time in comparative research. Set theory was added to the discussion when Mahoney et al. (2009) proposed “sequence elaboration,” which uses the logic of sufficient, necessary, SUIN, and INUS conditions in order to determine the causal effect of antecedent or intervening conditions.

(9.) Thiem and Duşa (2012, p. 45) indicate a combined market share of 90% for fsQCA and TOSMANA.