# Case Selection in Small-N Research

## Summary and Keywords

Recent methodological work on systematic case selection techniques offers ways of choosing cases for in-depth analysis such that the probability of learning from the cases is enhanced. This research has undermined several long-standing ideas about case selection. In particular, random selection of cases, paired or grouped selection of cases for purposes of controlled comparison, typical cases, and extreme cases on the outcome variable all appear to be much less useful than their reputations have suggested. Instead, it appears that scholars gain the most in terms of making new discoveries about causal relationships when they study extreme cases on the causal variable or deviant cases.

Keywords: case selection, case studies, multi-method research, comparative method, qualitative methods

Selecting cases is a defining challenge in social science case-study research. How cases are selected helps determine the uses to which they can be put, and so the narrative of a research project’s case selection stays with that project indefinitely. Scholars thus must pay attention to case selection, and a range of debates follow naturally from that attention. First, should scholars engage in systematic case selection, using knowledge about potential cases and more or less formal decision rules to focus in on cases for in-depth analysis? Should they pick cases randomly, or should they simply pick cases in an unstructured way? Second, should cases be selected one at a time or in coordinated (often matched or most similar) pairs or groups? Third, if cases are to be selected in a systematic way, which decision rule or rules should be used? Fourth, what should scholars do in contexts where there is limited prior knowledge of the set of potentially relevant cases?

A first central concern for methodological debates about case selection in the social sciences is whether scholars ought to select cases systematically at all. On one side, some scholars advocate for random case selection so that scholars avoid the temptation to select cases deceptively in order to confirm their argument. On the other side, many scholars in practice select cases in informal, often ad hoc, ways that are not carefully described or rationalized. Against these two alternatives, systematic case selection techniques offer ways of choosing cases for in-depth analysis such that the probability of learning from the cases is enhanced.

The main methodological justification for random sampling in case selection is that it prevents scholars from selecting cases on the basis of their fit to the substantive argument of interest (Fearon & Laitin, 2008). Yet scholars rarely carry out case studies on the hundreds to thousands of cases that would be needed to justify invoking the law of large numbers in conjunction with random sampling. With smaller samples of perhaps a dozen or fewer cases, random sampling has few desirable properties—other than eliminating scholarly license to manipulate results. In fact, any systematic case-selection algorithm has this same virtue; thus, there is really no viable justification for randomly selecting cases for in-depth study.

## Single-Case or Comparative-Case Selection

While systematic case selection has important advantages vis-à-vis unsystematic selection, and certainly no disadvantages in comparison with random sampling, the specific advantages of systematic case selection depend heavily on how cases are, in fact, selected.

The range of options is large (Seawright & Gerring, 2008), and different approaches offer divergent sets of advantages and disadvantages (Seawright, 2016). Before approaching specific case-selection algorithms, however, it is useful to consider the broader debate about whether cases should be selected individually or in comparative pairs (or sometimes larger groupings).

Some methodologists have argued against selecting cases in pairs or larger comparative groupings. The arguments emphasize two concerns. First, selecting cases comparatively may distract from the distinctive strength of within-case qualitative causal inference through methods such as process tracing. Second, comparative-case selection is often motivated by a logic of controlling for alternative explanations—a logic that is incomplete and may not buy researchers as much leverage for causal inference as they expect.

Central to most arguments for comparative-case selection is the idea of controlling for alternative explanations. Lijphart’s (1971) classic discussion of the comparative method treats controlling for alternative explanations as comparison’s central and defining goal. Slater and Ziblatt’s (2013) sophisticated contemporary argument in favor of comparative qualitative designs also emphasizes ideas of control, using the concept as one of three key criteria (alongside generality and representativeness) that characterize good qualitative cross-case comparisons. The central idea is to choose cases that are as similar as possible on a set of variables linked to alternative explanations. Then, it is inferred that any differences between those cases on the outcome variable of interest cannot be due to those alternative explanations. After all, whatever the causal effect of the variables connected to those explanations, it must be shared between the cases under study because they have the same value on the variables in question.

For example, Tudor (2013) compares long-term regime trajectories in India and Pakistan, justifying case selection based on the fact that the two countries shared a single colonial governing regime while part of the British Empire. The logic is that explanations of long-term political trajectories based on colonial heritage (Mahoney, 2010; Mattingly, 2015) cannot explain differences between India and Pakistan, because those countries had the same colonial experience and therefore should share the same effects of colonialism.

An unspoken assumption behind this kind of reasoning is that the selected cases share an underlying causal structure. If the selected cases are fundamentally different in ways that make the causal effect of matched variables diverge, then the effects of the variables will differ even though the two cases have the same scores on those variables. To continue the Tudor example, one might reasonably worry that the regions that would become India and Pakistan had preexisting economic, geographic, political, or social differences that made a notionally uniform British colonial administration function in ways that had quite different long-term political effects. When the structure of causal effects is not the same among compared cases, then matching cases on variables connected to alternative hypotheses produces only the illusion of controlling for those hypotheses. In order for this strategy to work, prior in-depth work would be needed to demonstrate that the causal structures connected with alternative explanations are indeed the same between the compared cases—pushing analysts heavily in the direction of within-case causal analysis.

Even when causal structures turn out to be highly similar among compared cases, and therefore control is feasible in principle, paired comparisons can never really make a strong case that they have eliminated rival explanations. The reason is a subtle recurrence of the same issues involved in selecting cases randomly. Because the law of large numbers does not apply to small samples, sheer chance remains a significant and hard-to-resolve rival explanation whenever scholars attempt to eliminate alternative hypotheses on the basis of a pair of cases.

Consider a research design in which a scholar has carefully selected two cases to be matched in every important regard other than the main independent variable of interest. The cases turn out to also differ substantially on the outcome variable. One reasonable reaction would be to propose that there is no causal structure involved at all, and that the outcome variable was simply a purely random toss-up with a 50% chance of having either score. Under this rival explanation, the probability of seeing a high score on the outcome in the first case and a low score in the second is 25%—a quite respectably high figure. The scholar would need to include at least three more cases to push this probability below the conventional threshold of 5%, and things can easily get worse with more complex chance models.
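The arithmetic behind this chance model is easy to verify. The sketch below (pure Python, using the 50% toss-up assumption from the text) computes the probability of any one specific outcome pattern across *n* cases and finds the smallest *n* that pushes that probability below the conventional 5% threshold.

```python
# Under the rival "pure chance" explanation, each case's outcome is an
# independent 50/50 toss-up, so any one specific pattern of outcomes
# across n cases has probability 0.5 ** n.

def pattern_probability(n_cases: int, p: float = 0.5) -> float:
    """Probability of one specific outcome pattern across n independent cases."""
    return p ** n_cases

def cases_needed(threshold: float = 0.05, p: float = 0.5) -> int:
    """Smallest n for which a specific pattern falls below the threshold."""
    n = 1
    while pattern_probability(n, p) >= threshold:
        n += 1
    return n

print(pattern_probability(2))  # 0.25: the 25% figure from the text
print(cases_needed())          # 5: two cases plus at least three more
```

With two cases the chance pattern retains a 25% probability; only at five cases does it fall to about 3.1%, which is why at least three additional cases are needed.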

Finally, there is the generally acknowledged problem that even carefully designed comparative-case studies in the social sciences can never choose cases that are genuinely identical on most of the variables of interest (Slater & Ziblatt, 2013, p. 1313). Even if cases share the same causal structure, some relevant variables may be unobservable, as yet undiscovered, or there may simply not be another case with a matching score on one of the variables of interest. Some cases may be relatively more closely matched than others, an assertion that is sometimes grounded in shared regional history or in before-after comparisons within a single case. Yet this kind of relative similarity does not resolve the issue; it only takes one plausibly relevant difference between cases to undermine the logic of control. To preserve that logic, then, scholars would need detailed evidence that existing differences between comparison cases do not in fact produce the observed difference between them on the outcome variable—once again pushing scholars heavily toward within-case causal inference as a means of assessing whether alternative explanations can be ruled out and support can be found for the explanation of interest.

These issues notwithstanding, there are still valuable reasons to consider selecting comparative cases. When scholars wish to make patterns of causal heterogeneity an explicit focus of research, comparative-case selection is effectively unavoidable. In making arguments about the boundaries of a relevant universe of cases or about types of cases with different reactions to a given cause, a comparison across cases is absolutely essential to make the phenomenon of heterogeneity visible. After all, how could a scholar credibly argue that their cause of interest works distinctively in a given set of cases without showing how it works in one of those cases and in a case from outside of the set? This kind of design need not derive causal inferential leverage from the comparison and may well do most of its work within cases, but the comparison is essential in making clear that a pattern of variation in causal effects in fact exists.

More generally, the selection of comparative cases can help make outcomes more surprising and worthy of explanation. If two cases are similar in a number of important ways but diverge powerfully in an outcome of interest, then the outcomes seem naturally more puzzling and worthy of explanation than either would be in isolation. For example, Lieberman’s (2003) analysis of divergences in tax compliance rates and AIDS policy between Brazil and South Africa benefits from this puzzle-making aspect of paired comparison. In isolation, it may not seem terribly striking that South Africans tend to comply with tax laws; presumably this is true in many other countries as well. Likewise, for anyone familiar with Latin American political stereotypes, it may well be unsurprising that Brazilians tend to avoid taxes. Hence, the comparative conjunction of the two countries, with a range of economic similarities, has the benefit of making each country’s outcome more surprising and more evidently in need of explanation than would be the case in a single-country study. By clarifying that the outcome could plausibly have been other than it was in each case, comparative-case selection helps impart importance to each case study.

## Selecting Cases for Comparative Analysis

Discussions of how to select cases comparatively begin with Mill’s methods, developed by John Stuart Mill in the 19th century, with attention addressed primarily to social scientific reinterpretations of the Method of Difference and the Method of Agreement (Mill, 2002; and see Skocpol, 1979, pp. 35–37). The Method of Agreement involves selecting a series of cases that are as different as possible on all variables other than the outcome; then, a search is conducted for any variables that might turn out to be similar across the cases, and any identifiable similarity is a candidate cause. Unfortunately, if multivariate causation is a viable alternative hypothesis, the logic of the Method of Agreement is spurious: if A and B are both powerful causes of the outcome, then the first case might have a high score on the outcome because of variable A, while the second has a similar score because of variable B. Since the two cases do not share scores on the main causal variables, the Method of Agreement incorrectly rules those variables out. Thus, the Method of Agreement is a useful case-selection strategy only when there is good reason to believe that multivariate causation is not possible.

For this and similar reasons, most methodologists focus instead on the Method of Difference, in which cases are chosen to be as similar as possible, apart from a contrast on one variable. Then, in-depth analysis hopefully reveals a second difference, and because everything else is as similar as possible, that second difference is inferred to be either a cause or an effect of the first difference. Thus, for example, Lieberman shows Brazil and South Africa to have similar economic structures but to differ strikingly in terms of the structure of political identity cleavages—as well as in terms of tax compliance. The inference is, therefore, that identity cleavage structure is connected with tax compliance. Of course, for the reasons discussed above, this inferential logic is weak; nonetheless, the Method of Difference remains valuable because it is the comparative framework that highlights differences in outcomes and makes clear why each outcome deserves analytic attention.
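The two selection logics can be made concrete with a toy sketch. The code below (pure Python) applies the Method of Agreement and the Method of Difference to invented binary case data; the case names and variable scores are hypothetical and chosen only to illustrate the mechanics.

```python
# Toy binary data: each case is a dict of variable scores (all invented).
cases = {
    "A": {"outcome": 1, "cleavage": 1, "wealth": 1, "federal": 0},
    "B": {"outcome": 1, "cleavage": 1, "wealth": 0, "federal": 1},
    "C": {"outcome": 0, "cleavage": 0, "wealth": 1, "federal": 1},
}

def method_of_agreement(cases, outcome="outcome"):
    """Variables shared by all positive-outcome cases: candidate causes."""
    positives = [v for v in cases.values() if v[outcome] == 1]
    shared = {}
    for var in positives[0]:
        if var != outcome and len({c[var] for c in positives}) == 1:
            shared[var] = positives[0][var]
    return shared

def method_of_difference(case1, case2, outcome="outcome"):
    """Variables on which two otherwise-similar cases differ: candidate causes."""
    return [v for v in case1 if v != outcome and case1[v] != case2[v]]

print(method_of_agreement(cases))                    # {'cleavage': 1}
print(method_of_difference(cases["A"], cases["C"]))  # ['cleavage', 'federal']
```

Note how the sketch also exposes the multivariate-causation problem from the text: if "wealth" and "federal" were each sufficient causes, the Method of Agreement would wrongly rule both out because the positive-outcome cases do not share scores on them.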

Most scholars have historically carried out Method of Difference case selection informally, using their in-depth knowledge to find a pair of cases that appear strikingly similar in terms of some important background causes. While this informal mode of case selection has been used to produce outstanding studies, it is highly unlikely that it can often produce the best possible pair of matched cases. After all, the number of candidate pairs grows quadratically with the number of cases, and the complexity of each comparison rises with the number of variables to be considered. Given the cognitive limitations of even the smartest scholars, it is highly likely that they often miss the best-matched pairs of cases.

For these reasons, methodologists have recommended a move toward computerized, often statistical matching algorithms for comparative-case selection. Seawright and Gerring (2008) raise the possibility of matching as a way to choose similar cases, and Nielsen (2015) advances the discussion by suggesting the concrete algorithm of coarsened exact matching, a procedure that allows intimate control of the matching process while still retaining algorithmic precision. Given the large number of other matching and clustering algorithms that have been developed, there are, no doubt, other alternatives with their own sets of virtues and drawbacks. Furthermore—as with all case-selection algorithms—it is important that the statistical procedures involved be done well. Thus, for example, if inspection of initial case selection results reveals an obvious confounding variable that was omitted from the analysis, then the matching should be redone with the new confounder included. Regardless of the specific details of the procedure that is used, there is good reason to suggest that scholars draw on formal algorithms to select comparable cases, rather than using traditional, informal alternatives. Of course, these algorithms—along with the single-case algorithms to be discussed in the next section—require the existence of a data set with an adequate array of indicators measuring the variables of interest. We will return below to the question of what to do when this is not the case.
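A minimal sketch of the coarsened-exact-matching idea for comparative-case selection might look like the following (pure Python; this is a simplified illustration, not Nielsen's implementation, and the country names, variables, and cut points are all hypothetical). Continuous covariates are coarsened into bins, and cases that fall in the same stratum but differ on the treatment variable become candidate matched pairs.

```python
from itertools import combinations

# Hypothetical case data: background covariates plus the treatment of interest.
cases = {
    "Alpha": {"gdp_pc": 2100, "colonial": 1, "treatment": 1},
    "Beta":  {"gdp_pc": 2400, "colonial": 1, "treatment": 0},
    "Gamma": {"gdp_pc": 9500, "colonial": 0, "treatment": 1},
    "Delta": {"gdp_pc": 8800, "colonial": 0, "treatment": 1},
}

def coarsen(value, cutpoints):
    """Index of the bin the value falls into, given ascending cut points."""
    return sum(value >= c for c in cutpoints)

def cem_pairs(cases, cutpoints, treatment="treatment"):
    """Pairs in the same coarsened stratum that differ on the treatment."""
    def stratum(c):
        return tuple(
            coarsen(c[v], cutpoints[v]) if v in cutpoints else c[v]
            for v in sorted(c) if v != treatment
        )
    pairs = []
    for (n1, c1), (n2, c2) in combinations(cases.items(), 2):
        if stratum(c1) == stratum(c2) and c1[treatment] != c2[treatment]:
            pairs.append((n1, n2))
    return pairs

print(cem_pairs(cases, {"gdp_pc": [5000]}))  # [('Alpha', 'Beta')]
```

The choice of cut points plays the role discussed in the text: it is the point at which the researcher exercises substantive control over what counts as "similar," while the pairing itself remains algorithmic. Redoing the matching with a newly discovered confounder amounts to adding that variable (and its cut points) to the data and rerunning.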

## Methods for Single-Case Selection

While comparative framing can help motivate case-study analysis, most of the work of descriptive and causal inference in this mode of research happens within individual cases. Therefore, it usually makes sense to choose cases individually, with the goal of maximizing the probability of causal discovery. The causal discovery in question may be a surprising revision of received wisdom, involving sources of measurement error, novel alternative hypotheses, or new information about the nature of the causal pathway connecting the cause and the effect. Alternatively, a case study may simply help a scholar discover the best available evidence in support of the existing theory. In any case, maximizing the probability of some causally relevant discovery is a way of helping ensure the value of the case study.^{1}

How, then, can scholars systematically select single cases for in-depth study? Seawright and Gerring (2008) discuss a broad set of alternatives. In brief, a systematic case selection process requires a preexisting cross-case data set. If scholars do not know enough about the cases to either collect existing data or score the relevant variables themselves, then case selection will be more difficult, as will be discussed later on.

Assuming that comparable measures do exist, most variables naturally fall into one of three categories: hypothesized causes, outcomes, and background or control variables. Because these are usually the only kinds of measured variables, they define the set of possible case-selection strategies. Scholars might select cases with extreme values on one of these variables, creating an extreme-*X* case-selection rule on the hypothesized cause or an extreme-*Y* rule on the outcome.^{2} They might also analyze cases where the observed score on the outcome is close to (typical case selection) or far from (deviant case selection) the conditional mean of the dependent variable, given the hypothesized cause and the background variables. Virtually all other proposed algorithms for single-case selection are weighted combinations of these initial possibilities.
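All of these selection rules can be computed from the same cross-case data set. The sketch below (Python with NumPy; the data-generating process is invented purely for illustration) fits an ordinary least squares regression of the outcome on the hypothesized cause and a control, then identifies the case each rule would pick.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)               # hypothesized cause
z = rng.normal(size=n)               # background/control variable
y = 2 * x + z + rng.normal(size=n)   # outcome (invented data-generating process)

# OLS of y on x, z, and a constant; the residual measures each case's
# distance from the conditional mean of the outcome.
design = np.column_stack([np.ones(n), x, z])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
residuals = y - design @ beta

typical_case = np.argmin(np.abs(residuals))     # closest to the conditional mean
deviant_case = np.argmax(np.abs(residuals))     # farthest from the conditional mean
extreme_x    = np.argmax(np.abs(x - x.mean()))  # extreme on the cause
extreme_y    = np.argmax(np.abs(y - y.mean()))  # extreme on the outcome
```

Note that the typical, deviant, and extreme rules are all defined relative to the same fitted model, so improving the regression (say, by adding a newly discovered control) changes which cases each rule selects.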

Of these options, qualitative and multi-method scholars have long emphasized three: typical cases,^{3} meaning those whose outcome is close to its conditional mean and therefore as unsurprising as possible in relation to the general pattern; deviant cases, that is, those whose outcome is as far from the conditional mean as possible; and extreme cases on the outcome or *Y*. Lieberman (2005), in an influential analysis, proposed that typical cases should serve a confirmatory role as tests of an already credible hypothesis, while deviant cases should serve an exploratory role as a source of insights into new causal factors or other problems with the existing analysis. Extreme cases on *Y* are perhaps the most common case-selection rule in qualitative practice to date, because of the intuitive value of studying the purest examples of a given phenomenon (Collier & Mahoney, 1996).

Recent, more formal analysis raises concerns about these traditional strategies (e.g., Seawright, 2016), as well as about potentially overlooked strengths of these and other case-selection approaches. In the discussion below, I assume, along with several qualitative methodologists, that researchers are most likely to detect features of a case that are unusual in comparison with the relevant universe of cases (Collier & Mahoney, 1996, pp. 72–75; Flyvbjerg, 2006, pp. 224–228; Ragin, 2004, pp. 128–130). For example, in a cross-national regression, where overall levels of economic inequality are an important omitted variable, it would be relatively easy to notice that variable in case studies of countries like Brazil, South Africa, or Namibia (among the most unequal societies), or of countries like Denmark, Sweden, or the Czech Republic (among the least unequal societies). By contrast, it may be much harder to realize the importance of inequality through in-depth study of countries like Madagascar, Turkey, or Mexico, which fall somewhere in the middle of the global distribution of inequality. That is, a case study is more likely to succeed the further the quantity to be discovered is from its population mean.

## Typical Cases

Consider first typical cases. They are, by definition, cases that fit closely with the overall descriptive pattern across the population of interest. Therefore, these are cases that are distinctively unlikely to contain problems of measurement error or surprising omitted variables—because those sorts of discoveries tend to push cases away from the cross-case pattern. Specifically, typical-case selection tends to *reduce* scholars’ probability of making discoveries about sources of measurement error, omitted and confounding variables, variables that help explain a causal pathway and point toward the nature of the causal mechanism, and unknown sources of causal heterogeneity.

Case-study methodologists have long shared an intuition that typical-case selection should be a good idea. After all, these are the cases that best fit the overall relationships among variables. Yet this is exactly why typical-case selection is ineffective when the goal is to test a given theory or to discover more about the relationship in question than what is already known. Simply put, it is hard to learn about problems with the background state of knowledge, as represented in a regression, by looking at the cases that fit well in that regression. Because typical-case selection thus minimizes the probability of discovering major errors in current understanding, this approach is unusually weak as a test of a theory. If typical cases are used to test whether a given hypothesis is true, the case-selection rule reduces or sometimes even minimizes the chance of discovering the sorts of facts that would disprove the hypothesis. Hence, even a successful demonstration of evidence consistent with a theory in a typical case is of reduced value, because the odds of finding problems if they exist have been undermined by the case-selection rule.

## Deviant Cases

The received wisdom regarding case selection is that deviant cases are good for finding omitted variables. This claim turns out to be problematic (Seawright, 2016). Deviant cases are attractive in searching for omitted variables in that they subtract out the effects of independent variables that are already known, highlighting large unknown effects. Yet the technical working of most families of statistical models undermines the value of deviant cases. Statistical techniques generally attribute the effects of omitted variables on the included variables to the greatest extent that they can; as a result, deviant cases will tend to not have extreme scores on omitted variables that are related to the included variables, but rather only on unrelated or weakly related variables. But in fact, the most important omitted variables are the ones that are related to the included variables, because those are the ones that distort cross-case causal inferences. Hence, deviant-case selection is less helpful than one might wish in finding omitted variables.

Nonetheless, deviant cases can be more broadly useful. This case-selection rule has value as a way of finding sources of measurement error in the dependent variable; in a regression model, such measurement problems are often pushed into the error term, and therefore can be discovered via close study of cases with extreme estimated values on that error term. Furthermore, and perhaps a bit surprisingly, deviant cases can be a useful way to discover new information about causal pathways connecting the main independent variable with the main dependent variable, as well as unknown sources of causal heterogeneity. Each of these uses is considered in turn below.

Deviant-case selection can also help scholars find out about reasons for measurement error in the outcome variable. The reason is that measurement error has to go somewhere in the regression, and when the error is in the outcome variable, the natural place for it to go is into the residual. In fact, with a large enough sample size, the estimated residual from a regression predicting *Y* amounts to the measurement error plus the residual from the regression using *Y* when measured without error. For that reason, choosing cases whose residuals are as far from zero as possible will increase the probability of choosing cases with large amounts of measurement error (as well as cases with large residuals for other reasons, a possibly unwanted side effect). Thus, deviant-case selection can help find cases with large amounts of measurement error, and therefore can facilitate discovery of the reasons for measurement error in the outcome variable.
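The claim that outcome measurement error ends up in the residual can be illustrated with a small simulation (Python with NumPy; all quantities are invented). A handful of cases receive large measurement error on *Y*, and deviant-case selection on the absolute residual tends to recover exactly those cases.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y_true = 1.5 * x + rng.normal(scale=0.5, size=n)   # outcome measured without error

# Inject large measurement error into a known handful of cases.
mismeasured = np.array([3, 50, 99, 120, 180])
y_obs = y_true.copy()
y_obs[mismeasured] += rng.choice([-4.0, 4.0], size=mismeasured.size)

# Regression on the observed (error-laden) outcome.
design = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(design, y_obs, rcond=None)
residuals = y_obs - design @ beta

# Deviant-case selection: the five largest absolute residuals.
deviant = np.argsort(np.abs(residuals))[-5:]
recovered = np.intersect1d(deviant, mismeasured)
print(f"{recovered.size} of 5 mismeasured cases flagged")
```

Because the injected error dwarfs the true residual variance in this setup, the flagged cases are overwhelmingly the mismeasured ones, mirroring the "possibly unwanted side effect" caveat: with smaller error variance, cases with large residuals for other reasons would also appear.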

A further surprising finding is that deviant-case selection can help with the goal of learning about the variables that make up a causal pathway. The residual of a regression of *Y* on *X* will, in part, measure the extent to which a given case has an unusually large or small causal effect of *X _{i}* on *Y _{i}* (Morgan & Winship, 2007, p. 135). Given standard representations of causal pathways, there are three ways a case’s overall causal effect can be unusual. First, *X _{i}* may have an unusual effect on the causal pathway variables for this case, which in turn have about the usual effect on *Y _{i}*. Second, the causal pathway variables may have an unusual effect on *Y _{i}* for this case. Third, *X _{i}* may, in this instance, have an unusual direct effect on *Y _{i}*, net of the causal pathway of interest. The second and third of these patterns will produce cases that are unhelpful in terms of learning about causal pathway variables and discovering mechanisms; the first, however, will tend to produce cases that help. Thus, deviant-case selection can help in finding out about unknown or incompletely understood causal pathways.

The same basic logic also allows deviant-case selection to uncover evidence of unknown sources of causal heterogeneity. Recall that the source of heterogeneity is, by definition, correlated with the magnitude of the main causal effect. As argued in the last paragraph, cases for which the effect of *X _{i}* on *Y _{i}* is quite different from the population average also tend to have regression residuals that are large in absolute value. Hence, selection based on the regression residual has a reasonable chance of turning up cases for which the source of heterogeneity, *P _{i}*, is far from its mean, and therefore facilitating case-study discovery of that source of causal heterogeneity.

However, there are some goals for which deviant-case selection is simply not helpful. Consider first the goal of discovering sources of measurement error in *X*. Usually, scholars assume that measurement error is independent of systematic variables and is not terribly large in variance. Under those assumptions, deviant-case selection is only marginally useful in finding sources of measurement error in *X*; the only contribution comes because a portion of that measurement error will end up in the residual. However, the value is limited because only part of the measurement error’s variance is combined with the whole variance of the true residual, resulting in indirect and watered-down case selection.

Of course, if the goal is to find a case where the effect of *X* on *Y* is close to the population average, the selection of a deviant case would be outright harmful, which is why no one has ever suggested studying deviant cases for this purpose. As discussed earlier, deviant-case selection increases the probability of selecting cases with extremely atypical causal effects, and therefore works *against* this goal.

To summarize, deviant cases are valuable for several kinds of discovery: learning about sources of measurement error in the outcome, discovering information about the causal pathway connecting *X* and *Y*, and finding out about sources of causal heterogeneity. The technique also has some limited value for discovering confounding variables. The value of this case-selection rule has been underestimated and misunderstood in the literature to date, which has mostly emphasized its potential contribution in terms of omitted variables.

## Extreme Cases

There are two variants of the extreme-cases strategy: extreme cases on the independent variable and the more frequently discussed extreme cases on the outcome. Just like selecting deviant cases, choosing cases with extreme values of *X _{i}* is a valuable and underrated strategy, and is more broadly applicable than selecting cases with extreme values of the dependent variable, *Y _{i}*. Indeed, while extreme cases on *Y* can be helpful, deviant cases are often superior.

Consider first the task of learning about the reasons for measurement error; here, success requires selecting cases in which the variable of interest (*X* or *Y*) is especially badly measured. Choosing cases as far as possible from the mean on the error-laden version of *X* is, by definition, the same as maximizing the combination of the true value of *X* and the measurement error. Hence, as long as the measurement error is not negatively correlated with the true value, extreme-case selection on *X* increases a scholar’s chances of finding cases with a good deal of measurement error.

Obviously, this argument applies equally to the task of finding measurement error on *Y* using extreme-case selection on the outcome variable. However, and perhaps somewhat surprisingly, deviant-case selection will typically outperform extreme-case selection on the dependent variable for the task of finding measurement error on that variable. This is because the regression filters out some of the true variance on *Y*, leaving a residual whose variance is more heavily composed of measurement error than the original variable.

When the goal of case-study research is to discover omitted variables, extreme-case selection on the dependent variable can have real value. The outcome variable can be partitioned into three components relative to a regression model when there is an omitted variable: the part of the variable that can be systematically predicted within the regression; the unexplainable part of the variable that has nothing to do with the omitted variable in question; and the part of the variable that cannot be predicted within the regression, but is nonetheless linked to the omitted variable. The second of these components is always useless for discovering omitted variables, and the third is always helpful. If the omitted variable is independent of the included variables, the first component is completely irrelevant. On the other hand, if the omitted variable is related to the main causal variable or some included control variable, then the first component will be contaminated by the omitted variable—and thus will also contribute to finding the omitted variable. Thus, extreme-case selection on the dependent variable works well in this context, which is the one in which the stakes are highest.

Extreme-case selection on the main causal variable is also a good idea when the stakes are highest. If the omitted variables are not confounders, that is, they are independent of the included variables, then extreme cases on the main causal variable should be altogether unhelpful. After all, the cause in this situation contains no information about the omitted variables. However, when the omitted variable is correlated with the main causal variable, then the success rate of an extreme-case selection rule on the main causal variable will depend directly on the strength of the correlation. Since omitted variables matter most when they are strong confounders, and therefore substantially related to the cause of interest, this technique should have an advantage when it matters most.
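This logic can be checked by simulation (Python with NumPy; the variables and coefficients are invented for illustration). When an omitted confounder *Z* is strongly correlated with *X*, the cases most extreme on *X* also tend to be extreme on *Z*, so studying them gives a real chance of noticing *Z*; when *X* and *Z* are independent, extreme-*X* cases look ordinary on *Z*.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)                        # omitted confounder
x_conf = 0.8 * z + 0.6 * rng.normal(size=n)   # cause correlated with z
x_indep = rng.normal(size=n)                  # cause independent of z

def mean_abs_z_among_extreme(x, z, k=20):
    """Mean |z| among the k cases most extreme on x."""
    extreme = np.argsort(np.abs(x - x.mean()))[-k:]
    return np.abs(z[extreme]).mean()

# When x and z are correlated, extreme-x cases carry unusually large |z|;
# when they are independent, extreme-x cases are unremarkable on z.
print(mean_abs_z_among_extreme(x_conf, z))
print(mean_abs_z_among_extreme(x_indep, z))
```

The first quantity is substantially larger than the second, matching the argument above: the stronger the confounding correlation, the better extreme-*X* selection performs at surfacing the omitted variable.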

Extreme-case selection on the cause can also be a very strong approach when the emphasis in the case-study research is on discovering or demonstrating the existence of a pathway variable *P* causally connecting a causal variable *C* and the effect *E*, a goal pursued by selecting cases with extreme values on *P*. When the average effect of *C* on *P* is large, a case where *C* takes on an unusual value will be more likely to have an unusual value for *P* as well. Hence, when the key independent variable is an important cause of the outcome, and the pathway of interest captures a large share of the overall effect, extreme-case selection on the cause is a good idea.

Extreme-case selection on the outcome will also work well when the pathway variable explains much or most of the variation in the outcome—because in these contexts, extreme cases on the outcome are likely to be cases where the pathway variable is extreme, as well. Deciding whether extreme-case selection on the dependent variable beats other approaches requires some analytic thought, however. Suppose that the outcome takes on an extreme value because the pathway variable also takes on an extreme value. This can happen in one of two ways. First, the pathway variable may take on an extreme value because the main cause also takes on an extreme value. In this case, selection on the cause should usually be as useful as selection on the outcome. Second, the pathway variable may take on an extreme value even though the cause does not, because of some kind of unobserved uniqueness in the case in question. If this is so, then deviant-case selection is likely to pick up the case in question. Either way, whenever extreme-case selection on the outcome is useful for finding pathway variables, it is to be expected that either selection on the cause or deviant-case selection would be as good or better.

When scholars wish to discover unknown sources of causal heterogeneity, extreme cases are less useful than deviant cases. In the first place, extreme cases on the independent variable are altogether unhelpful here. After all, the *value* of the main causal variable should generally tell us little about the *causal effect* of that variable on the outcome, and in fact the two quantities are usually assumed to be independent.

Extreme cases on the outcome are more relevant, but still not as good as deviant cases. Intuitively, when the effect of the independent variable for a given case is unusually large or small, and when that variable takes on an unusual value, it stands to reason that the outcome will also take on an unusual value. Unfortunately, the outcome can also take on an unusual value even when the causal effect for the case is perfectly average, if the cause takes on a sufficiently unusual value. Deviant-case selection deals with this possibility because the residual for a given case accounts for the value of the main causal variable in that case. Thus, deviant-case selection captures the benefits of extreme cases on the outcome for this goal while also eliminating one scenario in which that procedure fails.
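The contrast between the two selection rules can be made concrete with a toy dataset; the scores below are hypothetical, constructed so that one case has an extreme outcome only because its cause is extreme, while another departs sharply from the overall pattern.

```python
import numpy as np

# Hypothetical scores: y roughly follows 2*x. Index 5 has an extreme
# outcome driven by an extreme cause; index 2 breaks from the pattern.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 10.0])
y = np.array([0.0, 2.0, 10.0, 6.0, 8.0, 20.0])

# Extreme-case selection on Y: the case furthest from the mean outcome
extreme_on_y = int(np.argmax(np.abs(y - y.mean())))

# Deviant-case selection: the case with the largest absolute OLS residual
slope, intercept = np.polyfit(x, y, 1)
deviant = int(np.argmax(np.abs(y - (slope * x + intercept))))

print(extreme_on_y, deviant)  # the two rules choose different cases
```

Extreme-case selection on the outcome flags the case whose outcome is unusual only because its cause is unusual, while deviant-case selection, by conditioning on the cause, flags the case that genuinely departs from the regression line.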

Overall, extreme-case selection on the main independent variable is a powerful, underappreciated approach to choosing cases for in-depth analysis. This is a strong approach for discovering measurement error and examining causal pathways, and it can also be useful in some omitted-variable scenarios. Case-study scholars should seriously consider adding this approach to their applied repertoire.

## Summary of Cases

Deviant cases and extreme cases on the main independent variable are the most efficient ways to choose cases for close analysis when the goal is discovery. Deviant cases are valuable for discovering sources of measurement error, information about causal pathways, and sources of causal heterogeneity. Extreme cases on *X* are useful for inquiring into sources of measurement error on the treatment variable and for discovering the most important and powerful confounding variables. Perhaps somewhat surprisingly, deviant-case selection or extreme-case selection on *X* usually achieves the same goals more efficiently than the frequently discussed and applied approaches of selecting typical cases or extreme cases on the dependent variable. These techniques stand in need of a different kind of justification if they are to continue in use.

## Choosing Cases Without Systematic Data

The discussion thus far has assumed that scholars possess systematic data about the set of cases that constitute the population of interest, as well as about all relevant variables (including the outcome of interest, the main cause or causes, and any possible confounding variables). Such data might be at various levels of measurement, or might even be purely typological (Elman, 2005); as long as scholars know enough about the cases to assign a measurement for each relevant variable to all of the cases of interest, the discussion stands.

However, this condition is not always met. With some topics, scores will only be available for some cases, perhaps because they involve emergent scholarly concerns for which little systematic evidence has yet been collected or because measurement procedures are resource-intensive and unwieldy. In other situations, scores may be readily available for some but not all of the variables of interest.

When scholars do not have enough information to fully and systematically describe the cases of interest, they may still be able to use some of the techniques described above. For example, suppose that the available data do not measure the outcome of interest or all possible confounders, but they do contain a usable measure of the key causal variable. In such a scenario, it would be impossible to select deviant cases or extreme cases on the outcome, but it would be entirely feasible to select extreme cases on the cause. More generally, it is a good idea to explore the option of using whatever partial data exist to carry out whichever systematic case-selection algorithms are possible.

However, in some research contexts, there will simply be no meaningful data. Here, not enough is known to systematically score cases on the outcome or the cause. Hence, regression-type approaches, matching-based alternatives, and extreme-case options are all out of the question. What might scholars do in such extreme research scenarios?

One feasible option is to select cases via an informal approximation of the extreme-cases approach. When very little is known, one cannot select cases toward the top or the bottom of an empirical distribution, but one can seek out the cases that represent the purest or best available examples of the main causal variable of interest. While a purely qualitative search for the best examples of the cause is unlikely to correspond exactly with a formal extreme-cases-on-*X* strategy, such an approach nonetheless preserves the logic that makes these cases powerful tools for discovery, and it provides an option in the most challenging of circumstances.

Table 1. Intended Goals and Efficient Uses of Case-Selection Techniques

Case Selection Technique | Intended Goal | Efficient Uses
---|---|---
Random | Representativeness; elimination of bias | None
Most-Similar | Quasi-statistical control | None
Typical | Test of theory; demonstration of theory in favorable context | Demonstration of theory in favorable context
Deviant | Discovery of omitted variables; discovery of measurement error; discovery of causal pathway | Discovery of measurement error; discovery of causal pathway
Extreme on *X* | Discovery of omitted variables; discovery of measurement error; discovery of causal pathway | Discovery of omitted variables; discovery of causal pathway
Extreme on *Y* | Discovery of omitted variables; discovery of measurement error; discovery of causal pathway | None

## Conclusions

Methodological advice about case selection has been in a state of rapid development in recent years. No longer is it necessary for scholars to begin each case study by constructing a bespoke case-selection procedure. Instead, increasingly strong guidance is available. Methodological arguments in this domain push against much of the traditional practice of case-study research. Paired or grouped selection of cases appears to offer less benefit, and to rely more on other sources of analytic leverage, than current practice might suggest; for single-case selection, the best approaches are often not those that appeal most to scholars' intuition. Table 1 reviews the arguments explored above, emphasizing the surprisingly robust contribution of extreme cases on the main causal variable and of deviant cases, as well as the relatively limited efficient uses of other approaches. These perhaps counter-intuitive results are a mark of the health of the debate about case selection. By working through these issues analytically, scholars have learned that they can do better than the traditional approaches.

## References

Collier, D., & Mahoney, J. (1996). Insights and pitfalls: Selection bias in qualitative research. *World Politics*, *49*(1), 56–91.

Elman, C. (2005). Explanatory typologies in qualitative studies of international politics. *International Organization*, *59*(2), 293–326.

Fearon, J. D., & Laitin, D. D. (2008). Integrating qualitative and quantitative methods. In J. Box-Steffensmeier, H. E. Brady, & D. Collier (Eds.), *The Oxford handbook of political methodology* (pp. 300–318). New York: Oxford University Press.

Flyvbjerg, B. (2006). Five misunderstandings about case-study research. *Qualitative Inquiry*, *12*(2), 219–245.

Herron, M. C., & Quinn, K. M. (2015). A careful look at modern case selection methods. *Sociological Methods and Research*. Advance online publication.

Lieberman, E. S. (2003). *Race and regionalism in the politics of taxation in Brazil and South Africa*. Cambridge, U.K.: Cambridge University Press.

Lieberman, E. S. (2005). Nested analysis as a mixed-method strategy for comparative research. *American Political Science Review*, *99*, 435–452.

Lijphart, A. (1971). Comparative politics and the comparative method. *American Political Science Review*, *65*, 682–693.

Mahoney, J. (2010). *Colonialism and post-colonial development: Spanish America in comparative perspective*. Cambridge, U.K.: Cambridge University Press.

Mattingly, D. C. (2015). Colonial legacies and state institutions in China: Evidence from a natural experiment. *Comparative Political Studies*. Advance online publication.

Mill, J. S. (2002). *A system of logic: Ratiocinative and inductive*. Honolulu: University Press of the Pacific. (Original work published 1891).

Morgan, S. L., & Winship, C. (2007). *Counterfactuals and causal inference: Methods and principles for social research*. Cambridge, U.K.: Cambridge University Press.

Nielsen, R. A. (2015). Case selection via matching. *Sociological Methods and Research*. Advance online publication.

Ragin, C. C. (2004). Turning the tables: How case-oriented research challenges variable-oriented research. In H. E. Brady & D. Collier (Eds.), *Rethinking social inquiry: Diverse tools, shared standards* (pp. 123–138). Lanham, MD: Rowman and Littlefield.

Seawright, J. (2016). The case for selecting cases that are deviant or extreme on the independent variable. *Sociological Methods and Research*. Advance online publication.

Seawright, J., & Gerring, J. (2008). Case selection techniques in case study research: A menu of qualitative and quantitative options. *Political Research Quarterly*, *61*, 294–308.

Skocpol, T. (1979). *States and social revolutions: A comparative analysis of France, Russia, and China*. Cambridge, U.K.: Cambridge University Press.

Slater, D., & Ziblatt, D. (2013). The enduring indispensability of the controlled comparison. *Comparative Political Studies*, *46*, 1301–1327.

Tudor, M. (2013). *The promise of power: The origins of democracy in India and autocracy in Pakistan*. Cambridge, U.K.: Cambridge University Press.

## Notes

(1.) There are other ways of framing case selection. Herron and Quinn (2015), for example, analyze case selection as a way of learning about the joint distributions of the cause, the outcome, and a previously unmeasured confounding variable. Using Bayesian methods, it is then possible to correct an estimate of the overall causal effect so as to remove the distortions caused by the confounder. This is an interesting suggestion, but one that seems to depend heavily on prior causal discoveries of the sort discussed in this section.

(2.) It is hard to imagine a situation in which selecting extreme values on the control variables would be useful.

(3.) The phrase “typical case” can mean several different things. On a common language reading, it might refer to a case that does a good job of representing the population or category of which it is a member. This is a misleading usage for social science purposes, as no case can ever adequately represent the diversity present in a social population. An alternative meaning might focus on typicality with respect to a given variable, as in choosing a case near the mean or mode of that variable. This approach would be the opposite of an extreme case selection rule and would thus have strengths and weaknesses that are the mirror image of that algorithm. Since extreme case selection is, on balance, quite attractive, a strategy of choosing cases in the middle of the distribution on a given variable would logically be unattractive.