Diana Kapiszewski, Lauren M. MacLean, and Benjamin L. Read
Generations of political scientists have set out for destinations near and far to pursue field research. Even in a digitally networked era, the researcher’s personal presence and engagement with the field context continue to be essential. Yet exactly what does fieldwork mean, what is it good for, and how can scholars make their time in the field as reflective and productive as possible? Thinking of field research in broad terms—as leaving one’s home institution to collect information, generate data, and/or develop insights that significantly inform one’s research—reveals that scholars of varying epistemological commitments, methodological bents, and substantive foci all engage in fieldwork. Moreover, they face similar challenges, engage in comparable practices, and even follow similar principles. Thus, while every scholar’s specific project is unique, we also have much to learn from each other.
In preparing for and conducting field research, political scientists connect the high-level fundamentals of their research design with the practicalities of day-to-day inquiry. While in the field, they take advantage of the multiplicity of opportunities that the field setting provides and often triangulate by cross-checking among different perspectives or data sources. To a large extent, they do not regard initial research design decisions as final; instead, they iteratively update concepts, hypotheses, the research question itself, and other elements of their projects—carefully justifying these adaptations—as their fieldwork unfolds. Incorporating what they are learning in a dynamic and ongoing fashion, while also staying on task, requires both flexibility and discipline.
Political scientists are increasingly writing about the challenges of special types of field environments (such as authoritarian regimes or conflict settings) and about issues of positionality that arise from their own particular identities interacting with those of the people they study or with whom they work. So too, they are grappling with what it means to conduct research in a way that aligns with their ethical commitments, and what the possibilities and limits of research transparency are in relation to fieldwork. In short, political scientists have joined other social scientists in undertaking critical reflection on what they do in the field—and this self-awareness is itself a hallmark of high-quality research.
Gaurav Sood and Yphtach Lelkes
The news media have been disrupted. Broadcasting has given way to narrowcasting, editorial control to control by “friends” and personalization algorithms, and a few reputable producers to millions with shallower reputations. Today, not only is there a much broader variety of news, but there is also more of it. The news is also always on, and it is available almost everywhere. Search costs have come crashing down, so much so that much of the world’s information is at our fingertips. Google anything and the chances are that there will be multiple pages of relevant results.
Such a dramatic expansion of choice and access is generally considered a Pareto improvement. But the worry is that we have fashioned defeat from the bounty by choosing badly. The expansion in choice is blamed both for increasing the “knowledge gap” (the gap between how much the politically interested and the politically disinterested know about politics) and for increasing partisan polarization. We reconsider the evidence for these claims. The media’s purported role in rising knowledge gaps needs no explaining, because knowledge gaps are not in fact increasing. For polarization, the story is more nuanced. What evidence exists suggests that the effect is modest, but measuring the long-term effects of a rapidly changing media landscape is hard, which may explain these results.
As we also find, even describing trends in basic explanatory variables is hard. Current measures are beset with five broad problems. The first is conceptual error: for instance, people frequently equate a preference for information from partisan sources with a preference for congenial information. Second, survey measures of news consumption are heavily biased. Third, behavioral survey-experimental measures are unreliable and ill suited for learning how much information of a particular kind people consume in their real lives. Fourth, measures based on passive observation of behavior capture only a small (and likely biased) share of the total information people consume. Fifth, content is often coded crudely: broad judgments are made about coarse units, eliding important variation.
These measurement issues impede our ability to determine the extent to which people choose badly, and with what consequences. Improving measures will do much to advance our ability to answer these important questions.
Micah Dillard and Jon C.W. Pevehouse
Scholarship in international relations has taken a more quantitative turn in the past four decades. The field of foreign policy analysis was arguably the forerunner in the development and application of quantitative methodologies in international relations. From public opinion surveys to events data to experimental methods, many of the earliest uses of quantitative methodologies can be found in foreign policy analysis. On substantive questions ranging from the causes of war to the dynamics of public opinion, the analysis of data quantitatively has informed numerous debates in foreign policy analysis and international relations. Emerging quantitative methods will be useful in future efforts to analyze foreign policy.
Shannon Carcelli and Erik A. Gartzke
Deterrence theory is slowly beginning to emerge from a long sleep after the Cold War, and from its theoretical origins over half a century ago. New realities have led to a diversification of deterrence in practice, as well as to new avenues for its study and empirical analysis. Three major categories of changes in the international system—new actors, new means of warfare, and new contexts—have led to corresponding changes in the way that deterrence is theorized and studied. First, the field of deterrence has broadened to include nonstate and nonnuclear actors, which has challenged scholars with new types of theories and tests. Second, cyberthreats, terrorism, and diverse nuclear force structures have led scholars to consider means in new ways. Third, the likelihood of an international crisis has shifted as a result of physical, economic, and normative changes in the costs of crisis, which has led scholars to address the crisis context itself more closely. The assumptions of classical deterrence are breaking down, in research as well as in reality. However, more work needs to be done in understanding these international changes and building successful deterrence policy. A better understanding of new modes of deterrence will aid policymakers in managing today’s threats and in preventing future deterrence failures, even as it prompts the so-called virtuous cycle of new theory and additional empirical testing.
Agent-based computational modeling (ABM, for short) is a formal and supplementary methodological approach used in international relations (IR) theory and research, based on the general ABM paradigm and computational methodology as applied to IR phenomena. ABM of such phenomena varies along three fundamental dimensions: the scale of organization (spanning foreign policy, international relations, regional systems, and global politics) as well as geospatial and temporal scales. ABM is part of the broader complexity science paradigm, although ABMs can also be applied without complexity concepts. There have been scores of peer-reviewed publications using ABM to develop IR theory in recent years, building on pre-agent-based pioneering work in computational IR that originated in the 1960s. Main areas of theory and research using ABM in IR include the dynamics of polity formation (politogenesis), foreign policy decision making, conflict dynamics, transnational terrorism, and environmental impacts such as climate change. Enduring challenges for ABM in IR theory include learning the ABM methodology itself, publishing sufficiently complete models, accumulating knowledge, evolving new standards and methodology, and the special demands of interdisciplinary research, among others. Besides further development of the main themes identified thus far, future research directions include ABM applied to political interaction in the space and cyber domains; new integrated models of IR dynamics across the land, sea, air, space, and cyber domains; and models of world order and long-range dynamics.
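The mechanics that ABMs share across these applications (heterogeneous agents, local interaction rules, iterated updating, emergent macro-patterns) can be sketched in a deliberately simple toy model. The model below is hypothetical and illustrative only, not any published IR model: state agents on a ring adopt the bloc alignment of a more capable neighbor, and bloc structure emerges from repeated local interactions.

```python
import random

def run_model(n_states=20, steps=50, seed=42):
    """Toy ABM: states on a ring align with a more capable neighbor."""
    rng = random.Random(seed)
    capability = [rng.random() for _ in range(n_states)]       # fixed power levels
    alignment = [rng.choice([0, 1]) for _ in range(n_states)]  # two hypothetical blocs
    for _ in range(steps):
        new_alignment = alignment[:]
        for i in range(n_states):
            left, right = (i - 1) % n_states, (i + 1) % n_states
            # An agent adopts the alignment of its more capable neighbor,
            # but only if that neighbor out-powers the agent itself.
            strongest = max((left, right), key=lambda j: capability[j])
            if capability[strongest] > capability[i]:
                new_alignment[i] = alignment[strongest]
        alignment = new_alignment  # synchronous update
    return alignment

blocs = run_model()
```

Even this minimal setup illustrates the emergent, bottom-up logic that motivates ABM: macro-level bloc structure arises from purely local rules, and a researcher then varies parameters (number of agents, interaction topology, update rules) to explore the model's behavior.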
Nazli Choucri and Gaurav Agarwal
The term lateral pressure refers to any tendency (or propensity) of states, firms, and other entities to expand their activities and exert influence and control beyond their established boundaries, whether for economic, political, military, scientific, religious, or other purposes. Framed by Robert C. North and Nazli Choucri, the theory addresses the sources and consequences of such a tendency. This chapter presents the theory's core features (assumptions, logic, key variables, and dynamics) and summarizes the quantitative work undertaken to date. Some aspects of the theory are more readily quantifiable than others. Some are consistent with conventional theory in international relations. Others are based on insights and evidence from other areas of knowledge, thus departing from tradition in potentially significant ways.
Initially applied to the causes of war, the theory focuses on the question: Who does what, when, how, and with what consequences? The causal logic in lateral pressure theory runs from internal drivers (i.e., the master variables that shape the profiles of states) through intervening variables (i.e., aggregated and articulated demands given prevailing capabilities) to outcomes that often generate added complexities. To the extent that states expand their activities outside territorial boundaries, driven by a wide range of capabilities and motivations, they are likely to encounter other states similarly engaged. The intersection among spheres of influence is the first step in complex dynamics that lead to hostilities, escalation, and eventually conflict and violence.
The quantitative analysis of lateral pressure theory consists of six distinct phases. The first phase began with a large-scale, cross-national, multiple equation econometric investigation of the 45 years leading to World War I, followed by a system of simultaneous equations representing conflict dynamics among competing powers in the post–World War II era. The second phase is a detailed econometric analysis of Japan over the span of more than a century and two World Wars. The third phase involves system dynamics modeling of the growth and expansion of states from the 1970s to the end of the 20th century and explores the use of fuzzy logic in this process. The fourth phase focuses on the state-based sources of anthropogenic greenhouse gases to endogenize the natural environment in the study of international relations. The fifth phase presents a detailed ontology of the driving variables shaping lateral pressure and their critical constituents in order to (a) frame their interconnections, (b) capture knowledge on sustainable development, (c) create knowledge management methods for the search, retrieval, and use of knowledge on sustainable development, and (d) examine the use of visualization techniques for knowledge display and analysis. The sixth, and most recent, phase of lateral pressure theory and empirical analysis examines the new realities created by the construction of cyberspace and its interactions with the traditional international order.
Qualitative Comparative Analysis (QCA) is a method, developed by the American social scientist Charles C. Ragin since the 1980s, that has since enjoyed great and ever-increasing success in research applications and teaching programs across various political science subdisciplines. It counts as a broadly recognized addition to the methodological spectrum of political science. QCA is based on set theory. Set theory models “if … then” hypotheses so that they can be interpreted as sufficient or necessary conditions. QCA differentiates between crisp sets, in which cases are either full members or nonmembers, and fuzzy sets, which allow for degrees of membership. With fuzzy sets it is, for example, possible to distinguish highly developed democracies from less developed democracies that are nevertheless more democratic than not. Fuzzy sets thus account for differences in degree without giving up differences in kind. In the end, QCA produces configurational statements that acknowledge that conditions usually appear in conjunction and that more than one conjunction can imply an outcome (equifinality). There is a strong emphasis on a case-oriented perspective. QCA is usually (but not exclusively) applied in y-centered research designs. A standardized algorithm, implemented in various software packages, takes into account the complexity of the social world surrounding us, acknowledging that not every theoretically possible combination of explanatory factors also exists empirically. Parameters of fit, such as consistency and coverage, help to evaluate how well the chosen explanatory factors account for the outcome to be explained. A range of graphical tools helps to illustrate the results of a QCA. Set theory goes well beyond its application in QCA, but QCA is certainly its most prominent variant.
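The parameters of fit mentioned above have simple set-theoretic definitions. A minimal sketch of Ragin's standard fuzzy-set consistency and coverage for a sufficiency claim ("X is sufficient for Y"), using made-up membership scores for four hypothetical cases:

```python
def consistency(x, y):
    """Sufficiency consistency: the degree to which X is a fuzzy subset of Y."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage: the share of Y's membership accounted for by X."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical fuzzy membership scores, for illustration only.
x = [0.9, 0.7, 0.3, 0.1]  # membership in the condition set
y = [1.0, 0.8, 0.6, 0.2]  # membership in the outcome set

print(consistency(x, y))         # 1.0: X is a perfect fuzzy subset of Y
print(round(coverage(x, y), 3))  # 0.769
```

A consistency close to 1 supports the sufficiency claim, while coverage indicates how empirically important the condition is for the outcome; standard QCA software reports both.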
There is a very lively QCA community that currently deals with the following aspects: the establishment of a code of standards for QCA applications; QCA as part of mixed-methods designs, such as combinations of QCA and statistical analyses, or a sequence of QCA and (comparative) case studies (via, e.g., process tracing); the inclusion of time aspects into QCA; Coincidence Analysis (CNA), which does not require an a priori decision about which factor is the condition and which the outcome, as an alternative to the use of the Quine-McCluskey algorithm; the stability of results; software development; and the more general question of whether QCA development activities should target research design or technical issues. From this, a methodological agenda can be derived that asks about the relationship between QCA and quantitative techniques, case study methods, and interpretive methods, and that calls for increased efforts to reach a shared understanding of the mission of QCA.
More Than Mixed Results: What We Have Learned From Quantitative Research on the Diversionary Hypothesis
Benjamin O. Fordham
In the three decades since Jack Levy published his seminal review essay on the topic, there has been a great deal of quantitative research on the proposition that state leaders can use international conflict to enhance their political prospects at home. The findings of this work are frequently described as “mixed” or “inconsistent.” This characterization is superficially correct, but it is also misleading in some important respects. Focusing on two of Levy’s most important concerns about previous research reveals that there has been substantial progress in our understanding of this phenomenon.
First, as Levy suggests in his essay, researchers have elaborated a range of different mechanisms linking domestic political trouble with international conflict rather than a single diversionary argument. Processes creating diversionary incentives bear a family resemblance to one another but can have different behavioral implications. Four of them are (1) in-group/out-group dynamics, (2) agenda setting, (3) leader efforts to demonstrate competence in foreign policy, and (4) efforts to blame foreign leaders or perhaps domestic minorities for problems. In addition, researchers have identified some countervailing mechanisms that may inhibit state leaders’ ability to pursue diversionary strategies, the most important of which is the possibility that potential targets may strategically avoid conflict with leaders likely to behave aggressively.
Second, research has identified scope conditions that limit the applicability of diversionary arguments, another of Levy’s concerns about the research he reviewed. Above all, diversionary uses of military force (though not other diversionary strategies) may be possible for only a narrow range of states. Though very powerful states may pursue such a strategy against a wide range of targets, the leaders of less powerful states may have this option only during fairly serious episodes of interstate hostility, such as rivalries and territorial disputes. A substantial amount of research has focused exclusively on the United States, a country that clearly has the capacity to pursue this strategy. While the findings of this work cannot be generalized to many other states, they have revealed some important nuances in the processes that create diversionary incentives. The extent to which these incentives hinge on highly specific political and institutional characteristics points to the difficulty of applying realistic diversionary arguments to a large sample of states. Research on smaller, more homogeneous samples or individual states is more promising, even though it will not produce an answer to the broad question of how prevalent diversionary behavior is. As with many broad questions about political phenomena, the only correct answer may be “it depends.” Diversionary foreign policy happens, but not in the same way in every instance and not in every state in the international system.
Krista E. Wiegand
Despite the decline in interstate wars, there remain dozens of interstate disputes that could erupt into diplomatic crises and evolve into military escalation. By far the most difficult interstate disputes are territorial disputes, followed by maritime and river boundary disputes. These disputes are not only costly for the states involved but also potentially dangerous for states in the region and for allies of the disputants, who could become entrapped in armed conflicts. Fortunately, though many disputes remain unresolved and some endure for decades or even more than a century, many others are peacefully resolved through conflict management tools.
Understanding the factors that influence conflict management (the means by which governments decide their foreign policy strategies relating to interstate disputes and civil conflicts) is critical to policy makers and scholars interested in the peaceful resolution of such disputes. Though conflict management of territorial and maritime disputes can include a spectrum of tools, including the use of force, most conflict management tools are peaceful: direct bilateral negotiations between the disputant states, nonbinding third-party mediation, or binding legal dispute resolution. Governments most often attempt the most direct method, bilateral negotiations, but these frequently break down over the uncompromising positions of the disputing states, leading governments to turn to other resolution methods. Each resolution method has pros and cons, and certain factors influence the decisions that governments make about the management of their territorial and maritime disputes. Overall, the peaceful resolution of territorial and maritime disputes is an important but complicated issue for states both directly involved in and indirectly affected by the persistence of such disputes.
Capitalist peace theory (CPT) has gained considerable attention in international relations theory and the conflict literature. Its proponents maintain that a capitalist organization of an economy pacifies states internally and externally. They portray CPT either as a complement or as a substitute to other liberal explanations such as the democratic peace thesis. They disagree, however, about which facet of capitalism is supposed to reduce the risk of political violence. Key contributions have identified three main drivers of the capitalist peace phenomenon: the fiscal constraints that a laissez-faire regime puts on potentially aggressive governments; the mollifying norms that a capitalist organization creates; and the increased ability of capitalist governments to signal their intentions effectively in a confrontation with an adversary. Defining capitalism narrowly through the freedom entrepreneurs enjoy domestically, this article evaluates the key causal mechanisms and empirical evidence that have been advanced in support of these competing claims. The article argues that CPT needs to be based on a narrow definition of capitalism and that it should scrutinize the motives and constraints of the main actors more deeply. Future contributions to the CPT literature should also pay close attention to classic theories of capitalism, which all considered individual risk taking and the dramatic swings between booms and busts to be key constitutive features of this form of economic governance. Finally, empirical tests of the proposed causal mechanisms should rely on data sets in which capitalists appear as actors and not as “structures.” If the literature takes these objections seriously, CPT could establish itself as a central theory of peace and war in two respects. First, it could serve as an antidote to the theory of imperialism and other “critical” approaches that see in capitalism a source of conflict rather than of peace. Second, it could become an important complement to commercial liberalism, which stresses external openness rather than internal freedoms as an economic cause of peace and which sees trade and foreign direct investment in particular as pacifying forces.